Low-Rank Matrix Factorization With Adaptive Graph Regularizer.
Lu, Gui-Fu; Wang, Yong; Zou, Jian
2016-05-01
In this paper, we present a novel low-rank matrix factorization algorithm with adaptive graph regularizer (LMFAGR). We extend the recently proposed low-rank matrix with manifold regularization (MMF) method with an adaptive regularizer. Different from MMF, which constructs an affinity graph in advance, LMFAGR can simultaneously seek graph weight matrix and low-dimensional representations of data. That is, graph construction and low-rank matrix factorization are incorporated into a unified framework, which results in an automatically updated graph rather than a predefined one. The experimental results on some data sets demonstrate that the proposed algorithm outperforms the state-of-the-art low-rank matrix factorization methods.
Low-rank matrix approximation with manifold regularization.
Zhang, Zhenyue; Zhao, Keke
2013-07-01
This paper proposes a new model of low-rank matrix factorization that incorporates manifold regularization to the matrix factorization. Superior to the graph-regularized nonnegative matrix factorization, this new regularization model has globally optimal and closed-form solutions. A direct algorithm (for data with small number of points) and an alternate iterative algorithm with inexact inner iteration (for large scale data) are proposed to solve the new model. A convergence analysis establishes the global convergence of the iterative algorithm. The efficiency and precision of the algorithm are demonstrated numerically through applications to six real-world datasets on clustering and classification. Performance comparison with existing algorithms shows the effectiveness of the proposed method for low-rank factorization in general.
Video deraining and desnowing using temporal correlation and low-rank matrix completion.
Kim, Jin-Hwan; Sim, Jae-Young; Kim, Chang-Su
2015-09-01
A novel algorithm to remove rain or snow streaks from a video sequence using temporal correlation and low-rank matrix completion is proposed in this paper. Based on the observation that rain streaks are too small and move too fast to affect the optical flow estimation between consecutive frames, we obtain an initial rain map by subtracting temporally warped frames from a current frame. Then, we decompose the initial rain map into basis vectors based on the sparse representation, and classify those basis vectors into rain streak ones and outliers with a support vector machine. We then refine the rain map by excluding the outliers. Finally, we remove the detected rain streaks by employing a low-rank matrix completion technique. Furthermore, we extend the proposed algorithm to stereo video deraining. Experimental results demonstrate that the proposed algorithm detects and removes rain or snow streaks efficiently, outperforming conventional algorithms.
Lyu, Jingyuan; Nakarmi, Ukash; Zhang, Chaoyi; Ying, Leslie
2016-05-01
This paper presents a new approach to highly accelerated dynamic parallel MRI using low rank matrix completion, partial separability (PS) model. In data acquisition, k-space data is moderately randomly undersampled at the center kspace navigator locations, but highly undersampled at the outer k-space for each temporal frame. In reconstruction, the navigator data is reconstructed from undersampled data using structured low-rank matrix completion. After all the unacquired navigator data is estimated, the partial separable model is used to obtain partial k-t data. Then the parallel imaging method is used to acquire the entire dynamic image series from highly undersampled data. The proposed method has shown to achieve high quality reconstructions with reduction factors up to 31, and temporal resolution of 29ms, when the conventional PS method fails.
A Class of Weighted Low Rank Approximation of the Positive Semidefinite Hankel Matrix
Directory of Open Access Journals (Sweden)
Jianchao Bai
2015-01-01
Full Text Available We consider the weighted low rank approximation of the positive semidefinite Hankel matrix problem arising in signal processing. By using the Vandermonde representation, we firstly transform the problem into an unconstrained optimization problem and then use the nonlinear conjugate gradient algorithm with the Armijo line search to solve the equivalent unconstrained optimization problem. Numerical examples illustrate that the new method is feasible and effective.
Directory of Open Access Journals (Sweden)
Hugo Lara
2014-12-01
Full Text Available The matrix completion problem (MC has been approximated by using the nuclear norm relaxation. Some algorithms based on this strategy require the computationally expensive singular value decomposition (SVD at each iteration. One way to avoid SVD calculations is to use alternating methods, which pursue the completion through matrix factorization with a low rank condition. In this work an augmented Lagrangean-type alternating algorithm is proposed. The new algorithm uses duality information to define the iterations, in contrast to the solely primal LMaFit algorithm, which employs a Successive Over Relaxation scheme. The convergence result is studied. Some numerical experiments are given to compare numerical performance of both proposals.
Nonlocal low-rank and sparse matrix decomposition for spectral CT reconstruction
Niu, Shanzhou; Yu, Gaohang; Ma, Jianhua; Wang, Jing
2018-02-01
Spectral computed tomography (CT) has been a promising technique in research and clinics because of its ability to produce improved energy resolution images with narrow energy bins. However, the narrow energy bin image is often affected by serious quantum noise because of the limited number of photons used in the corresponding energy bin. To address this problem, we present an iterative reconstruction method for spectral CT using nonlocal low-rank and sparse matrix decomposition (NLSMD), which exploits the self-similarity of patches that are collected in multi-energy images. Specifically, each set of patches can be decomposed into a low-rank component and a sparse component, and the low-rank component represents the stationary background over different energy bins, while the sparse component represents the rest of the different spectral features in individual energy bins. Subsequently, an effective alternating optimization algorithm was developed to minimize the associated objective function. To validate and evaluate the NLSMD method, qualitative and quantitative studies were conducted by using simulated and real spectral CT data. Experimental results show that the NLSMD method improves spectral CT images in terms of noise reduction, artifact suppression and resolution preservation.
Two-Step Proximal Gradient Algorithm for Low-Rank Matrix Completion
Directory of Open Access Journals (Sweden)
Qiuyu Wang
2016-06-01
Full Text Available In this paper, we propose a two-step proximal gradient algorithm to solve nuclear norm regularized least squares for the purpose of recovering low-rank data matrix from sampling of its entries. Each iteration generated by the proposed algorithm is a combination of the latest three points, namely, the previous point, the current iterate, and its proximal gradient point. This algorithm preserves the computational simplicity of classical proximal gradient algorithm where a singular value decomposition in proximal operator is involved. Global convergence is followed directly in the literature. Numerical results are reported to show the efficiency of the algorithm.
On low rank classical groups in string theory, gauge theory and matrix models
International Nuclear Information System (INIS)
Intriligator, Ken; Kraus, Per; Ryzhov, Anton V.; Shigemori, Masaki; Vafa, Cumrun
2004-01-01
We consider N=1 supersymmetric U(N), SO(N), and Sp(N) gauge theories, with two-index tensor matter and added tree-level superpotential, for general breaking patterns of the gauge group. By considering the string theory realization and geometric transitions, we clarify when glueball superfields should be included and extremized, or rather set to zero; this issue arises for unbroken group factors of low rank. The string theory results, which are equivalent to those of the matrix model, refer to a particular UV completion of the gauge theory, which could differ from conventional gauge theory results by residual instanton effects. Often, however, these effects exhibit miraculous cancellations, and the string theory or matrix model results end up agreeing with standard gauge theory. In particular, these string theory considerations explain and remove some apparent discrepancies between gauge theories and matrix models in the literature
Mao, Shasha; Xiong, Lin; Jiao, Licheng; Feng, Tian; Yeung, Sai-Kit
2017-05-01
Riemannian optimization has been widely used to deal with the fixed low-rank matrix completion problem, and Riemannian metric is a crucial factor of obtaining the search direction in Riemannian optimization. This paper proposes a new Riemannian metric via simultaneously considering the Riemannian geometry structure and the scaling information, which is smoothly varying and invariant along the equivalence class. The proposed metric can make a tradeoff between the Riemannian geometry structure and the scaling information effectively. Essentially, it can be viewed as a generalization of some existing metrics. Based on the proposed Riemanian metric, we also design a Riemannian nonlinear conjugate gradient algorithm, which can efficiently solve the fixed low-rank matrix completion problem. By experimenting on the fixed low-rank matrix completion, collaborative filtering, and image and video recovery, it illustrates that the proposed method is superior to the state-of-the-art methods on the convergence efficiency and the numerical performance.
DEFF Research Database (Denmark)
Montoya-Martinez, Jair; Artes-Rodriguez, Antonio; Pontil, Massimiliano
2014-01-01
We consider the estimation of the Brain Electrical Sources (BES) matrix from noisy electroencephalographic (EEG) measurements, commonly named as the EEG inverse problem. We propose a new method to induce neurophysiological meaningful solutions, which takes into account the smoothness, structured...... sparsity, and low rank of the BES matrix. The method is based on the factorization of the BES matrix as a product of a sparse coding matrix and a dense latent source matrix. The structured sparse-low-rank structure is enforced by minimizing a regularized functional that includes the ℓ21-norm of the coding...... matrix and the squared Frobenius norm of the latent source matrix. We develop an alternating optimization algorithm to solve the resulting nonsmooth-nonconvex minimization problem. We analyze the convergence of the optimization procedure, and we compare, under different synthetic scenarios...
A multi-platform evaluation of the randomized CX low-rank matrix factorization in Spark
Energy Technology Data Exchange (ETDEWEB)
Gittens, Alex; Kottalam, Jey; Yang, Jiyan; Ringenburg, Michael, F.; Chhugani, Jatin; Racah, Evan; Singh, Mohitdeep; Yao, Yushu; Fischer, Curt; Ruebel, Oliver; Bowen, Benjamin; Lewis, Norman, G.; Mahoney, Michael, W.; Krishnamurthy, Venkat; Prabhat, Mr
2017-07-27
We investigate the performance and scalability of the randomized CX low-rank matrix factorization and demonstrate its applicability through the analysis of a 1TB mass spectrometry imaging (MSI) dataset, using Apache Spark on an Amazon EC2 cluster, a Cray XC40 system, and an experimental Cray cluster. We implemented this factorization both as a parallelized C implementation with hand-tuned optimizations and in Scala using the Apache Spark high-level cluster computing framework. We obtained consistent performance across the three platforms: using Spark we were able to process the 1TB size dataset in under 30 minutes with 960 cores on all systems, with the fastest times obtained on the experimental Cray cluster. In comparison, the C implementation was 21X faster on the Amazon EC2 system, due to careful cache optimizations, bandwidth-friendly access of matrices and vector computation using SIMD units. We report these results and their implications on the hardware and software issues arising in supporting data-centric workloads in parallel and distributed environments.
On predicting student performance using low-rank matrix factorization techniques
DEFF Research Database (Denmark)
Lorenzen, Stephan Sloth; Pham, Dang Ninh; Alstrup, Stephen
2017-01-01
Predicting the score of a student is one of the important problems in educational data mining. The scores given by an individual student reflect how a student understands and applies the knowledge conveyed in class. A reliable performance prediction enables teachers to identify weak students...... that require remedial support, generate adaptive hints, and improve the learning of students. This work focuses on predicting the score of students in the quiz system of the Clio Online learning platform, the largest Danish supplier of online learning materials, covering 90% of Danish elementary schools...... and the current version of the data set is very sparse, the very low-rank approximation can capture enough information. This means that the simple baseline approach achieves similar performance compared to other advanced methods. In future work, we will restrict the quiz data set, e.g. only including quizzes...
Fast Multipole Method as a Matrix-Free Hierarchical Low-Rank Approximation
Yokota, Rio
2018-01-03
There has been a large increase in the amount of work on hierarchical low-rank approximation methods, where the interest is shared by multiple communities that previously did not intersect. This objective of this article is two-fold; to provide a thorough review of the recent advancements in this field from both analytical and algebraic perspectives, and to present a comparative benchmark of two highly optimized implementations of contrasting methods for some simple yet representative test cases. The first half of this paper has the form of a survey paper, to achieve the former objective. We categorize the recent advances in this field from the perspective of compute-memory tradeoff, which has not been considered in much detail in this area. Benchmark tests reveal that there is a large difference in the memory consumption and performance between the different methods.
A Generalized Robust Minimization Framework for Low-Rank Matrix Recovery
Directory of Open Access Journals (Sweden)
Wen-Ze Shao
2014-01-01
Full Text Available This paper considers the problem of recovering low-rank matrices which are heavily corrupted by outliers or large errors. To improve the robustness of existing recovery methods, the problem is solved by formulating it as a generalized nonsmooth nonconvex minimization functional via exploiting the Schatten p-norm (0 < p ≤1 and Lq(0 < q ≤1 seminorm. Two numerical algorithms are provided based on the augmented Lagrange multiplier (ALM and accelerated proximal gradient (APG methods as well as efficient root-finder strategies. Experimental results demonstrate that the proposed generalized approach is more inclusive and effective compared with state-of-the-art methods, either convex or nonconvex.
Fast Multipole Method as a Matrix-Free Hierarchical Low-Rank Approximation
Yokota, Rio; Ibeid, Huda; Keyes, David E.
2018-01-01
There has been a large increase in the amount of work on hierarchical low-rank approximation methods, where the interest is shared by multiple communities that previously did not intersect. This objective of this article is two-fold; to provide a thorough review of the recent advancements in this field from both analytical and algebraic perspectives, and to present a comparative benchmark of two highly optimized implementations of contrasting methods for some simple yet representative test cases. The first half of this paper has the form of a survey paper, to achieve the former objective. We categorize the recent advances in this field from the perspective of compute-memory tradeoff, which has not been considered in much detail in this area. Benchmark tests reveal that there is a large difference in the memory consumption and performance between the different methods.
Super-resolution reconstruction of 4D-CT lung data via patch-based low-rank matrix reconstruction
Fang, Shiting; Wang, Huafeng; Liu, Yueliang; Zhang, Minghui; Yang, Wei; Feng, Qianjin; Chen, Wufan; Zhang, Yu
2017-10-01
Lung 4D computed tomography (4D-CT), which is a time-resolved CT data acquisition, performs an important role in explicitly including respiratory motion in treatment planning and delivery. However, the radiation dose is usually reduced at the expense of inter-slice spatial resolution to minimize radiation-related health risk. Therefore, resolution enhancement along the superior-inferior direction is necessary. In this paper, a super-resolution (SR) reconstruction method based on a patch low-rank matrix reconstruction is proposed to improve the resolution of lung 4D-CT images. Specifically, a low-rank matrix related to every patch is constructed by using a patch searching strategy. Thereafter, the singular value shrinkage is employed to recover the high-resolution patch under the constraints of the image degradation model. The output high-resolution patches are finally assembled to output the entire image. This method is extensively evaluated using two public data sets. Quantitative analysis shows that the proposed algorithm decreases the root mean square error by 9.7%-33.4% and the edge width by 11.4%-24.3%, relative to linear interpolation, back projection (BP) and Zhang et al’s algorithm. A new algorithm has been developed to improve the resolution of 4D-CT. In all experiments, the proposed method outperforms various interpolation methods, as well as BP and Zhang et al’s method, thus indicating the effectivity and competitiveness of the proposed algorithm.
Zhang, Haicang; Gao, Yujuan; Deng, Minghua; Wang, Chao; Zhu, Jianwei; Li, Shuai Cheng; Zheng, Wei-Mou; Bu, Dongbo
2016-03-25
Strategies for correlation analysis in protein contact prediction often encounter two challenges, namely, the indirect coupling among residues, and the background correlations mainly caused by phylogenetic biases. While various studies have been conducted on how to disentangle indirect coupling, the removal of background correlations still remains unresolved. Here, we present an approach for removing background correlations via low-rank and sparse decomposition (LRS) of a residue correlation matrix. The correlation matrix can be constructed using either local inference strategies (e.g., mutual information, or MI) or global inference strategies (e.g., direct coupling analysis, or DCA). In our approach, a correlation matrix was decomposed into two components, i.e., a low-rank component representing background correlations, and a sparse component representing true correlations. Finally the residue contacts were inferred from the sparse component of correlation matrix. We trained our LRS-based method on the PSICOV dataset, and tested it on both GREMLIN and CASP11 datasets. Our experimental results suggested that LRS significantly improves the contact prediction precision. For example, when equipped with the LRS technique, the prediction precision of MI and mfDCA increased from 0.25 to 0.67 and from 0.58 to 0.70, respectively (Top L/10 predicted contacts, sequence separation: 5 AA, dataset: GREMLIN). In addition, our LRS technique also consistently outperforms the popular denoising technique APC (average product correction), on both local (MI_LRS: 0.67 vs MI_APC: 0.34) and global measures (mfDCA_LRS: 0.70 vs mfDCA_APC: 0.67). Interestingly, we found out that when equipped with our LRS technique, local inference strategies performed in a comparable manner to that of global inference strategies, implying that the application of LRS technique narrowed down the performance gap between local and global inference strategies. Overall, our LRS technique greatly facilitates
Directory of Open Access Journals (Sweden)
Ichitaro Yamazaki
2015-01-01
of their low-rank properties. To compute a low-rank approximation of a dense matrix, in this paper, we study the performance of QR factorization with column pivoting or with restricted pivoting on multicore CPUs with a GPU. We first propose several techniques to reduce the postprocessing time, which is required for restricted pivoting, on a modern CPU. We then examine the potential of using a GPU to accelerate the factorization process with both column and restricted pivoting. Our performance results on two eight-core Intel Sandy Bridge CPUs with one NVIDIA Kepler GPU demonstrate that using the GPU, the factorization time can be reduced by a factor of more than two. In addition, to study the performance of our implementations in practice, we integrate them into a recently developed software StruMF which algebraically exploits such low-rank structures for solving a general sparse linear system of equations. Our performance results for solving Poisson's equations demonstrate that the proposed techniques can significantly reduce the preconditioner construction time of StruMF on the CPUs, and the construction time can be further reduced by 10%–50% using the GPU.
Majumdar, Angshul; Gogna, Anupriya; Ward, Rabab
2014-08-25
We address the problem of acquiring and transmitting EEG signals in Wireless Body Area Networks (WBAN) in an energy efficient fashion. In WBANs, the energy is consumed by three operations: sensing (sampling), processing and transmission. Previous studies only addressed the problem of reducing the transmission energy. For the first time, in this work, we propose a technique to reduce sensing and processing energy as well: this is achieved by randomly under-sampling the EEG signal. We depart from previous Compressed Sensing based approaches and formulate signal recovery (from under-sampled measurements) as a matrix completion problem. A new algorithm to solve the matrix completion problem is derived here. We test our proposed method and find that the reconstruction accuracy of our method is significantly better than state-of-the-art techniques; and we achieve this while saving sensing, processing and transmission energy. Simple power analysis shows that our proposed methodology consumes considerably less power compared to previous CS based techniques.
Low rank magnetic resonance fingerprinting.
Mazor, Gal; Weizman, Lior; Tal, Assaf; Eldar, Yonina C
2016-08-01
Magnetic Resonance Fingerprinting (MRF) is a relatively new approach that provides quantitative MRI using randomized acquisition. Extraction of physical quantitative tissue values is preformed off-line, based on acquisition with varying parameters and a dictionary generated according to the Bloch equations. MRF uses hundreds of radio frequency (RF) excitation pulses for acquisition, and therefore high under-sampling ratio in the sampling domain (k-space) is required. This under-sampling causes spatial artifacts that hamper the ability to accurately estimate the quantitative tissue values. In this work, we introduce a new approach for quantitative MRI using MRF, called Low Rank MRF. We exploit the low rank property of the temporal domain, on top of the well-known sparsity of the MRF signal in the generated dictionary domain. We present an iterative scheme that consists of a gradient step followed by a low rank projection using the singular value decomposition. Experiments on real MRI data demonstrate superior results compared to conventional implementation of compressed sensing for MRF at 15% sampling ratio.
Low Rank Approximation Algorithms, Implementation, Applications
Markovsky, Ivan
2012-01-01
Matrix low-rank approximation is intimately related to data modelling; a problem that arises frequently in many different fields. Low Rank Approximation: Algorithms, Implementation, Applications is a comprehensive exposition of the theory, algorithms, and applications of structured low-rank approximation. Local optimization methods and effective suboptimal convex relaxations for Toeplitz, Hankel, and Sylvester structured problems are presented. A major part of the text is devoted to application of the theory. Applications described include: system and control theory: approximate realization, model reduction, output error, and errors-in-variables identification; signal processing: harmonic retrieval, sum-of-damped exponentials, finite impulse response modeling, and array processing; machine learning: multidimensional scaling and recommender system; computer vision: algebraic curve fitting and fundamental matrix estimation; bioinformatics for microarray data analysis; chemometrics for multivariate calibration; ...
Energy Technology Data Exchange (ETDEWEB)
Weber, G. F.; Laudal, D. L.
1989-01-01
This work is a compilation of reports on ongoing research at the University of North Dakota. Topics include: Control Technology and Coal Preparation Research (SO{sub x}/NO{sub x} control, waste management), Advanced Research and Technology Development (turbine combustion phenomena, combustion inorganic transformation, coal/char reactivity, liquefaction reactivity of low-rank coals, gasification ash and slag characterization, fine particulate emissions), Combustion Research (fluidized bed combustion, beneficiation of low-rank coals, combustion characterization of low-rank coal fuels, diesel utilization of low-rank coals), Liquefaction Research (low-rank coal direct liquefaction), and Gasification Research (hydrogen production from low-rank coals, advanced wastewater treatment, mild gasification, color and residual COD removal from Synfuel wastewaters, Great Plains Gasification Plant, gasifier optimization).
Efficient Low Rank Tensor Ring Completion
Wang, Wenqi; Aggarwal, Vaneet; Aeron, Shuchin
2017-01-01
Using the matrix product state (MPS) representation of the recently proposed tensor ring decompositions, in this paper we propose a tensor completion algorithm, which is an alternating minimization algorithm that alternates over the factors in the MPS representation. This development is motivated in part by the success of matrix completion algorithms that alternate over the (low-rank) factors. In this paper, we propose a spectral initialization for the tensor ring completion algorithm and ana...
Texture Repairing by Unified Low Rank Optimization
Institute of Scientific and Technical Information of China (English)
Xiao Liang; Xiang Ren; Zhengdong Zhang; Yi Ma
2016-01-01
In this paper, we show how to harness both low-rank and sparse structures in regular or near-regular textures for image completion. Our method is based on a unified formulation for both random and contiguous corruption. In addition to the low rank property of texture, the algorithm also uses the sparse assumption of the natural image: because the natural image is piecewise smooth, it is sparse in certain transformed domain (such as Fourier or wavelet transform). We combine low-rank and sparsity properties of the texture image together in the proposed algorithm. Our algorithm based on convex optimization can automatically and correctly repair the global structure of a corrupted texture, even without precise information about the regions to be completed. This algorithm integrates texture rectification and repairing into one optimization problem. Through extensive simulations, we show our method can complete and repair textures corrupted by errors with both random and contiguous supports better than existing low-rank matrix recovery methods. Our method demonstrates significant advantage over local patch based texture synthesis techniques in dealing with large corruption, non-uniform texture, and large perspective deformation.
Low-rank quadratic semidefinite programming
Yuan, Ganzhao
2013-04-01
Low rank matrix approximation is an attractive model in large scale machine learning problems, because it can not only reduce the memory and runtime complexity, but also provide a natural way to regularize parameters while preserving learning accuracy. In this paper, we address a special class of nonconvex quadratic matrix optimization problems, which require a low rank positive semidefinite solution. Despite their non-convexity, we exploit the structure of these problems to derive an efficient solver that converges to their local optima. Furthermore, we show that the proposed solution is capable of dramatically enhancing the efficiency and scalability of a variety of concrete problems, which are of significant interest to the machine learning community. These problems include the Top-k Eigenvalue problem, Distance learning and Kernel learning. Extensive experiments on UCI benchmarks have shown the effectiveness and efficiency of our proposed method. © 2012.
Low-rank quadratic semidefinite programming
Yuan, Ganzhao; Zhang, Zhenjie; Ghanem, Bernard; Hao, Zhifeng
2013-01-01
Low rank matrix approximation is an attractive model in large scale machine learning problems, because it can not only reduce the memory and runtime complexity, but also provide a natural way to regularize parameters while preserving learning accuracy. In this paper, we address a special class of nonconvex quadratic matrix optimization problems, which require a low rank positive semidefinite solution. Despite their non-convexity, we exploit the structure of these problems to derive an efficient solver that converges to their local optima. Furthermore, we show that the proposed solution is capable of dramatically enhancing the efficiency and scalability of a variety of concrete problems, which are of significant interest to the machine learning community. These problems include the Top-k Eigenvalue problem, Distance learning and Kernel learning. Extensive experiments on UCI benchmarks have shown the effectiveness and efficiency of our proposed method. © 2012.
Fabric defect detection based on visual saliency using deep feature and low-rank recovery
Liu, Zhoufeng; Wang, Baorui; Li, Chunlei; Li, Bicao; Dong, Yan
2018-04-01
Fabric defect detection plays an important role in improving the quality of fabric product. In this paper, a novel fabric defect detection method based on visual saliency using deep feature and low-rank recovery was proposed. First, unsupervised training is carried out by the initial network parameters based on MNIST large datasets. The supervised fine-tuning of fabric image library based on Convolutional Neural Networks (CNNs) is implemented, and then more accurate deep neural network model is generated. Second, the fabric images are uniformly divided into the image block with the same size, then we extract their multi-layer deep features using the trained deep network. Thereafter, all the extracted features are concentrated into a feature matrix. Third, low-rank matrix recovery is adopted to divide the feature matrix into the low-rank matrix which indicates the background and the sparse matrix which indicates the salient defect. In the end, the iterative optimal threshold segmentation algorithm is utilized to segment the saliency maps generated by the sparse matrix to locate the fabric defect area. Experimental results demonstrate that the feature extracted by CNN is more suitable for characterizing the fabric texture than the traditional LBP, HOG and other hand-crafted features extraction method, and the proposed method can accurately detect the defect regions of various fabric defects, even for the image with complex texture.
Weighted Discriminative Dictionary Learning based on Low-rank Representation
International Nuclear Information System (INIS)
Chang, Heyou; Zheng, Hao
2017-01-01
Low-rank representation has been widely used in the field of pattern classification, especially when both training and testing images are corrupted with large noise. Dictionary plays an important role in low-rank representation. With respect to the semantic dictionary, the optimal representation matrix should be block-diagonal. However, traditional low-rank representation based dictionary learning methods cannot effectively exploit the discriminative information between data and dictionary. To address this problem, this paper proposed weighted discriminative dictionary learning based on low-rank representation, where a weighted representation regularization term is constructed. The regularization associates label information of both training samples and dictionary atoms, and encourages to generate a discriminative representation with class-wise block-diagonal structure, which can further improve the classification performance where both training and testing images are corrupted with large noise. Experimental results demonstrate advantages of the proposed method over the state-of-the-art methods. (paper)
Low-Rank Sparse Coding for Image Classification
Zhang, Tianzhu; Ghanem, Bernard; Liu, Si; Xu, Changsheng; Ahuja, Narendra
2013-01-01
In this paper, we propose a low-rank sparse coding (LRSC) method that exploits local structure information among features in an image for the purpose of image-level classification. LRSC represents densely sampled SIFT descriptors, in a spatial neighborhood, collectively as low-rank, sparse linear combinations of code words. As such, it casts the feature coding problem as a low-rank matrix learning problem, which is different from previous methods that encode features independently. This LRSC has a number of attractive properties. (1) It encourages sparsity in feature codes, locality in codebook construction, and low-rankness for spatial consistency. (2) LRSC encodes local features jointly by considering their low-rank structure information, and is computationally attractive. We evaluate the LRSC by comparing its performance on a set of challenging benchmarks with that of 7 popular coding and other state-of-the-art methods. Our experiments show that by representing local features jointly, LRSC not only outperforms the state-of-the-art in classification accuracy but also improves the time complexity of methods that use a similar sparse linear representation model for feature coding.
Low-Rank Sparse Coding for Image Classification
Zhang, Tianzhu
2013-12-01
In this paper, we propose a low-rank sparse coding (LRSC) method that exploits local structure information among features in an image for the purpose of image-level classification. LRSC represents densely sampled SIFT descriptors, in a spatial neighborhood, collectively as low-rank, sparse linear combinations of code words. As such, it casts the feature coding problem as a low-rank matrix learning problem, which is different from previous methods that encode features independently. This LRSC has a number of attractive properties. (1) It encourages sparsity in feature codes, locality in codebook construction, and low-rankness for spatial consistency. (2) LRSC encodes local features jointly by considering their low-rank structure information, and is computationally attractive. We evaluate the LRSC by comparing its performance on a set of challenging benchmarks with that of 7 popular coding and other state-of-the-art methods. Our experiments show that by representing local features jointly, LRSC not only outperforms the state-of-the-art in classification accuracy but also improves the time complexity of methods that use a similar sparse linear representation model for feature coding.
Low-rank sparse learning for robust visual tracking
Zhang, Tianzhu
2012-01-01
In this paper, we propose a new particle-filter based tracking algorithm that exploits the relationship between particles (candidate targets). By representing particles as sparse linear combinations of dictionary templates, this algorithm capitalizes on the inherent low-rank structure of particle representations that are learned jointly. As such, it casts the tracking problem as a low-rank matrix learning problem. This low-rank sparse tracker (LRST) has a number of attractive properties. (1) Since LRST adaptively updates dictionary templates, it can handle significant changes in appearance due to variations in illumination, pose, scale, etc. (2) The linear representation in LRST explicitly incorporates background templates in the dictionary and a sparse error term, which enables LRST to address the tracking drift problem and to be robust against occlusion respectively. (3) LRST is computationally attractive, since the low-rank learning problem can be efficiently solved as a sequence of closed form update operations, which yield a time complexity that is linear in the number of particles and the template size. We evaluate the performance of LRST by applying it to a set of challenging video sequences and comparing it to 6 popular tracking methods. Our experiments show that by representing particles jointly, LRST not only outperforms the state-of-the-art in tracking accuracy but also significantly improves the time complexity of methods that use a similar sparse linear representation model for particles [1]. © 2012 Springer-Verlag.
Batched Tile Low-Rank GEMM on GPUs
Charara, Ali
2018-02-01
Dense General Matrix-Matrix (GEMM) multiplication is a core operation of the Basic Linear Algebra Subroutines (BLAS) library, and therefore, often resides at the bottom of the traditional software stack for most of the scientific applications. In fact, chip manufacturers give a special attention to the GEMM kernel implementation since this is exactly where most of the high-performance software libraries extract the hardware performance. With the emergence of big data applications involving large data-sparse, hierarchically low-rank matrices, the off-diagonal tiles can be compressed to reduce the algorithmic complexity and the memory footprint. The resulting tile low-rank (TLR) data format is composed of small data structures, which retains the most significant information for each tile. However, to operate on low-rank tiles, a new GEMM operation and its corresponding API have to be designed on GPUs so that it can exploit the data sparsity structure of the matrix while leveraging the underlying TLR compression format. The main idea consists in aggregating all operations onto a single kernel launch to compensate for their low arithmetic intensities and to mitigate the data transfer overhead on GPUs. The new TLR GEMM kernel outperforms the cuBLAS dense batched GEMM by more than an order of magnitude and creates new opportunities for TLR advance algorithms.
The optimized expansion based low-rank method for wavefield extrapolation
Wu, Zedong
2014-03-01
Spectral methods are fast becoming an indispensable tool for wavefield extrapolation, especially in anisotropic media because it tends to be dispersion and artifact free as well as highly accurate when solving the wave equation. However, for inhomogeneous media, we face difficulties in dealing with the mixed space-wavenumber domain extrapolation operator efficiently. To solve this problem, we evaluated an optimized expansion method that can approximate this operator with a low-rank variable separation representation. The rank defines the number of inverse Fourier transforms for each time extrapolation step, and thus, the lower the rank, the faster the extrapolation. The method uses optimization instead of matrix decomposition to find the optimal wavenumbers and velocities needed to approximate the full operator with its explicit low-rank representation. As a result, we obtain lower rank representations compared with the standard low-rank method within reasonable accuracy and thus cheaper extrapolations. Additional bounds set on the range of propagated wavenumbers to adhere to the physical wave limits yield unconditionally stable extrapolations regardless of the time step. An application on the BP model provided superior results compared to those obtained using the decomposition approach. For transversely isotopic media, because we used the pure P-wave dispersion relation, we obtained solutions that were free of the shear wave artifacts, and the algorithm does not require that n > 0. In addition, the required rank for the optimization approach to obtain high accuracy in anisotropic media was lower than that obtained by the decomposition approach, and thus, it was more efficient. A reverse time migration result for the BP tilted transverse isotropy model using this method as a wave propagator demonstrated the ability of the algorithm.
Low-rank driving in quantum systems
International Nuclear Information System (INIS)
Burkey, R.S.
1989-01-01
A new property of quantum systems called low-rank driving is introduced. Numerous simplifications in the solution of the time-dependent Schroedinger equation are pointed out for systems having this property. These simplifications are in the areas of finding eigenvalues, taking the Laplace transform, converting Schroedinger's equation to an integral form, discretizing the continuum, generalizing the Weisskopf-Wigner approximation, band-diagonalizing the Hamiltonian, finding new exact solutions to Schroedinger's equation, and so forth. The principal physical application considered is the phenomenon of coherent populations-trapping in continuum-continuum interactions
Low-Rank Linear Dynamical Systems for Motor Imagery EEG.
Zhang, Wenchang; Sun, Fuchun; Tan, Chuanqi; Liu, Shaobo
2016-01-01
The common spatial pattern (CSP) and other spatiospectral feature extraction methods have become the most effective and successful approaches to solve the problem of motor imagery electroencephalography (MI-EEG) pattern recognition from multichannel neural activity in recent years. However, these methods need a lot of preprocessing and postprocessing such as filtering, demean, and spatiospectral feature fusion, which influence the classification accuracy easily. In this paper, we utilize linear dynamical systems (LDSs) for EEG signals feature extraction and classification. LDSs model has lots of advantages such as simultaneous spatial and temporal feature matrix generation, free of preprocessing or postprocessing, and low cost. Furthermore, a low-rank matrix decomposition approach is introduced to get rid of noise and resting state component in order to improve the robustness of the system. Then, we propose a low-rank LDSs algorithm to decompose feature subspace of LDSs on finite Grassmannian and obtain a better performance. Extensive experiments are carried out on public dataset from "BCI Competition III Dataset IVa" and "BCI Competition IV Database 2a." The results show that our proposed three methods yield higher accuracies compared with prevailing approaches such as CSP and CSSP.
Low-Rank Kalman Filtering in Subsurface Contaminant Transport Models
El Gharamti, Mohamad
2010-12-01
Understanding the geology and the hydrology of the subsurface is important to model the fluid flow and the behavior of the contaminant. It is essential to have an accurate knowledge of the movement of the contaminants in the porous media in order to track them and later extract them from the aquifer. A two-dimensional flow model is studied and then applied on a linear contaminant transport model in the same porous medium. Because of possible different sources of uncertainties, the deterministic model by itself cannot give exact estimations for the future contaminant state. Incorporating observations in the model can guide it to the true state. This is usually done using the Kalman filter (KF) when the system is linear and the extended Kalman filter (EKF) when the system is nonlinear. To overcome the high computational cost required by the KF, we use the singular evolutive Kalman filter (SEKF) and the singular evolutive extended Kalman filter (SEEKF) approximations of the KF operating with low-rank covariance matrices. The SEKF can be implemented on large dimensional contaminant problems while the usage of the KF is not possible. Experimental results show that with perfect and imperfect models, the low rank filters can provide as much accurate estimates as the full KF but at much less computational cost. Localization can help the filter analysis as long as there are enough neighborhood data to the point being analyzed. Estimating the permeabilities of the aquifer is successfully tackled using both the EKF and the SEEKF.
Low-Rank Kalman Filtering in Subsurface Contaminant Transport Models
El Gharamti, Mohamad
2010-01-01
Understanding the geology and the hydrology of the subsurface is important to model the fluid flow and the behavior of the contaminant. It is essential to have an accurate knowledge of the movement of the contaminants in the porous media in order to track them and later extract them from the aquifer. A two-dimensional flow model is studied and then applied on a linear contaminant transport model in the same porous medium. Because of possible different sources of uncertainties, the deterministic model by itself cannot give exact estimations for the future contaminant state. Incorporating observations in the model can guide it to the true state. This is usually done using the Kalman filter (KF) when the system is linear and the extended Kalman filter (EKF) when the system is nonlinear. To overcome the high computational cost required by the KF, we use the singular evolutive Kalman filter (SEKF) and the singular evolutive extended Kalman filter (SEEKF) approximations of the KF operating with low-rank covariance matrices. The SEKF can be implemented on large dimensional contaminant problems while the usage of the KF is not possible. Experimental results show that with perfect and imperfect models, the low rank filters can provide as much accurate estimates as the full KF but at much less computational cost. Localization can help the filter analysis as long as there are enough neighborhood data to the point being analyzed. Estimating the permeabilities of the aquifer is successfully tackled using both the EKF and the SEEKF.
Tian, Shu; Zhang, Ye; Yan, Yiming; Su, Nan
2016-10-01
Segmentation of real-world remote sensing images is a challenge due to the complex texture information with high heterogeneity. Thus, graph-based image segmentation methods have been attracting great attention in the field of remote sensing. However, most of the traditional graph-based approaches fail to capture the intrinsic structure of the feature space and are sensitive to noises. A ℓ-norm regularization-based graph segmentation method is proposed to segment remote sensing images. First, we use the occlusion of the random texture model (ORTM) to extract the local histogram features. Then, a ℓ-norm regularized low-rank and sparse representation (LNNLRS) is implemented to construct a ℓ-regularized nonnegative low-rank and sparse graph (LNNLRS-graph), by the union of feature subspaces. Moreover, the LNNLRS-graph has a high ability to discriminate the manifold intrinsic structure of highly homogeneous texture information. Meanwhile, the LNNLRS representation takes advantage of the low-rank and sparse characteristics to remove the noises and corrupted data. Last, we introduce the LNNLRS-graph into the graph regularization nonnegative matrix factorization to enhance the segmentation accuracy. The experimental results using remote sensing images show that when compared to five state-of-the-art image segmentation methods, the proposed method achieves more accurate segmentation results.
Beyond Low Rank: A Data-Adaptive Tensor Completion Method
Zhang, Lei; Wei, Wei; Shi, Qinfeng; Shen, Chunhua; Hengel, Anton van den; Zhang, Yanning
2017-01-01
Low rank tensor representation underpins much of recent progress in tensor completion. In real applications, however, this approach is confronted with two challenging problems, namely (1) tensor rank determination; (2) handling real tensor data which only approximately fulfils the low-rank requirement. To address these two issues, we develop a data-adaptive tensor completion model which explicitly represents both the low-rank and non-low-rank structures in a latent tensor. Representing the no...
Proceedings of the sixteenth biennial low-rank fuels symposium
International Nuclear Information System (INIS)
1991-01-01
Low-rank coals represent a major energy resource for the world. The Low-Rank Fuels Symposium, building on the traditions established by the Lignite Symposium, focuses on the key opportunities for this resource. This conference offers a forum for leaders from industry, government, and academia to gather to share current information on the opportunities represented by low-rank coals. In the United States and throughout the world, the utility industry is the primary user of low-rank coals. As such, current experiences and future opportunities for new technologies in this industry were the primary focuses of the symposium
Proceedings of the sixteenth biennial low-rank fuels symposium
Energy Technology Data Exchange (ETDEWEB)
1991-01-01
Low-rank coals represent a major energy resource for the world. The Low-Rank Fuels Symposium, building on the traditions established by the Lignite Symposium, focuses on the key opportunities for this resource. This conference offers a forum for leaders from industry, government, and academia to gather to share current information on the opportunities represented by low-rank coals. In the United States and throughout the world, the utility industry is the primary user of low-rank coals. As such, current experiences and future opportunities for new technologies in this industry were the primary focuses of the symposium.
Speech Denoising in White Noise Based on Signal Subspace Low-rank Plus Sparse Decomposition
Directory of Open Access Journals (Sweden)
yuan Shuai
2017-01-01
Full Text Available In this paper, a new subspace speech enhancement method using low-rank and sparse decomposition is presented. In the proposed method, we firstly structure the corrupted data as a Toeplitz matrix and estimate its effective rank for the underlying human speech signal. Then the low-rank and sparse decomposition is performed with the guidance of speech rank value to remove the noise. Extensive experiments have been carried out in white Gaussian noise condition, and experimental results show the proposed method performs better than conventional speech enhancement methods, in terms of yielding less residual noise and lower speech distortion.
Akbudak, Kadir; Ltaief, Hatem; Mikhalev, Aleksandr; Keyes, David E.
2017-01-01
Covariance matrices are ubiquitous in computational science and engineering. In particular, large covariance matrices arise from multivariate spatial data sets, for instance, in climate/weather modeling applications to improve prediction using statistical methods and spatial data. One of the most time-consuming computational steps consists in calculating the Cholesky factorization of the symmetric, positive-definite covariance matrix problem. The structure of such covariance matrices is also often data-sparse, in other words, effectively of low rank, though formally dense. While not typically globally of low rank, covariance matrices in which correlation decays with distance are nearly always hierarchically of low rank. While symmetry and positive definiteness should be, and nearly always are, exploited for performance purposes, exploiting low rank character in this context is very recent, and will be a key to solving these challenging problems at large-scale dimensions. The authors design a new and flexible tile row rank Cholesky factorization and propose a high performance implementation using OpenMP task-based programming model on various leading-edge manycore architectures. Performance comparisons and memory footprint saving on up to 200K×200K covariance matrix size show a gain of more than an order of magnitude for both metrics, against state-of-the-art open-source and vendor optimized numerical libraries, while preserving the numerical accuracy fidelity of the original model. This research represents an important milestone in enabling large-scale simulations for covariance-based scientific applications.
Akbudak, Kadir
2017-05-11
Covariance matrices are ubiquitous in computational science and engineering. In particular, large covariance matrices arise from multivariate spatial data sets, for instance, in climate/weather modeling applications to improve prediction using statistical methods and spatial data. One of the most time-consuming computational steps consists in calculating the Cholesky factorization of the symmetric, positive-definite covariance matrix problem. The structure of such covariance matrices is also often data-sparse, in other words, effectively of low rank, though formally dense. While not typically globally of low rank, covariance matrices in which correlation decays with distance are nearly always hierarchically of low rank. While symmetry and positive definiteness should be, and nearly always are, exploited for performance purposes, exploiting low rank character in this context is very recent, and will be a key to solving these challenging problems at large-scale dimensions. The authors design a new and flexible tile row rank Cholesky factorization and propose a high performance implementation using OpenMP task-based programming model on various leading-edge manycore architectures. Performance comparisons and memory footprint saving on up to 200K×200K covariance matrix size show a gain of more than an order of magnitude for both metrics, against state-of-the-art open-source and vendor optimized numerical libraries, while preserving the numerical accuracy fidelity of the original model. This research represents an important milestone in enabling large-scale simulations for covariance-based scientific applications.
Multi-Label Classiﬁcation Based on Low Rank Representation for Image Annotation
Directory of Open Access Journals (Sweden)
Qiaoyu Tan
2017-01-01
Full Text Available Annotating remote sensing images is a challenging task for its labor demanding annotation process and requirement of expert knowledge, especially when images can be annotated with multiple semantic concepts (or labels. To automatically annotate these multi-label images, we introduce an approach called Multi-Label Classification based on Low Rank Representation (MLC-LRR. MLC-LRR firstly utilizes low rank representation in the feature space of images to compute the low rank constrained coefficient matrix, then it adapts the coefficient matrix to define a feature-based graph and to capture the global relationships between images. Next, it utilizes low rank representation in the label space of labeled images to construct a semantic graph. Finally, these two graphs are exploited to train a graph-based multi-label classifier. To validate the performance of MLC-LRR against other related graph-based multi-label methods in annotating images, we conduct experiments on a public available multi-label remote sensing images (Land Cover. We perform additional experiments on five real-world multi-label image datasets to further investigate the performance of MLC-LRR. Empirical study demonstrates that MLC-LRR achieves better performance on annotating images than these comparing methods across various evaluation criteria; it also can effectively exploit global structure and label correlations of multi-label images.
On low-rank updates to the singular value and Tucker decompositions
Energy Technology Data Exchange (ETDEWEB)
O' Hara, M J
2009-10-06
The singular value decomposition is widely used in signal processing and data mining. Since the data often arrives in a stream, the problem of updating matrix decompositions under low-rank modification has been widely studied. Brand developed a technique in 2006 that has many advantages. However, the technique does not directly approximate the updated matrix, but rather its previous low-rank approximation added to the new update, which needs justification. Further, the technique is still too slow for large information processing problems. We show that the technique minimizes the change in error per update, so if the error is small initially it remains small. We show that an updating algorithm for large sparse matrices should be sub-linear in the matrix dimension in order to be practical for large problems, and demonstrate a simple modification to the original technique that meets the requirements.
Fast Low-Rank Shared Dictionary Learning for Image Classification.
Tiep Huu Vu; Monga, Vishal
2017-11-01
Despite the fact that different objects possess distinct class-specific features, they also usually share common patterns. This observation has been exploited partially in a recently proposed dictionary learning framework by separating the particularity and the commonality (COPAR). Inspired by this, we propose a novel method to explicitly and simultaneously learn a set of common patterns as well as class-specific features for classification with more intuitive constraints. Our dictionary learning framework is hence characterized by both a shared dictionary and particular (class-specific) dictionaries. For the shared dictionary, we enforce a low-rank constraint, i.e., claim that its spanning subspace should have low dimension and the coefficients corresponding to this dictionary should be similar. For the particular dictionaries, we impose on them the well-known constraints stated in the Fisher discrimination dictionary learning (FDDL). Furthermore, we develop new fast and accurate algorithms to solve the subproblems in the learning step, accelerating its convergence. The said algorithms could also be applied to FDDL and its extensions. The efficiencies of these algorithms are theoretically and experimentally verified by comparing their complexities and running time with those of other well-known dictionary learning methods. Experimental results on widely used image data sets establish the advantages of our method over the state-of-the-art dictionary learning methods.
Pyrolysis characteristics and kinetics of low rank coals by distributed activation energy model
International Nuclear Information System (INIS)
Song, Huijuan; Liu, Guangrui; Wu, Jinhu
2016-01-01
Highlights: • Types of carbon in coal structure were investigated by curve-fitted "1"3C NMR spectra. • The work related pyrolysis characteristics and kinetics with coal structure. • Pyrolysis kinetics of low rank coals were studied by DAEM with Miura integral method. • DAEM could supply accurate extrapolations under relatively higher heating rates. - Abstract: The work was conducted to investigate pyrolysis characteristics and kinetics of low rank coals relating with coal structure by thermogravimetric analysis (TGA), the distributed activation energy model (DAEM) and solid-state "1"3C Nuclear Magnetic Resonance (NMR). Four low rank coals selected from different mines in China were studied in the paper. TGA was carried out with a non-isothermal temperature program in N_2 at the heating rate of 5, 10, 20 and 30 °C/min to estimate pyrolysis processes of coal samples. The results showed that corresponding characteristic temperatures and the maximum mass loss rates increased as heating rate increased. Pyrolysis kinetics parameters were investigated by the DAEM using Miura integral method. The DAEM was accurate verified by the good fit between the experimental and calculated curves of conversion degree x at the selected heating rates and relatively higher heating rates. The average activation energy was 331 kJ/mol (coal NM), 298 kJ/mol (coal NX), 302 kJ/mol (coal HLJ) and 196 kJ/mol (coal SD), respectively. The curve-fitting analysis of "1"3C NMR spectra was performed to characterize chemical structures of low rank coals. The results showed that various types of carbon functional groups with different relative contents existed in coal structure. The work indicated that pyrolysis characteristics and kinetics of low rank coals were closely associated with their chemical structures.
Jiang, Wenwen; Larson, Peder E Z; Lustig, Michael
2018-03-09
To correct gradient timing delays in non-Cartesian MRI while simultaneously recovering corruption-free auto-calibration data for parallel imaging, without additional calibration scans. The calibration matrix constructed from multi-channel k-space data should be inherently low-rank. This property is used to construct reconstruction kernels or sensitivity maps. Delays between the gradient hardware across different axes and RF receive chain, which are relatively benign in Cartesian MRI (excluding EPI), lead to trajectory deviations and hence data inconsistencies for non-Cartesian trajectories. These in turn lead to higher rank and corrupted calibration information which hampers the reconstruction. Here, a method named Simultaneous Auto-calibration and Gradient delays Estimation (SAGE) is proposed that estimates the actual k-space trajectory while simultaneously recovering the uncorrupted auto-calibration data. This is done by estimating the gradient delays that result in the lowest rank of the calibration matrix. The Gauss-Newton method is used to solve the non-linear problem. The method is validated in simulations using center-out radial, projection reconstruction and spiral trajectories. Feasibility is demonstrated on phantom and in vivo scans with center-out radial and projection reconstruction trajectories. SAGE is able to estimate gradient timing delays with high accuracy at a signal to noise ratio level as low as 5. The method is able to effectively remove artifacts resulting from gradient timing delays and restore image quality in center-out radial, projection reconstruction, and spiral trajectories. The low-rank based method introduced simultaneously estimates gradient timing delays and provides accurate auto-calibration data for improved image quality, without any additional calibration scans. © 2018 International Society for Magnetic Resonance in Medicine.
Low-rank and sparse modeling for visual analysis
Fu, Yun
2014-01-01
This book provides a view of low-rank and sparse computing, especially approximation, recovery, representation, scaling, coding, embedding and learning among unconstrained visual data. The book includes chapters covering multiple emerging topics in this new field. It links multiple popular research fields in Human-Centered Computing, Social Media, Image Classification, Pattern Recognition, Computer Vision, Big Data, and Human-Computer Interaction. Contains an overview of the low-rank and sparse modeling techniques for visual analysis by examining both theoretical analysis and real-world applic
El Gharamti, Mohamad
2012-04-01
Accurate knowledge of the movement of contaminants in porous media is essential to track their trajectory and later extract them from the aquifer. A two-dimensional flow model is implemented and then applied on a linear contaminant transport model in the same porous medium. Because of different sources of uncertainties, this coupled model might not be able to accurately track the contaminant state. Incorporating observations through the process of data assimilation can guide the model toward the true trajectory of the system. The Kalman filter (KF), or its nonlinear invariants, can be used to tackle this problem. To overcome the prohibitive computational cost of the KF, the singular evolutive Kalman filter (SEKF) and the singular fixed Kalman filter (SFKF) are used, which are variants of the KF operating with low-rank covariance matrices. Experimental results suggest that under perfect and imperfect model setups, the low-rank filters can provide estimates as accurate as the full KF but at much lower computational effort. Low-rank filters are demonstrated to significantly reduce the computational effort of the KF to almost 3%. © 2012 American Society of Civil Engineers.
Robust Visual Tracking Via Consistent Low-Rank Sparse Learning
Zhang, Tianzhu; Liu, Si; Ahuja, Narendra; Yang, Ming-Hsuan; Ghanem, Bernard
2014-01-01
and the low-rank minimization problem for learning joint sparse representations can be efficiently solved by a sequence of closed form update operations. We evaluate the proposed CLRST algorithm against 14 state-of-the-art tracking methods on a set of 25
Tang, Xin; Feng, Guo-Can; Li, Xiao-Xin; Cai, Jia-Xin
2015-01-01
Face recognition is challenging especially when the images from different persons are similar to each other due to variations in illumination, expression, and occlusion. If we have sufficient training images of each person which can span the facial variations of that person under testing conditions, sparse representation based classification (SRC) achieves very promising results. However, in many applications, face recognition often encounters the small sample size problem arising from the small number of available training images for each person. In this paper, we present a novel face recognition framework by utilizing low-rank and sparse error matrix decomposition, and sparse coding techniques (LRSE+SC). Firstly, the low-rank matrix recovery technique is applied to decompose the face images per class into a low-rank matrix and a sparse error matrix. The low-rank matrix of each individual is a class-specific dictionary and it captures the discriminative feature of this individual. The sparse error matrix represents the intra-class variations, such as illumination, expression changes. Secondly, we combine the low-rank part (representative basis) of each person into a supervised dictionary and integrate all the sparse error matrix of each individual into a within-individual variant dictionary which can be applied to represent the possible variations between the testing and training images. Then these two dictionaries are used to code the query image. The within-individual variant dictionary can be shared by all the subjects and only contribute to explain the lighting conditions, expressions, and occlusions of the query image rather than discrimination. At last, a reconstruction-based scheme is adopted for face recognition. Since the within-individual dictionary is introduced, LRSE+SC can handle the problem of the corrupted training data and the situation that not all subjects have enough samples for training. Experimental results show that our method achieves the
Directory of Open Access Journals (Sweden)
Xin Tang
Full Text Available Face recognition is challenging especially when the images from different persons are similar to each other due to variations in illumination, expression, and occlusion. If we have sufficient training images of each person which can span the facial variations of that person under testing conditions, sparse representation based classification (SRC achieves very promising results. However, in many applications, face recognition often encounters the small sample size problem arising from the small number of available training images for each person. In this paper, we present a novel face recognition framework by utilizing low-rank and sparse error matrix decomposition, and sparse coding techniques (LRSE+SC. Firstly, the low-rank matrix recovery technique is applied to decompose the face images per class into a low-rank matrix and a sparse error matrix. The low-rank matrix of each individual is a class-specific dictionary and it captures the discriminative feature of this individual. The sparse error matrix represents the intra-class variations, such as illumination, expression changes. Secondly, we combine the low-rank part (representative basis of each person into a supervised dictionary and integrate all the sparse error matrix of each individual into a within-individual variant dictionary which can be applied to represent the possible variations between the testing and training images. Then these two dictionaries are used to code the query image. The within-individual variant dictionary can be shared by all the subjects and only contribute to explain the lighting conditions, expressions, and occlusions of the query image rather than discrimination. At last, a reconstruction-based scheme is adopted for face recognition. Since the within-individual dictionary is introduced, LRSE+SC can handle the problem of the corrupted training data and the situation that not all subjects have enough samples for training. Experimental results show that our
Tang, Xin; Feng, Guo-can; Li, Xiao-xin; Cai, Jia-xin
2015-01-01
Face recognition is challenging especially when the images from different persons are similar to each other due to variations in illumination, expression, and occlusion. If we have sufficient training images of each person which can span the facial variations of that person under testing conditions, sparse representation based classification (SRC) achieves very promising results. However, in many applications, face recognition often encounters the small sample size problem arising from the small number of available training images for each person. In this paper, we present a novel face recognition framework by utilizing low-rank and sparse error matrix decomposition, and sparse coding techniques (LRSE+SC). Firstly, the low-rank matrix recovery technique is applied to decompose the face images per class into a low-rank matrix and a sparse error matrix. The low-rank matrix of each individual is a class-specific dictionary and it captures the discriminative feature of this individual. The sparse error matrix represents the intra-class variations, such as illumination, expression changes. Secondly, we combine the low-rank part (representative basis) of each person into a supervised dictionary and integrate all the sparse error matrix of each individual into a within-individual variant dictionary which can be applied to represent the possible variations between the testing and training images. Then these two dictionaries are used to code the query image. The within-individual variant dictionary can be shared by all the subjects and only contribute to explain the lighting conditions, expressions, and occlusions of the query image rather than discrimination. At last, a reconstruction-based scheme is adopted for face recognition. Since the within-individual dictionary is introduced, LRSE+SC can handle the problem of the corrupted training data and the situation that not all subjects have enough samples for training. Experimental results show that our method achieves the
Color correction with blind image restoration based on multiple images using a low-rank model
Li, Dong; Xie, Xudong; Lam, Kin-Man
2014-03-01
We present a method that can handle the color correction of multiple photographs with blind image restoration simultaneously and automatically. We prove that the local colors of a set of images of the same scene exhibit the low-rank property locally both before and after a color-correction operation. This property allows us to correct all kinds of errors in an image under a low-rank matrix model without particular priors or assumptions. The possible errors may be caused by changes of viewpoint, large illumination variations, gross pixel corruptions, partial occlusions, etc. Furthermore, a new iterative soft-segmentation method is proposed for local color transfer using color influence maps. Due to the fact that the correct color information and the spatial information of images can be recovered using the low-rank model, more precise color correction and many other image-restoration tasks-including image denoising, image deblurring, and gray-scale image colorizing-can be performed simultaneously. Experiments have verified that our method can achieve consistent and promising results on uncontrolled real photographs acquired from the Internet and that it outperforms current state-of-the-art methods.
Robust Visual Tracking Via Consistent Low-Rank Sparse Learning
Zhang, Tianzhu
2014-06-19
Object tracking is the process of determining the states of a target in consecutive video frames based on properties of motion and appearance consistency. In this paper, we propose a consistent low-rank sparse tracker (CLRST) that builds upon the particle filter framework for tracking. By exploiting temporal consistency, the proposed CLRST algorithm adaptively prunes and selects candidate particles. By using linear sparse combinations of dictionary templates, the proposed method learns the sparse representations of image regions corresponding to candidate particles jointly by exploiting the underlying low-rank constraints. In addition, the proposed CLRST algorithm is computationally attractive since temporal consistency property helps prune particles and the low-rank minimization problem for learning joint sparse representations can be efficiently solved by a sequence of closed form update operations. We evaluate the proposed CLRST algorithm against 14 state-of-the-art tracking methods on a set of 25 challenging image sequences. Experimental results show that the CLRST algorithm performs favorably against state-of-the-art tracking methods in terms of accuracy and execution time.
Sparse/Low Rank Constrained Reconstruction for Dynamic PET Imaging.
Directory of Open Access Journals (Sweden)
Xingjian Yu
Full Text Available In dynamic Positron Emission Tomography (PET, an estimate of the radio activity concentration is obtained from a series of frames of sinogram data taken at ranging in duration from 10 seconds to minutes under some criteria. So far, all the well-known reconstruction algorithms require known data statistical properties. It limits the speed of data acquisition, besides, it is unable to afford the separated information about the structure and the variation of shape and rate of metabolism which play a major role in improving the visualization of contrast for some requirement of the diagnosing in application. This paper presents a novel low rank-based activity map reconstruction scheme from emission sinograms of dynamic PET, termed as SLCR representing Sparse/Low Rank Constrained Reconstruction for Dynamic PET Imaging. In this method, the stationary background is formulated as a low rank component while variations between successive frames are abstracted to the sparse. The resulting nuclear norm and l1 norm related minimization problem can also be efficiently solved by many recently developed numerical methods. In this paper, the linearized alternating direction method is applied. The effectiveness of the proposed scheme is illustrated on three data sets.
Robust subspace estimation using low-rank optimization theory and applications
Oreifej, Omar
2014-01-01
Various fundamental applications in computer vision and machine learning require finding the basis of a certain subspace. Examples of such applications include face detection, motion estimation, and activity recognition. An increasing interest has been recently placed on this area as a result of significant advances in the mathematics of matrix rank optimization. Interestingly, robust subspace estimation can be posed as a low-rank optimization problem, which can be solved efficiently using techniques such as the method of Augmented Lagrange Multiplier. In this book,?the authors?discuss fundame
Solving block linear systems with low-rank off-diagonal blocks is easily parallelizable
Energy Technology Data Exchange (ETDEWEB)
Menkov, V. [Indiana Univ., Bloomington, IN (United States)
1996-12-31
An easily and efficiently parallelizable direct method is given for solving a block linear system Bx = y, where B = D + Q is the sum of a non-singular block diagonal matrix D and a matrix Q with low-rank blocks. This implicitly defines a new preconditioning method with an operation count close to the cost of calculating a matrix-vector product Qw for some w, plus at most twice the cost of calculating Qw for some w. When implemented on a parallel machine the processor utilization can be as good as that of those operations. Order estimates are given for the general case, and an implementation is compared to block SSOR preconditioning.
Global sensitivity analysis using low-rank tensor approximations
International Nuclear Information System (INIS)
Konakli, Katerina; Sudret, Bruno
2016-01-01
In the context of global sensitivity analysis, the Sobol' indices constitute a powerful tool for assessing the relative significance of the uncertain input parameters of a model. We herein introduce a novel approach for evaluating these indices at low computational cost, by post-processing the coefficients of polynomial meta-models belonging to the class of low-rank tensor approximations. Meta-models of this class can be particularly efficient in representing responses of high-dimensional models, because the number of unknowns in their general functional form grows only linearly with the input dimension. The proposed approach is validated in example applications, where the Sobol' indices derived from the meta-model coefficients are compared to reference indices, the latter obtained by exact analytical solutions or Monte-Carlo simulation with extremely large samples. Moreover, low-rank tensor approximations are confronted to the popular polynomial chaos expansion meta-models in case studies that involve analytical rank-one functions and finite-element models pertinent to structural mechanics and heat conduction. In the examined applications, indices based on the novel approach tend to converge faster to the reference solution with increasing size of the experimental design used to build the meta-model. - Highlights: • A new method is proposed for global sensitivity analysis of high-dimensional models. • Low-rank tensor approximations (LRA) are used as a meta-modeling technique. • Analytical formulas for the Sobol' indices in terms of LRA coefficients are derived. • The accuracy and efficiency of the approach is illustrated in application examples. • LRA-based indices are compared to indices based on polynomial chaos expansions.
A novel deep learning algorithm for incomplete face recognition: Low-rank-recovery network.
Zhao, Jianwei; Lv, Yongbiao; Zhou, Zhenghua; Cao, Feilong
2017-10-01
There have been a lot of methods to address the recognition of complete face images. However, in real applications, the images to be recognized are usually incomplete, and it is more difficult to realize such a recognition. In this paper, a novel convolution neural network frame, named a low-rank-recovery network (LRRNet), is proposed to conquer the difficulty effectively inspired by matrix completion and deep learning techniques. The proposed LRRNet first recovers the incomplete face images via an approach of matrix completion with the truncated nuclear norm regularization solution, and then extracts some low-rank parts of the recovered images as the filters. With these filters, some important features are obtained by means of the binaryzation and histogram algorithms. Finally, these features are classified with the classical support vector machines (SVMs). The proposed LRRNet method has high face recognition rate for the heavily corrupted images, especially for the images in the large databases. The proposed LRRNet performs well and efficiently for the images with heavily corrupted, especially in the case of large databases. Extensive experiments on several benchmark databases demonstrate that the proposed LRRNet performs better than some other excellent robust face recognition methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Meiting Yu
2018-02-01
Full Text Available The extraction of a valuable set of features and the design of a discriminative classifier are crucial for target recognition in SAR image. Although various features and classifiers have been proposed over the years, target recognition under extended operating conditions (EOCs is still a challenging problem, e.g., target with configuration variation, different capture orientations, and articulation. To address these problems, this paper presents a new strategy for target recognition. We first propose a low-dimensional representation model via incorporating multi-manifold regularization term into the low-rank matrix factorization framework. Two rules, pairwise similarity and local linearity, are employed for constructing multiple manifold regularization. By alternately optimizing the matrix factorization and manifold selection, the feature representation model can not only acquire the optimal low-rank approximation of original samples, but also capture the intrinsic manifold structure information. Then, to take full advantage of the local structure property of features and further improve the discriminative ability, local sparse representation is proposed for classification. Finally, extensive experiments on moving and stationary target acquisition and recognition (MSTAR database demonstrate the effectiveness of the proposed strategy, including target recognition under EOCs, as well as the capability of small training size.
Tensor Factorization for Low-Rank Tensor Completion.
Zhou, Pan; Lu, Canyi; Lin, Zhouchen; Zhang, Chao
2018-03-01
Recently, a tensor nuclear norm (TNN) based method was proposed to solve the tensor completion problem, which has achieved state-of-the-art performance on image and video inpainting tasks. However, it requires computing tensor singular value decomposition (t-SVD), which costs much computation and thus cannot efficiently handle tensor data, due to its natural large scale. Motivated by TNN, we propose a novel low-rank tensor factorization method for efficiently solving the 3-way tensor completion problem. Our method preserves the low-rank structure of a tensor by factorizing it into the product of two tensors of smaller sizes. In the optimization process, our method only needs to update two smaller tensors, which can be more efficiently conducted than computing t-SVD. Furthermore, we prove that the proposed alternating minimization algorithm can converge to a Karush-Kuhn-Tucker point. Experimental results on the synthetic data recovery, image and video inpainting tasks clearly demonstrate the superior performance and efficiency of our developed method over state-of-the-arts including the TNN and matricization methods.
Improved magnetic resonance fingerprinting reconstruction with low-rank and subspace modeling.
Zhao, Bo; Setsompop, Kawin; Adalsteinsson, Elfar; Gagoski, Borjan; Ye, Huihui; Ma, Dan; Jiang, Yun; Ellen Grant, P; Griswold, Mark A; Wald, Lawrence L
2018-02-01
This article introduces a constrained imaging method based on low-rank and subspace modeling to improve the accuracy and speed of MR fingerprinting (MRF). A new model-based imaging method is developed for MRF to reconstruct high-quality time-series images and accurate tissue parameter maps (e.g., T 1 , T 2 , and spin density maps). Specifically, the proposed method exploits low-rank approximations of MRF time-series images, and further enforces temporal subspace constraints to capture magnetization dynamics. This allows the time-series image reconstruction problem to be formulated as a simple linear least-squares problem, which enables efficient computation. After image reconstruction, tissue parameter maps are estimated via dictionary-based pattern matching, as in the conventional approach. The effectiveness of the proposed method was evaluated with in vivo experiments. Compared with the conventional MRF reconstruction, the proposed method reconstructs time-series images with significantly reduced aliasing artifacts and noise contamination. Although the conventional approach exhibits some robustness to these corruptions, the improved time-series image reconstruction in turn provides more accurate tissue parameter maps. The improvement is pronounced especially when the acquisition time becomes short. The proposed method significantly improves the accuracy of MRF, and also reduces data acquisition time. Magn Reson Med 79:933-942, 2018. © 2017 International Society for Magnetic Resonance in Medicine. © 2017 International Society for Magnetic Resonance in Medicine.
Manifold Based Low-rank Regularization for Image Restoration and Semi-supervised Learning
Lai, Rongjie; Li, Jia
2017-01-01
Low-rank structures play important role in recent advances of many problems in image science and data science. As a natural extension of low-rank structures for data with nonlinear structures, the concept of the low-dimensional manifold structure has been considered in many data processing problems. Inspired by this concept, we consider a manifold based low-rank regularization as a linear approximation of manifold dimension. This regularization is less restricted than the global low-rank regu...
Direct liquefaction of low-rank coals under mild conditions
Energy Technology Data Exchange (ETDEWEB)
Braun, N.; Rinaldi, R. [Max-Planck-Institut fuer Kohlenforschung, Muelheim an der Ruhr (Germany)
2013-11-01
Due to decreasing of petroleum reserves, direct coal liquefaction is attracting renewed interest as an alternative process to produce liquid fuels. The combination of hydrogen peroxide and coal is not a new one. In the early 1980, Vasilakos and Clinton described a procedure for desulfurization by leaching coal with solutions of sulphuric acid/H{sub 2}O{sub 2}. But so far, H{sub 2}O{sub 2} has never been ascribed a major role in coal liquefaction. Herein, we describe a novel approach for liquefying low-rank coals using a solution of H{sub 2}O{sub 2} in presence of a soluble non-transition metal catalyst. (orig.)
Weighted Low-Rank Approximation of Matrices and Background Modeling
Dutta, Aritra
2018-04-15
We primarily study a special a weighted low-rank approximation of matrices and then apply it to solve the background modeling problem. We propose two algorithms for this purpose: one operates in the batch mode on the entire data and the other one operates in the batch-incremental mode on the data and naturally captures more background variations and computationally more effective. Moreover, we propose a robust technique that learns the background frame indices from the data and does not require any training frames. We demonstrate through extensive experiments that by inserting a simple weight in the Frobenius norm, it can be made robust to the outliers similar to the $\\\\ell_1$ norm. Our methods match or outperform several state-of-the-art online and batch background modeling methods in virtually all quantitative and qualitative measures.
Weighted Low-Rank Approximation of Matrices and Background Modeling
Dutta, Aritra; Li, Xin; Richtarik, Peter
2018-01-01
We primarily study a special a weighted low-rank approximation of matrices and then apply it to solve the background modeling problem. We propose two algorithms for this purpose: one operates in the batch mode on the entire data and the other one operates in the batch-incremental mode on the data and naturally captures more background variations and computationally more effective. Moreover, we propose a robust technique that learns the background frame indices from the data and does not require any training frames. We demonstrate through extensive experiments that by inserting a simple weight in the Frobenius norm, it can be made robust to the outliers similar to the $\\ell_1$ norm. Our methods match or outperform several state-of-the-art online and batch background modeling methods in virtually all quantitative and qualitative measures.
Tian, Shu; Zhang, Ye; Yan, Yimin; Su, Nan; Zhang, Junping
2016-09-01
Latent low-rank representation (LatLRR) has been attached considerable attention in the field of remote sensing image segmentation, due to its effectiveness in exploring the multiple subspace structures of data. However, the increasingly heterogeneous texture information in the high spatial resolution remote sensing images, leads to more severe interference of pixels in local neighborhood, and the LatLRR fails to capture the local complex structure information. Therefore, we present a local sparse structure constrainted latent low-rank representation (LSSLatLRR) segmentation method, which explicitly imposes the local sparse structure constraint on LatLRR to capture the intrinsic local structure in manifold structure feature subspaces. The whole segmentation framework can be viewed as two stages in cascade. In the first stage, we use the local histogram transform to extract the texture local histogram features (LHOG) at each pixel, which can efficiently capture the complex and micro-texture pattern. In the second stage, a local sparse structure (LSS) formulation is established on LHOG, which aims to preserve the local intrinsic structure and enhance the relationship between pixels having similar local characteristics. Meanwhile, by integrating the LSS and the LatLRR, we can efficiently capture the local sparse and low-rank structure in the mixture of feature subspace, and we adopt the subspace segmentation method to improve the segmentation accuracy. Experimental results on the remote sensing images with different spatial resolution show that, compared with three state-of-the-art image segmentation methods, the proposed method achieves more accurate segmentation results.
Petrov, Andrii Y; Herbst, Michael; Andrew Stenger, V
2017-08-15
Rapid whole-brain dynamic Magnetic Resonance Imaging (MRI) is of particular interest in Blood Oxygen Level Dependent (BOLD) functional MRI (fMRI). Faster acquisitions with higher temporal sampling of the BOLD time-course provide several advantages including increased sensitivity in detecting functional activation, the possibility of filtering out physiological noise for improving temporal SNR, and freezing out head motion. Generally, faster acquisitions require undersampling of the data which results in aliasing artifacts in the object domain. A recently developed low-rank (L) plus sparse (S) matrix decomposition model (L+S) is one of the methods that has been introduced to reconstruct images from undersampled dynamic MRI data. The L+S approach assumes that the dynamic MRI data, represented as a space-time matrix M, is a linear superposition of L and S components, where L represents highly spatially and temporally correlated elements, such as the image background, while S captures dynamic information that is sparse in an appropriate transform domain. This suggests that L+S might be suited for undersampled task or slow event-related fMRI acquisitions because the periodic nature of the BOLD signal is sparse in the temporal Fourier transform domain and slowly varying low-rank brain background signals, such as physiological noise and drift, will be predominantly low-rank. In this work, as a proof of concept, we exploit the L+S method for accelerating block-design fMRI using a 3D stack of spirals (SoS) acquisition where undersampling is performed in the k z -t domain. We examined the feasibility of the L+S method to accurately separate temporally correlated brain background information in the L component while capturing periodic BOLD signals in the S component. We present results acquired in control human volunteers at 3T for both retrospective and prospectively acquired fMRI data for a visual activation block-design task. We show that a SoS fMRI acquisition with an
Modeling of pseudoacoustic P-waves in orthorhombic media with a low-rank approximation
Song, Xiaolei
2013-06-04
Wavefield extrapolation in pseudoacoustic orthorhombic anisotropic media suffers from wave-mode coupling and stability limitations in the parameter range. We use the dispersion relation for scalar wave propagation in pseudoacoustic orthorhombic media to model acoustic wavefields. The wavenumber-domain application of the Laplacian operator allows us to propagate the P-waves exclusively, without imposing any conditions on the parameter range of stability. It also allows us to avoid dispersion artifacts commonly associated with evaluating the Laplacian operator in space domain using practical finite-difference stencils. To handle the corresponding space-wavenumber mixed-domain operator, we apply the low-rank approximation approach. Considering the number of parameters necessary to describe orthorhombic anisotropy, the low-rank approach yields space-wavenumber decomposition of the extrapolator operator that is dependent on space location regardless of the parameters, a feature necessary for orthorhombic anisotropy. Numerical experiments that the proposed wavefield extrapolator is accurate and practically free of dispersion. Furthermore, there is no coupling of qSv and qP waves because we use the analytical dispersion solution corresponding to the P-wave.
Catalytic briquettes from low-rank coal for NO reduction
Energy Technology Data Exchange (ETDEWEB)
A. Boyano; M.E. Galvez; R. Moliner; M.J. Lazaro [Instituto de Carboquimica, CSIC, Zaragoza (Spain)
2007-07-01
The briquetting is one of the most ancient and widespread techniques of coal agglomeration which is nowadays becoming useless for combustion home applications. However, the social increasing interest in environmental protection opens new applications to this technique, especially in developed countries. In this work, a series of catalytic briquettes were prepared from low-rank Spanish coal and commercial pitch by means of a pressure agglomeration method. After that, they were cured in air and doped by equilibrium impregnation with vanadium compounds. Preparation conditions (especially those of activation and oxidizing process) were changed to study their effects on catalytic behaviour. Catalytic briquettes showed a relative high NO conversion at low temperatures in all cases, however, a strong relation between the preparation process and the reached NO conversion was observed. Preparation procedure has an effect not only on the NO reduction efficiency but also on the mechanical strength of the briquettes as a consequence of the structural and chemical changes carried out during the activation and oxidation procedures. Generally speaking mechanical resistance is enhanced by an optimal porous volume and the creation of new carboxyl groups on surface. Just on the contrary, NO reduction is promoted by high microporous structures and higher amounts of surface oxygen groups. Both facts force to find an optimum point in the preparation produce which will depend on the application. 24 refs., 4 figs., 3 tabs.
Carbon-free hydrogen production from low rank coal
Aziz, Muhammad; Oda, Takuya; Kashiwagi, Takao
2018-02-01
Novel carbon-free integrated system of hydrogen production and storage from low rank coal is proposed and evaluated. To measure the optimum energy efficiency, two different systems employing different chemical looping technologies are modeled. The first integrated system consists of coal drying, gasification, syngas chemical looping, and hydrogenation. On the other hand, the second system combines coal drying, coal direct chemical looping, and hydrogenation. In addition, in order to cover the consumed electricity and recover the energy, combined cycle is adopted as addition module for power generation. The objective of the study is to find the best system having the highest performance in terms of total energy efficiency, including hydrogen production efficiency and power generation efficiency. To achieve a thorough energy/heat circulation throughout each module and the whole integrated system, enhanced process integration technology is employed. It basically incorporates two core basic technologies: exergy recovery and process integration. Several operating parameters including target moisture content in drying module, operating pressure in chemical looping module, are observed in terms of their influence to energy efficiency. From process modeling and calculation, two integrated systems can realize high total energy efficiency, higher than 60%. However, the system employing coal direct chemical looping represents higher energy efficiency, including hydrogen production and power generation, which is about 83%. In addition, optimum target moisture content in drying and operating pressure in chemical looping also have been defined.
Enhancing Low-Rank Subspace Clustering by Manifold Regularization.
Liu, Junmin; Chen, Yijun; Zhang, JiangShe; Xu, Zongben
2014-07-25
Recently, low-rank representation (LRR) method has achieved great success in subspace clustering (SC), which aims to cluster the data points that lie in a union of low-dimensional subspace. Given a set of data points, LRR seeks the lowest rank representation among the many possible linear combinations of the bases in a given dictionary or in terms of the data itself. However, LRR only considers the global Euclidean structure, while the local manifold structure, which is often important for many real applications, is ignored. In this paper, to exploit the local manifold structure of the data, a manifold regularization characterized by a Laplacian graph has been incorporated into LRR, leading to our proposed Laplacian regularized LRR (LapLRR). An efficient optimization procedure, which is based on alternating direction method of multipliers (ADMM), is developed for LapLRR. Experimental results on synthetic and real data sets are presented to demonstrate that the performance of LRR has been enhanced by using the manifold regularization.
Kohn, Lucas; Tschirsich, Ferdinand; Keck, Maximilian; Plenio, Martin B.; Tamascelli, Dario; Montangero, Simone
2018-01-01
We provide evidence that randomized low-rank factorization is a powerful tool for the determination of the ground-state properties of low-dimensional lattice Hamiltonians through tensor network techniques. In particular, we show that randomized matrix factorization outperforms truncated singular value decomposition based on state-of-the-art deterministic routines in time-evolving block decimation (TEBD)- and density matrix renormalization group (DMRG)-style simulations, even when the system under study gets close to a phase transition: We report linear speedups in the bond or local dimension of up to 24 times in quasi-two-dimensional cylindrical systems.
Assessment of low-rank (LRC) drying technologies
International Nuclear Information System (INIS)
Willson, W.G.; Young, B.C.; Irwinj, W.
1992-01-01
This paper reports that low-rank coals (LRCs), brown, lignitic, and subbituminous coals, represent nearly half of the estimated coal resources in the world. In many of the developing nations, LRCs are the only source of low-cost energy. LRCs are geologically younger than higher-rank bituminous coals and are typically present in thick seams with less cover (overburden) than bituminous coals, making them recoverable by low-cost strip mining. Current pit-head coal prices for LRCs range from a low of around $0.25 per MM Btus for subbituminous coals from the USA's Powder River Basin, to highs of around $1,00 for those that are more costly to mine. On the other hand, the pit-head price of bituminous coals in the USA range from a low of around $1 to over $2 per MM Btu. Unfortunately, this differential in favor of LRC is more than offset in distant markers where, until now, it has been considered a nuisance. Often less than half of its weight is combustible, the rest being water and ash. Thus the cost of hauling it any distance at all in its untreated dry bulk form is prohibitive. However, from a utilization aspect, LRCs have a lower fuel ration (fixed carbon to volatile matter) and are typically an order of magnitude more reactive than bituminous coals. Many LRCs, including the enormous reserves in Alaska, Australia, and Indonesia, also have extremely low sulfur contents of only a few tenths of a percent. Low mining costs, high reactivity, and extremely low sulfur content would make these coals premium fuel were it not for their high moisture levels, which range from around 25% w/w to over 60% w/w. High moisture creates a mistaken perception, among major coal importers, of inferior quality, and the many positive features of LRCs are overlooked
Low-rank coal research. Quarterly report, January--March 1990
Energy Technology Data Exchange (ETDEWEB)
1990-08-01
This document contains several quarterly progress reports for low-rank coal research that was performed from January-March 1990. Reports in Control Technology and Coal Preparation Research are in Flue Gas Cleanup, Waste Management, and Regional Energy Policy Program for the Northern Great Plains. Reports in Advanced Research and Technology Development are presented in Turbine Combustion Phenomena, Combustion Inorganic Transformation (two sections), Liquefaction Reactivity of Low-Rank Coals, Gasification Ash and Slag Characterization, and Coal Science. Reports in Combustion Research cover Fluidized-Bed Combustion, Beneficiation of Low-Rank Coals, Combustion Characterization of Low-Rank Coal Fuels, Diesel Utilization of Low-Rank Coals, and Produce and Characterize HWD (hot-water drying) Fuels for Heat Engine Applications. Liquefaction Research is reported in Low-Rank Coal Direct Liquefaction. Gasification Research progress is discussed for Production of Hydrogen and By-Products from Coal and for Chemistry of Sulfur Removal in Mild Gas.
Cheng, Jiubing; Alkhalifah, Tariq Ali; Wu, Zedong; Zou, Peng; Wang, Chenlong
2016-01-01
In elastic imaging, the extrapolated vector fields are decoupled into pure wave modes, such that the imaging condition produces interpretable images. Conventionally, mode decoupling in anisotropic media is costly because the operators involved are dependent on the velocity, and thus they are not stationary. We have developed an efficient pseudospectral approach to directly extrapolate the decoupled elastic waves using low-rank approximate mixed-domain integral operators on the basis of the elastic displacement wave equation. We have applied k-space adjustment to the pseudospectral solution to allow for a relatively large extrapolation time step. The low-rank approximation was, thus, applied to the spectral operators that simultaneously extrapolate and decompose the elastic wavefields. Synthetic examples on transversely isotropic and orthorhombic models showed that our approach has the potential to efficiently and accurately simulate the propagations of the decoupled quasi-P and quasi-S modes as well as the total wavefields for elastic wave modeling, imaging, and inversion.
Cheng, Jiubing
2016-03-15
In elastic imaging, the extrapolated vector fields are decoupled into pure wave modes, such that the imaging condition produces interpretable images. Conventionally, mode decoupling in anisotropic media is costly because the operators involved are dependent on the velocity, and thus they are not stationary. We have developed an efficient pseudospectral approach to directly extrapolate the decoupled elastic waves using low-rank approximate mixed-domain integral operators on the basis of the elastic displacement wave equation. We have applied k-space adjustment to the pseudospectral solution to allow for a relatively large extrapolation time step. The low-rank approximation was, thus, applied to the spectral operators that simultaneously extrapolate and decompose the elastic wavefields. Synthetic examples on transversely isotropic and orthorhombic models showed that our approach has the potential to efficiently and accurately simulate the propagations of the decoupled quasi-P and quasi-S modes as well as the total wavefields for elastic wave modeling, imaging, and inversion.
Hyperspectral Super-Resolution of Locally Low Rank Images From Complementary Multisource Data.
Veganzones, Miguel A; Simoes, Miguel; Licciardi, Giorgio; Yokoya, Naoto; Bioucas-Dias, Jose M; Chanussot, Jocelyn
2016-01-01
Remote sensing hyperspectral images (HSIs) are quite often low rank, in the sense that the data belong to a low dimensional subspace/manifold. This has been recently exploited for the fusion of low spatial resolution HSI with high spatial resolution multispectral images in order to obtain super-resolution HSI. Most approaches adopt an unmixing or a matrix factorization perspective. The derived methods have led to state-of-the-art results when the spectral information lies in a low-dimensional subspace/manifold. However, if the subspace/manifold dimensionality spanned by the complete data set is large, i.e., larger than the number of multispectral bands, the performance of these methods mainly decreases because the underlying sparse regression problem is severely ill-posed. In this paper, we propose a local approach to cope with this difficulty. Fundamentally, we exploit the fact that real world HSIs are locally low rank, that is, pixels acquired from a given spatial neighborhood span a very low-dimensional subspace/manifold, i.e., lower or equal than the number of multispectral bands. Thus, we propose to partition the image into patches and solve the data fusion problem independently for each patch. This way, in each patch the subspace/manifold dimensionality is low enough, such that the problem is not ill-posed anymore. We propose two alternative approaches to define the hyperspectral super-resolution through local dictionary learning using endmember induction algorithms. We also explore two alternatives to define the local regions, using sliding windows and binary partition trees. The effectiveness of the proposed approaches is illustrated with synthetic and semi real data.
Producing accurate wave propagation time histories using the global matrix method
International Nuclear Information System (INIS)
Obenchain, Matthew B; Cesnik, Carlos E S
2013-01-01
This paper presents a reliable method for producing accurate displacement time histories for wave propagation in laminated plates using the global matrix method. The existence of inward and outward propagating waves in the general solution is highlighted while examining the axisymmetric case of a circular actuator on an aluminum plate. Problems with previous attempts to isolate the outward wave for anisotropic laminates are shown. The updated method develops a correction signal that can be added to the original time history solution to cancel the inward wave and leave only the outward propagating wave. The paper demonstrates the effectiveness of the new method for circular and square actuators bonded to the surface of isotropic laminates, and these results are compared with exact solutions. Results for circular actuators on cross-ply laminates are also presented and compared with experimental results, showing the ability of the new method to successfully capture the displacement time histories for composite laminates. (paper)
Wang, Yang; Wu, Lin
2018-07-01
Low-Rank Representation (LRR) is arguably one of the most powerful paradigms for Multi-view spectral clustering, which elegantly encodes the multi-view local graph/manifold structures into an intrinsic low-rank self-expressive data similarity embedded in high-dimensional space, to yield a better graph partition than their single-view counterparts. In this paper we revisit it with a fundamentally different perspective by discovering LRR as essentially a latent clustered orthogonal projection based representation winged with an optimized local graph structure for spectral clustering; each column of the representation is fundamentally a cluster basis orthogonal to others to indicate its members, which intuitively projects the view-specific feature representation to be the one spanned by all orthogonal basis to characterize the cluster structures. Upon this finding, we propose our technique with the following: (1) We decompose LRR into latent clustered orthogonal representation via low-rank matrix factorization, to encode the more flexible cluster structures than LRR over primal data objects; (2) We convert the problem of LRR into that of simultaneously learning orthogonal clustered representation and optimized local graph structure for each view; (3) The learned orthogonal clustered representations and local graph structures enjoy the same magnitude for multi-view, so that the ideal multi-view consensus can be readily achieved. The experiments over multi-view datasets validate its superiority, especially over recent state-of-the-art LRR models. Copyright © 2018 Elsevier Ltd. All rights reserved.
Exponential Family Functional data analysis via a low-rank model.
Li, Gen; Huang, Jianhua Z; Shen, Haipeng
2018-05-08
In many applications, non-Gaussian data such as binary or count are observed over a continuous domain and there exists a smooth underlying structure for describing such data. We develop a new functional data method to deal with this kind of data when the data are regularly spaced on the continuous domain. Our method, referred to as Exponential Family Functional Principal Component Analysis (EFPCA), assumes the data are generated from an exponential family distribution, and the matrix of the canonical parameters has a low-rank structure. The proposed method flexibly accommodates not only the standard one-way functional data, but also two-way (or bivariate) functional data. In addition, we introduce a new cross validation method for estimating the latent rank of a generalized data matrix. We demonstrate the efficacy of the proposed methods using a comprehensive simulation study. The proposed method is also applied to a real application of the UK mortality study, where data are binomially distributed and two-way functional across age groups and calendar years. The results offer novel insights into the underlying mortality pattern. © 2018, The International Biometric Society.
Efficient Tensor Completion for Color Image and Video Recovery: Low-Rank Tensor Train.
Bengua, Johann A; Phien, Ho N; Tuan, Hoang Duong; Do, Minh N
2017-05-01
This paper proposes a novel approach to tensor completion, which recovers missing entries of data represented by tensors. The approach is based on the tensor train (TT) rank, which is able to capture hidden information from tensors thanks to its definition from a well-balanced matricization scheme. Accordingly, new optimization formulations for tensor completion are proposed as well as two new algorithms for their solution. The first one called simple low-rank tensor completion via TT (SiLRTC-TT) is intimately related to minimizing a nuclear norm based on TT rank. The second one is from a multilinear matrix factorization model to approximate the TT rank of a tensor, and is called tensor completion by parallel matrix factorization via TT (TMac-TT). A tensor augmentation scheme of transforming a low-order tensor to higher orders is also proposed to enhance the effectiveness of SiLRTC-TT and TMac-TT. Simulation results for color image and video recovery show the clear advantage of our method over all other methods.
Synfuels from low-rank coals at the Great Plains Gasification Plant
International Nuclear Information System (INIS)
Pollock, D.
1992-01-01
This presentation focuses on the use of low rank coals to form synfuels. A worldwide abundance of low rank coals exists. Large deposits in the United States are located in Texas and North Dakota. Low rank coal deposits are also found in Europe, India and Australia. Because of the high moisture content of lignite ranging from 30% to 60% or higher, it is usually utilized in mine mouth applications. Lignite is generally very reactive and contains varying amounts of ash and sulfur. Typical uses for lignite are listed. A commercial application using lignite as feedstock to a synfuels plant, Dakota Gasification Company's Great Plains Gasification Plant, is discussed
Cho, JaeJin; Park, HyunWook
2018-05-17
To acquire interleaved bipolar data and reconstruct the full data using low-rank property for water fat separation. Bipolar acquisition suffers from issues related to gradient switching, the opposite gradient polarities, and other system imperfections, which prevent accurate water-fat separation. In this study, an interleaved bipolar acquisition scheme and a low-rank reconstruction method were proposed to reduce issues from the bipolar gradients while achieving a short imaging time. The proposed interleaved bipolar acquisition scheme collects echo-time signals from both gradient polarities; however, the sequence increases the imaging time. To reduce the imaging time, the signals were subsampled at every dimension of k-space. The low-rank property of the bipolar acquisition was defined and exploited to estimate the full data from the acquired subsampled data. To eliminate the bipolar issues, in the proposed method, the water-fat separation was performed separately for each gradient polarity, and the results for the positive and negative gradient polarities were combined after the water-fat separation. A phantom study and in-vivo experiments were conducted on a 3T Siemens Verio system. The results for the proposed method were compared with the results of the fully sampled interleaved bipolar acquisition and Soliman's method, which was the previous water-fat separation approach for reducing the issues of bipolar gradients and accelerating the interleaved bipolar acquisition. The proposed method provided accurate water and fat images without the issues of bipolar gradients and demonstrated a better performance compared with the results of the previous methods. The water-fat separation using the bipolar acquisition has several benefits including a short echo-spacing time. However, it suffers from bipolar-gradient issues such as strong gradient switching, system imperfection, and eddy current effects. This study demonstrated that accurate water-fat separated images can
Efficient anisotropic quasi-P wavefield extrapolation using an isotropic low-rank approximation
Zhang, Zhendong
2017-12-17
The computational cost of quasi-P wave extrapolation depends on the complexity of the medium, and specifically the anisotropy. Our effective-model method splits the anisotropic dispersion relation into an isotropic background and a correction factor to handle this dependency. The correction term depends on the slope (measured using the gradient) of current wavefields and the anisotropy. As a result, the computational cost is independent of the nature of anisotropy, which makes the extrapolation efficient. A dynamic implementation of this approach decomposes the original pseudo-differential operator into a Laplacian, handled using the low-rank approximation of the spectral operator, plus an angular dependent correction factor applied in the space domain to correct for anisotropy. We analyze the role played by the correction factor and propose a new spherical decomposition of the dispersion relation. The proposed method provides accurate wavefields in phase and more balanced amplitudes than a previous spherical decomposition. Also, it is free of SV-wave artifacts. Applications to a simple homogeneous transverse isotropic medium with a vertical symmetry axis (VTI) and a modified Hess VTI model demonstrate the effectiveness of the approach. The Reverse Time Migration (RTM) applied to a modified BP VTI model reveals that the anisotropic migration using the proposed modeling engine performs better than an isotropic migration.
Efficient anisotropic quasi-P wavefield extrapolation using an isotropic low-rank approximation
Zhang, Zhendong; Liu, Yike; Alkhalifah, Tariq Ali; Wu, Zedong
2017-01-01
efficient. A dynamic implementation of this approach decomposes the original pseudo-differential operator into a Laplacian, handled using the low-rank approximation of the spectral operator, plus an angular dependent correction factor applied in the space
Low-rank coal research, Task 5.1. Topical report, April 1986--December 1992
Energy Technology Data Exchange (ETDEWEB)
1993-02-01
This document is a topical progress report for Low-Rank Coal Research performed April 1986 - December 1992. Control Technology and Coal Preparation Research is described for Flue Gas Cleanup, Waste Management, Regional Energy Policy Program for the Northern Great Plains, and Hot-Gas Cleanup. Advanced Research and Technology Development was conducted on Turbine Combustion Phenomena, Combustion Inorganic Transformation (two sections), Liquefaction Reactivity of Low-Rank Coals, Gasification Ash and Slag Characterization, and Coal Science. Combustion Research is described for Atmospheric Fluidized-Bed Combustion, Beneficiation of Low-Rank Coals, Combustion Characterization of Low-Rank Fuels (completed 10/31/90), Diesel Utilization of Low-Rank Coals (completed 12/31/90), Produce and Characterize HWD (hot-water drying) Fuels for Heat Engine Applications (completed 10/31/90), Nitrous Oxide Emission, and Pressurized Fluidized-Bed Combustion. Liquefaction Research in Low-Rank Coal Direct Liquefaction is discussed. Gasification Research was conducted in Production of Hydrogen and By-Products from Coals and in Sulfur Forms in Coal.
Clean utilization of low-rank coals for low-cost power generation
International Nuclear Information System (INIS)
Sondreal, E.A.
1992-01-01
Despite the unique utilization problems of low-rank coals, the ten US steam electric plants having the lowest operating cost in 1990 were all fueled on either lignite or subbituminous coal. Ash deposition problems, which have been a major barrier to sustaining high load on US boilers burning high-sodium low-rank coals, have been substantially reduced by improvements in coal selection, boiler design, on-line cleaning, operating conditions, and additives. Advantages of low-rank coals in advanced systems are their noncaking behavior when heated, their high reactivity allowing more complete reaction at lower temperatures, and the low sulfur content of selected deposits. The principal barrier issues are the high-temperature behavior of ash and volatile alkali derived from the coal-bound sodium found in some low-rank coals. Successful upgrading of low-rank coals requires that the product be both stable and suitable for end use in conventional and advanced systems. Coal-water fuel produced by hydrothermal processing of high-moisture low-rank coal meets these criteria, whereas most dry products from drying or carbonizing in hot gas tend to create dust and spontaneous ignition problems unless coated, agglomerated, briquetted, or afforded special handling
Low-rank coal study. Volume 4. Regulatory, environmental, and market analyses
Energy Technology Data Exchange (ETDEWEB)
1980-11-01
The regulatory, environmental, and market constraints to development of US low-rank coal resources are analyzed. Government-imposed environmental and regulatory requirements are among the most important factors that determine the markets for low-rank coal and the technology used in the extraction, delivery, and utilization systems. Both state and federal controls are examined, in light of available data on impacts and effluents associated with major low-rank coal development efforts. The market analysis examines both the penetration of existing markets by low-rank coal and the evolution of potential markets in the future. The electric utility industry consumes about 99 percent of the total low-rank coal production. This use in utility boilers rose dramatically in the 1970's and is expected to continue to grow rapidly. In the late 1980's and 1990's, industrial direct use of low-rank coal and the production of synthetic fuels are expected to start growing as major new markets.
Chiew, Mark; Graedel, Nadine N; Miller, Karla L
2018-07-01
Recent developments in highly accelerated fMRI data acquisition have employed low-rank and/or sparsity constraints for image reconstruction, as an alternative to conventional, time-independent parallel imaging. When under-sampling factors are high or the signals of interest are low-variance, however, functional data recovery can be poor or incomplete. We introduce a method for improving reconstruction fidelity using external constraints, like an experimental design matrix, to partially orient the estimated fMRI temporal subspace. Combining these external constraints with low-rank constraints introduces a new image reconstruction model that is analogous to using a mixture of subspace-decomposition (PCA/ICA) and regression (GLM) models in fMRI analysis. We show that this approach improves fMRI reconstruction quality in simulations and experimental data, focusing on the model problem of detecting subtle 1-s latency shifts between brain regions in a block-design task-fMRI experiment. Successful latency discrimination is shown at acceleration factors up to R = 16 in a radial-Cartesian acquisition. We show that this approach works with approximate, or not perfectly informative constraints, where the derived benefit is commensurate with the information content contained in the constraints. The proposed method extends low-rank approximation methods for under-sampled fMRI data acquisition by leveraging knowledge of expected task-based variance in the data, enabling improvements in the speed and efficiency of fMRI data acquisition without the loss of subtle features. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
Low-ranking female Japanese macaques make efforts for social grooming.
Kurihara, Yosuke
2016-04-01
Grooming is essential to build social relationships in primates. Its importance is universal among animals from different ranks; however, rank-related differences in feeding patterns can lead to conflicts between feeding and grooming in low-ranking animals. Unifying the effects of dominance rank on feeding and grooming behaviors contributes to revealing the importance of grooming. Here, I tested whether the grooming behavior of low-ranking females were similar to that of high-ranking females despite differences in their feeding patterns. I followed 9 Japanese macaques Macaca fuscata fuscata adult females from the Arashiyama group, and analyzed the feeding patterns and grooming behaviors of low- and high-ranking females. Low-ranking females fed on natural foods away from the provisioning site, whereas high-ranking females obtained more provisioned food at the site. Due to these differences in feeding patterns, low-ranking females spent less time grooming than high-ranking females. However, both low- and high-ranking females performed grooming around the provisioning site, which was linked to the number of neighboring individuals for low-ranking females and feeding on provisioned foods at the site for high-ranking females. The similarity in grooming area led to a range and diversity of grooming partners that did not differ with rank. Thus, low-ranking females can obtain small amounts of provisioned foods and perform grooming with as many partners around the provisioning site as high-ranking females. These results highlight the efforts made by low-ranking females to perform grooming and suggest the importance of grooming behavior in group-living primates.
Low-rank coal study : national needs for resource development. Volume 2. Resource characterization
Energy Technology Data Exchange (ETDEWEB)
1980-11-01
Comprehensive data are presented on the quantity, quality, and distribution of low-rank coal (subbituminous and lignite) deposits in the United States. The major lignite-bearing areas are the Fort Union Region and the Gulf Lignite Region, with the predominant strippable reserves being in the states of North Dakota, Montana, and Texas. The largest subbituminous coal deposits are in the Powder River Region of Montana and Wyoming, The San Juan Basin of New Mexico, and in Northern Alaska. For each of the low-rank coal-bearing regions, descriptions are provided of the geology; strippable reserves; active and planned mines; classification of identified resources by depth, seam thickness, sulfur content, and ash content; overburden characteristics; aquifers; and coal properties and characteristics. Low-rank coals are distinguished from bituminous coals by unique chemical and physical properties that affect their behavior in extraction, utilization, or conversion processes. The most characteristic properties of the organic fraction of low-rank coals are the high inherent moisture and oxygen contents, and the correspondingly low heating value. Mineral matter (ash) contents and compositions of all coals are highly variable; however, low-rank coals tend to have a higher proportion of the alkali components CaO, MgO, and Na/sub 2/O. About 90% of the reserve base of US low-rank coal has less than one percent sulfur. Water resources in the major low-rank coal-bearing regions tend to have highly seasonal availabilities. Some areas appear to have ample water resources to support major new coal projects; in other areas such as Texas, water supplies may be constraining factor on development.
Low-ranking female Japanese macaques make efforts for social grooming
Kurihara, Yosuke
2016-01-01
Abstract Grooming is essential to build social relationships in primates. Its importance is universal among animals from different ranks; however, rank-related differences in feeding patterns can lead to conflicts between feeding and grooming in low-ranking animals. Unifying the effects of dominance rank on feeding and grooming behaviors contributes to revealing the importance of grooming. Here, I tested whether the grooming behavior of low-ranking females were similar to that of high-ranking females despite differences in their feeding patterns. I followed 9 Japanese macaques Macaca fuscata fuscata adult females from the Arashiyama group, and analyzed the feeding patterns and grooming behaviors of low- and high-ranking females. Low-ranking females fed on natural foods away from the provisioning site, whereas high-ranking females obtained more provisioned food at the site. Due to these differences in feeding patterns, low-ranking females spent less time grooming than high-ranking females. However, both low- and high-ranking females performed grooming around the provisioning site, which was linked to the number of neighboring individuals for low-ranking females and feeding on provisioned foods at the site for high-ranking females. The similarity in grooming area led to a range and diversity of grooming partners that did not differ with rank. Thus, low-ranking females can obtain small amounts of provisioned foods and perform grooming with as many partners around the provisioning site as high-ranking females. These results highlight the efforts made by low-ranking females to perform grooming and suggest the importance of grooming behavior in group-living primates. PMID:29491896
Compressed Sensing and Low-Rank Matrix Decomposition in Multisource Images Fusion
Directory of Open Access Journals (Sweden)
Kan Ren
2014-01-01
Full Text Available We propose a novel super-resolution multisource images fusion scheme via compressive sensing and dictionary learning theory. Under the sparsity prior of images patches and the framework of the compressive sensing theory, the multisource images fusion is reduced to a signal recovery problem from the compressive measurements. Then, a set of multiscale dictionaries are learned from several groups of high-resolution sample image’s patches via a nonlinear optimization algorithm. Moreover, a new linear weights fusion rule is proposed to obtain the high-resolution image. Some experiments are taken to investigate the performance of our proposed method, and the results prove its superiority to its counterparts.
On predicting student performance using low-rank matrix factorization techniques
DEFF Research Database (Denmark)
Lorenzen, Stephan Sloth; Pham, Dang Ninh; Alstrup, Stephen
2017-01-01
that require remedial support, generate adaptive hints, and improve the learning of students. This work focuses on predicting the score of students in the quiz system of the Clio Online learning platform, the largest Danish supplier of online learning materials, covering 90% of Danish elementary schools....... Experimental results in the Clio Online data set confirm that the proposed initialization methods lead to very fast convergence. Regarding the prediction accuracy, surprisingly, the advanced EM method is just slightly better than the baseline approach based on the global mean score and student/quiz bias...
CT Image Sequence Restoration Based on Sparse and Low-Rank Decomposition
Gou, Shuiping; Wang, Yueyue; Wang, Zhilong; Peng, Yong; Zhang, Xiaopeng; Jiao, Licheng; Wu, Jianshe
2013-01-01
Blurry organ boundaries and soft tissue structures present a major challenge in biomedical image restoration. In this paper, we propose a low-rank decomposition-based method for computed tomography (CT) image sequence restoration, where the CT image sequence is decomposed into a sparse component and a low-rank component. A new point spread function of Weiner filter is employed to efficiently remove blur in the sparse component; a wiener filtering with the Gaussian PSF is used to recover the average image of the low-rank component. And then we get the recovered CT image sequence by combining the recovery low-rank image with all recovery sparse image sequence. Our method achieves restoration results with higher contrast, sharper organ boundaries and richer soft tissue structure information, compared with existing CT image restoration methods. The robustness of our method was assessed with numerical experiments using three different low-rank models: Robust Principle Component Analysis (RPCA), Linearized Alternating Direction Method with Adaptive Penalty (LADMAP) and Go Decomposition (GoDec). Experimental results demonstrated that the RPCA model was the most suitable for the small noise CT images whereas the GoDec model was the best for the large noisy CT images. PMID:24023764
A method for accurate computation of elastic and discrete inelastic scattering transfer matrix
International Nuclear Information System (INIS)
Garcia, R.D.M.; Santina, M.D.
1986-05-01
A method for accurate computation of elastic and discrete inelastic scattering transfer matrices is discussed. In particular, a partition scheme for the source energy range that avoids integration over intervals containing points where the integrand has discontinuous derivative is developed. Five-figure accurate numerical results are obtained for several test problems with the TRAMA program which incorporates the porposed method. A comparison with numerical results from existing processing codes is also presented. (author) [pt
Low-rank coal study: national needs for resource development. Volume 3. Technology evaluation
Energy Technology Data Exchange (ETDEWEB)
1980-11-01
Technologies applicable to the development and use of low-rank coals are analyzed in order to identify specific needs for research, development, and demonstration (RD and D). Major sections of the report address the following technologies: extraction; transportation; preparation, handling and storage; conventional combustion and environmental control technology; gasification; liquefaction; and pyrolysis. Each of these sections contains an introduction and summary of the key issues with regard to subbituminous coal and lignite; description of all relevant technology, both existing and under development; a description of related environmental control technology; an evaluation of the effects of low-rank coal properties on the technology; and summaries of current commercial status of the technology and/or current RD and D projects relevant to low-rank coals.
High-dimensional statistical inference: From vector to matrix
Zhang, Anru
estimator is easy to implement via convex programming and performs well numerically. The techniques and main results developed in the chapter also have implications to other related statistical problems. An application to estimation of spiked covariance matrices from one-dimensional random projections is considered. The results demonstrate that it is still possible to accurately estimate the covariance matrix of a high-dimensional distribution based only on one-dimensional projections. For the third part of the thesis, we consider another setting of low-rank matrix completion. Current literature on matrix completion focuses primarily on independent sampling models under which the individual observed entries are sampled independently. Motivated by applications in genomic data integration, we propose a new framework of structured matrix completion (SMC) to treat structured missingness by design. Specifically, our proposed method aims at efficient matrix recovery when a subset of the rows and columns of an approximately low-rank matrix are observed. We provide theoretical justification for the proposed SMC method and derive lower bound for the estimation errors, which together establish the optimal rate of recovery over certain classes of approximately low-rank matrices. Simulation studies show that the method performs well in finite sample under a variety of configurations. The method is applied to integrate several ovarian cancer genomic studies with different extent of genomic measurements, which enables us to construct more accurate prediction rules for ovarian cancer survival.
Matrix-vector multiplication using digital partitioning for more accurate optical computing
Gary, C. K.
1992-01-01
Digital partitioning offers a flexible means of increasing the accuracy of an optical matrix-vector processor. This algorithm can be implemented with the same architecture required for a purely analog processor, which gives optical matrix-vector processors the ability to perform high-accuracy calculations at speeds comparable with or greater than electronic computers as well as the ability to perform analog operations at a much greater speed. Digital partitioning is compared with digital multiplication by analog convolution, residue number systems, and redundant number representation in terms of the size and the speed required for an equivalent throughput as well as in terms of the hardware requirements. Digital partitioning and digital multiplication by analog convolution are found to be the most efficient alogrithms if coding time and hardware are considered, and the architecture for digital partitioning permits the use of analog computations to provide the greatest throughput for a single processor.
Ma, Xu; Cheng, Yongmei; Hao, Shuai
2016-12-10
Automatic classification of terrain surfaces from an aerial image is essential for an autonomous unmanned aerial vehicle (UAV) landing at an unprepared site by using vision. Diverse terrain surfaces may show similar spectral properties due to the illumination and noise that easily cause poor classification performance. To address this issue, a multi-stage classification algorithm based on low-rank recovery and multi-feature fusion sparse representation is proposed. First, color moments and Gabor texture feature are extracted from training data and stacked as column vectors of a dictionary. Then we perform low-rank matrix recovery for the dictionary by using augmented Lagrange multipliers and construct a multi-stage terrain classifier. Experimental results on an aerial map database that we prepared verify the classification accuracy and robustness of the proposed method.
Robust Visual Tracking via Online Discriminative and Low-Rank Dictionary Learning.
Zhou, Tao; Liu, Fanghui; Bhaskar, Harish; Yang, Jie
2017-09-12
In this paper, we propose a novel and robust tracking framework based on online discriminative and low-rank dictionary learning. The primary aim of this paper is to obtain compact and low-rank dictionaries that can provide good discriminative representations of both target and background. We accomplish this by exploiting the recovery ability of low-rank matrices. That is if we assume that the data from the same class are linearly correlated, then the corresponding basis vectors learned from the training set of each class shall render the dictionary to become approximately low-rank. The proposed dictionary learning technique incorporates a reconstruction error that improves the reliability of classification. Also, a multiconstraint objective function is designed to enable active learning of a discriminative and robust dictionary. Further, an optimal solution is obtained by iteratively computing the dictionary, coefficients, and by simultaneously learning the classifier parameters. Finally, a simple yet effective likelihood function is implemented to estimate the optimal state of the target during tracking. Moreover, to make the dictionary adaptive to the variations of the target and background during tracking, an online update criterion is employed while learning the new dictionary. Experimental results on a publicly available benchmark dataset have demonstrated that the proposed tracking algorithm performs better than other state-of-the-art trackers.
Litvinenko, Alexander
2018-03-12
Part 1: Parallel H-matrices in spatial statistics 1. Motivation: improve statistical model 2. Tools: Hierarchical matrices 3. Matern covariance function and joint Gaussian likelihood 4. Identification of unknown parameters via maximizing Gaussian log-likelihood 5. Implementation with HLIBPro. Part 2: Low-rank Tucker tensor methods in spatial statistics
Kriging accelerated by orders of magnitude: combining low-rank with FFT techniques
Litvinenko, Alexander; Nowak, Wolfgang
2014-01-01
Kriging algorithms based on FFT, the separability of certain covariance functions and low-rank representations of covariance functions have been investigated. The current study combines these ideas, and so combines the individual speedup factors of all ideas. The reduced computational complexity is O(dLlogL), where L := max ini, i = 1
Low-rank coal research: Volume 2, Advanced research and technology development: Final report
Energy Technology Data Exchange (ETDEWEB)
Mann, M.D.; Swanson, M.L.; Benson, S.A.; Radonovich, L.; Steadman, E.N.; Sweeny, P.G.; McCollor, D.P.; Kleesattel, D.; Grow, D.; Falcone, S.K.
1987-04-01
Volume II contains articles on advanced combustion phenomena, combustion inorganic transformation; coal/char reactivity; liquefaction reactivity of low-rank coals, gasification ash and slag characterization, and fine particulate emissions. These articles have been entered individually into EDB and ERA. (LTN)
Weakly intrusive low-rank approximation method for nonlinear parameter-dependent equations
Giraldi, Loic; Nouy, Anthony
2017-01-01
This paper presents a weakly intrusive strategy for computing a low-rank approximation of the solution of a system of nonlinear parameter-dependent equations. The proposed strategy relies on a Newton-like iterative solver which only requires evaluations of the residual of the parameter-dependent equation and of a preconditioner (such as the differential of the residual) for instances of the parameters independently. The algorithm provides an approximation of the set of solutions associated with a possibly large number of instances of the parameters, with a computational complexity which can be orders of magnitude lower than when using the same Newton-like solver for all instances of the parameters. The reduction of complexity requires efficient strategies for obtaining low-rank approximations of the residual, of the preconditioner, and of the increment at each iteration of the algorithm. For the approximation of the residual and the preconditioner, weakly intrusive variants of the empirical interpolation method are introduced, which require evaluations of entries of the residual and the preconditioner. Then, an approximation of the increment is obtained by using a greedy algorithm for low-rank approximation, and a low-rank approximation of the iterate is finally obtained by using a truncated singular value decomposition. When the preconditioner is the differential of the residual, the proposed algorithm is interpreted as an inexact Newton solver for which a detailed convergence analysis is provided. Numerical examples illustrate the efficiency of the method.
Weakly intrusive low-rank approximation method for nonlinear parameter-dependent equations
Giraldi, Loic
2017-06-30
This paper presents a weakly intrusive strategy for computing a low-rank approximation of the solution of a system of nonlinear parameter-dependent equations. The proposed strategy relies on a Newton-like iterative solver which only requires evaluations of the residual of the parameter-dependent equation and of a preconditioner (such as the differential of the residual) for instances of the parameters independently. The algorithm provides an approximation of the set of solutions associated with a possibly large number of instances of the parameters, with a computational complexity which can be orders of magnitude lower than when using the same Newton-like solver for all instances of the parameters. The reduction of complexity requires efficient strategies for obtaining low-rank approximations of the residual, of the preconditioner, and of the increment at each iteration of the algorithm. For the approximation of the residual and the preconditioner, weakly intrusive variants of the empirical interpolation method are introduced, which require evaluations of entries of the residual and the preconditioner. Then, an approximation of the increment is obtained by using a greedy algorithm for low-rank approximation, and a low-rank approximation of the iterate is finally obtained by using a truncated singular value decomposition. When the preconditioner is the differential of the residual, the proposed algorithm is interpreted as an inexact Newton solver for which a detailed convergence analysis is provided. Numerical examples illustrate the efficiency of the method.
Kriging accelerated by orders of magnitude: combining low-rank with FFT techniques
Litvinenko, Alexander
2014-05-04
Kriging algorithms based on FFT, the separability of certain covariance functions and low-rank representations of covariance functions have been investigated. The current study combines these ideas, and so combines the individual speedup factors of all ideas. The reduced computational complexity is O(dLlogL), where L := max ini, i = 1
Accurate 3-D Profile Extraction of Skull Bone Using an Ultrasound Matrix Array.
Hajian, Mehdi; Gaspar, Robert; Maev, Roman Gr
2017-12-01
The present study investigates the feasibility, accuracy, and precision of 3-D profile extraction of the human skull bone using a custom-designed ultrasound matrix transducer in Pulse-Echo. Due to the attenuative scattering properties of the skull, the backscattered echoes from the inner surface of the skull are severely degraded, attenuated, and at some points overlapped. Furthermore, the speed of sound (SOS) in the skull varies significantly in different zones and also from case to case; if considered constant, it introduces significant error to the profile measurement. A new method for simultaneous estimation of the skull profiles and the sound speed value is presented. The proposed method is a two-folded procedure: first, the arrival times of the backscattered echoes from the skull bone are estimated using multi-lag phase delay (MLPD) and modified space alternating generalized expectation maximization (SAGE) algorithms. Next, these arrival times are fed into an adaptive sound speed estimation algorithm to compute the optimal SOS value and subsequently, the skull bone thickness. For quantitative evaluation, the estimated bone phantom thicknesses were compared with the mechanical measurements. The accuracies of the bone thickness measurements using MLPD and modified SAGE algorithms combined with the adaptive SOS estimation were 7.93% and 4.21%, respectively. These values were 14.44% and 10.75% for the autocorrelation and cross-correlation methods. Additionally, the Bland-Altman plots showed the modified SAGE outperformed the other methods with -0.35 and 0.44 mm limits of agreement. No systematic error that could be related to the skull bone thickness was observed for this method.
International Nuclear Information System (INIS)
Kim, Kyungsang; Ye, Jong Chul; Son, Young Don; Cho, Zang Hee; Bresler, Yoram; Ra, Jong Beom
2015-01-01
Dynamic positron emission tomography (PET) is widely used to measure changes in the bio-distribution of radiopharmaceuticals within particular organs of interest over time. However, to retain sufficient temporal resolution, the number of photon counts in each time frame must be limited. Therefore, conventional reconstruction algorithms such as the ordered subset expectation maximization (OSEM) produce noisy reconstruction images, thus degrading the quality of the extracted time activity curves (TACs). To address this issue, many advanced reconstruction algorithms have been developed using various spatio-temporal regularizations. In this paper, we extend earlier results and develop a novel temporal regularization, which exploits the self-similarity of patches that are collected in dynamic images. The main contribution of this paper is to demonstrate that the correlation of patches can be exploited using a low-rank constraint that is insensitive to global intensity variations. The resulting optimization framework is, however, non-Lipschitz and non-convex due to the Poisson log-likelihood and low-rank penalty terms. Direct application of the conventional Poisson image deconvolution by an augmented Lagrangian (PIDAL) algorithm is, however, problematic due to its large memory requirements, which prevents its parallelization. Thus, we propose a novel optimization framework using the concave-convex procedure (CCCP) by exploiting the Legendre–Fenchel transform, which is computationally efficient and parallelizable. In computer simulation and a real in vivo experiment using a high-resolution research tomograph (HRRT) scanner, we confirm that the proposed algorithm can improve image quality while also extracting more accurate region of interests (ROI) based kinetic parameters. Furthermore, we show that the total reconstruction time for HRRT PET is significantly accelerated using our GPU implementation, which makes the algorithm very practical in clinical environments
Directory of Open Access Journals (Sweden)
Fan Meng
Full Text Available This paper studies the problem of the restoration of images corrupted by mixed Gaussian-impulse noise. In recent years, low-rank matrix reconstruction has become a research hotspot in many scientific and engineering domains such as machine learning, image processing, computer vision and bioinformatics, which mainly involves the problem of matrix completion and robust principal component analysis, namely recovering a low-rank matrix from an incomplete but accurate sampling subset of its entries and from an observed data matrix with an unknown fraction of its entries being arbitrarily corrupted, respectively. Inspired by these ideas, we consider the problem of recovering a low-rank matrix from an incomplete sampling subset of its entries with an unknown fraction of the samplings contaminated by arbitrary errors, which is defined as the problem of matrix completion from corrupted samplings and modeled as a convex optimization problem that minimizes a combination of the nuclear norm and the l(1-norm in this paper. Meanwhile, we put forward a novel and effective algorithm called augmented Lagrange multipliers to exactly solve the problem. For mixed Gaussian-impulse noise removal, we regard it as the problem of matrix completion from corrupted samplings, and restore the noisy image following an impulse-detecting procedure. Compared with some existing methods for mixed noise removal, the recovery quality performance of our method is dominant if images possess low-rank features such as geometrically regular textures and similar structured contents; especially when the density of impulse noise is relatively high and the variance of Gaussian noise is small, our method can outperform the traditional methods significantly not only in the simultaneous removal of Gaussian noise and impulse noise, and the restoration ability for a low-rank image matrix, but also in the preservation of textures and details in the image.
Low temperature oxidation and spontaneous combustion characteristics of upgraded low rank coal
Energy Technology Data Exchange (ETDEWEB)
Choi, H.K.; Kim, S.D.; Yoo, J.H.; Chun, D.H.; Rhim, Y.J.; Lee, S.H. [Korea Institute of Energy Research, Daejeon (Korea, Republic of)
2013-07-01
The low temperature oxidation and spontaneous combustion characteristics of dried coal produced from low rank coal using the upgraded brown coal (UBC) process were investigated. To this end, proximate properties, crossing-point temperature (CPT), and isothermal oxidation characteristics of the coal were analyzed. The isothermal oxidation characteristics were estimated by considering the formation rates of CO and CO{sub 2} at low temperatures. The upgraded low rank coal had higher heating values than the raw coal. It also had less susceptibility to low temperature oxidation and spontaneous combustion. This seemed to result from the coating of the asphalt on the surface of the coal, which suppressed the active functional groups from reacting with oxygen in the air. The increasing upgrading pressure negatively affected the low temperature oxidation and spontaneous combustion.
A New Direction of Cancer Classification: Positive Effect of Low-Ranking MicroRNAs.
Li, Feifei; Piao, Minghao; Piao, Yongjun; Li, Meijing; Ryu, Keun Ho
2014-10-01
Many studies based on microRNA (miRNA) expression profiles showed a new aspect of cancer classification. Because one characteristic of miRNA expression data is the high dimensionality, feature selection methods have been used to facilitate dimensionality reduction. The feature selection methods have one shortcoming thus far: they just consider the problem of where feature to class is 1:1 or n:1. However, because one miRNA may influence more than one type of cancer, human miRNA is considered to be ranked low in traditional feature selection methods and are removed most of the time. In view of the limitation of the miRNA number, low-ranking miRNAs are also important to cancer classification. We considered both high- and low-ranking features to cover all problems (1:1, n:1, 1:n, and m:n) in cancer classification. First, we used the correlation-based feature selection method to select the high-ranking miRNAs, and chose the support vector machine, Bayes network, decision tree, k-nearest-neighbor, and logistic classifier to construct cancer classification. Then, we chose Chi-square test, information gain, gain ratio, and Pearson's correlation feature selection methods to build the m:n feature subset, and used the selected miRNAs to determine cancer classification. The low-ranking miRNA expression profiles achieved higher classification accuracy compared with just using high-ranking miRNAs in traditional feature selection methods. Our results demonstrate that the m:n feature subset made a positive impression of low-ranking miRNAs in cancer classification.
Efficient tensor completion for color image and video recovery: Low-rank tensor train
Bengua, Johann A.; Phien, Ho N.; Tuan, Hoang D.; Do, Minh N.
2016-01-01
This paper proposes a novel approach to tensor completion, which recovers missing entries of data represented by tensors. The approach is based on the tensor train (TT) rank, which is able to capture hidden information from tensors thanks to its definition from a well-balanced matricization scheme. Accordingly, new optimization formulations for tensor completion are proposed as well as two new algorithms for their solution. The first one called simple low-rank tensor completion via tensor tra...
Reweighted Low-Rank Tensor Completion and its Applications in Video Recovery
M., Baburaj; George, Sudhish N.
2016-01-01
This paper focus on recovering multi-dimensional data called tensor from randomly corrupted incomplete observation. Inspired by reweighted $l_1$ norm minimization for sparsity enhancement, this paper proposes a reweighted singular value enhancement scheme to improve tensor low tubular rank in the tensor completion process. An efficient iterative decomposition scheme based on t-SVD is proposed which improves low-rank signal recovery significantly. The effectiveness of the proposed method is es...
The application of low-rank and sparse decomposition method in the field of climatology
Gupta, Nitika; Bhaskaran, Prasad K.
2018-04-01
The present study reports a low-rank and sparse decomposition method that separates the mean and the variability of a climate data field. Until now, the application of this technique was limited only in areas such as image processing, web data ranking, and bioinformatics data analysis. In climate science, this method exactly separates the original data into a set of low-rank and sparse components, wherein the low-rank components depict the linearly correlated dataset (expected or mean behavior), and the sparse component represents the variation or perturbation in the dataset from its mean behavior. The study attempts to verify the efficacy of this proposed technique in the field of climatology with two examples of real world. The first example attempts this technique on the maximum wind-speed (MWS) data for the Indian Ocean (IO) region. The study brings to light a decadal reversal pattern in the MWS for the North Indian Ocean (NIO) during the months of June, July, and August (JJA). The second example deals with the sea surface temperature (SST) data for the Bay of Bengal region that exhibits a distinct pattern in the sparse component. The study highlights the importance of the proposed technique used for interpretation and visualization of climate data.
Low rank approximation methods for MR fingerprinting with large scale dictionaries.
Yang, Mingrui; Ma, Dan; Jiang, Yun; Hamilton, Jesse; Seiberlich, Nicole; Griswold, Mark A; McGivney, Debra
2018-04-01
This work proposes new low rank approximation approaches with significant memory savings for large scale MR fingerprinting (MRF) problems. We introduce a compressed MRF with randomized singular value decomposition method to significantly reduce the memory requirement for calculating a low rank approximation of large sized MRF dictionaries. We further relax this requirement by exploiting the structures of MRF dictionaries in the randomized singular value decomposition space and fitting them to low-degree polynomials to generate high resolution MRF parameter maps. In vivo 1.5T and 3T brain scan data are used to validate the approaches. T 1 , T 2 , and off-resonance maps are in good agreement with that of the standard MRF approach. Moreover, the memory savings is up to 1000 times for the MRF-fast imaging with steady-state precession sequence and more than 15 times for the MRF-balanced, steady-state free precession sequence. The proposed compressed MRF with randomized singular value decomposition and dictionary fitting methods are memory efficient low rank approximation methods, which can benefit the usage of MRF in clinical settings. They also have great potentials in large scale MRF problems, such as problems considering multi-component MRF parameters or high resolution in the parameter space. Magn Reson Med 79:2392-2400, 2018. © 2017 International Society for Magnetic Resonance in Medicine. © 2017 International Society for Magnetic Resonance in Medicine.
Task 27 -- Alaskan low-rank coal-water fuel demonstration project
Energy Technology Data Exchange (ETDEWEB)
NONE
1995-10-01
Development of coal-water-fuel (CWF) technology has to-date been predicated on the use of high-rank bituminous coal only, and until now the high inherent moisture content of low-rank coal has precluded its use for CWF production. The unique feature of the Alaskan project is the integration of hot-water-drying (HWD) into CWF technology as a beneficiation process. Hot-water-drying is an EERC developed technology unavailable to the competition that allows the range of CWF feedstock to be extended to low-rank coals. The primary objective of the Alaskan Project, is to promote interest in the CWF marketplace by demonstrating the commercial viability of low-rank coal-water-fuel (LRCWF). While commercialization plans cannot be finalized until the implementation and results of the Alaskan LRCWF Project are known and evaluated, this report has been prepared to specifically address issues concerning business objectives for the project, and outline a market development plan for meeting those objectives.
Application of House of Quality in evaluation of low rank coal pyrolysis polygeneration technologies
International Nuclear Information System (INIS)
Yang, Qingchun; Yang, Siyu; Qian, Yu; Kraslawski, Andrzej
2015-01-01
Highlights: • House of Quality method was used for assessment of coal pyrolysis polygeneration technologies. • Low rank coal pyrolysis polygeneration processes based on solid heat carrier, moving bed and fluidized bed were evaluated. • Technical and environmental criteria for the assessment of technologies were used. • Low rank coal pyrolysis polygeneration process based on a fluidized bed is the best option. - Abstract: Increasing interest in low rank coal pyrolysis (LRCP) polygeneration has resulted in the development of a number of different technologies and approaches. Evaluation of LRCP processes should include not only conventional efficiency, economic and environmental assessments, but also take into consideration sustainability aspects. As a result of the many complex variables involved, selection of the most suitable LRCP technology becomes a challenging task. This paper applies a House of Quality method in comprehensive evaluation of LRCP. A multi-level evaluation model addressing 19 customer needs and analyzing 10 technical characteristics is developed. Using the evaluation model, the paper evaluates three LRCP technologies, which are based on solid heat carrier, moving bed and fluidized bed concepts, respectively. The results show that the three most important customer needs are level of technical maturity, wastewater emissions, and internal rate of return. The three most important technical characteristics are production costs, investment costs and waste emissions. On the basis of the conducted analysis, it is concluded that the LRCP process utilizing a fluidized bed approach is the optimal alternative studied
Directory of Open Access Journals (Sweden)
Hongyang Lu
2016-06-01
Full Text Available Because of the contradiction between the spatial and temporal resolution of remote sensing images (RSI and quality loss in the process of acquisition, it is of great significance to reconstruct RSI in remote sensing applications. Recent studies have demonstrated that reference image-based reconstruction methods have great potential for higher reconstruction performance, while lacking accuracy and quality of reconstruction. For this application, a new compressed sensing objective function incorporating a reference image as prior information is developed. We resort to the reference prior information inherent in interior and exterior data simultaneously to build a new generalized nonconvex low-rank approximation framework for RSI reconstruction. Specifically, the innovation of this paper consists of the following three respects: (1 we propose a nonconvex low-rank approximation for reconstructing RSI; (2 we inject reference prior information to overcome over smoothed edges and texture detail losses; (3 on this basis, we combine conjugate gradient algorithms and a single-value threshold (SVT simultaneously to solve the proposed algorithm. The performance of the algorithm is evaluated both qualitatively and quantitatively. Experimental results demonstrate that the proposed algorithm improves several dBs in terms of peak signal to noise ratio (PSNR and preserves image details significantly compared to most of the current approaches without reference images as priors. In addition, the generalized nonconvex low-rank approximation of our approach is naturally robust to noise, and therefore, the proposed algorithm can handle low resolution with noisy inputs in a more unified framework.
OCT despeckling via weighted nuclear norm constrained non-local low-rank representation
Tang, Chang; Zheng, Xiao; Cao, Lijuan
2017-10-01
As a non-invasive imaging modality, optical coherence tomography (OCT) plays an important role in medical sciences. However, OCT images are always corrupted by speckle noise, which can mask image features and pose significant challenges for medical analysis. In this work, we propose an OCT despeckling method by using non-local, low-rank representation with weighted nuclear norm constraint. Unlike previous non-local low-rank representation based OCT despeckling methods, we first generate a guidance image to improve the non-local group patches selection quality, then a low-rank optimization model with a weighted nuclear norm constraint is formulated to process the selected group patches. The corrupted probability of each pixel is also integrated into the model as a weight to regularize the representation error term. Note that each single patch might belong to several groups, hence different estimates of each patch are aggregated to obtain its final despeckled result. Both qualitative and quantitative experimental results on real OCT images show the superior performance of the proposed method compared with other state-of-the-art speckle removal techniques.
El Gharamti, Mohamad; Hoteit, Ibrahim
2014-01-01
The accuracy of groundwater flow and transport model predictions highly depends on our knowledge of subsurface physical parameters. Assimilation of contaminant concentration data from shallow dug wells could help improving model behavior, eventually resulting in better forecasts. In this paper, we propose a joint state-parameter estimation scheme which efficiently integrates a low-rank extended Kalman filtering technique, namely the Singular Evolutive Extended Kalman (SEEK) filter, with the prominent complex-step method (CSM). The SEEK filter avoids the prohibitive computational burden of the Extended Kalman filter by updating the forecast along the directions of error growth only, called filter correction directions. CSM is used within the SEEK filter to efficiently compute model derivatives with respect to the state and parameters along the filter correction directions. CSM is derived using complex Taylor expansion and is second order accurate. It is proven to guarantee accurate gradient computations with zero numerical round-off errors, but requires complexifying the numerical code. We perform twin-experiments to test the performance of the CSM-based SEEK for estimating the state and parameters of a subsurface contaminant transport model. We compare the efficiency and the accuracy of the proposed scheme with two standard finite difference-based SEEK filters as well as with the ensemble Kalman filter (EnKF). Assimilation results suggest that the use of the CSM in the context of the SEEK filter may provide up to 80% more accurate solutions when compared to standard finite difference schemes and is competitive with the EnKF, even providing more accurate results in certain situations. We analyze the results based on two different observation strategies. We also discuss the complexification of the numerical code and show that this could be efficiently implemented in the context of subsurface flow models. © 2013 Elsevier B.V.
El Gharamti, Mohamad
2014-02-01
The accuracy of groundwater flow and transport model predictions highly depends on our knowledge of subsurface physical parameters. Assimilation of contaminant concentration data from shallow dug wells could help improving model behavior, eventually resulting in better forecasts. In this paper, we propose a joint state-parameter estimation scheme which efficiently integrates a low-rank extended Kalman filtering technique, namely the Singular Evolutive Extended Kalman (SEEK) filter, with the prominent complex-step method (CSM). The SEEK filter avoids the prohibitive computational burden of the Extended Kalman filter by updating the forecast along the directions of error growth only, called filter correction directions. CSM is used within the SEEK filter to efficiently compute model derivatives with respect to the state and parameters along the filter correction directions. CSM is derived using complex Taylor expansion and is second order accurate. It is proven to guarantee accurate gradient computations with zero numerical round-off errors, but requires complexifying the numerical code. We perform twin-experiments to test the performance of the CSM-based SEEK for estimating the state and parameters of a subsurface contaminant transport model. We compare the efficiency and the accuracy of the proposed scheme with two standard finite difference-based SEEK filters as well as with the ensemble Kalman filter (EnKF). Assimilation results suggest that the use of the CSM in the context of the SEEK filter may provide up to 80% more accurate solutions when compared to standard finite difference schemes and is competitive with the EnKF, even providing more accurate results in certain situations. We analyze the results based on two different observation strategies. We also discuss the complexification of the numerical code and show that this could be efficiently implemented in the context of subsurface flow models. © 2013 Elsevier B.V.
The role of IGCC technology in power generation using low-rank coal
Energy Technology Data Exchange (ETDEWEB)
Juangjandee, Pipat
2010-09-15
Based on basic test results on the gasification rate of Mae Moh lignite coal. It was found that an IDGCC power plant is the most suitable for Mae Moh lignite. In conclusion, the future of an IDGCC power plant using low-rank coal in Mae Moh mine would hinge on the strictness of future air pollution control regulations including green-house gas emission and the constraint of Thailand's foreign currency reserves needed to import fuels, in addition to economic consideration. If and when it is necessary to overcome these obstacles, IGCC is one variable alternative power generation must consider.
Kriging accelerated by orders of magnitude: combining low-rank with FFT techniques
Litvinenko, Alexander; Nowak, Wolfgang
2014-01-01
Kriging algorithms based on FFT, the separability of certain covariance functions and low-rank representations of covariance functions have been investigated. The current study combines these ideas, and so combines the individual speedup factors of all ideas. For separable covariance functions, the results are exact, and non-separable covariance functions can be approximated through sums of separable components. Speedup factor is 1e+8, problem sizes 1.5e+13 and 2e+15 estimation points for Kriging and spatial design.
Kriging accelerated by orders of magnitude: combining low-rank with FFT techniques
Litvinenko, Alexander; Nowak, Wolfgang
2014-01-01
Kriging algorithms based on FFT, the separability of certain covariance functions and low-rank representations of covariance functions have been investigated. The current study combines these ideas, and so combines the individual speedup factors of all ideas. The reduced computational complexity is O(dLlogL), where L := max ini, i = 1..d. For separable covariance functions, the results are exact, and non-separable covariance functions can be approximated through sums of separable components. Speedup factor is 10 8, problem sizes 15e + 12 and 2e + 15 estimation points for Kriging and spatial design.
Low-rank Quasi-Newton updates for Robust Jacobian lagging in Newton methods
International Nuclear Information System (INIS)
Brown, J.; Brune, P.
2013-01-01
Newton-Krylov methods are standard tools for solving nonlinear problems. A common approach is to 'lag' the Jacobian when assembly or preconditioner setup is computationally expensive, in exchange for some degradation in the convergence rate and robustness. We show that this degradation may be partially mitigated by using the lagged Jacobian as an initial operator in a quasi-Newton method, which applies unassembled low-rank updates to the Jacobian until the next full reassembly. We demonstrate the effectiveness of this technique on problems in glaciology and elasticity. (authors)
Kriging accelerated by orders of magnitude: combining low-rank with FFT techniques
Litvinenko, Alexander
2014-01-08
Kriging algorithms based on FFT, the separability of certain covariance functions and low-rank representations of covariance functions have been investigated. The current study combines these ideas, and so combines the individual speedup factors of all ideas. For separable covariance functions, the results are exact, and non-separable covariance functions can be approximated through sums of separable components. Speedup factor is 1e+8, problem sizes 1.5e+13 and 2e+15 estimation points for Kriging and spatial design.
Kriging accelerated by orders of magnitude: combining low-rank with FFT techniques
Litvinenko, Alexander
2014-01-06
Kriging algorithms based on FFT, the separability of certain covariance functions and low-rank representations of covariance functions have been investigated. The current study combines these ideas, and so combines the individual speedup factors of all ideas. The reduced computational complexity is O(dLlogL), where L := max ini, i = 1..d. For separable covariance functions, the results are exact, and non-separable covariance functions can be approximated through sums of separable components. Speedup factor is 10 8, problem sizes 15e + 12 and 2e + 15 estimation points for Kriging and spatial design.
Sampling and Low-Rank Tensor Approximation of the Response Surface
Litvinenko, Alexander; Matthies, Hermann Georg; El-Moselhy, Tarek A.
2013-01-01
Most (quasi)-Monte Carlo procedures can be seen as computing some integral over an often high-dimensional domain. If the integrand is expensive to evaluate-we are thinking of a stochastic PDE (SPDE) where the coefficients are random fields and the integrand is some functional of the PDE-solution-there is the desire to keep all the samples for possible later computations of similar integrals. This obviously means a lot of data. To keep the storage demands low, and to allow evaluation of the integrand at points which were not sampled, we construct a low-rank tensor approximation of the integrand over the whole integration domain. This can also be viewed as a representation in some problem-dependent basis which allows a sparse representation. What one obtains is sometimes called a "surrogate" or "proxy" model, or a "response surface". This representation is built step by step or sample by sample, and can already be used for each new sample. In case we are sampling a solution of an SPDE, this allows us to reduce the number of necessary samples, namely in case the solution is already well-represented by the low-rank tensor approximation. This can be easily checked by evaluating the residuum of the PDE with the approximate solution. The procedure will be demonstrated in the computation of a compressible transonic Reynolds-averaged Navier-Strokes flow around an airfoil with random/uncertain data. © Springer-Verlag Berlin Heidelberg 2013.
Simultaneous multislice magnetic resonance fingerprinting with low-rank and subspace modeling.
Bo Zhao; Bilgic, Berkin; Adalsteinsson, Elfar; Griswold, Mark A; Wald, Lawrence L; Setsompop, Kawin
2017-07-01
Magnetic resonance fingerprinting (MRF) is a new quantitative imaging paradigm that enables simultaneous acquisition of multiple magnetic resonance tissue parameters (e.g., T 1 , T 2 , and spin density). Recently, MRF has been integrated with simultaneous multislice (SMS) acquisitions to enable volumetric imaging with faster scan time. In this paper, we present a new image reconstruction method based on low-rank and subspace modeling for improved SMS-MRF. Here the low-rank model exploits strong spatiotemporal correlation among contrast-weighted images, while the subspace model captures the temporal evolution of magnetization dynamics. With the proposed model, the image reconstruction problem is formulated as a convex optimization problem, for which we develop an algorithm based on variable splitting and the alternating direction method of multipliers. The performance of the proposed method has been evaluated by numerical experiments, and the results demonstrate that the proposed method leads to improved accuracy over the conventional approach. Practically, the proposed method has a potential to allow for a 3× speedup with minimal reconstruction error, resulting in less than 5 sec imaging time per slice.
Promoting effect of various biomass ashes on the steam gasification of low-rank coal
International Nuclear Information System (INIS)
Rizkiana, Jenny; Guan, Guoqing; Widayatno, Wahyu Bambang; Hao, Xiaogang; Li, Xiumin; Huang, Wei; Abudula, Abuliti
2014-01-01
Highlights: • Biomass ash was utilized to promote gasification of low rank coal. • Promoting effect of biomass ash highly depended on AAEM content in the ash. • Stability of the ash could be improved by maintaining AAEM amount in the ash. • Different biomass ash could have completely different catalytic activity. - Abstract: Application of biomass ash as a catalyst to improve gasification rate is a promising way for the effective utilization of waste ash as well as for the reduction of cost. Investigation on the catalytic activity of biomass ash to the gasification of low rank coal was performed in details in the present study. Ashes from 3 kinds of biomass, i.e. brown seaweed/BS, eel grass/EG, and rice straw/RS, were separately mixed with coal sample and gasified in a fixed bed downdraft reactor using steam as the gasifying agent. BS and EG ashes enhanced the gas production rate greater than RS ash. Higher catalytic activity of BS or EG ash was mainly attributed to the higher content of alkali and alkaline earth metal (AAEM) and lower content of silica in it. Higher content of silica in the RS ash was identified to have inhibiting effect for the steam gasification of coal. Stable catalytic activity was remained when the amount of AAEM in the regenerated ash was maintained as that of the original one
Directory of Open Access Journals (Sweden)
Rajive Ganguli
2012-01-01
Full Text Available The impact of particle size distribution (PSD of pulverized, low rank high volatile content Alaska coal on combustion related power plant performance was studied in a series of field scale tests. Performance was gauged through efficiency (ratio of megawatt generated to energy consumed as coal, emissions (SO2, NOx, CO, and carbon content of ash (fly ash and bottom ash. The study revealed that the tested coal could be burned at a grind as coarse as 50% passing 76 microns, with no deleterious impact on power generation and emissions. The PSD’s tested in this study were in the range of 41 to 81 percent passing 76 microns. There was negligible correlation between PSD and the followings factors: efficiency, SO2, NOx, and CO. Additionally, two tests where stack mercury (Hg data was collected, did not demonstrate any real difference in Hg emissions with PSD. The results from the field tests positively impacts pulverized coal power plants that burn low rank high volatile content coals (such as Powder River Basin coal. These plants can potentially reduce in-plant load by grinding the coal less (without impacting plant performance on emissions and efficiency and thereby, increasing their marketability.
Structure-Based Low-Rank Model With Graph Nuclear Norm Regularization for Noise Removal.
Ge, Qi; Jing, Xiao-Yuan; Wu, Fei; Wei, Zhi-Hui; Xiao, Liang; Shao, Wen-Ze; Yue, Dong; Li, Hai-Bo
2017-07-01
Nonlocal image representation methods, including group-based sparse coding and block-matching 3-D filtering, have shown their great performance in application to low-level tasks. The nonlocal prior is extracted from each group consisting of patches with similar intensities. Grouping patches based on intensity similarity, however, gives rise to disturbance and inaccuracy in estimation of the true images. To address this problem, we propose a structure-based low-rank model with graph nuclear norm regularization. We exploit the local manifold structure inside a patch and group the patches by the distance metric of manifold structure. With the manifold structure information, a graph nuclear norm regularization is established and incorporated into a low-rank approximation model. We then prove that the graph-based regularization is equivalent to a weighted nuclear norm and the proposed model can be solved by a weighted singular-value thresholding algorithm. Extensive experiments on additive white Gaussian noise removal and mixed noise removal demonstrate that the proposed method achieves a better performance than several state-of-the-art algorithms.
Dutta, Aritra
2017-07-02
Principal component pursuit (PCP) is a state-of-the-art approach for background estimation problems. Due to their higher computational cost, PCP algorithms, such as robust principal component analysis (RPCA) and its variants, are not feasible in processing high definition videos. To avoid the curse of dimensionality in those algorithms, several methods have been proposed to solve the background estimation problem in an incremental manner. We propose a batch-incremental background estimation model using a special weighted low-rank approximation of matrices. Through experiments with real and synthetic video sequences, we demonstrate that our method is superior to the state-of-the-art background estimation algorithms such as GRASTA, ReProCS, incPCP, and GFL.
Non-intrusive low-rank separated approximation of high-dimensional stochastic models
Doostan, Alireza; Validi, AbdoulAhad; Iaccarino, Gianluca
2013-01-01
This work proposes a sampling-based (non-intrusive) approach within the context of low-. rank separated representations to tackle the issue of curse-of-dimensionality associated with the solution of models, e.g., PDEs/ODEs, with high-dimensional random inputs. Under some conditions discussed in details, the number of random realizations of the solution, required for a successful approximation, grows linearly with respect to the number of random inputs. The construction of the separated representation is achieved via a regularized alternating least-squares regression, together with an error indicator to estimate model parameters. The computational complexity of such a construction is quadratic in the number of random inputs. The performance of the method is investigated through its application to three numerical examples including two ODE problems with high-dimensional random inputs. © 2013 Elsevier B.V.
Zeng, Lang; He, Yu; Povolotskyi, Michael; Liu, XiaoYan; Klimeck, Gerhard; Kubis, Tillmann
2013-06-01
In this work, the low rank approximation concept is extended to the non-equilibrium Green's function (NEGF) method to achieve a very efficient approximated algorithm for coherent and incoherent electron transport. This new method is applied to inelastic transport in various semiconductor nanodevices. Detailed benchmarks with exact NEGF solutions show (1) a very good agreement between approximated and exact NEGF results, (2) a significant reduction of the required memory, and (3) a large reduction of the computational time (a factor of speed up as high as 150 times is observed). A non-recursive solution of the inelastic NEGF transport equations of a 1000 nm long resistor on standard hardware illustrates nicely the capability of this new method.
Non-intrusive low-rank separated approximation of high-dimensional stochastic models
Doostan, Alireza
2013-08-01
This work proposes a sampling-based (non-intrusive) approach within the context of low-. rank separated representations to tackle the issue of curse-of-dimensionality associated with the solution of models, e.g., PDEs/ODEs, with high-dimensional random inputs. Under some conditions discussed in details, the number of random realizations of the solution, required for a successful approximation, grows linearly with respect to the number of random inputs. The construction of the separated representation is achieved via a regularized alternating least-squares regression, together with an error indicator to estimate model parameters. The computational complexity of such a construction is quadratic in the number of random inputs. The performance of the method is investigated through its application to three numerical examples including two ODE problems with high-dimensional random inputs. © 2013 Elsevier B.V.
Dutta, Aritra; Li, Xin; Richtarik, Peter
2017-01-01
Principal component pursuit (PCP) is a state-of-the-art approach for background estimation problems. Due to their higher computational cost, PCP algorithms, such as robust principal component analysis (RPCA) and its variants, are not feasible in processing high definition videos. To avoid the curse of dimensionality in those algorithms, several methods have been proposed to solve the background estimation problem in an incremental manner. We propose a batch-incremental background estimation model using a special weighted low-rank approximation of matrices. Through experiments with real and synthetic video sequences, we demonstrate that our method is superior to the state-of-the-art background estimation algorithms such as GRASTA, ReProCS, incPCP, and GFL.
Extracellular oxidases and the transformation of solubilised low-rank coal by wood-rot fungi
Energy Technology Data Exchange (ETDEWEB)
Ralph, J.P. [Flinders Univ. of South Australia, Bedford Park (Australia). School of Biological Sciences; Graham, L.A. [Flinders Univ. of South Australia, Bedford Park (Australia). School of Biological Sciences; Catcheside, D.E.A. [Flinders Univ. of South Australia, Bedford Park (Australia). School of Biological Sciences
1996-12-31
The involvement of extracellular oxidases in biotransformation of low-rank coal was assessed by correlating the ability of nine white-rot and brown-rot fungi to alter macromolecular material in alkali-solubilised brown coal with the spectrum of oxidases they produce when grown on low-nitrogen medium. The coal fraction used was that soluble at 3.0{<=}pH{<=}6.0 (SWC6 coal). In 15-ml cultures, Gloeophyllum trabeum, Lentinus lepideus and Trametes versicolor produced little or no lignin peroxidase, manganese (Mn) peroxidase or laccase activity and caused no change to SWC6 coal. Ganoderma applanatum and Pycnoporus cinnabarinus also produced no detectable lignin or Mn peroxidases or laccase yet increased the absorbance at 400 nm of SWC6 coal. G. applanatum, which produced veratryl alcohol oxidase, also increased the modal apparent molecular mass. SWC6 coal exposed to Merulius tremellosus and Perenniporia tephropora, which secreted Mn peroxidases and laccase and Phanerochaete chrysosporium, which produced Mn and lignin peroxidases was polymerised but had unchanged or decreased absorbance. In the case of both P. chrysosporium and M. tremellosus, polymerisation of SWC6 coal was most extensive, leading to the formation of a complex insoluble in 100 mM NaOH. Rigidoporus ulmarius, which produced only laccase, both polymerised and reduced the A{sub 400} of SWC6 coal. P. chrysosporium, M. tremellosus and P. tephropora grown in 10-ml cultures produced a spectrum of oxidases similar to that in 15-ml cultures but, in each case, caused more extensive loss of A{sub 400}, and P. chrysosporium depolymerised SWC6 coal. It is concluded that the extracellular oxidases of white-rot fungi can transform low-rank coal macromolecules and that increased oxygen availability in the shallower 10-ml cultures favours catabolism over polymerisation. (orig.)
International Nuclear Information System (INIS)
Ge, Lichao; Zhang, Yanwei; Wang, Zhihua; Zhou, Junhu; Cen, Kefa
2013-01-01
Highlights: • Typical Chinese lignites with various ranks are upgraded through microwave. • The pore distribution extends to micropore region, BET area and volume increase. • FTIR show the change of microstructure and improvement in coal rank after upgrading. • Upgraded coals exhibit weak combustion similar to Da Tong bituminous coal. • More evident effects are obtained for raw brown coal with relative lower rank. - Abstract: This study investigates the effects of microwave irradiation treatment on coal composition, pore structure, coal rank, function groups, and combustion characteristics of typical Chinese low-rank coals. Results showed that the upgrading process (microwave irradiation treatment) significantly reduced the coals’ inherent moisture, and increased their calorific value and fixed carbon content. It was also found that the upgrading process generated micropores and increased pore volume and surface area of the coals. Results on the oxygen/carbon ratio parameter indicated that the low-rank coals were upgraded to high-rank coals after the upgrading process, which is in agreement with the findings from Fourier transform infrared spectroscopy. Unstable components in the coal were converted into stable components during the upgrading process. Thermo-gravimetric analysis showed that the combustion processes of upgraded coals were delayed toward the high-temperature region, the ignition and burnout temperatures increased, and the comprehensive combustion parameter decreased. Compared with raw brown coals, the upgraded coals exhibited weak combustion characteristics similar to bituminous coal. The changes in physicochemical characteristics became more notable when processing temperature increased from 130 °C to 160 °C or the rank of raw brown coal was lower. Microwave irradiation treatment could be considered as an effective dewatering and upgrading process
Influence of the hydrothermal dewatering on the combustion characteristics of Chinese low-rank coals
International Nuclear Information System (INIS)
Ge, Lichao; Zhang, Yanwei; Xu, Chang; Wang, Zhihua; Zhou, Junhu; Cen, Kefa
2015-01-01
This study investigates the influence of hydrothermal dewatering performed at different temperatures on the combustion characteristics of Chinese low-rank coals with different coalification maturities. It was found that the upgrading process significantly decreased the inherent moisture and oxygen content, increased the calorific value and fixed carbon content, and promoted the damage of the hydrophilic oxygen functional groups. The results of oxygen/carbon atomic ratio indicated that the upgrading process converted the low-rank coals near to high-rank coals which can also be gained using the Fourier transform infrared spectroscopy. The thermogravimetric analysis showed that the combustion processes of upgraded coals were delayed toward the high temperature region, and the upgraded coals had higher ignition and burnout temperature. On the other hand, based on the higher average combustion rate and comprehensive combustion parameter, the upgraded coals performed better compared with raw brown coals and the Da Tong bituminous coal. In ignition segment, the activation energy increased after treatment but decreased in the combustion stage. The changes in coal compositions, microstructure, rank, and combustion characteristics were more notable as the temperature in hydrothermal dewatering increased from 250 to 300 °C or coals of lower ranks were used. - Highlights: • Typical Chinese lignites with various ranks are upgraded by hydrothermal dewatering. • Upgraded coals exhibit chemical compositions comparable with that of bituminous coal. • FTIR show the change of microstructure and improvement in coal rank after upgrading. • Upgraded coals exhibit difficulty in ignition but combust easily. • More evident effects are obtained for raw brown coal with relative lower rank.
Pra Desain Pabrik Substitute Natural Gas (SNG dari Low Rank Coal
Directory of Open Access Journals (Sweden)
Asti Permatasari
2014-09-01
rendah dan sedang yang sangat banyak, yaitu masing-masing sebesar 2.426,00 juta ton dan 186,00 juta ton. Maka dari itu, pabrik SNG dari low rank coal ini akan didirikan di Kecamatan Ilir Timur, Sumatera Selatan. Rencananya pabrik ini akan didirikan pada tahun 2016 dan siap beroperasi pada tahun 2018. Diperkirakan konsumsi gas alam pada tahun 2018 sebesar 906.599,3 MMSCF sehingga pendirian pabrik yang baru diharapkan dapat menggantikan kebutuhan gas alam sebesar 4% di Indonesia, yaitu sebanyak 36.295,502 MMSCF per tahun atau sebesar 109.986 MMSCFD. Proses pembuatan SNG dari low rank coal terdiri dari empat proses utama, yaitu coal preparation, gasifikasi, gas cleaning, dan metanasi. Dari analisa perhitungan ekonomi didapat Investasi 823.947.924 USD, IRR sebesar 13,12%, POT selama 5 tahun, dan BEP sebesar 68,55%.
Zhang, Du; Su, Neil Qiang; Yang, Weitao
2017-07-20
The GW self-energy, especially G 0 W 0 based on the particle-hole random phase approximation (phRPA), is widely used to study quasiparticle (QP) energies. Motivated by the desirable features of the particle-particle (pp) RPA compared to the conventional phRPA, we explore the pp counterpart of GW, that is, the T-matrix self-energy, formulated with the eigenvectors and eigenvalues of the ppRPA matrix. We demonstrate the accuracy of the T-matrix method for molecular QP energies, highlighting the importance of the pp channel for calculating QP spectra.
Directory of Open Access Journals (Sweden)
Ken Yano
2016-01-01
Full Text Available This paper proposes a novel fixed low-rank spatial filter estimation for brain computer interface (BCI systems with an application that recognizes emotions elicited by movies. The proposed approach unifies such tasks as feature extraction, feature selection, and classification, which are often independently tackled in a “bottom-up” manner, under a regularized loss minimization problem. The loss function is explicitly derived from the conventional BCI approach and solves its minimization by optimization with a nonconvex fixed low-rank constraint. For evaluation, an experiment was conducted to induce emotions by movies for dozens of young adult subjects and estimated the emotional states using the proposed method. The advantage of the proposed method is that it combines feature selection, feature extraction, and classification into a monolithic optimization problem with a fixed low-rank regularization, which implicitly estimates optimal spatial filters. The proposed method shows competitive performance against the best CSP-based alternatives.
Accelerated cardiac cine MRI using locally low rank and finite difference constraints.
Miao, Xin; Lingala, Sajan Goud; Guo, Yi; Jao, Terrence; Usman, Muhammad; Prieto, Claudia; Nayak, Krishna S
2016-07-01
To evaluate the potential value of combining multiple constraints for highly accelerated cardiac cine MRI. A locally low rank (LLR) constraint and a temporal finite difference (FD) constraint were combined to reconstruct cardiac cine data from highly undersampled measurements. Retrospectively undersampled 2D Cartesian reconstructions were quantitatively evaluated against fully-sampled data using normalized root mean square error, structural similarity index (SSIM) and high frequency error norm (HFEN). This method was also applied to 2D golden-angle radial real-time imaging to facilitate single breath-hold whole-heart cine (12 short-axis slices, 9-13s single breath hold). Reconstruction was compared against state-of-the-art constrained reconstruction methods: LLR, FD, and k-t SLR. At 10 to 60 spokes/frame, LLR+FD better preserved fine structures and depicted myocardial motion with reduced spatio-temporal blurring in comparison to existing methods. LLR yielded higher SSIM ranking than FD; FD had higher HFEN ranking than LLR. LLR+FD combined the complimentary advantages of the two, and ranked the highest in all metrics for all retrospective undersampled cases. Single breath-hold multi-slice cardiac cine with prospective undersampling was enabled with in-plane spatio-temporal resolutions of 2×2mm(2) and 40ms. Highly accelerated cardiac cine is enabled by the combination of 2D undersampling and the synergistic use of LLR and FD constraints. Copyright © 2016 Elsevier Inc. All rights reserved.
Low-rank extremal positive-partial-transpose states and unextendible product bases
International Nuclear Information System (INIS)
Leinaas, Jon Magne; Sollid, Per Oyvind; Myrheim, Jan
2010-01-01
It is known how to construct, in a bipartite quantum system, a unique low-rank entangled mixed state with positive partial transpose (a PPT state) from an unextendible product basis (UPB), defined as an unextendible set of orthogonal product vectors. We point out that a state constructed in this way belongs to a continuous family of entangled PPT states of the same rank, all related by nonsingular unitary or nonunitary product transformations. The characteristic property of a state ρ in such a family is that its kernel Ker ρ has a generalized UPB, a basis of product vectors, not necessarily orthogonal, with no product vector in Im ρ, the orthogonal complement of Ker ρ. The generalized UPB in Ker ρ has the special property that it can be transformed to orthogonal form by a product transformation. In the case of a system of dimension 3x3, we give a complete parametrization of orthogonal UPBs. This is then a parametrization of families of rank 4 entangled (and extremal) PPT states, and we present strong numerical evidence that it is a complete classification of such states. We speculate that the lowest rank entangled and extremal PPT states also in higher dimensions are related to generalized, nonorthogonal UPBs in similar ways.
Low rank alternating direction method of multipliers reconstruction for MR fingerprinting.
Assländer, Jakob; Cloos, Martijn A; Knoll, Florian; Sodickson, Daniel K; Hennig, Jürgen; Lattanzi, Riccardo
2018-01-01
The proposed reconstruction framework addresses the reconstruction accuracy, noise propagation and computation time for magnetic resonance fingerprinting. Based on a singular value decomposition of the signal evolution, magnetic resonance fingerprinting is formulated as a low rank (LR) inverse problem in which one image is reconstructed for each singular value under consideration. This LR approximation of the signal evolution reduces the computational burden by reducing the number of Fourier transformations. Also, the LR approximation improves the conditioning of the problem, which is further improved by extending the LR inverse problem to an augmented Lagrangian that is solved by the alternating direction method of multipliers. The root mean square error and the noise propagation are analyzed in simulations. For verification, in vivo examples are provided. The proposed LR alternating direction method of multipliers approach shows a reduced root mean square error compared to the original fingerprinting reconstruction, to a LR approximation alone and to an alternating direction method of multipliers approach without a LR approximation. Incorporating sensitivity encoding allows for further artifact reduction. The proposed reconstruction provides robust convergence, reduced computational burden and improved image quality compared to other magnetic resonance fingerprinting reconstruction approaches evaluated in this study. Magn Reson Med 79:83-96, 2018. © 2017 International Society for Magnetic Resonance in Medicine. © 2017 International Society for Magnetic Resonance in Medicine.
International Nuclear Information System (INIS)
Valero Valero, Nelson; Rodriguez Salazar, Luz Nidia; Mancilla Gomez, Sandra; Contreras Bayona, Leydis
2012-01-01
Bacteria capable of low rank coal (LRC) biotransform were isolated from environmental samples altered with coal in the mine The Cerrejon. A protocol was designed to select strains more capable of LRC biotransform, the protocol includes isolation in a selective medium with LRC powder, qualitative and quantitative tests for LRC solubilization in solid and liquid culture medium. Of 75 bacterial strains isolated, 32 showed growth in minimal salts agar with 5 % carbon. The strains that produce higher values of humic substances (HS) have a mechanism of solubilization associated with pH changes in the culture medium, probably related to the production of extracellular alkaline substances by bacteria. The largest number of strains and bacteria with more solubilizing activity on LRC were isolated from sludge with high content of carbon residue and rhizosphere of Typha domingensis and Cenchrus ciliaris grown on sediments mixed with carbon particles, this result suggests that obtaining and solubilization capacity of LRC by bacteria may be related to the microhabitat where the populations originated.
Low rank approach to computing first and higher order derivatives using automatic differentiation
International Nuclear Information System (INIS)
Reed, J. A.; Abdel-Khalik, H. S.; Utke, J.
2012-01-01
This manuscript outlines a new approach for increasing the efficiency of applying automatic differentiation (AD) to large scale computational models. By using the principles of the Efficient Subspace Method (ESM), low rank approximations of the derivatives for first and higher orders can be calculated using minimized computational resources. The output obtained from nuclear reactor calculations typically has a much smaller numerical rank compared to the number of inputs and outputs. This rank deficiency can be exploited to reduce the number of derivatives that need to be calculated using AD. The effective rank can be determined according to ESM by computing derivatives with AD at random inputs. Reduced or pseudo variables are then defined and new derivatives are calculated with respect to the pseudo variables. Two different AD packages are used: OpenAD and Rapsodia. OpenAD is used to determine the effective rank and the subspace that contains the derivatives. Rapsodia is then used to calculate derivatives with respect to the pseudo variables for the desired order. The overall approach is applied to two simple problems and to MATWS, a safety code for sodium cooled reactors. (authors)
Structured Matrix Completion with Applications to Genomic Data Integration.
Cai, Tianxi; Cai, T Tony; Zhang, Anru
2016-01-01
Matrix completion has attracted significant recent attention in many fields including statistics, applied mathematics and electrical engineering. Current literature on matrix completion focuses primarily on independent sampling models under which the individual observed entries are sampled independently. Motivated by applications in genomic data integration, we propose a new framework of structured matrix completion (SMC) to treat structured missingness by design. Specifically, our proposed method aims at efficient matrix recovery when a subset of the rows and columns of an approximately low-rank matrix are observed. We provide theoretical justification for the proposed SMC method and derive lower bound for the estimation errors, which together establish the optimal rate of recovery over certain classes of approximately low-rank matrices. Simulation studies show that the method performs well in finite sample under a variety of configurations. The method is applied to integrate several ovarian cancer genomic studies with different extent of genomic measurements, which enables us to construct more accurate prediction rules for ovarian cancer survival.
Scoping Studies to Evaluate the Benefits of an Advanced Dry Feed System on the Use of Low-Rank Coal
Energy Technology Data Exchange (ETDEWEB)
Rader, Jeff; Aguilar, Kelly; Aldred, Derek; Chadwick, Ronald; Conchieri, John; Dara, Satyadileep; Henson, Victor; Leininger, Tom; Liber, Pawel; Liber, Pawel; Lopez-Nakazono, Benito; Pan, Edward; Ramirez, Jennifer; Stevenson, John; Venkatraman, Vignesh
2012-03-30
The purpose of this project was to evaluate the ability of advanced low rank coal gasification technology to cause a significant reduction in the COE for IGCC power plants with 90% carbon capture and sequestration compared with the COE for similarly configured IGCC plants using conventional low rank coal gasification technology. GE’s advanced low rank coal gasification technology uses the Posimetric Feed System, a new dry coal feed system based on GE’s proprietary Posimetric Feeder. In order to demonstrate the performance and economic benefits of the Posimetric Feeder in lowering the cost of low rank coal-fired IGCC power with carbon capture, two case studies were completed. In the Base Case, the gasifier was fed a dilute slurry of Montana Rosebud PRB coal using GE’s conventional slurry feed system. In the Advanced Technology Case, the slurry feed system was replaced with the Posimetric Feed system. The process configurations of both cases were kept the same, to the extent possible, in order to highlight the benefit of substituting the Posimetric Feed System for the slurry feed system.
Co-pyrolysis of low rank coals and biomass: Product distributions
Energy Technology Data Exchange (ETDEWEB)
Soncini, Ryan M.; Means, Nicholas C.; Weiland, Nathan T.
2013-10-01
Pyrolysis and gasification of combined low rank coal and biomass feeds are the subject of much study in an effort to mitigate the production of green house gases from integrated gasification combined cycle (IGCC) systems. While co-feeding has the potential to reduce the net carbon footprint of commercial gasification operations, the effects of co-feeding on kinetics and product distributions requires study to ensure the success of this strategy. Southern yellow pine was pyrolyzed in a semi-batch type drop tube reactor with either Powder River Basin sub-bituminous coal or Mississippi lignite at several temperatures and feed ratios. Product gas composition of expected primary constituents (CO, CO{sub 2}, CH{sub 4}, H{sub 2}, H{sub 2}O, and C{sub 2}H{sub 4}) was determined by in-situ mass spectrometry while minor gaseous constituents were determined using a GC-MS. Product distributions are fit to linear functions of temperature, and quadratic functions of biomass fraction, for use in computational co-pyrolysis simulations. The results are shown to yield significant nonlinearities, particularly at higher temperatures and for lower ranked coals. The co-pyrolysis product distributions evolve more tar, and less char, CH{sub 4}, and C{sub 2}H{sub 4}, than an additive pyrolysis process would suggest. For lignite co-pyrolysis, CO and H{sub 2} production are also reduced. The data suggests that evolution of hydrogen from rapid pyrolysis of biomass prevents the crosslinking of fragmented aromatic structures during coal pyrolysis to produce tar, rather than secondary char and light gases. Finally, it is shown that, for the two coal types tested, co-pyrolysis synergies are more significant as coal rank decreases, likely because the initial structure in these coals contains larger pores and smaller clusters of aromatic structures which are more readily retained as tar in rapid co-pyrolysis.
OXIDATION OF MERCURY ACROSS SCR CATALYSTS IN COAL-FIRED POWER PLANTS BURNING LOW RANK FUELS
Energy Technology Data Exchange (ETDEWEB)
Constance Senior; Temi Linjewile
2003-07-25
This is the first Quarterly Technical Report for DOE Cooperative Agreement No: DE-FC26-03NT41728. The objective of this program is to measure the oxidation of mercury in flue gas across SCR catalyst in a coal-fired power plant burning low rank fuels using a slipstream reactor containing multiple commercial catalysts in parallel. The Electric Power Research Institute (EPRI) and Ceramics GmbH are providing co-funding for this program. This program contains multiple tasks and good progress is being made on all fronts. During this quarter, analysis of the coal, ash and mercury speciation data from the first test series was completed. Good agreement was shown between different methods of measuring mercury in the flue gas: Ontario Hydro, semi-continuous emission monitor (SCEM) and coal composition. There was a loss of total mercury across the commercial catalysts, but not across the blank monolith. The blank monolith showed no oxidation. The data from the first test series show the same trend in mercury oxidation as a function of space velocity that has been seen elsewhere. At space velocities in the range of 6,000-7,000 hr{sup -1} the blank monolith did not show any mercury oxidation, with or without ammonia present. Two of the commercial catalysts clearly showed an effect of ammonia. Two other commercial catalysts showed an effect of ammonia, although the error bars for the no-ammonia case are large. A test plan was written for the second test series and is being reviewed.
Enhancement of dynamic myocardial perfusion PET images based on low-rank plus sparse decomposition.
Lu, Lijun; Ma, Xiaomian; Mohy-Ud-Din, Hassan; Ma, Jianhua; Feng, Qianjin; Rahmim, Arman; Chen, Wufan
2018-02-01
The absolute quantification of dynamic myocardial perfusion (MP) PET imaging is challenged by the limited spatial resolution of individual frame images due to division of the data into shorter frames. This study aims to develop a method for restoration and enhancement of dynamic PET images. We propose that the image restoration model should be based on multiple constraints rather than a single constraint, given the fact that the image characteristic is hardly described by a single constraint alone. At the same time, it may be possible, but not optimal, to regularize the image with multiple constraints simultaneously. Fortunately, MP PET images can be decomposed into a superposition of background vs. dynamic components via low-rank plus sparse (L + S) decomposition. Thus, we propose an L + S decomposition based MP PET image restoration model and express it as a convex optimization problem. An iterative soft thresholding algorithm was developed to solve the problem. Using realistic dynamic 82 Rb MP PET scan data, we optimized and compared its performance with other restoration methods. The proposed method resulted in substantial visual as well as quantitative accuracy improvements in terms of noise versus bias performance, as demonstrated in extensive 82 Rb MP PET simulations. In particular, the myocardium defect in the MP PET images had improved visual as well as contrast versus noise tradeoff. The proposed algorithm was also applied on an 8-min clinical cardiac 82 Rb MP PET study performed on the GE Discovery PET/CT, and demonstrated improved quantitative accuracy (CNR and SNR) compared to other algorithms. The proposed method is effective for restoration and enhancement of dynamic PET images. Copyright © 2017 Elsevier B.V. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
1989-12-31
This work is a compilation of reports on ongoing research at the University of North Dakota. Topics include: Control Technology and Coal Preparation Research (SO{sub x}/NO{sub x} control, waste management), Advanced Research and Technology Development (turbine combustion phenomena, combustion inorganic transformation, coal/char reactivity, liquefaction reactivity of low-rank coals, gasification ash and slag characterization, fine particulate emissions), Combustion Research (fluidized bed combustion, beneficiation of low-rank coals, combustion characterization of low-rank coal fuels, diesel utilization of low-rank coals), Liquefaction Research (low-rank coal direct liquefaction), and Gasification Research (hydrogen production from low-rank coals, advanced wastewater treatment, mild gasification, color and residual COD removal from Synfuel wastewaters, Great Plains Gasification Plant, gasifier optimization).
Ritz, S; Turzynski, A; Schütz, H W; Hollmann, A; Rochholz, G
1996-01-12
Age at death determination based on aspartic acid racemization in dentin has been applied successfully in forensic odontology for several years now. An age-dependent accumulation of D-aspartic acid has also recently been demonstrated in bone osteocalcin, one of the most abundant noncollagenous proteins of the organic bone matrix. Evaluation of these initial data on in vivo racemization of aspartic acid in bone osteocalcin was taken a step further. After purification of osteocalcin from 53 skull bone specimens, the extent of aspartic acid racemization in this peptide was determined. The D-aspartic acid content of purified bone osteocalcin exhibited a very close relationship to age at death. This confirmed identification of bone osteocalcin as a permanent, 'aging' peptide of the organic bone matrix. Its D-aspartic acid content may be used as a measure of its age and hence that of the entire organism. The new biochemical approach to determination of age at death by analyzing bone is complex and demanding from a methodologic point of view, but appears to be superior in precision and reproducibility to most other methods applicable to bone.
El Gharamti, Mohamad; Hoteit, Ibrahim; Sun, Shuyu
2012-01-01
Accurate knowledge of the movement of contaminants in porous media is essential to track their trajectory and later extract them from the aquifer. A two-dimensional flow model is implemented and then applied on a linear contaminant transport model
Directory of Open Access Journals (Sweden)
Chuan-yun LI
2011-12-01
Full Text Available Objective The present study investigates the influence of professional stress and social support on professional burnout among low-rank army officers.Methods The professional stress,social support,and professional burnout scales among low-rank army officers were used as test tools.Moreover,the officers of established units(battalion,company,and platoon were chosen as test subjects.Out of the 260 scales sent,226 effective scales were received.The descriptive statistic and canonical correlation analysis models were used to analyze the influence of each variable.Results The scores of low-rank army officers in the professional stress,social support,and professional burnout scales were more than average,except on two factors,namely,interpersonal support and de-individualization.The canonical analysis identified three groups of canonical correlation factors,of which two were up to a significant level(P < 0.001.After further eliminating the social support variable,the canonical correlation analysis of professional stress and burnout showed that the canonical correlation coefficients P corresponding to 1 and 2 were 0.62 and 0.36,respectively,and were up to a very significant level(P < 0.001.Conclusion The low-rank army officers experience higher professional stress and burnout levels,showing a lower sense of accomplishment,emotional exhaustion,and more serious depersonalization.However,social support can reduce the onset and seriousness of professional burnout among these officers by lessening pressure factors,such as career development,work features,salary conditions,and other personal factors.
Sun, B.; Yang, P.; Kattawar, G. W.; Zhang, X.
2017-12-01
The ice cloud single-scattering properties can be accurately simulated using the invariant-imbedding T-matrix method (IITM) and the physical-geometric optics method (PGOM). The IITM has been parallelized using the Message Passing Interface (MPI) method to remove the memory limitation so that the IITM can be used to obtain the single-scattering properties of ice clouds for sizes in the geometric optics regime. Furthermore, the results associated with random orientations can be analytically achieved once the T-matrix is given. The PGOM is also parallelized in conjunction with random orientations. The single-scattering properties of a hexagonal prism with height 400 (in units of lambda/2*pi, where lambda is the incident wavelength) and an aspect ratio of 1 (defined as the height over two times of bottom side length) are given by using the parallelized IITM and compared to the counterparts using the parallelized PGOM. The two results are in close agreement. Furthermore, the integrated single-scattering properties, including the asymmetry factor, the extinction cross-section, and the scattering cross-section, are given in a completed size range. The present results show a smooth transition from the exact IITM solution to the approximate PGOM result. Because the calculation of the IITM method has reached the geometric regime, the IITM and the PGOM can be efficiently employed to accurately compute the single-scattering properties of ice cloud in a wide spectral range.
Santos, Hugo M; Reboiro-Jato, Miguel; Glez-Peña, Daniel; Nunes-Miranda, J D; Fdez-Riverola, Florentino; Carvallo, R; Capelo, J L
2010-09-15
The decision peptide-driven tool implements a software application for assisting the user in a protocol for accurate protein quantification based on the following steps: (1) protein separation through gel electrophoresis; (2) in-gel protein digestion; (3) direct and inverse (18)O-labeling and (4) matrix assisted laser desorption ionization time of flight mass spectrometry, MALDI analysis. The DPD software compares the MALDI results of the direct and inverse (18)O-labeling experiments and quickly identifies those peptides with paralleled loses in different sets of a typical proteomic workflow. Those peptides are used for subsequent accurate protein quantification. The interpretation of the MALDI data from direct and inverse labeling experiments is time-consuming requiring a significant amount of time to do all comparisons manually. The DPD software shortens and simplifies the searching of the peptides that must be used for quantification from a week to just some minutes. To do so, it takes as input several MALDI spectra and aids the researcher in an automatic mode (i) to compare data from direct and inverse (18)O-labeling experiments, calculating the corresponding ratios to determine those peptides with paralleled losses throughout different sets of experiments; and (ii) allow to use those peptides as internal standards for subsequent accurate protein quantification using (18)O-labeling. In this work the DPD software is presented and explained with the quantification of protein carbonic anhydrase. Copyright (c) 2010 Elsevier B.V. All rights reserved.
Geogenic organic contaminants in the low-rank coal-bearing Carrizo-Wilcox aquifer of East Texas, USA
Chakraborty, Jayeeta; Varonka, Matthew S.; Orem, William H.; Finkelman, Robert B.; Manton, William
2017-01-01
The organic composition of groundwater along the Carrizo-Wilcox aquifer in East Texas (USA), sampled from rural wells in May and September 2015, was examined as part of a larger study of the potential health and environmental effects of organic compounds derived from low-rank coals. The quality of water from the low-rank coal-bearing Carrizo-Wilcox aquifer is a potential environmental concern and no detailed studies of the organic compounds in this aquifer have been published. Organic compounds identified in the water samples included: aliphatics and their fatty acid derivatives, phenols, biphenyls, N-, O-, and S-containing heterocyclic compounds, polycyclic aromatic hydrocarbons (PAHs), aromatic amines, and phthalates. Many of the identified organic compounds (aliphatics, phenols, heterocyclic compounds, PAHs) are geogenic and originated from groundwater leaching of young and unmetamorphosed low-rank coals. Estimated concentrations of individual compounds ranged from about 3.9 to 0.01 μg/L. In many rural areas in East Texas, coal strata provide aquifers for drinking water wells. Organic compounds observed in groundwater are likely to be present in drinking water supplied from wells that penetrate the coal. Some of the organic compounds identified in the water samples are potentially toxic to humans, but at the estimated levels in these samples, the compounds are unlikely to cause acute health problems. The human health effects of low-level chronic exposure to coal-derived organic compounds in drinking water in East Texas are currently unknown, and continuing studies will evaluate possible toxicity.
A Direct Elliptic Solver Based on Hierarchically Low-Rank Schur Complements
Chávez, Gustavo
2017-03-17
A parallel fast direct solver for rank-compressible block tridiagonal linear systems is presented. Algorithmic synergies between Cyclic Reduction and Hierarchical matrix arithmetic operations result in a solver with O(Nlog2N) arithmetic complexity and O(NlogN) memory footprint. We provide a baseline for performance and applicability by comparing with well-known implementations of the $$\\\\mathcal{H}$$ -LU factorization and algebraic multigrid within a shared-memory parallel environment that leverages the concurrency features of the method. Numerical experiments reveal that this method is comparable with other fast direct solvers based on Hierarchical Matrices such as $$\\\\mathcal{H}$$ -LU and that it can tackle problems where algebraic multigrid fails to converge.
Hierarchical matrix techniques for the solution of elliptic equations
Chá vez, Gustavo; Turkiyyah, George; Yokota, Rio; Keyes, David E.
2014-01-01
Hierarchical matrix approximations are a promising tool for approximating low-rank matrices given the compactness of their representation and the economy of the operations between them. Integral and differential operators have been the major
Energy Technology Data Exchange (ETDEWEB)
Ohtsuka, Y.; Asami, K. [Tokoku University, Sendai (Japan). Inst. for Chemical Reaction Science
1996-03-01
Interactions between CaCO{sub 3} and low-rank coals were examined, and the steam gasification of the resulting Ca-loaded coals was carried out at 973 K with a thermobalance. Chemical analysis and FT-IR spectra show that CaCO{sub 3} can react readily with COOH groups to form ion-exchanged Ca and CO{sub 2} when mixed with brown coal in water at room temperature. The extent of the exchange is dependent on the crystalline form of CaCO{sub 3}, and higher for aragonite naturally present in seashells and coral reef than for calcite from limestone. The FT-IR spectra reveal that ion-exchange reactions also proceed during kneading CaCO{sub 3} with low-rank coals. The exchanged Ca promotes gasification and achieves 40-60 fold rate enhancement for brown coal with a lower content of inherent minerals. Catalyst effectiveness of kneaded CaCO{sub 3} depends on the coal type, in other words, the extent of ion exchange. 11 refs., 7 figs., 3 tabs.
Energy Technology Data Exchange (ETDEWEB)
Sugiyama, T [Center for Coal Utilization, Japan, Tokyo (Japan); Tsurui, M; Suto, Y; Asakura, M [JGC Corp., Tokyo (Japan); Ogawa, J; Yui, M; Takano, S [Japan COM Co. Ltd., Japan, Tokyo (Japan)
1996-09-01
A CWM manufacturing technology was developed by means of upgrading low rank coals. Even though some low rank coals have such advantages as low ash, low sulfur and high volatile matter content, many of them are merely used on a small scale in areas near the mine-mouths because of high moisture content, low calorification and high ignitability. Therefore, discussions were given on a coal fuel manufacturing technology by which coal will be irreversibly dehydrated with as much volatile matters as possible remaining in the coal, and the coal is made high-concentration CWM, thus the coal can be safely transported and stored. The technology uses a method to treat coal with hot water under high pressure and dry it with hot water. The method performs not only removal of water, but also irreversible dehydration without losing volatile matters by decomposing hydrophilic groups on surface and blocking micro pores with volatile matters in the coal (wax and tar). The upgrading effect was verified by processing coals in a pilot plant, which derived greater calorification and higher concentration CWM than with the conventional processes. A CWM combustion test proved lower NOx, lower SOx and higher combustion rate than for bituminous coal. The ash content was also found lower. This process suits a Texaco-type gasification furnace. For a production scale of three million tons a year, the production cost is lower by 2 yen per 10 {sup 3} kcal than for heavy oil with the same sulfur content. 11 figs., 15 tabs.
Leclerc, Arnaud; Thomas, Phillip S.; Carrington, Tucker
2017-08-01
Vibrational spectra and wavefunctions of polyatomic molecules can be calculated at low memory cost using low-rank sum-of-product (SOP) decompositions to represent basis functions generated using an iterative eigensolver. Using a SOP tensor format does not determine the iterative eigensolver. The choice of the interative eigensolver is limited by the need to restrict the rank of the SOP basis functions at every stage of the calculation. We have adapted, implemented and compared different reduced-rank algorithms based on standard iterative methods (block-Davidson algorithm, Chebyshev iteration) to calculate vibrational energy levels and wavefunctions of the 12-dimensional acetonitrile molecule. The effect of using low-rank SOP basis functions on the different methods is analysed and the numerical results are compared with those obtained with the reduced rank block power method. Relative merits of the different algorithms are presented, showing that the advantage of using a more sophisticated method, although mitigated by the use of reduced-rank SOP functions, is noticeable in terms of CPU time.
Das, Siddhartha; Siopsis, George; Weedbrook, Christian
2018-02-01
With the significant advancement in quantum computation during the past couple of decades, the exploration of machine-learning subroutines using quantum strategies has become increasingly popular. Gaussian process regression is a widely used technique in supervised classical machine learning. Here we introduce an algorithm for Gaussian process regression using continuous-variable quantum systems that can be realized with technology based on photonic quantum computers under certain assumptions regarding distribution of data and availability of efficient quantum access. Our algorithm shows that by using a continuous-variable quantum computer a dramatic speedup in computing Gaussian process regression can be achieved, i.e., the possibility of exponentially reducing the time to compute. Furthermore, our results also include a continuous-variable quantum-assisted singular value decomposition method of nonsparse low rank matrices and forms an important subroutine in our Gaussian process regression algorithm.
Energy Technology Data Exchange (ETDEWEB)
Takarada, Y; Kato, K; Kuroda, M; Nakagawa, N [Gunma University, Gunma (Japan). Faculty of Engineering; Roman, M [New Energy and Industrial Technology Development Organization, Tokyo, (Japan)
1997-02-01
Experiment reveals the characteristics of low rank coal serving as a desulfurizing material in fluidized coal bed reactor with oxygen-containing functional groups exchanged with Ca ions. This effort aims at identifying inexpensive Ca materials and determining the desulfurizing characteristics of Ca-carrying brown coal. A slurry of cement sludge serving as a Ca source and low rank coal is agitated for the exchange of functional groups and Ca ions, and the desulfurizing characteristics of the Ca-carrying brown coal is determined. The Ca-carrying brown coal and high-sulfur coal char is mixed and incinerated in a fluidized bed reactor, and it is found that a desulfurization rate of 75% is achieved when the Ca/S ratio is 1 in the desulfurization of SO2. This rate is far higher than the rate obtained when limestone or cement sludge without preliminary treatment is used as a desulfurizer. Next, Ca-carrying brown coal and H2S are caused to react upon each other in a fixed bed reactor, and then it is found that desulfurization characteristics are not dependent on the diameter of the Ca-carrying brown coal grain, that the coal is different from limestone in that it stays quite active against H2S for long 40 minutes after the start of the reaction, and that CaO small in crystal diameter is dispersed in quantities into the char upon thermal disintegration of Ca-carrying brown coal to cause the coal to say quite active. 5 figs.
Alanio, A; Beretti, J-L; Dauphin, B; Mellado, E; Quesne, G; Lacroix, C; Amara, A; Berche, P; Nassif, X; Bougnoux, M-E
2011-05-01
New Aspergillus species have recently been described with the use of multilocus sequencing in refractory cases of invasive aspergillosis. The classical phenotypic identification methods routinely used in clinical laboratories failed to identify them adequately. Some of these Aspergillus species have specific patterns of susceptibility to antifungal agents, and misidentification may lead to inappropriate therapy. We developed a matrix-assisted laser desorption ionization time-of-flight (MALDI-TOF) mass spectrometry (MS)-based strategy to adequately identify Aspergillus species to the species level. A database including the reference spectra of 28 clinically relevant species from seven Aspergillus sections (five common and 23 unusual species) was engineered. The profiles of young and mature colonies were analysed for each reference strain, and species-specific spectral fingerprints were identified. The performance of the database was then tested on 124 clinical and 16 environmental isolates previously characterized by partial sequencing of the β-tubulin and calmodulin genes. One hundred and thirty-eight isolates of 140 (98.6%) were correctly identified. Two atypical isolates could not be identified, but no isolate was misidentified (specificity: 100%). The database, including species-specific spectral fingerprints of young and mature colonies of the reference strains, allowed identification regardless of the maturity of the clinical isolate. These results indicate that MALDI-TOF MS is a powerful tool for rapid and accurate identification of both common and unusual species of Aspergillus. It can give better results than morphological identification in clinical laboratories. © 2010 The Authors. Clinical Microbiology and Infection © 2010 European Society of Clinical Microbiology and Infectious Diseases.
Directory of Open Access Journals (Sweden)
Mahidin Mahidin
2016-08-01
Full Text Available Calcium oxide-based material is available abundantly and naturally. A potential resource of that material comes from marine mollusk shell such as clams, scallops, mussels, oysters, winkles and nerites. The CaO-based material has exhibited a good performance as the desulfurizer oradsorbent in coal combustion in order to reduce SO2 emission. In this study, pulverized green mussel shell, without calcination, was utilized as the desulfurizer in the briquette produced from a mixture of low rank coal and palm kernel shell (PKS, also known as bio-briquette. The ratio ofcoal to PKS in the briquette was 90:10 (wt/wt. The influence of green mussel shell contents and combustion temperature were examined to prove the possible use of that materialas a desulfurizer. The ratio of Ca to S (Ca = calcium content in desulfurizer; S = sulfur content in briquette werefixed at 1:1, 1.25:1, 1.5:1, 1.75:1, and 2:1 (mole/mole. The burning (or desulfurization temperature range was 300-500 °C; the reaction time was 720 seconds and the air flow rate was 1.2 L/min. The results showed that green mussel shell can be introduced as a desulfurizer in coal briquette or bio-briquette combustions. The desulfurization process using that desulfurizer exhibited the first order reaction and the highest average efficiency of 84.5%.
Energy Technology Data Exchange (ETDEWEB)
Mastral, A.M.; Perez-Surio, M.J.; Palacios, J.M. [CSIC, Zaragoza (Spain). Inst. de Carboquimica
1998-05-01
The paper discusses the thermal and chemical changes taking place on a low rank coal when it is subjected to hydropyrolysis conditions with Red Mud as the catalytic precursor. For each run, 5 g of coal were pyrolysed in a swept fixed bed reactor at 40 kg/cm{sup 2} hydrogen pressure. The variables of the process were: temperatures ranging from 400 to 600{degree}C; 0.5 and 2 l/min of hydrogen flow; 10 and 30 min residence time; and in the presence and absence of Red Mud. Conversion products distribution and a wide battery of complementary analyses allow information to be gathered regarding the changes undergone by the coal structure, both in its organic and inorganic components, in its conversion into liquids and chars. From the data obtained, it can be deduced that: (1) at 400{degree}C the iron catalyst is not active; (2) at higher temperatures iron catalytic cracking is observed more than hydrogenating activity, due to the Fe{sub 2}O{sub 3} transformation into (Fe{sub 3}S{sub 4}) crystallographically as spinel; (3) in this coal hydropyrolysis one third of the coal is converted into liquids; and (4) Red Mud helps to reduce sulfur emissions by H{sub 2}S fixation as Fe{sub 3}S{sub 4}. 10 refs., 5 figs., 5 tabs.
Scoping Studies to Evaluate the Benefits of an Advanced Dry Feed System on the Use of Low-Rank Coal
Energy Technology Data Exchange (ETDEWEB)
Rader, Jeff; Aguilar, Kelly; Aldred, Derek; Chadwick, Ronald; Conchieri,; Dara, Satyadileep; Henson, Victor; Leininger, Tom; Liber, Pawel; Nakazono, Benito; Pan, Edward; Ramirez, Jennifer; Stevenson, John; Venkatraman, Vignesh
2012-11-30
This report describes the development of the design of an advanced dry feed system that was carried out under Task 4.0 of Cooperative Agreement DE-FE0007902 with the US DOE, “Scoping Studies to Evaluate the Benefits of an Advanced Dry Feed System on the use of Low- Rank Coal.” The resulting design will be used for the advanced technology IGCC case with 90% carbon capture for sequestration to be developed under Task 5.0 of the same agreement. The scope of work covered coal preparation and feeding up through the gasifier injector. Subcomponents have been broken down into feed preparation (including grinding and drying), low pressure conveyance, pressurization, high pressure conveyance, and injection. Pressurization of the coal feed is done using Posimetric1 Feeders sized for the application. In addition, a secondary feed system is described for preparing and feeding slag additive and recycle fines to the gasifier injector. This report includes information on the basis for the design, requirements for down selection of the key technologies used, the down selection methodology and the final, down selected design for the Posimetric Feed System, or PFS.
Directory of Open Access Journals (Sweden)
Ryan Wen Liu
2017-03-01
Full Text Available Dynamic magnetic resonance imaging (MRI has been extensively utilized for enhancing medical living environment visualization, however, in clinical practice it often suffers from long data acquisition times. Dynamic imaging essentially reconstructs the visual image from raw (k,t-space measurements, commonly referred to as big data. The purpose of this work is to accelerate big medical data acquisition in dynamic MRI by developing a non-convex minimization framework. In particular, to overcome the inherent speed limitation, both non-convex low-rank and sparsity constraints were combined to accelerate the dynamic imaging. However, the non-convex constraints make the dynamic reconstruction problem difficult to directly solve through the commonly-used numerical methods. To guarantee solution efficiency and stability, a numerical algorithm based on Alternating Direction Method of Multipliers (ADMM is proposed to solve the resulting non-convex optimization problem. ADMM decomposes the original complex optimization problem into several simple sub-problems. Each sub-problem has a closed-form solution or could be efficiently solved using existing numerical methods. It has been proven that the quality of images reconstructed from fewer measurements can be significantly improved using non-convex minimization. Numerous experiments have been conducted on two in vivo cardiac datasets to compare the proposed method with several state-of-the-art imaging methods. Experimental results illustrated that the proposed method could guarantee the superior imaging performance in terms of quantitative and visual image quality assessments.
Directory of Open Access Journals (Sweden)
Mahidin Mahidin
2012-12-01
Full Text Available NOx and N2O emissions from coal combustion are claimed as the major contributors for the acid rain, photochemical smog, green house and ozone depletion problems. Based on the facts, study on those emissions formation is interest topic in the combustion area. In this paper, theoretical study by modeling and simulation on NOx and N2O formation in co-combustion of low-rank coal and palm kernel shell has been done. Combustion model was developed by using the principle of chemical-reaction equilibrium. Simulation on the model in order to evaluate the composition of the flue gas was performed by minimization the Gibbs free energy. The results showed that by introduced of biomass in coal combustion can reduce the NOx concentration in considerably level. Maximum NO level in co-combustion of low-rank coal and palm kernel shell with fuel composition 1:1 is 2,350 ppm, low enough compared to single low-rank coal combustion up to 3,150 ppm. Moreover, N2O is less than 0.25 ppm in all cases. Keywords: low-rank coal, N2O emission, NOx emission, palm kernel shell
Directory of Open Access Journals (Sweden)
Zutao Zhang
2016-06-01
Full Text Available Environmental perception and information processing are two key steps of active safety for vehicle reversing. Single-sensor environmental perception cannot meet the need for vehicle reversing safety due to its low reliability. In this paper, we present a novel multi-sensor environmental perception method using low-rank representation and a particle filter for vehicle reversing safety. The proposed system consists of four main steps, namely multi-sensor environmental perception, information fusion, target recognition and tracking using low-rank representation and a particle filter, and vehicle reversing speed control modules. First of all, the multi-sensor environmental perception module, based on a binocular-camera system and ultrasonic range finders, obtains the distance data for obstacles behind the vehicle when the vehicle is reversing. Secondly, the information fusion algorithm using an adaptive Kalman filter is used to process the data obtained with the multi-sensor environmental perception module, which greatly improves the robustness of the sensors. Then the framework of a particle filter and low-rank representation is used to track the main obstacles. The low-rank representation is used to optimize an objective particle template that has the smallest L-1 norm. Finally, the electronic throttle opening and automatic braking is under control of the proposed vehicle reversing control strategy prior to any potential collisions, making the reversing control safer and more reliable. The final system simulation and practical testing results demonstrate the validity of the proposed multi-sensor environmental perception method using low-rank representation and a particle filter for vehicle reversing safety.
Energy Technology Data Exchange (ETDEWEB)
Konakli, Katerina, E-mail: konakli@ibk.baug.ethz.ch; Sudret, Bruno
2016-09-15
The growing need for uncertainty analysis of complex computational models has led to an expanding use of meta-models across engineering and sciences. The efficiency of meta-modeling techniques relies on their ability to provide statistically-equivalent analytical representations based on relatively few evaluations of the original model. Polynomial chaos expansions (PCE) have proven a powerful tool for developing meta-models in a wide range of applications; the key idea thereof is to expand the model response onto a basis made of multivariate polynomials obtained as tensor products of appropriate univariate polynomials. The classical PCE approach nevertheless faces the “curse of dimensionality”, namely the exponential increase of the basis size with increasing input dimension. To address this limitation, the sparse PCE technique has been proposed, in which the expansion is carried out on only a few relevant basis terms that are automatically selected by a suitable algorithm. An alternative for developing meta-models with polynomial functions in high-dimensional problems is offered by the newly emerged low-rank approximations (LRA) approach. By exploiting the tensor–product structure of the multivariate basis, LRA can provide polynomial representations in highly compressed formats. Through extensive numerical investigations, we herein first shed light on issues relating to the construction of canonical LRA with a particular greedy algorithm involving a sequential updating of the polynomial coefficients along separate dimensions. Specifically, we examine the selection of optimal rank, stopping criteria in the updating of the polynomial coefficients and error estimation. In the sequel, we confront canonical LRA to sparse PCE in structural-mechanics and heat-conduction applications based on finite-element solutions. Canonical LRA exhibit smaller errors than sparse PCE in cases when the number of available model evaluations is small with respect to the input
International Nuclear Information System (INIS)
Konakli, Katerina; Sudret, Bruno
2016-01-01
The growing need for uncertainty analysis of complex computational models has led to an expanding use of meta-models across engineering and sciences. The efficiency of meta-modeling techniques relies on their ability to provide statistically-equivalent analytical representations based on relatively few evaluations of the original model. Polynomial chaos expansions (PCE) have proven a powerful tool for developing meta-models in a wide range of applications; the key idea thereof is to expand the model response onto a basis made of multivariate polynomials obtained as tensor products of appropriate univariate polynomials. The classical PCE approach nevertheless faces the “curse of dimensionality”, namely the exponential increase of the basis size with increasing input dimension. To address this limitation, the sparse PCE technique has been proposed, in which the expansion is carried out on only a few relevant basis terms that are automatically selected by a suitable algorithm. An alternative for developing meta-models with polynomial functions in high-dimensional problems is offered by the newly emerged low-rank approximations (LRA) approach. By exploiting the tensor–product structure of the multivariate basis, LRA can provide polynomial representations in highly compressed formats. Through extensive numerical investigations, we herein first shed light on issues relating to the construction of canonical LRA with a particular greedy algorithm involving a sequential updating of the polynomial coefficients along separate dimensions. Specifically, we examine the selection of optimal rank, stopping criteria in the updating of the polynomial coefficients and error estimation. In the sequel, we confront canonical LRA to sparse PCE in structural-mechanics and heat-conduction applications based on finite-element solutions. Canonical LRA exhibit smaller errors than sparse PCE in cases when the number of available model evaluations is small with respect to the input
Cheng, Jiubing
2014-08-05
In elastic imaging, the extrapolated vector fields are decomposed into pure wave modes, such that the imaging condition produces interpretable images, which characterize reflectivity of different reflection types. Conventionally, wavefield decomposition in anisotropic media is costly as the operators involved is dependent on the velocity, and thus not stationary. In this abstract, we propose an efficient approach to directly extrapolate the decomposed elastic waves using lowrank approximate mixed space/wavenumber domain integral operators for heterogeneous transverse isotropic (TI) media. The low-rank approximation is, thus, applied to the pseudospectral extrapolation and decomposition at the same time. The pseudo-spectral implementation also allows for relatively large time steps in which the low-rank approximation is applied. Synthetic examples show that it can yield dispersionfree extrapolation of the decomposed quasi-P (qP) and quasi- SV (qSV) modes, which can be used for imaging, as well as the total elastic wavefields.
Rai, Prashant; Sargsyan, Khachik; Najm, Habib; Hermes, Matthew R.; Hirata, So
2017-09-01
A new method is proposed for a fast evaluation of high-dimensional integrals of potential energy surfaces (PES) that arise in many areas of quantum dynamics. It decomposes a PES into a canonical low-rank tensor format, reducing its integral into a relatively short sum of products of low-dimensional integrals. The decomposition is achieved by the alternating least squares (ALS) algorithm, requiring only a small number of single-point energy evaluations. Therefore, it eradicates a force-constant evaluation as the hotspot of many quantum dynamics simulations and also possibly lifts the curse of dimensionality. This general method is applied to the anharmonic vibrational zero-point and transition energy calculations of molecules using the second-order diagrammatic vibrational many-body Green's function (XVH2) theory with a harmonic-approximation reference. In this application, high dimensional PES and Green's functions are both subjected to a low-rank decomposition. Evaluating the molecular integrals over a low-rank PES and Green's functions as sums of low-dimensional integrals using the Gauss-Hermite quadrature, this canonical-tensor-decomposition-based XVH2 (CT-XVH2) achieves an accuracy of 0.1 cm-1 or higher and nearly an order of magnitude speedup as compared with the original algorithm using force constants for water and formaldehyde.
International Nuclear Information System (INIS)
Rai, Prashant; Sargsyan, Khachik; Najm, Habib; Hermes, Matthew R.; Hirata, So
2017-01-01
Here, a new method is proposed for a fast evaluation of high-dimensional integrals of potential energy surfaces (PES) that arise in many areas of quantum dynamics. It decomposes a PES into a canonical low-rank tensor format, reducing its integral into a relatively short sum of products of low-dimensional integrals. The decomposition is achieved by the alternating least squares (ALS) algorithm, requiring only a small number of single-point energy evaluations. Therefore, it eradicates a force-constant evaluation as the hotspot of many quantum dynamics simulations and also possibly lifts the curse of dimensionality. This general method is applied to the anharmonic vibrational zero-point and transition energy calculations of molecules using the second-order diagrammatic vibrational many-body Green's function (XVH2) theory with a harmonic-approximation reference. In this application, high dimensional PES and Green's functions are both subjected to a low-rank decomposition. Evaluating the molecular integrals over a low-rank PES and Green's functions as sums of low-dimensional integrals using the Gauss–Hermite quadrature, this canonical-tensor-decomposition-based XVH2 (CT-XVH2) achieves an accuracy of 0.1 cm -1 or higher and nearly an order of magnitude speedup as compared with the original algorithm using force constants for water and formaldehyde.
Cioslowski, Jerzy; Strasburger, Krzysztof
2018-04-01
Electronic properties of several states of the five- and six-electron harmonium atoms are obtained from large-scale calculations employing explicitly correlated basis functions. The high accuracy of the computed energies (including their components), natural spinorbitals, and their occupation numbers makes them suitable for testing, calibration, and benchmarking of approximate formalisms of quantum chemistry and solid state physics. In the case of the five-electron species, the availability of the new data for a wide range of the confinement strengths ω allows for confirmation and generalization of the previously reached conclusions concerning the performance of the presently known approximations for the electron-electron repulsion energy in terms of the 1-matrix that are at heart of the density matrix functional theory (DMFT). On the other hand, the properties of the three low-lying states of the six-electron harmonium atom, computed at ω = 500 and ω = 1000, uncover deficiencies of the 1-matrix functionals not revealed by previous studies. In general, the previously published assessment of the present implementations of DMFT being of poor accuracy is found to hold. Extending the present work to harmonically confined systems with even more electrons is most likely counterproductive as the steep increase in computational cost required to maintain sufficient accuracy of the calculated properties is not expected to be matched by the benefits of additional information gathered from the resulting benchmarks.
Salient Object Detection via Structured Matrix Decomposition.
Peng, Houwen; Li, Bing; Ling, Haibin; Hu, Weiming; Xiong, Weihua; Maybank, Stephen J
2016-05-04
Low-rank recovery models have shown potential for salient object detection, where a matrix is decomposed into a low-rank matrix representing image background and a sparse matrix identifying salient objects. Two deficiencies, however, still exist. First, previous work typically assumes the elements in the sparse matrix are mutually independent, ignoring the spatial and pattern relations of image regions. Second, when the low-rank and sparse matrices are relatively coherent, e.g., when there are similarities between the salient objects and background or when the background is complicated, it is difficult for previous models to disentangle them. To address these problems, we propose a novel structured matrix decomposition model with two structural regularizations: (1) a tree-structured sparsity-inducing regularization that captures the image structure and enforces patches from the same object to have similar saliency values, and (2) a Laplacian regularization that enlarges the gaps between salient objects and the background in feature space. Furthermore, high-level priors are integrated to guide the matrix decomposition and boost the detection. We evaluate our model for salient object detection on five challenging datasets including single object, multiple objects and complex scene images, and show competitive results as compared with 24 state-of-the-art methods in terms of seven performance metrics.
Matrix factorization-based data fusion for the prediction of lncRNA-disease associations.
Fu, Guangyuan; Wang, Jun; Domeniconi, Carlotta; Yu, Guoxian
2018-05-01
Long non-coding RNAs (lncRNAs) play crucial roles in complex disease diagnosis, prognosis, prevention and treatment, but only a small portion of lncRNA-disease associations have been experimentally verified. Various computational models have been proposed to identify lncRNA-disease associations by integrating heterogeneous data sources. However, existing models generally ignore the intrinsic structure of data sources or treat them as equally relevant, while they may not be. To accurately identify lncRNA-disease associations, we propose a Matrix Factorization based LncRNA-Disease Association prediction model (MFLDA in short). MFLDA decomposes data matrices of heterogeneous data sources into low-rank matrices via matrix tri-factorization to explore and exploit their intrinsic and shared structure. MFLDA can select and integrate the data sources by assigning different weights to them. An iterative solution is further introduced to simultaneously optimize the weights and low-rank matrices. Next, MFLDA uses the optimized low-rank matrices to reconstruct the lncRNA-disease association matrix and thus to identify potential associations. In 5-fold cross validation experiments to identify verified lncRNA-disease associations, MFLDA achieves an area under the receiver operating characteristic curve (AUC) of 0.7408, at least 3% higher than those given by state-of-the-art data fusion based computational models. An empirical study on identifying masked lncRNA-disease associations again shows that MFLDA can identify potential associations more accurately than competing models. A case study on identifying lncRNAs associated with breast, lung and stomach cancers show that 38 out of 45 (84%) associations predicted by MFLDA are supported by recent biomedical literature and further proves the capability of MFLDA in identifying novel lncRNA-disease associations. MFLDA is a general data fusion framework, and as such it can be adopted to predict associations between other biological
Energy Technology Data Exchange (ETDEWEB)
Oki, A.; Xie, X.; Nakajima, T.; Maeda, S. [Kagoshima University, Kagoshima (Japan). Faculty of Engineering
1996-10-28
With an objective to learn mechanisms in low-rank coal reformation processes, change of properties on coal surface was discussed. Difficulty in handling low-rank coal is attributed to large intrinsic water content. Since it contains highly volatile components, it has a danger of spontaneous ignition. The hot water drying (HWD) method was used for reformation. Coal which has been dry-pulverized to a grain size of 1 mm or smaller was mixed with water to make slurry, heated in an autoclave, cooled, filtered, and dried in vacuum. The HWD applied to Loy Yang and Yallourn coals resulted in rapid rise in pressure starting from about 250{degree}C. Water content (ANA value) absorbed into the coal has decreased largely, with the surface made hydrophobic effectively due to high temperature and pressure. Hydroxyl group and carbonyl group contents in the coal have decreased largely with rising reformation treatment temperature (according to FT-IR measurement). Specific surface area of the original coal of the Loy Yang coal was 138 m{sup 2}/g, while it has decreased largely to 73 m{sup 2}/g when the reformation temperature was raised to 350{degree}C. This is because of volatile components dissolving from the coal as tar and blocking the surface pores. 2 refs., 4 figs.
Energy Technology Data Exchange (ETDEWEB)
Wu, Z.; Otsuka, Y. [Tohoku University, Sendai (Japan). Institute for Chemical Reaction Science
1996-10-28
In order to establish coal NOx preventive measures, discussions were given on formation of N2 in the fixed-bed pyrolysis of low rank coals and the mechanisms thereof. Chinese ZN coal and German RB coal were used for the discussions. Both coals do not produce N2 at 600{degree}C, and the main product is volatile nitrogen. Conversion into N2 does not depend on heating rates, but increases linearly with increasing temperature, and reaches 65% to 70% at 1200{degree}C. In contrast, char nitrogen decreases linearly with the temperature. More specifically, these phenomena suggest that the char nitrogen or its precursor is the major supply source of N2. When mineral substances are removed by using hydrochloric acid, their catalytic action is lost, and conversion into N2 decreases remarkably. Iron existing in ion-exchanged condition in low-rank coal is reduced and finely diffused into metallic iron particles. The particles react with heterocyclic nitrogen compounds and turn into iron nitride. A solid phase reaction mechanism may be conceived, in which N2 is produced due to decomposition of the iron nitride. 5 refs., 4 figs., 1 tab.
Directory of Open Access Journals (Sweden)
Zullaikah Siti
2018-01-01
Full Text Available The utilization of Indonesia low rank coal should be maximized, since the source of Indonesia law rank coals were abundant. Pyrolysis of this coal can produce liquid product which can be utilized as fuel and chemical feedstocks. The yield of liquid product is still low due to lower of comparison H/C. Since coal is non-renewable source, an effort of coal saving and to mitigate the production of greenhouse gases, biomass such as oil palm empty fruit bunch (EFB would added as co-feeding. EFB could act as hydrogen donor in co-pyrolysis to increase liquid product. Co-pyrolysis of Indonesia low rank coal and EFB were studied in a drop tube reactor under the certain temperature (t= 500 °C and time (t= 1 h used N2 as purge gas. The effect of blending ratios of coal/EFB (100/0, 75/25, 50/50, 25/75 and 0/100%, w/w % on the yield and composition of liquid product were studied systematically. The results showed that the higher blending ratio, the yield of liquid product and gas obtained increased, while the char decreased. The highest yield of liquid product (28,62 % was obtained used blending ratio of coal/EFB = 25/75, w/w%. Tar composition obtained in this ratio is phenol, polycyclic aromatic hydrocarbons, alkanes, acids, esters.
Wu, Zhiqiang; Yang, Wangcai; Yang, Bolun
2018-02-01
In this work, the influence of Nannochloropsis and Chlorella on the thermal behavior and surface morphology of char during the co-pyrolysis process were explored. Thermogravimetric and iso-conversional methods were applied to analyzing the pyrolytic and kinetic characteristics for different mass ratios of microalgae and low-rank coal (0, 3:1, 1:1, 1:3 and 1). Fractal theory was used to quantitatively determine the effect of microalgae on the morphological texture of co-pyrolysis char. The result indicated that both the Nannochloropsis and Chlorella promoted the release of volatile from low-rank coal. Different synergistic effects on the thermal parameters and yield of volatile were observed, which could be attributed to the different compositions in the Nannochloropsis and Chlorella and operating condition. The distribution of activation energies shows nonadditive characteristics. Fractal dimensions of the co-pyrolysis char were higher than the individual char, indicating the promotion of disordered degree due to the addition of microalgae. Copyright © 2017 Elsevier Ltd. All rights reserved.
International Nuclear Information System (INIS)
Xu, Q; Liu, H; Xing, L; Yu, H; Wang, G
2016-01-01
Purpose: Spectral CT enabled by an energy-resolved photon-counting detector outperforms conventional CT in terms of material discrimination, contrast resolution, etc. One reconstruction method for spectral CT is to generate a color image from a reconstructed component in each energy channel. However, given the radiation dose, the number of photons in each channel is limited, which will result in strong noise in each channel and affect the final color reconstruction. Here we propose a novel dictionary learning method for spectral CT that combines dictionary-based sparse representation method and the patch based low-rank constraint to simultaneously improve the reconstruction in each channel and to address the inter-channel correlations to further improve the reconstruction. Methods: The proposed method has two important features: (1) guarantee of the patch based sparsity in each energy channel, which is the result of the dictionary based sparse representation constraint; (2) the explicit consideration of the correlations among different energy channels, which is realized by patch-by-patch nuclear norm-based low-rank constraint. For each channel, the dictionary consists of two sub-dictionaries. One is learned from the average of the images in all energy channels, and the other is learned from the average of the images in all energy channels except the current channel. With average operation to reduce noise, these two dictionaries can effectively preserve the structural details and get rid of artifacts caused by noise. Combining them together can express all structural information in current channel. Results: Dictionary learning based methods can obtain better results than FBP and the TV-based method. With low-rank constraint, the image quality can be further improved in the channel with more noise. The final color result by the proposed method has the best visual quality. Conclusion: The proposed method can effectively improve the image quality of low-dose spectral
Energy Technology Data Exchange (ETDEWEB)
Xu, Q [Xi’an Jiaotong University, Xi’an (China); Stanford University School of Medicine, Stanford, CA (United States); Liu, H; Xing, L [Stanford University School of Medicine, Stanford, CA (United States); Yu, H [University of Massachusetts Lowell, Lowell, MA (United States); Wang, G [Rensselaer Polytechnic Instute., Troy, NY (United States)
2016-06-15
Purpose: Spectral CT enabled by an energy-resolved photon-counting detector outperforms conventional CT in terms of material discrimination, contrast resolution, etc. One reconstruction method for spectral CT is to generate a color image from a reconstructed component in each energy channel. However, given the radiation dose, the number of photons in each channel is limited, which will result in strong noise in each channel and affect the final color reconstruction. Here we propose a novel dictionary learning method for spectral CT that combines dictionary-based sparse representation method and the patch based low-rank constraint to simultaneously improve the reconstruction in each channel and to address the inter-channel correlations to further improve the reconstruction. Methods: The proposed method has two important features: (1) guarantee of the patch based sparsity in each energy channel, which is the result of the dictionary based sparse representation constraint; (2) the explicit consideration of the correlations among different energy channels, which is realized by patch-by-patch nuclear norm-based low-rank constraint. For each channel, the dictionary consists of two sub-dictionaries. One is learned from the average of the images in all energy channels, and the other is learned from the average of the images in all energy channels except the current channel. With average operation to reduce noise, these two dictionaries can effectively preserve the structural details and get rid of artifacts caused by noise. Combining them together can express all structural information in current channel. Results: Dictionary learning based methods can obtain better results than FBP and the TV-based method. With low-rank constraint, the image quality can be further improved in the channel with more noise. The final color result by the proposed method has the best visual quality. Conclusion: The proposed method can effectively improve the image quality of low-dose spectral
Energy Technology Data Exchange (ETDEWEB)
NONE
1997-03-01
The objective of this study is to develop a new efficient desulfurization technique using a Ca ion-exchanged coal prepared from low rank coal and calcium raw material as a SO{sub 2} sorbent. Ion-exchange of calcium was carried out by soaking and mixing brown coal particles in milk of lime or slurry of industrial waste from concrete manufacture process. About 10wt% of Ca was easily incorporated into Yallourn coal. The ion-exchanged Ca was transformed to ultra-fine CaO particles upon pyrolysis of coal. The reactivity of CaO produced from Ca-exchanged coal to SO{sub 2} was extraordinary high and the CaO utilization of above 90% was easily achieved, while the conversion of natural limestone was less than 30% under the similar experimental conditions. High activity of Ca-exchanged coal was appreciably observed in a pressurized fluidized bed combustor. Ca-exchanged coal was quite effective for the removal of hydrogen sulfide. (NEDO)
International Nuclear Information System (INIS)
Xu, Cheng; Bai, Pu; Xin, Tuantuan; Hu, Yue; Xu, Gang; Yang, Yongping
2017-01-01
Highlights: •An improved solar energy integrated LRC fired power generation is proposed. •High efficient and economic feasible solar energy conversion is achieved. •Cold-end losses of the boiler and condenser are reduced. •The energy and exergy efficiencies of the overall system are improved. -- Abstract: A novel solar energy integrated low-rank coal (LRC) fired power generation using coal pre-drying and an absorption heat pump (AHP) was proposed. The proposed integrated system efficiently utilizes the solar energy collected from the parabolic trough to drive the AHP to absorb the low-grade waste heat of the steam cycle, achieving larger amount of heat with suitable temperature for coal’s moisture removal prior to the furnace. Through employing the proposed system, the solar energy could be partially converted into the high-grade coal’s heating value and the cold-end losses of the boiler and the steam cycle could be reduced simultaneously, leading to a high-efficient solar energy conversion together with a preferable overall thermal efficiency of the power generation. The results of the detailed thermodynamic and economic analyses showed that, using the proposed integrated concept in a typical 600 MW LRC-fired power plant could reduce the raw coal consumption by 4.6 kg/s with overall energy and exergy efficiencies improvement of 1.2 and 1.8 percentage points, respectively, as 73.0 MW th solar thermal energy was introduced. The cost of the solar generated electric power could be as low as $0.044/kW h. This work provides an improved concept to further advance the solar energy conversion and utilisation in solar-hybrid coal-fired power generation.
Gregor, Ivan; Dröge, Johannes; Schirmer, Melanie; Quince, Christopher; McHardy, Alice C
2016-01-01
Background. Metagenomics is an approach for characterizing environmental microbial communities in situ, it allows their functional and taxonomic characterization and to recover sequences from uncultured taxa. This is often achieved by a combination of sequence assembly and binning, where sequences are grouped into 'bins' representing taxa of the underlying microbial community. Assignment to low-ranking taxonomic bins is an important challenge for binning methods as is scalability to Gb-sized datasets generated with deep sequencing techniques. One of the best available methods for species bins recovery from deep-branching phyla is the expert-trained PhyloPythiaS package, where a human expert decides on the taxa to incorporate in the model and identifies 'training' sequences based on marker genes directly from the sample. Due to the manual effort involved, this approach does not scale to multiple metagenome samples and requires substantial expertise, which researchers who are new to the area do not have. Results. We have developed PhyloPythiaS+, a successor to our PhyloPythia(S) software. The new (+) component performs the work previously done by the human expert. PhyloPythiaS+ also includes a new k-mer counting algorithm, which accelerated the simultaneous counting of 4-6-mers used for taxonomic binning 100-fold and reduced the overall execution time of the software by a factor of three. Our software allows to analyze Gb-sized metagenomes with inexpensive hardware, and to recover species or genera-level bins with low error rates in a fully automated fashion. PhyloPythiaS+ was compared to MEGAN, taxator-tk, Kraken and the generic PhyloPythiaS model. The results showed that PhyloPythiaS+ performs especially well for samples originating from novel environments in comparison to the other methods. Availability. PhyloPythiaS+ in a virtual machine is available for installation under Windows, Unix systems or OS X on: https://github.com/algbioi/ppsp/wiki.
Matrix and Tensor Completion on a Human Activity Recognition Framework.
Savvaki, Sofia; Tsagkatakis, Grigorios; Panousopoulou, Athanasia; Tsakalides, Panagiotis
2017-11-01
Sensor-based activity recognition is encountered in innumerable applications of the arena of pervasive healthcare and plays a crucial role in biomedical research. Nonetheless, the frequent situation of unobserved measurements impairs the ability of machine learning algorithms to efficiently extract context from raw streams of data. In this paper, we study the problem of accurate estimation of missing multimodal inertial data and we propose a classification framework that considers the reconstruction of subsampled data during the test phase. We introduce the concept of forming the available data streams into low-rank two-dimensional (2-D) and 3-D Hankel structures, and we exploit data redundancies using sophisticated imputation techniques, namely matrix and tensor completion. Moreover, we examine the impact of reconstruction on the classification performance by experimenting with several state-of-the-art classifiers. The system is evaluated with respect to different data structuring scenarios, the volume of data available for reconstruction, and various levels of missing values per device. Finally, the tradeoff between subsampling accuracy and energy conservation in wearable platforms is examined. Our analysis relies on two public datasets containing inertial data, which extend to numerous activities, multiple sensing parameters, and body locations. The results highlight that robust classification accuracy can be achieved through recovery, even for extremely subsampled data streams.
Directory of Open Access Journals (Sweden)
Ivan Gregor
2016-02-01
Full Text Available Background. Metagenomics is an approach for characterizing environmental microbial communities in situ, it allows their functional and taxonomic characterization and to recover sequences from uncultured taxa. This is often achieved by a combination of sequence assembly and binning, where sequences are grouped into ‘bins’ representing taxa of the underlying microbial community. Assignment to low-ranking taxonomic bins is an important challenge for binning methods as is scalability to Gb-sized datasets generated with deep sequencing techniques. One of the best available methods for species bins recovery from deep-branching phyla is the expert-trained PhyloPythiaS package, where a human expert decides on the taxa to incorporate in the model and identifies ‘training’ sequences based on marker genes directly from the sample. Due to the manual effort involved, this approach does not scale to multiple metagenome samples and requires substantial expertise, which researchers who are new to the area do not have. Results. We have developed PhyloPythiaS+, a successor to our PhyloPythia(S software. The new (+ component performs the work previously done by the human expert. PhyloPythiaS+ also includes a new k-mer counting algorithm, which accelerated the simultaneous counting of 4–6-mers used for taxonomic binning 100-fold and reduced the overall execution time of the software by a factor of three. Our software allows to analyze Gb-sized metagenomes with inexpensive hardware, and to recover species or genera-level bins with low error rates in a fully automated fashion. PhyloPythiaS+ was compared to MEGAN, taxator-tk, Kraken and the generic PhyloPythiaS model. The results showed that PhyloPythiaS+ performs especially well for samples originating from novel environments in comparison to the other methods. Availability. PhyloPythiaS+ in a virtual machine is available for installation under Windows, Unix systems or OS X on: https://github.com/algbioi/ppsp/wiki.
Hierarchical matrix techniques for the solution of elliptic equations
Chávez, Gustavo
2014-05-04
Hierarchical matrix approximations are a promising tool for approximating low-rank matrices given the compactness of their representation and the economy of the operations between them. Integral and differential operators have been the major applications of this technology, but they can be applied into other areas where low-rank properties exist. Such is the case of the Block Cyclic Reduction algorithm, which is used as a direct solver for the constant-coefficient Poisson quation. We explore the variable-coefficient case, also using Block Cyclic reduction, with the addition of Hierarchical Matrices to represent matrix blocks, hence improving the otherwise O(N2) algorithm, into an efficient O(N) algorithm.
Saleem, M
2002-01-01
The Unitarity of the CKM matrix is examined in the light of the latest available accurate data. The analysis shows that a conclusive result cannot be derived at present. Only more precise data can determine whether the CKM matrix opens new vistas beyond the standard model or not.
Gao, Pengzhi; Wang, Meng; Chow, Joe H.; Ghiocel, Scott G.; Fardanesh, Bruce; Stefopoulos, George; Razanousky, Michael P.
2016-11-01
This paper presents a new framework of identifying a series of cyber data attacks on power system synchrophasor measurements. We focus on detecting "unobservable" cyber data attacks that cannot be detected by any existing method that purely relies on measurements received at one time instant. Leveraging the approximate low-rank property of phasor measurement unit (PMU) data, we formulate the identification problem of successive unobservable cyber attacks as a matrix decomposition problem of a low-rank matrix plus a transformed column-sparse matrix. We propose a convex-optimization-based method and provide its theoretical guarantee in the data identification. Numerical experiments on actual PMU data from the Central New York power system and synthetic data are conducted to verify the effectiveness of the proposed method.
Galler, Patrick; Limbeck, Andreas; Boulyga, Sergei F; Stingeder, Gerhard; Hirata, Takafumi; Prohaska, Thomas
2007-07-01
This work introduces a newly developed on-line flow injection (FI) Sr/Rb separation method as an alternative to the common, manual Sr/matrix batch separation procedure, since total analysis time is often limited by sample preparation despite the fast rate of data acquisition possible by inductively coupled plasma-mass spectrometers (ICPMS). Separation columns containing approximately 100 muL of Sr-specific resin were used for on-line FI Sr/matrix separation with subsequent determination of (87)Sr/(86)Sr isotope ratios by multiple collector ICPMS. The occurrence of memory effects exhibited by the Sr-specific resin, a major restriction to the repetitive use of this costly material, could successfully be overcome. The method was fully validated by means of certified reference materials. A set of two biological and six geological Sr- and Rb-bearing samples was successfully characterized for its (87)Sr/(86)Sr isotope ratios with precisions of 0.01-0.04% 2 RSD (n = 5-10). Based on our measurements we suggest (87)Sr/(86)Sr isotope ratios of 0.713 15 +/- 0.000 16 (2 SD) and 0.709 31 +/- 0.000 06 (2 SD) for the NIST SRM 1400 bone ash and the NIST SRM 1486 bone meal, respectively. Measured (87)Sr/(86)Sr isotope ratios for five basalt samples are in excellent agreement with published data with deviations from the published value ranging from 0 to 0.03%. A mica sample with a Rb/Sr ratio of approximately 1 was successfully characterized for its (87)Sr/(86)Sr isotope signature to be 0.718 24 +/- 0.000 29 (2 SD) by the proposed method. Synthetic samples with Rb/Sr ratios of up to 10/1 could successfully be measured without significant interferences on mass 87, which would otherwise bias the accuracy and uncertainty of the obtained data.
Extensions of linear-quadratic control, optimization and matrix theory
Jacobson, David H
1977-01-01
In this book, we study theoretical and practical aspects of computing methods for mathematical modelling of nonlinear systems. A number of computing techniques are considered, such as methods of operator approximation with any given accuracy; operator interpolation techniques including a non-Lagrange interpolation; methods of system representation subject to constraints associated with concepts of causality, memory and stationarity; methods of system representation with an accuracy that is the best within a given class of models; methods of covariance matrix estimation;methods for low-rank mat
Franklin, Joel N
2003-01-01
Mathematically rigorous introduction covers vector and matrix norms, the condition-number of a matrix, positive and irreducible matrices, much more. Only elementary algebra and calculus required. Includes problem-solving exercises. 1968 edition.
International Nuclear Information System (INIS)
Krenciglowa, E.M.; Kung, C.L.; Kuo, T.T.S.; Osnes, E.; and Department of Physics, State University of New York at Stony Brook, Stony Brook, New York 11794)
1976-01-01
Different definitions of the reaction matrix G appropriate to the calculation of nuclear structure are reviewed and discussed. Qualitative physical arguments are presented in support of a two-step calculation of the G-matrix for finite nuclei. In the first step the high-energy excitations are included using orthogonalized plane-wave intermediate states, and in the second step the low-energy excitations are added in, using harmonic oscillator intermediate states. Accurate calculations of G-matrix elements for nuclear structure calculations in the Aapprox. =18 region are performed following this procedure and treating the Pauli exclusion operator Q 2 /sub p/ by the method of Tsai and Kuo. The treatment of Q 2 /sub p/, the effect of the intermediate-state spectrum and the energy dependence of the reaction matrix are investigated in detail. The present matrix elements are compared with various matrix elements given in the literature. In particular, close agreement is obtained with the matrix elements calculated by Kuo and Brown using approximate methods
Spectrally accurate contour dynamics
International Nuclear Information System (INIS)
Van Buskirk, R.D.; Marcus, P.S.
1994-01-01
We present an exponentially accurate boundary integral method for calculation the equilibria and dynamics of piece-wise constant distributions of potential vorticity. The method represents contours of potential vorticity as a spectral sum and solves the Biot-Savart equation for the velocity by spectrally evaluating a desingularized contour integral. We use the technique in both an initial-value code and a newton continuation method. Our methods are tested by comparing the numerical solutions with known analytic results, and it is shown that for the same amount of computational work our spectral methods are more accurate than other contour dynamics methods currently in use
Batched QR and SVD Algorithms on GPUs with Applications in Hierarchical Matrix Compression
Halim Boukaram, Wajih
2017-09-14
We present high performance implementations of the QR and the singular value decomposition of a batch of small matrices hosted on the GPU with applications in the compression of hierarchical matrices. The one-sided Jacobi algorithm is used for its simplicity and inherent parallelism as a building block for the SVD of low rank blocks using randomized methods. We implement multiple kernels based on the level of the GPU memory hierarchy in which the matrices can reside and show substantial speedups against streamed cuSOLVER SVDs. The resulting batched routine is a key component of hierarchical matrix compression, opening up opportunities to perform H-matrix arithmetic efficiently on GPUs.
Batched QR and SVD Algorithms on GPUs with Applications in Hierarchical Matrix Compression
Halim Boukaram, Wajih; Turkiyyah, George; Ltaief, Hatem; Keyes, David E.
2017-01-01
We present high performance implementations of the QR and the singular value decomposition of a batch of small matrices hosted on the GPU with applications in the compression of hierarchical matrices. The one-sided Jacobi algorithm is used for its simplicity and inherent parallelism as a building block for the SVD of low rank blocks using randomized methods. We implement multiple kernels based on the level of the GPU memory hierarchy in which the matrices can reside and show substantial speedups against streamed cuSOLVER SVDs. The resulting batched routine is a key component of hierarchical matrix compression, opening up opportunities to perform H-matrix arithmetic efficiently on GPUs.
Bodewig, E
1959-01-01
Matrix Calculus, Second Revised and Enlarged Edition focuses on systematic calculation with the building blocks of a matrix and rows and columns, shunning the use of individual elements. The publication first offers information on vectors, matrices, further applications, measures of the magnitude of a matrix, and forms. The text then examines eigenvalues and exact solutions, including the characteristic equation, eigenrows, extremum properties of the eigenvalues, bounds for the eigenvalues, elementary divisors, and bounds for the determinant. The text ponders on approximate solutions, as well
Accurate quantum chemical calculations
Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.
1989-01-01
An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics are discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed: these are then applied to a number of chemical and spectroscopic problems; to transition metals; and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.
Exploiting Data Sparsity for Large-Scale Matrix Computations
Akbudak, Kadir; Ltaief, Hatem; Mikhalev, Aleksandr; Charara, Ali; Keyes, David E.
2018-01-01
Exploiting data sparsity in dense matrices is an algorithmic bridge between architectures that are increasingly memory-austere on a per-core basis and extreme-scale applications. The Hierarchical matrix Computations on Manycore Architectures (HiCMA) library tackles this challenging problem by achieving significant reductions in time to solution and memory footprint, while preserving a specified accuracy requirement of the application. HiCMA provides a high-performance implementation on distributed-memory systems of one of the most widely used matrix factorization in large-scale scientific applications, i.e., the Cholesky factorization. It employs the tile low-rank data format to compress the dense data-sparse off-diagonal tiles of the matrix. It then decomposes the matrix computations into interdependent tasks and relies on the dynamic runtime system StarPU for asynchronous out-of-order scheduling, while allowing high user-productivity. Performance comparisons and memory footprint on matrix dimensions up to eleven million show a performance gain and memory saving of more than an order of magnitude for both metrics on thousands of cores, against state-of-the-art open-source and vendor optimized numerical libraries. This represents an important milestone in enabling large-scale matrix computations toward solving big data problems in geospatial statistics for climate/weather forecasting applications.
Exploiting Data Sparsity for Large-Scale Matrix Computations
Akbudak, Kadir
2018-02-24
Exploiting data sparsity in dense matrices is an algorithmic bridge between architectures that are increasingly memory-austere on a per-core basis and extreme-scale applications. The Hierarchical matrix Computations on Manycore Architectures (HiCMA) library tackles this challenging problem by achieving significant reductions in time to solution and memory footprint, while preserving a specified accuracy requirement of the application. HiCMA provides a high-performance implementation on distributed-memory systems of one of the most widely used matrix factorization in large-scale scientific applications, i.e., the Cholesky factorization. It employs the tile low-rank data format to compress the dense data-sparse off-diagonal tiles of the matrix. It then decomposes the matrix computations into interdependent tasks and relies on the dynamic runtime system StarPU for asynchronous out-of-order scheduling, while allowing high user-productivity. Performance comparisons and memory footprint on matrix dimensions up to eleven million show a performance gain and memory saving of more than an order of magnitude for both metrics on thousands of cores, against state-of-the-art open-source and vendor optimized numerical libraries. This represents an important milestone in enabling large-scale matrix computations toward solving big data problems in geospatial statistics for climate/weather forecasting applications.
International Nuclear Information System (INIS)
Craps, Ben; Evnin, Oleg; Nguyen, Kévin
2017-01-01
Matrix quantum mechanics offers an attractive environment for discussing gravitational holography, in which both sides of the holographic duality are well-defined. Similarly to higher-dimensional implementations of holography, collapsing shell solutions in the gravitational bulk correspond in this setting to thermalization processes in the dual quantum mechanical theory. We construct an explicit, fully nonlinear supergravity solution describing a generic collapsing dilaton shell, specify the holographic renormalization prescriptions necessary for computing the relevant boundary observables, and apply them to evaluating thermalizing two-point correlation functions in the dual matrix theory.
Craps, Ben; Evnin, Oleg; Nguyen, Kévin
2017-02-01
Matrix quantum mechanics offers an attractive environment for discussing gravitational holography, in which both sides of the holographic duality are well-defined. Similarly to higher-dimensional implementations of holography, collapsing shell solutions in the gravitational bulk correspond in this setting to thermalization processes in the dual quantum mechanical theory. We construct an explicit, fully nonlinear supergravity solution describing a generic collapsing dilaton shell, specify the holographic renormalization prescriptions necessary for computing the relevant boundary observables, and apply them to evaluating thermalizing two-point correlation functions in the dual matrix theory.
Energy Technology Data Exchange (ETDEWEB)
Craps, Ben [Theoretische Natuurkunde, Vrije Universiteit Brussel (VUB), and International Solvay Institutes, Pleinlaan 2, B-1050 Brussels (Belgium); Evnin, Oleg [Department of Physics, Faculty of Science, Chulalongkorn University, Thanon Phayathai, Pathumwan, Bangkok 10330 (Thailand); Theoretische Natuurkunde, Vrije Universiteit Brussel (VUB), and International Solvay Institutes, Pleinlaan 2, B-1050 Brussels (Belgium); Nguyen, Kévin [Theoretische Natuurkunde, Vrije Universiteit Brussel (VUB), and International Solvay Institutes, Pleinlaan 2, B-1050 Brussels (Belgium)
2017-02-08
Matrix quantum mechanics offers an attractive environment for discussing gravitational holography, in which both sides of the holographic duality are well-defined. Similarly to higher-dimensional implementations of holography, collapsing shell solutions in the gravitational bulk correspond in this setting to thermalization processes in the dual quantum mechanical theory. We construct an explicit, fully nonlinear supergravity solution describing a generic collapsing dilaton shell, specify the holographic renormalization prescriptions necessary for computing the relevant boundary observables, and apply them to evaluating thermalizing two-point correlation functions in the dual matrix theory.
A Rank-Constrained Matrix Representation for Hypergraph-Based Subspace Clustering
Directory of Open Access Journals (Sweden)
Yubao Sun
2015-01-01
Full Text Available This paper presents a novel, rank-constrained matrix representation combined with hypergraph spectral analysis to enable the recovery of the original subspace structures of corrupted data. Real-world data are frequently corrupted with both sparse error and noise. Our matrix decomposition model separates the low-rank, sparse error, and noise components from the data in order to enhance robustness to the corruption. In order to obtain the desired rank representation of the data within a dictionary, our model directly utilizes rank constraints by restricting the upper bound of the rank range. An alternative projection algorithm is proposed to estimate the low-rank representation and separate the sparse error from the data matrix. To further capture the complex relationship between data distributed in multiple subspaces, we use hypergraph to represent the data by encapsulating multiple related samples into one hyperedge. The final clustering result is obtained by spectral decomposition of the hypergraph Laplacian matrix. Validation experiments on the Extended Yale Face Database B, AR, and Hopkins 155 datasets show that the proposed method is a promising tool for subspace clustering.
Link Prediction via Convex Nonnegative Matrix Factorization on Multiscale Blocks
Directory of Open Access Journals (Sweden)
Enming Dong
2014-01-01
Full Text Available Low rank matrices approximations have been used in link prediction for networks, which are usually global optimal methods and lack of using the local information. The block structure is a significant local feature of matrices: entities in the same block have similar values, which implies that links are more likely to be found within dense blocks. We use this insight to give a probabilistic latent variable model for finding missing links by convex nonnegative matrix factorization with block detection. The experiments show that this method gives better prediction accuracy than original method alone. Different from the original low rank matrices approximations methods for link prediction, the sparseness of solutions is in accord with the sparse property for most real complex networks. Scaling to massive size network, we use the block information mapping matrices onto distributed architectures and give a divide-and-conquer prediction method. The experiments show that it gives better results than common neighbors method when the networks have a large number of missing links.
Qian, Weixian; Zhou, Xiaojun; Lu, Yingcheng; Xu, Jiang
2015-09-15
Both the Jones and Mueller matrices encounter difficulties when physically modeling mixed materials or rough surfaces due to the complexity of light-matter interactions. To address these issues, we derived a matrix called the paths correlation matrix (PCM), which is a probabilistic mixture of Jones matrices of every light propagation path. Because PCM is related to actual light propagation paths, it is well suited for physical modeling. Experiments were performed, and the reflection PCM of a mixture of polypropylene and graphite was measured. The PCM of the mixed sample was accurately decomposed into pure polypropylene's single reflection, pure graphite's single reflection, and depolarization caused by multiple reflections, which is consistent with the theoretical derivation. Reflection parameters of rough surface can be calculated from PCM decomposition, and the results fit well with the theoretical calculations provided by the Fresnel equations. These theoretical and experimental analyses verify that PCM is an efficient way to physically model light-matter interactions.
Zhan, Xingzhi
2002-01-01
The main purpose of this monograph is to report on recent developments in the field of matrix inequalities, with emphasis on useful techniques and ingenious ideas. Among other results this book contains the affirmative solutions of eight conjectures. Many theorems unify or sharpen previous inequalities. The author's aim is to streamline the ideas in the literature. The book can be read by research workers, graduate students and advanced undergraduates.
Matrix completion-based reconstruction for undersampled magnetic resonance fingerprinting data.
Doneva, Mariya; Amthor, Thomas; Koken, Peter; Sommer, Karsten; Börnert, Peter
2017-09-01
An iterative reconstruction method for undersampled magnetic resonance fingerprinting data is presented. The method performs the reconstruction entirely in k-space and is related to low rank matrix completion methods. A low dimensional data subspace is estimated from a small number of k-space locations fully sampled in the temporal direction and used to reconstruct the missing k-space samples before MRF dictionary matching. Performing the iterations in k-space eliminates the need for applying a forward and an inverse Fourier transform in each iteration required in previously proposed iterative reconstruction methods for undersampled MRF data. A projection onto the low dimensional data subspace is performed as a matrix multiplication instead of a singular value thresholding typically used in low rank matrix completion, further reducing the computational complexity of the reconstruction. The method is theoretically described and validated in phantom and in-vivo experiments. The quality of the parameter maps can be significantly improved compared to direct matching on undersampled data. Copyright © 2017 Elsevier Inc. All rights reserved.
Bhatia, Rajendra
1997-01-01
A good part of matrix theory is functional analytic in spirit. This statement can be turned around. There are many problems in operator theory, where most of the complexities and subtleties are present in the finite-dimensional case. My purpose in writing this book is to present a systematic treatment of methods that are useful in the study of such problems. This book is intended for use as a text for upper division and gradu ate courses. Courses based on parts of the material have been given by me at the Indian Statistical Institute and at the University of Toronto (in collaboration with Chandler Davis). The book should also be useful as a reference for research workers in linear algebra, operator theory, mathe matical physics and numerical analysis. A possible subtitle of this book could be Matrix Inequalities. A reader who works through the book should expect to become proficient in the art of deriving such inequalities. Other authors have compared this art to that of cutting diamonds. One first has to...
Sparse subspace clustering for data with missing entries and high-rank matrix completion.
Fan, Jicong; Chow, Tommy W S
2017-09-01
Many methods have recently been proposed for subspace clustering, but they are often unable to handle incomplete data because of missing entries. Using matrix completion methods to recover missing entries is a common way to solve the problem. Conventional matrix completion methods require that the matrix should be of low-rank intrinsically, but most matrices are of high-rank or even full-rank in practice, especially when the number of subspaces is large. In this paper, a new method called Sparse Representation with Missing Entries and Matrix Completion is proposed to solve the problems of incomplete-data subspace clustering and high-rank matrix completion. The proposed algorithm alternately computes the matrix of sparse representation coefficients and recovers the missing entries of a data matrix. The proposed algorithm recovers missing entries through minimizing the representation coefficients, representation errors, and matrix rank. Thorough experimental study and comparative analysis based on synthetic data and natural images were conducted. The presented results demonstrate that the proposed algorithm is more effective in subspace clustering and matrix completion compared with other existing methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
Manifold regularized matrix completion for multi-label learning with ADMM.
Liu, Bin; Li, Yingming; Xu, Zenglin
2018-05-01
Multi-label learning is a common machine learning problem arising from numerous real-world applications in diverse fields, e.g, natural language processing, bioinformatics, information retrieval and so on. Among various multi-label learning methods, the matrix completion approach has been regarded as a promising approach to transductive multi-label learning. By constructing a joint matrix comprising the feature matrix and the label matrix, the missing labels of test samples are regarded as missing values of the joint matrix. With the low-rank assumption of the constructed joint matrix, the missing labels can be recovered by minimizing its rank. Despite its success, most matrix completion based approaches ignore the smoothness assumption of unlabeled data, i.e., neighboring instances should also share a similar set of labels. Thus they may under exploit the intrinsic structures of data. In addition, the matrix completion problem can be less efficient. To this end, we propose to efficiently solve the multi-label learning problem as an enhanced matrix completion model with manifold regularization, where the graph Laplacian is used to ensure the label smoothness over it. To speed up the convergence of our model, we develop an efficient iterative algorithm, which solves the resulted nuclear norm minimization problem with the alternating direction method of multipliers (ADMM). Experiments on both synthetic and real-world data have shown the promising results of the proposed approach. Copyright © 2018 Elsevier Ltd. All rights reserved.
Belitsky, A. V.
2017-10-01
The Operator Product Expansion for null polygonal Wilson loop in planar maximally supersymmetric Yang-Mills theory runs systematically in terms of multi-particle pentagon transitions which encode the physics of excitations propagating on the color flux tube ending on the sides of the four-dimensional contour. Their dynamics was unraveled in the past several years and culminated in a complete description of pentagons as an exact function of the 't Hooft coupling. In this paper we provide a solution for the last building block in this program, the SU(4) matrix structure arising from internal symmetry indices of scalars and fermions. This is achieved by a recursive solution of the Mirror and Watson equations obeyed by the so-called singlet pentagons and fixing the form of the twisted component in their tensor decomposition. The non-singlet, or charged, pentagons are deduced from these by a limiting procedure.
Directory of Open Access Journals (Sweden)
A.V. Belitsky
2017-10-01
Full Text Available The Operator Product Expansion for null polygonal Wilson loop in planar maximally supersymmetric Yang–Mills theory runs systematically in terms of multi-particle pentagon transitions which encode the physics of excitations propagating on the color flux tube ending on the sides of the four-dimensional contour. Their dynamics was unraveled in the past several years and culminated in a complete description of pentagons as an exact function of the 't Hooft coupling. In this paper we provide a solution for the last building block in this program, the SU(4 matrix structure arising from internal symmetry indices of scalars and fermions. This is achieved by a recursive solution of the Mirror and Watson equations obeyed by the so-called singlet pentagons and fixing the form of the twisted component in their tensor decomposition. The non-singlet, or charged, pentagons are deduced from these by a limiting procedure.
General approach for accurate resonance analysis in transformer windings
Popov, M.
2018-01-01
In this paper, resonance effects in transformer windings are thoroughly investigated and analyzed. The resonance is determined by making use of an accurate approach based on the application of the impedance matrix of a transformer winding. The method is validated by a test coil and the numerical
Matrix Factorisation-based Calibration For Air Quality Crowd-sensing
Dorffer, Clement; Puigt, Matthieu; Delmaire, Gilles; Roussel, Gilles; Rouvoy, Romain; Sagnier, Isabelle
2017-04-01
sensors share some information using the APISENSE® crowdsensing platform and we aim to calibrate the sensor responses from the data directly. For that purpose, we express the sensor readings as a low-rank matrix with missing entries and we revisit self-calibration as a Matrix Factorization (MF) problem. In our proposed framework, one factor matrix contains the calibration parameters while the other is structured by the calibration model and contains some values of the sensed phenomenon. The MF calibration approach also uses the precise measurements from ATMO—the French public institution—to drive the calibration of the mobile sensors. MF calibration can be improved using, e.g., the mean calibration parameters provided by the sensor manufacturers, or using sparse priors or a model of the physical phenomenon. All our approaches are shown to provide a better calibration accuracy than matrix-completion-based and robust-regression-based methods, even in difficult scenarios involving a lot of missing data and/or very few accurate references. When combined with a dictionary of air quality patterns, our experiments suggest that MF is not only able to perform sensor network calibration but also to provide detailed maps of air quality.
Accurate Evaluation of Quantum Integrals
Galant, D. C.; Goorvitch, D.; Witteborn, Fred C. (Technical Monitor)
1995-01-01
Combining an appropriate finite difference method with Richardson's extrapolation results in a simple, highly accurate numerical method for solving a Schrodinger's equation. Important results are that error estimates are provided, and that one can extrapolate expectation values rather than the wavefunctions to obtain highly accurate expectation values. We discuss the eigenvalues, the error growth in repeated Richardson's extrapolation, and show that the expectation values calculated on a crude mesh can be extrapolated to obtain expectation values of high accuracy.
Multiple Kernel Learning for adaptive graph regularized nonnegative matrix factorization
Wang, Jim Jing-Yan; AbdulJabbar, Mustafa Abdulmajeed
2012-01-01
Nonnegative Matrix Factorization (NMF) has been continuously evolving in several areas like pattern recognition and information retrieval methods. It factorizes a matrix into a product of 2 low-rank non-negative matrices that will define parts-based, and linear representation of non-negative data. Recently, Graph regularized NMF (GrNMF) is proposed to find a compact representation, which uncovers the hidden semantics and simultaneously respects the intrinsic geometric structure. In GNMF, an affinity graph is constructed from the original data space to encode the geometrical information. In this paper, we propose a novel idea which engages a Multiple Kernel Learning approach into refining the graph structure that reflects the factorization of the matrix and the new data space. The GrNMF is improved by utilizing the graph refined by the kernel learning, and then a novel kernel learning method is introduced under the GrNMF framework. Our approach shows encouraging results of the proposed algorithm in comparison to the state-of-the-art clustering algorithms like NMF, GrNMF, SVD etc.
Exploiting Data Sparsity In Covariance Matrix Computations on Heterogeneous Systems
Charara, Ali M.
2018-05-24
Covariance matrices are ubiquitous in computational sciences, typically describing the correlation of elements of large multivariate spatial data sets. For example, covari- ance matrices are employed in climate/weather modeling for the maximum likelihood estimation to improve prediction, as well as in computational ground-based astronomy to enhance the observed image quality by filtering out noise produced by the adap- tive optics instruments and atmospheric turbulence. The structure of these covariance matrices is dense, symmetric, positive-definite, and often data-sparse, therefore, hier- archically of low-rank. This thesis investigates the performance limit of dense matrix computations (e.g., Cholesky factorization) on covariance matrix problems as the number of unknowns grows, and in the context of the aforementioned applications. We employ recursive formulations of some of the basic linear algebra subroutines (BLAS) to accelerate the covariance matrix computation further, while reducing data traffic across the memory subsystems layers. However, dealing with large data sets (i.e., covariance matrices of billions in size) can rapidly become prohibitive in memory footprint and algorithmic complexity. Most importantly, this thesis investigates the tile low-rank data format (TLR), a new compressed data structure and layout, which is valuable in exploiting data sparsity by approximating the operator. The TLR com- pressed data structure allows approximating the original problem up to user-defined numerical accuracy. This comes at the expense of dealing with tasks with much lower arithmetic intensities than traditional dense computations. In fact, this thesis con- solidates the two trends of dense and data-sparse linear algebra for HPC. Not only does the thesis leverage recursive formulations for dense Cholesky-based matrix al- gorithms, but it also implements a novel TLR-Cholesky factorization using batched linear algebra operations to increase hardware occupancy and
Batched Triangular Dense Linear Algebra Kernels for Very Small Matrix Sizes on GPUs
Charara, Ali; Keyes, David E.; Ltaief, Hatem
2017-01-01
Batched dense linear algebra kernels are becoming ubiquitous in scientific applications, ranging from tensor contractions in deep learning to data compression in hierarchical low-rank matrix approximation. Within a single API call, these kernels are capable of simultaneously launching up to thousands of similar matrix computations, removing the expensive overhead of multiple API calls while increasing the occupancy of the underlying hardware. A challenge is that for the existing hardware landscape (x86, GPUs, etc.), only a subset of the required batched operations is implemented by the vendors, with limited support for very small problem sizes. We describe the design and performance of a new class of batched triangular dense linear algebra kernels on very small data sizes using single and multiple GPUs. By deploying two-sided recursive formulations, stressing the register usage, maintaining data locality, reducing threads synchronization and fusing successive kernel calls, the new batched kernels outperform existing state-of-the-art implementations.
Batched Triangular Dense Linear Algebra Kernels for Very Small Matrix Sizes on GPUs
Charara, Ali
2017-03-06
Batched dense linear algebra kernels are becoming ubiquitous in scientific applications, ranging from tensor contractions in deep learning to data compression in hierarchical low-rank matrix approximation. Within a single API call, these kernels are capable of simultaneously launching up to thousands of similar matrix computations, removing the expensive overhead of multiple API calls while increasing the occupancy of the underlying hardware. A challenge is that for the existing hardware landscape (x86, GPUs, etc.), only a subset of the required batched operations is implemented by the vendors, with limited support for very small problem sizes. We describe the design and performance of a new class of batched triangular dense linear algebra kernels on very small data sizes using single and multiple GPUs. By deploying two-sided recursive formulations, stressing the register usage, maintaining data locality, reducing threads synchronization and fusing successive kernel calls, the new batched kernels outperform existing state-of-the-art implementations.
Dynamic SPECT reconstruction from few projections: a sparsity enforced matrix factorization approach
Ding, Qiaoqiao; Zan, Yunlong; Huang, Qiu; Zhang, Xiaoqun
2015-02-01
The reconstruction of dynamic images from few projection data is a challenging problem, especially when noise is present and when the dynamic images are vary fast. In this paper, we propose a variational model, sparsity enforced matrix factorization (SEMF), based on low rank matrix factorization of unknown images and enforced sparsity constraints for representing both coefficients and bases. The proposed model is solved via an alternating iterative scheme for which each subproblem is convex and involves the efficient alternating direction method of multipliers (ADMM). The convergence of the overall alternating scheme for the nonconvex problem relies upon the Kurdyka-Łojasiewicz property, recently studied by Attouch et al (2010 Math. Oper. Res. 35 438) and Attouch et al (2013 Math. Program. 137 91). Finally our proof-of-concept simulation on 2D dynamic images shows the advantage of the proposed method compared to conventional methods.
Fully Decentralized Semi-supervised Learning via Privacy-preserving Matrix Completion.
Fierimonte, Roberto; Scardapane, Simone; Uncini, Aurelio; Panella, Massimo
2016-08-26
Distributed learning refers to the problem of inferring a function when the training data are distributed among different nodes. While significant work has been done in the contexts of supervised and unsupervised learning, the intermediate case of Semi-supervised learning in the distributed setting has received less attention. In this paper, we propose an algorithm for this class of problems, by extending the framework of manifold regularization. The main component of the proposed algorithm consists of a fully distributed computation of the adjacency matrix of the training patterns. To this end, we propose a novel algorithm for low-rank distributed matrix completion, based on the framework of diffusion adaptation. Overall, the distributed Semi-supervised algorithm is efficient and scalable, and it can preserve privacy by the inclusion of flexible privacy-preserving mechanisms for similarity computation. The experimental results and comparison on a wide range of standard Semi-supervised benchmarks validate our proposal.
Towards accurate emergency response behavior
International Nuclear Information System (INIS)
Sargent, T.O.
1981-01-01
Nuclear reactor operator emergency response behavior has persisted as a training problem through lack of information. The industry needs an accurate definition of operator behavior in adverse stress conditions, and training methods which will produce the desired behavior. Newly assembled information from fifty years of research into human behavior in both high and low stress provides a more accurate definition of appropriate operator response, and supports training methods which will produce the needed control room behavior. The research indicates that operator response in emergencies is divided into two modes, conditioned behavior and knowledge based behavior. Methods which assure accurate conditioned behavior, and provide for the recovery of knowledge based behavior, are described in detail
When Is Network Lasso Accurate?
Directory of Open Access Journals (Sweden)
Alexander Jung
2018-01-01
Full Text Available The “least absolute shrinkage and selection operator” (Lasso method has been adapted recently for network-structured datasets. In particular, this network Lasso method allows to learn graph signals from a small number of noisy signal samples by using the total variation of a graph signal for regularization. While efficient and scalable implementations of the network Lasso are available, only little is known about the conditions on the underlying network structure which ensure network Lasso to be accurate. By leveraging concepts of compressed sensing, we address this gap and derive precise conditions on the underlying network topology and sampling set which guarantee the network Lasso for a particular loss function to deliver an accurate estimate of the entire underlying graph signal. We also quantify the error incurred by network Lasso in terms of two constants which reflect the connectivity of the sampled nodes.
The Accurate Particle Tracer Code
Wang, Yulei; Liu, Jian; Qin, Hong; Yu, Zhi
2016-01-01
The Accurate Particle Tracer (APT) code is designed for large-scale particle simulations on dynamical systems. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and non-linear problems. Under the well-designed integrated and modularized framework, APT serves as a universal platform for researchers from different fields, such as plasma physics, accelerator physics, space science, fusio...
International Nuclear Information System (INIS)
Deslattes, R.D.
1987-01-01
Heavy ion accelerators are the most flexible and readily accessible sources of highly charged ions. These having only one or two remaining electrons have spectra whose accurate measurement is of considerable theoretical significance. Certain features of ion production by accelerators tend to limit the accuracy which can be realized in measurement of these spectra. This report aims to provide background about spectroscopic limitations and discuss how accelerator operations may be selected to permit attaining intrinsically limited data
Efficiency criterion for teleportation via channel matrix, measurement matrix and collapsed matrix
Directory of Open Access Journals (Sweden)
Xin-Wei Zha
Full Text Available In this paper, three kinds of coefficient matrixes (channel matrix, measurement matrix, collapsed matrix associated with the pure state for teleportation are presented, the general relation among channel matrix, measurement matrix and collapsed matrix is obtained. In addition, a criterion for judging whether a state can be teleported successfully is given, depending on the relation between the number of parameter of an unknown state and the rank of the collapsed matrix. Keywords: Channel matrix, Measurement matrix, Collapsed matrix, Teleportation
Extended biorthogonal matrix polynomials
Directory of Open Access Journals (Sweden)
Ayman Shehata
2017-01-01
Full Text Available The pair of biorthogonal matrix polynomials for commutative matrices were first introduced by Varma and Tasdelen in [22]. The main aim of this paper is to extend the properties of the pair of biorthogonal matrix polynomials of Varma and Tasdelen and certain generating matrix functions, finite series, some matrix recurrence relations, several important properties of matrix differential recurrence relations, biorthogonality relations and matrix differential equation for the pair of biorthogonal matrix polynomials J(A,B n (x, k and K(A,B n (x, k are discussed. For the matrix polynomials J(A,B n (x, k, various families of bilinear and bilateral generating matrix functions are constructed in the sequel.
Accurate determination of antenna directivity
DEFF Research Database (Denmark)
Dich, Mikael
1997-01-01
The derivation of a formula for accurate estimation of the total radiated power from a transmitting antenna for which the radiated power density is known in a finite number of points on the far-field sphere is presented. The main application of the formula is determination of directivity from power......-pattern measurements. The derivation is based on the theory of spherical wave expansion of electromagnetic fields, which also establishes a simple criterion for the required number of samples of the power density. An array antenna consisting of Hertzian dipoles is used to test the accuracy and rate of convergence...
Matrix completion by deep matrix factorization.
Fan, Jicong; Cheng, Jieyu
2018-02-01
Conventional methods of matrix completion are linear methods that are not effective in handling data of nonlinear structures. Recently a few researchers attempted to incorporate nonlinear techniques into matrix completion but there still exists considerable limitations. In this paper, a novel method called deep matrix factorization (DMF) is proposed for nonlinear matrix completion. Different from conventional matrix completion methods that are based on linear latent variable models, DMF is on the basis of a nonlinear latent variable model. DMF is formulated as a deep-structure neural network, in which the inputs are the low-dimensional unknown latent variables and the outputs are the partially observed variables. In DMF, the inputs and the parameters of the multilayer neural network are simultaneously optimized to minimize the reconstruction errors for the observed entries. Then the missing entries can be readily recovered by propagating the latent variables to the output layer. DMF is compared with state-of-the-art methods of linear and nonlinear matrix completion in the tasks of toy matrix completion, image inpainting and collaborative filtering. The experimental results verify that DMF is able to provide higher matrix completion accuracy than existing methods do and DMF is applicable to large matrices. Copyright © 2017 Elsevier Ltd. All rights reserved.
On affine non-negative matrix factorization
DEFF Research Database (Denmark)
Laurberg, Hans; Hansen, Lars Kai
2007-01-01
We generalize the non-negative matrix factorization (NMF) generative model to incorporate an explicit offset. Multiplicative estimation algorithms are provided for the resulting sparse affine NMF model. We show that the affine model has improved uniqueness properties and leads to more accurate id...
Comparison of transition-matrix sampling procedures
DEFF Research Database (Denmark)
Yevick, D.; Reimer, M.; Tromborg, Bjarne
2009-01-01
We compare the accuracy of the multicanonical procedure with that of transition-matrix models of static and dynamic communication system properties incorporating different acceptance rules. We find that for appropriate ranges of the underlying numerical parameters, algorithmically simple yet high...... accurate procedures can be employed in place of the standard multicanonical sampling algorithm....
Accelerating Matrix-Vector Multiplication on Hierarchical Matrices Using Graphical Processing Units
Boukaram, W.
2015-03-25
Large dense matrices arise from the discretization of many physical phenomena in computational sciences. In statistics very large dense covariance matrices are used for describing random fields and processes. One can, for instance, describe distribution of dust particles in the atmosphere, concentration of mineral resources in the earth\\'s crust or uncertain permeability coefficient in reservoir modeling. When the problem size grows, storing and computing with the full dense matrix becomes prohibitively expensive both in terms of computational complexity and physical memory requirements. Fortunately, these matrices can often be approximated by a class of data sparse matrices called hierarchical matrices (H-matrices) where various sub-blocks of the matrix are approximated by low rank matrices. These matrices can be stored in memory that grows linearly with the problem size. In addition, arithmetic operations on these H-matrices, such as matrix-vector multiplication, can be completed in almost linear time. Originally the H-matrix technique was developed for the approximation of stiffness matrices coming from partial differential and integral equations. Parallelizing these arithmetic operations on the GPU has been the focus of this work and we will present work done on the matrix vector operation on the GPU using the KSPARSE library.
Low-rank sparse learning for robust visual tracking
Zhang, Tianzhu; Ghanem, Bernard; Liu, Si; Ahuja, Narendra
2012-01-01
In this paper, we propose a new particle-filter based tracking algorithm that exploits the relationship between particles (candidate targets). By representing particles as sparse linear combinations of dictionary templates, this algorithm
Alkaloid-derived molecules in low rank Argonne premium coals.
Energy Technology Data Exchange (ETDEWEB)
Winans, R. E.; Tomczyk, N. A.; Hunt, J. E.
2000-11-30
Molecules that are probably derived from alkaloids have been found in the extracts of the subbituminous and lignite Argonne Premium Coals. High resolution mass spectrometry (HRMS) and liquid chromatography mass spectrometry (LCMS) have been used to characterize pyridine and supercritical extracts. The supercritical extraction used an approach that has been successful for extracting alkaloids from natural products. The first indication that there might be these natural products in coals was the large number of molecules found containing multiple nitrogen and oxygen heteroatoms. These molecules are much less abundant in bituminous coals and absent in the higher rank coals.
Recent developments in particulate control with low-rank fuels
International Nuclear Information System (INIS)
Miller, S.J.; Laudal, D.L.
1991-01-01
Regulations appear to be focusing on fine particle emissions rather than total mass particulate emissions. There is concern that electrostatic precipitators (ESPs) may not be able to meet potentially stricter finer particle emission standards. A new development in the area of fabric filtration is the use of flue gas-conditioning agents to reduce particulate emissions and pressure drop. Theoretical analysis of the factors that control the size of a baghouse indicates that pulse-jet baghouses can be designed to operate at much higher air-to-cloth ratios than is currently employed. To help optimize performance of both ESPs and baghouses, quantitative characterization of the cohesive properties of fly ash is necessary. Appropriate methods are determination of aerated and packed porosity and measurement of tensile strength as a function of porosity
Energy and environmental (JSR) research emphasizing low-rank coal
Energy Technology Data Exchange (ETDEWEB)
Sharp, L.L.
1994-12-01
The products of plastic thermal depolymerization can be used for the manufacture of new plastics or various other hydrocarbon-based products. One thermal depolymerization development effort is ongoing at the Energy & Environmental Research Center (EERC) of the University of North Dakota, under joint sponsorship of the American Plastics Council, the 3M corporation, and the Department of Energy. Thermal depolymerization process development began at the EERC with a benchscale program that ran from 9/92 to 6/93 (1). Testing was conducted in a 1-4-lb/hr continuous fluid-bed reactor (CFBR) unit using individual virgin resins and resin blends and was intended to determine rough operating parameters and product yields and to identify product stream components. Process variables examined included temperature and bed material, with a lesser emphasis on gas fluidization velocity and feed material mix. The following work was performed: (1) a short program to determine the suitability of using CaO in a postreactor, fixed bed for chlorine remediation, (2) thermal depolymerization of postconsumer plastics, and (3) testing of industrial (3M) products and wastes to determine their suitability as feed to a thermal depolymerization process. The involvement of DOE in the development of the plastics thermal depolymerization process has helped to facilitate the transfer of coal conversion technology to a new and growing technology area -- waste conversion. These two technology areas are complementary. The application of known coal conversion technology has accelerated the development of plastics conversion technology, and findings from the plastics depolymerization process development, such as the development of chlorine remediation techniques and procedures for measurement of organically associated chlorine, can be applied to new generations of coal conversion processes.
Thermal behaviour during the pyrolysis of low rank perhydrous coals
Energy Technology Data Exchange (ETDEWEB)
Arenillas, A.; Rubiera, F.; Pis, J.J.; Cuesta, M.J.; Suarez-Ruiz, I. [Instituto Nacional del Carbon, CSIC, Apartado 73, 33080 Oviedo (Spain); Iglesias, M.J. [Area de Quimica Organica, Universidad de Almeria, Carretera de Sacramento, 04120 Almeria (Spain); Jimenez, A. [Area de Cristalografia y Mineralogia, Departamento de Geologia, Campus de Llamaquique, 33005 Oviedo (Spain)
2003-08-01
Perhydrous coals are characterised by high H/C atomic ratios and so their chemical structure is substantially modified with respect to that of conventional coals. As a result, perhydrous coals show different physico-chemical properties to common coals (i.e. higher volatile matter content, enhancement of oil/tar potential, relatively lower porosity and higher fluidity during carbonisation). However, there is little information about thermal behaviour during the pyrolysis of this type of coal. In this work, six perhydrous coals (H/C ratio between 0.83 and 1.07) were pyrolysed and analysed by simultaneous thermogravimetry/mass spectrometry. The results of this work have revealed the influence of high H/C values on the thermal behaviour of the coals studied. During pyrolysis the perhydrous coals exhibit very well defined, symmetrical peaks in the mass loss rate profiles, while normal coals usually show a broader peak. The shape of such curves suggests that in perhydrous coals fragmentation processes prevailed over condensation reactions. The high hydrogen content of perhydrous coals may stabilise the free radicals formed during heat treatment, increasing the production of light components.
A case study of PFBC for low rank coals
Energy Technology Data Exchange (ETDEWEB)
Jansson, S.A. [ABB Carbon AB, Finspong (Sweden)
1995-12-01
Pressurized Fluidized Combined-Cycle (PFBC) technology allows the efficient and environmentally friendly utilization of solid fuels for power and combined heat and power generation. With current PFBC technology, thermal efficiencies near 46%, on an LHV basis and with low condenser pressures, can be reached in condensing power plants. Further efficiency improvements to 50% or more are possible. PFBC plants are characterized by high thermal efficiency, compactness, and extremely good environmental performance. The PFBC plants which are now in operation in Sweden, the U.S. and Japan burn medium-ash, bituminous coal with sulfur contents ranging from 0.7 to 4%. A sub- bituminous {open_quotes}black lignite{close_quotes} with high levels of sulfur, ash and humidity, is used as fuel in a demonstration PFBC plant in Spain. Project discussions are underway, among others in Central and Eastern Europe, for the construction of PFBC plants which will burn lignite, oil-shale and also mixtures of coal and biomass with high efficiency and extremely low emissions. This paper will provide information about the performance data for PFBC plants when operating on a range of low grade coals and other solid fuels, and will summarize other advantages of this leading new clean coal technology.
Batched Tile Low-Rank GEMM on GPUs
Charara, Ali; Keyes, David E.; Ltaief, Hatem
2018-01-01
. In fact, chip manufacturers give a special attention to the GEMM kernel implementation since this is exactly where most of the high-performance software libraries extract the hardware performance. With the emergence of big data applications involving large
Case studies on direct liquefaction of low rank Wyoming coal
Energy Technology Data Exchange (ETDEWEB)
Adler, P.; Kramer, S.J.; Poddar, S.K. [Bechtel Corp., San Francisco, CA (United States)
1995-12-31
Previous Studies have developed process designs, costs, and economics for the direct liquefaction of Illinois No. 6 and Wyoming Black Thunder coals at mine-mouth plants. This investigation concerns two case studies related to the liquefaction of Wyoming Black Thunder coal. The first study showed that reducing the coal liquefaction reactor design pressure from 3300 to 1000 psig could reduce the crude oil equivalent price by 2.1 $/bbl provided equivalent performing catalysts can be developed. The second one showed that incentives may exist for locating a facility that liquifies Wyoming coal on the Gulf Coast because of lower construction costs and higher labor productivity. These incentives are dependent upon the relative values of the cost of shipping the coal to the Gulf Coast and the increased product revenues that may be obtained by distributing the liquid products among several nearby refineries.
Hierarchical low-rank approximation for high dimensional approximation
Nouy, Anthony
2016-01-01
Tensor methods are among the most prominent tools for the numerical solution of high-dimensional problems where functions of multiple variables have to be approximated. Such high-dimensional approximation problems naturally arise in stochastic analysis and uncertainty quantification. In many practical situations, the approximation of high-dimensional functions is made computationally tractable by using rank-structured approximations. In this talk, we present algorithms for the approximation in hierarchical tensor format using statistical methods. Sparse representations in a given tensor format are obtained with adaptive or convex relaxation methods, with a selection of parameters using crossvalidation methods.
Hierarchical low-rank approximation for high dimensional approximation
Nouy, Anthony
2016-01-07
Tensor methods are among the most prominent tools for the numerical solution of high-dimensional problems where functions of multiple variables have to be approximated. Such high-dimensional approximation problems naturally arise in stochastic analysis and uncertainty quantification. In many practical situations, the approximation of high-dimensional functions is made computationally tractable by using rank-structured approximations. In this talk, we present algorithms for the approximation in hierarchical tensor format using statistical methods. Sparse representations in a given tensor format are obtained with adaptive or convex relaxation methods, with a selection of parameters using crossvalidation methods.
DEFF Research Database (Denmark)
Petersen, Kaare Brandt; Pedersen, Michael Syskind
Matrix identities, relations and approximations. A desktop reference for quick overview of mathematics of matrices.......Matrix identities, relations and approximations. A desktop reference for quick overview of mathematics of matrices....
Accurate Modeling of Advanced Reflectarrays
DEFF Research Database (Denmark)
Zhou, Min
to the conventional phase-only optimization technique (POT), the geometrical parameters of the array elements are directly optimized to fulfill the far-field requirements, thus maintaining a direct relation between optimization goals and optimization variables. As a result, better designs can be obtained compared...... of the incident field, the choice of basis functions, and the technique to calculate the far-field. Based on accurate reference measurements of two offset reflectarrays carried out at the DTU-ESA Spherical NearField Antenna Test Facility, it was concluded that the three latter factors are particularly important...... using the GDOT to demonstrate its capabilities. To verify the accuracy of the GDOT, two offset contoured beam reflectarrays that radiate a high-gain beam on a European coverage have been designed and manufactured, and subsequently measured at the DTU-ESA Spherical Near-Field Antenna Test Facility...
Accurate thickness measurement of graphene
International Nuclear Information System (INIS)
Shearer, Cameron J; Slattery, Ashley D; Stapleton, Andrew J; Shapter, Joseph G; Gibson, Christopher T
2016-01-01
Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1–1.3 nm to 0.1–0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials. (paper)
Farooque, Mohammad; Yuh, Chao-Yi
1996-01-01
A carbonate fuel cell matrix comprising support particles and crack attenuator particles which are made platelet in shape to increase the resistance of the matrix to through cracking. Also disclosed is a matrix having porous crack attenuator particles and a matrix whose crack attenuator particles have a thermal coefficient of expansion which is significantly different from that of the support particles, and a method of making platelet-shaped crack attenuator particles.
Matrix with Prescribed Eigenvectors
Ahmad, Faiz
2011-01-01
It is a routine matter for undergraduates to find eigenvalues and eigenvectors of a given matrix. But the converse problem of finding a matrix with prescribed eigenvalues and eigenvectors is rarely discussed in elementary texts on linear algebra. This problem is related to the "spectral" decomposition of a matrix and has important technical…
Indian Academy of Sciences (India)
Much of linear algebra is devoted to reducing a matrix (via similarity or unitary similarity) to another that has lots of zeros. The simplest such theorem is the Schur triangularization theorem. This says that every matrix is unitarily similar to an upper triangular matrix. Our aim here is to show that though it is very easy to prove it ...
The accurate particle tracer code
Wang, Yulei; Liu, Jian; Qin, Hong; Yu, Zhi; Yao, Yicun
2017-11-01
The Accurate Particle Tracer (APT) code is designed for systematic large-scale applications of geometric algorithms for particle dynamical simulations. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and nonlinear problems. To provide a flexible and convenient I/O interface, the libraries of Lua and Hdf5 are used. Following a three-step procedure, users can efficiently extend the libraries of electromagnetic configurations, external non-electromagnetic forces, particle pushers, and initialization approaches by use of the extendible module. APT has been used in simulations of key physical problems, such as runaway electrons in tokamaks and energetic particles in Van Allen belt. As an important realization, the APT-SW version has been successfully distributed on the world's fastest computer, the Sunway TaihuLight supercomputer, by supporting master-slave architecture of Sunway many-core processors. Based on large-scale simulations of a runaway beam under parameters of the ITER tokamak, it is revealed that the magnetic ripple field can disperse the pitch-angle distribution significantly and improve the confinement of energetic runaway beam on the same time.
Energy Technology Data Exchange (ETDEWEB)
Tsujimoto, Shoko.; Shin, Huidong.; Shimizu, Kazuyuki. [Kyushu Institute of Technology, Fukuoka (Japan). Department of Biochemical engineering and science; Mae, Kazuhiro.; Miura, Koichi. [Kyoto University, Kyoto (Japan). Department of Chemical Engineering
1999-03-10
Fermentation characteristics are investigated for the conversion of glycolate, acetate, formate, and malonate obtained by the oxidation of low-rank coals to poly ({beta}-hydrox butyrate) (PHB) using A. eutrophus cells. Based on the cultivation experiments using one of the organic acids as a sole carbon source, it is found that acetate is the most effectively converted to PHB. When mixed organic acids are used, formate is preferentially consumed, followed by acetate, and finally glycolate. Although malate can not be utilized, it is implied that it might change the pathway flux distributions based on the metabolic flux analysis. Namely, it shows competitive inhibition to succinate dehydrogenase so that its addition during fermentation results in flux reduction from succinate to maleic acid as well as glyoxylate flux and gluconeogenesis flux. It is also found that NADPH generated from isocitrate is preferentially utilized for the reaction from {alpha}-ketoglutarate to glutamate when NH{sub 3} concentration is high, while it is eventually used for the PHB production from acetoacetyl CoA as NH{sub 3} concentration decreases. (author)
Robust Face Recognition via Multi-Scale Patch-Based Matrix Regression.
Directory of Open Access Journals (Sweden)
Guangwei Gao
Full Text Available In many real-world applications such as smart card solutions, law enforcement, surveillance and access control, the limited training sample size is the most fundamental problem. By making use of the low-rank structural information of the reconstructed error image, the so-called nuclear norm-based matrix regression has been demonstrated to be effective for robust face recognition with continuous occlusions. However, the recognition performance of nuclear norm-based matrix regression degrades greatly in the face of the small sample size problem. An alternative solution to tackle this problem is performing matrix regression on each patch and then integrating the outputs from all patches. However, it is difficult to set an optimal patch size across different databases. To fully utilize the complementary information from different patch scales for the final decision, we propose a multi-scale patch-based matrix regression scheme based on which the ensemble of multi-scale outputs can be achieved optimally. Extensive experiments on benchmark face databases validate the effectiveness and robustness of our method, which outperforms several state-of-the-art patch-based face recognition algorithms.
Parallelism in matrix computations
Gallopoulos, Efstratios; Sameh, Ahmed H
2016-01-01
This book is primarily intended as a research monograph that could also be used in graduate courses for the design of parallel algorithms in matrix computations. It assumes general but not extensive knowledge of numerical linear algebra, parallel architectures, and parallel programming paradigms. The book consists of four parts: (I) Basics; (II) Dense and Special Matrix Computations; (III) Sparse Matrix Computations; and (IV) Matrix functions and characteristics. Part I deals with parallel programming paradigms and fundamental kernels, including reordering schemes for sparse matrices. Part II is devoted to dense matrix computations such as parallel algorithms for solving linear systems, linear least squares, the symmetric algebraic eigenvalue problem, and the singular-value decomposition. It also deals with the development of parallel algorithms for special linear systems such as banded ,Vandermonde ,Toeplitz ,and block Toeplitz systems. Part III addresses sparse matrix computations: (a) the development of pa...
International Nuclear Information System (INIS)
Strobel, E.L.
1985-01-01
Given the many conflicting experimental results, examination is made of the neutrino mass matrix in order to determine possible masses and mixings. It is assumed that the Dirac mass matrix for the electron, muon, and tau neutrinos is similar in form to those of the quarks and charged leptons, and that the smallness of the observed neutrino masses results from the Gell-Mann-Ramond-Slansky mechanism. Analysis of masses and mixings for the neutrinos is performed using general structures for the Majorana mass matrix. It is shown that if certain tentative experimental results concerning the neutrino masses and mixing angles are confirmed, significant limitations may be placed on the Majorana mass matrix. The most satisfactory simple assumption concerning the Majorana mass matrix is that it is approximately proportional to the Dirac mass matrix. A very recent experimental neutrino mass result and its implications are discussed. Some general properties of matrices with structure similar to the Dirac mass matrices are discussed
Demoor, M; Maneix, L; Ollitrault, D; Legendre, F; Duval, E; Claus, S; Mallein-Gerin, F; Moslemi, S; Boumediene, K; Galera, P
2012-06-01
Since the emergence in the 1990s of the autologous chondrocytes transplantation (ACT) in the treatment of cartilage defects, the technique, corresponding initially to implantation of chondrocytes, previously isolated and amplified in vitro, under a periosteal membrane, has greatly evolved. Indeed, the first generations of ACT showed their limits, with in particular the dedifferentiation of chondrocytes during the monolayer culture, inducing the synthesis of fibroblastic collagens, notably type I collagen to the detriment of type II collagen. Beyond the clinical aspect with its encouraging results, new biological substitutes must be tested to obtain a hyaline neocartilage. Therefore, the use of differentiated chondrocytes phenotypically stabilized is essential for the success of ACT at medium and long-term. That is why researchers try now to develop more reliable culture techniques, using among others, new types of biomaterials and molecules known for their chondrogenic activity, giving rise to the 4th generation of ACT. Other sources of cells, being able to follow chondrogenesis program, are also studied. The success of the cartilage regenerative medicine is based on the phenotypic status of the chondrocyte and on one of its essential component of the cartilage, type II collagen, the expression of which should be supported without induction of type I collagen. The knowledge accumulated by the scientific community and the experience of the clinicians will certainly allow to relief this technological challenge, which influence besides, the validation of such biological substitutes by the sanitary authorities. Copyright © 2012 Elsevier Masson SAS. All rights reserved.
DEFF Research Database (Denmark)
Hansen, Kristoffer Arnsfelt; Ibsen-Jensen, Rasmus; Podolskii, Vladimir V.
2013-01-01
For matrix games we study how small nonzero probability must be used in optimal strategies. We show that for image win–lose–draw games (i.e. image matrix games) nonzero probabilities smaller than image are never needed. We also construct an explicit image win–lose game such that the unique optimal...
DEFF Research Database (Denmark)
Schneider, Jesper Wiborg; Borlund, Pia
2007-01-01
The present two-part article introduces matrix comparison as a formal means for evaluation purposes in informetric studies such as cocitation analysis. In the first part, the motivation behind introducing matrix comparison to informetric studies, as well as two important issues influencing such c...
International Nuclear Information System (INIS)
Markowski, Adam S.; Mannan, M. Sam
2008-01-01
A risk matrix is a mechanism to characterize and rank process risks that are typically identified through one or more multifunctional reviews (e.g., process hazard analysis, audits, or incident investigation). This paper describes a procedure for developing a fuzzy risk matrix that may be used for emerging fuzzy logic applications in different safety analyses (e.g., LOPA). The fuzzification of frequency and severity of the consequences of the incident scenario are described which are basic inputs for fuzzy risk matrix. Subsequently using different design of risk matrix, fuzzy rules are established enabling the development of fuzzy risk matrices. Three types of fuzzy risk matrix have been developed (low-cost, standard, and high-cost), and using a distillation column case study, the effect of the design on final defuzzified risk index is demonstrated
International Nuclear Information System (INIS)
Baron, Jorge H.; Rivera, S.S.
2000-01-01
The so-called vulnerability matrix is used in the evaluation part of the probabilistic safety assessment for a nuclear power plant, during the containment event trees calculations. This matrix is established from what is knows as Numerical Categories for Engineering Judgement. This matrix is usually established with numerical values obtained with traditional arithmetic using the set theory. The representation of this matrix with fuzzy numbers is much more adequate, due to the fact that the Numerical Categories for Engineering Judgement are better represented with linguistic variables, such as 'highly probable', 'probable', 'impossible', etc. In the present paper a methodology to obtain a Fuzzy Vulnerability Matrix is presented, starting from the recommendations on the Numerical Categories for Engineering Judgement. (author)
Wideband DOA Estimation through Projection Matrix Interpolation
Selva, J.
2017-01-01
This paper presents a method to reduce the complexity of the deterministic maximum likelihood (DML) estimator in the wideband direction-of-arrival (WDOA) problem, which is based on interpolating the array projection matrix in the temporal frequency variable. It is shown that an accurate interpolator like Chebyshev's is able to produce DML cost functions comprising just a few narrowband-like summands. Actually, the number of such summands is far smaller (roughly by factor ten in the numerical ...
rCUR: an R package for CUR matrix decomposition
Directory of Open Access Journals (Sweden)
Bodor András
2012-05-01
Full Text Available Abstract Background Many methods for dimensionality reduction of large data sets such as those generated in microarray studies boil down to the Singular Value Decomposition (SVD. Although singular vectors associated with the largest singular values have strong optimality properties and can often be quite useful as a tool to summarize the data, they are linear combinations of up to all of the data points, and thus it is typically quite hard to interpret those vectors in terms of the application domain from which the data are drawn. Recently, an alternative dimensionality reduction paradigm, CUR matrix decompositions, has been proposed to address this problem and has been applied to genetic and internet data. CUR decompositions are low-rank matrix decompositions that are explicitly expressed in terms of a small number of actual columns and/or actual rows of the data matrix. Since they are constructed from actual data elements, CUR decompositions are interpretable by practitioners of the field from which the data are drawn. Results We present an implementation to perform CUR matrix decompositions, in the form of a freely available, open source R-package called rCUR. This package will help users to perform CUR-based analysis on large-scale data, such as those obtained from different high-throughput technologies, in an interactive and exploratory manner. We show two examples that illustrate how CUR-based techniques make it possible to reduce significantly the number of probes, while at the same time maintaining major trends in data and keeping the same classification accuracy. Conclusions The package rCUR provides functions for the users to perform CUR-based matrix decompositions in the R environment. In gene expression studies, it gives an additional way of analysis of differential expression and discriminant gene selection based on the use of statistical leverage scores. These scores, which have been used historically in diagnostic regression
Matrix Metalloproteinase Enzyme Family
Directory of Open Access Journals (Sweden)
Ozlem Goruroglu Ozturk
2013-04-01
Full Text Available Matrix metalloproteinases play an important role in many biological processes such as embriogenesis, tissue remodeling, wound healing, and angiogenesis, and in some pathological conditions such as atherosclerosis, arthritis and cancer. Currently, 24 genes have been identified in humans that encode different groups of matrix metalloproteinase enzymes. This review discuss the members of the matrix metalloproteinase family and their substrate specificity, structure, function and the regulation of their enzyme activity by tissue inhibitors. [Archives Medical Review Journal 2013; 22(2.000: 209-220
Matrix groups for undergraduates
Tapp, Kristopher
2005-01-01
Matrix groups touch an enormous spectrum of the mathematical arena. This textbook brings them into the undergraduate curriculum. It makes an excellent one-semester course for students familiar with linear and abstract algebra and prepares them for a graduate course on Lie groups. Matrix Groups for Undergraduates is concrete and example-driven, with geometric motivation and rigorous proofs. The story begins and ends with the rotations of a globe. In between, the author combines rigor and intuition to describe basic objects of Lie theory: Lie algebras, matrix exponentiation, Lie brackets, and maximal tori.
Eves, Howard
1980-01-01
The usefulness of matrix theory as a tool in disciplines ranging from quantum mechanics to psychometrics is widely recognized, and courses in matrix theory are increasingly a standard part of the undergraduate curriculum.This outstanding text offers an unusual introduction to matrix theory at the undergraduate level. Unlike most texts dealing with the topic, which tend to remain on an abstract level, Dr. Eves' book employs a concrete elementary approach, avoiding abstraction until the final chapter. This practical method renders the text especially accessible to students of physics, engineeri
Matrix Completion Optimization for Localization in Wireless Sensor Networks for Intelligent IoT
Directory of Open Access Journals (Sweden)
Thu L. N. Nguyen
2016-05-01
Full Text Available Localization in wireless sensor networks (WSNs is one of the primary functions of the intelligent Internet of Things (IoT that offers automatically discoverable services, while the localization accuracy is a key issue to evaluate the quality of those services. In this paper, we develop a framework to solve the Euclidean distance matrix completion problem, which is an important technical problem for distance-based localization in WSNs. The sensor network localization problem is described as a low-rank dimensional Euclidean distance completion problem with known nodes. The task is to find the sensor locations through recovery of missing entries of a squared distance matrix when the dimension of the data is small compared to the number of data points. We solve a relaxation optimization problem using a modification of Newton’s method, where the cost function depends on the squared distance matrix. The solution obtained in our scheme achieves a lower complexity and can perform better if we use it as an initial guess for an interactive local search of other higher precision localization scheme. Simulation results show the effectiveness of our approach.
Czerwinski, Michael; Spence, Jason R
2017-01-05
Recently in Nature, Gjorevski et al. (2016) describe a fully defined synthetic hydrogel that mimics the extracellular matrix to support in vitro growth of intestinal stem cells and organoids. The hydrogel allows exquisite control over the chemical and physical in vitro niche and enables identification of regulatory properties of the matrix. Copyright © 2017 Elsevier Inc. All rights reserved.
The Matrix Organization Revisited
DEFF Research Database (Denmark)
Gattiker, Urs E.; Ulhøi, John Parm
1999-01-01
This paper gives a short overview of matrix structure and technology management. It outlines some of the characteristics and also points out that many organizations may actualy be hybrids (i.e. mix several ways of organizing to allocate resorces effectively).......This paper gives a short overview of matrix structure and technology management. It outlines some of the characteristics and also points out that many organizations may actualy be hybrids (i.e. mix several ways of organizing to allocate resorces effectively)....
Koo, H.; Falsetta, M.L.; Klein, M.I.
2013-01-01
Many infectious diseases in humans are caused or exacerbated by biofilms. Dental caries is a prime example of a biofilm-dependent disease, resulting from interactions of microorganisms, host factors, and diet (sugars), which modulate the dynamic formation of biofilms on tooth surfaces. All biofilms have a microbial-derived extracellular matrix as an essential constituent. The exopolysaccharides formed through interactions between sucrose- (and starch-) and Streptococcus mutans-derived exoenzymes present in the pellicle and on microbial surfaces (including non-mutans) provide binding sites for cariogenic and other organisms. The polymers formed in situ enmesh the microorganisms while forming a matrix facilitating the assembly of three-dimensional (3D) multicellular structures that encompass a series of microenvironments and are firmly attached to teeth. The metabolic activity of microbes embedded in this exopolysaccharide-rich and diffusion-limiting matrix leads to acidification of the milieu and, eventually, acid-dissolution of enamel. Here, we discuss recent advances concerning spatio-temporal development of the exopolysaccharide matrix and its essential role in the pathogenesis of dental caries. We focus on how the matrix serves as a 3D scaffold for biofilm assembly while creating spatial heterogeneities and low-pH microenvironments/niches. Further understanding on how the matrix modulates microbial activity and virulence expression could lead to new approaches to control cariogenic biofilms. PMID:24045647
More accurate thermal neutron coincidence counting technique
International Nuclear Information System (INIS)
Baron, N.
1978-01-01
Using passive thermal neutron coincidence counting techniques, the accuracy of nondestructive assays of fertile material can be improved significantly using a two-ring detector. It was shown how the use of a function of the coincidence count rate ring-ratio can provide a detector response rate that is independent of variations in neutron detection efficiency caused by varying sample moderation. Furthermore, the correction for multiplication caused by SF- and (α,n)-neutrons is shown to be separable into the product of a function of the effective mass of 240 Pu (plutonium correction) and a function of the (α,n) reaction probability (matrix correction). The matrix correction is described by a function of the singles count rate ring-ratio. This correction factor is empirically observed to be identical for any combination of PuO 2 powder and matrix materials SiO 2 and MgO because of the similar relation of the (α,n)-Q value and (α,n)-reaction cross section among these matrix nuclei. However the matrix correction expression is expected to be different for matrix materials such as Na, Al, and/or Li. Nevertheless, it should be recognized that for comparison measurements among samples of similar matrix content, it is expected that some function of the singles count rate ring-ratio can be defined to account for variations in the matrix correction due to differences in the intimacy of mixture among the samples. Furthermore the magnitude of this singles count rate ring-ratio serves to identify the contaminant generating the (α,n)-neutrons. Such information is useful in process control
Reducing dose calculation time for accurate iterative IMRT planning
International Nuclear Information System (INIS)
Siebers, Jeffrey V.; Lauterbach, Marc; Tong, Shidong; Wu Qiuwen; Mohan, Radhe
2002-01-01
A time-consuming component of IMRT optimization is the dose computation required in each iteration for the evaluation of the objective function. Accurate superposition/convolution (SC) and Monte Carlo (MC) dose calculations are currently considered too time-consuming for iterative IMRT dose calculation. Thus, fast, but less accurate algorithms such as pencil beam (PB) algorithms are typically used in most current IMRT systems. This paper describes two hybrid methods that utilize the speed of fast PB algorithms yet achieve the accuracy of optimizing based upon SC algorithms via the application of dose correction matrices. In one method, the ratio method, an infrequently computed voxel-by-voxel dose ratio matrix (R=D SC /D PB ) is applied for each beam to the dose distributions calculated with the PB method during the optimization. That is, D PB xR is used for the dose calculation during the optimization. The optimization proceeds until both the IMRT beam intensities and the dose correction ratio matrix converge. In the second method, the correction method, a periodically computed voxel-by-voxel correction matrix for each beam, defined to be the difference between the SC and PB dose computations, is used to correct PB dose distributions. To validate the methods, IMRT treatment plans developed with the hybrid methods are compared with those obtained when the SC algorithm is used for all optimization iterations and with those obtained when PB-based optimization is followed by SC-based optimization. In the 12 patient cases studied, no clinically significant differences exist in the final treatment plans developed with each of the dose computation methodologies. However, the number of time-consuming SC iterations is reduced from 6-32 for pure SC optimization to four or less for the ratio matrix method and five or less for the correction method. Because the PB algorithm is faster at computing dose, this reduces the inverse planning optimization time for our implementation
Bhatia, Rajendra
2013-01-01
This book is an outcome of the Indo-French Workshop on Matrix Information Geometries (MIG): Applications in Sensor and Cognitive Systems Engineering, which was held in Ecole Polytechnique and Thales Research and Technology Center, Palaiseau, France, in February 23-25, 2011. The workshop was generously funded by the Indo-French Centre for the Promotion of Advanced Research (IFCPAR). During the event, 22 renowned invited french or indian speakers gave lectures on their areas of expertise within the field of matrix analysis or processing. From these talks, a total of 17 original contribution or state-of-the-art chapters have been assembled in this volume. All articles were thoroughly peer-reviewed and improved, according to the suggestions of the international referees. The 17 contributions presented are organized in three parts: (1) State-of-the-art surveys & original matrix theory work, (2) Advanced matrix theory for radar processing, and (3) Matrix-based signal processing applications.
Praeger, Cheryl; Tao, Terence
2018-01-01
MATRIX is Australia’s international, residential mathematical research institute. It facilitates new collaborations and mathematical advances through intensive residential research programs, each lasting 1-4 weeks. This book is a scientific record of the five programs held at MATRIX in its first year, 2016: Higher Structures in Geometry and Physics (Chapters 1-5 and 18-21); Winter of Disconnectedness (Chapter 6 and 22-26); Approximation and Optimisation (Chapters 7-8); Refining C*-Algebraic Invariants for Dynamics using KK-theory (Chapters 9-13); Interactions between Topological Recursion, Modularity, Quantum Invariants and Low-dimensional Topology (Chapters 14-17 and 27). The MATRIX Scientific Committee selected these programs based on their scientific excellence and the participation rate of high-profile international participants. Each program included ample unstructured time to encourage collaborative research; some of the longer programs also included an embedded conference or lecture series. The artic...
Energy Technology Data Exchange (ETDEWEB)
Pan, Feng [Los Alamos National Laboratory; Kasiviswanathan, Shiva [Los Alamos National Laboratory
2010-01-01
In the matrix interdiction problem, a real-valued matrix and an integer k is given. The objective is to remove k columns such that the sum over all rows of the maximum entry in each row is minimized. This combinatorial problem is closely related to bipartite network interdiction problem which can be applied to prioritize the border checkpoints in order to minimize the probability that an adversary can successfully cross the border. After introducing the matrix interdiction problem, we will prove the problem is NP-hard, and even NP-hard to approximate with an additive n{gamma} factor for a fixed constant {gamma}. We also present an algorithm for this problem that achieves a factor of (n-k) mUltiplicative approximation ratio.
DEFF Research Database (Denmark)
Frandsen, Gudmund Skovbjerg; Frandsen, Peter Frands
2009-01-01
We consider maintaining information about the rank of a matrix under changes of the entries. For n×n matrices, we show an upper bound of O(n1.575) arithmetic operations and a lower bound of Ω(n) arithmetic operations per element change. The upper bound is valid when changing up to O(n0.575) entries...... in a single column of the matrix. We also give an algorithm that maintains the rank using O(n2) arithmetic operations per rank one update. These bounds appear to be the first nontrivial bounds for the problem. The upper bounds are valid for arbitrary fields, whereas the lower bound is valid for algebraically...... closed fields. The upper bound for element updates uses fast rectangular matrix multiplication, and the lower bound involves further development of an earlier technique for proving lower bounds for dynamic computation of rational functions....
Pérez López, César
2014-01-01
MATLAB is a high-level language and environment for numerical computation, visualization, and programming. Using MATLAB, you can analyze data, develop algorithms, and create models and applications. The language, tools, and built-in math functions enable you to explore multiple approaches and reach a solution faster than with spreadsheets or traditional programming languages, such as C/C++ or Java. MATLAB Matrix Algebra introduces you to the MATLAB language with practical hands-on instructions and results, allowing you to quickly achieve your goals. Starting with a look at symbolic and numeric variables, with an emphasis on vector and matrix variables, you will go on to examine functions and operations that support vectors and matrices as arguments, including those based on analytic parent functions. Computational methods for finding eigenvalues and eigenvectors of matrices are detailed, leading to various matrix decompositions. Applications such as change of bases, the classification of quadratic forms and ...
Hohn, Franz E
2012-01-01
This complete and coherent exposition, complemented by numerous illustrative examples, offers readers a text that can teach by itself. Fully rigorous in its treatment, it offers a mathematically sound sequencing of topics. The work starts with the most basic laws of matrix algebra and progresses to the sweep-out process for obtaining the complete solution of any given system of linear equations - homogeneous or nonhomogeneous - and the role of matrix algebra in the presentation of useful geometric ideas, techniques, and terminology.Other subjects include the complete treatment of the structur
International Nuclear Information System (INIS)
Brown, T.W.
2010-11-01
The same complex matrix model calculates both tachyon scattering for the c=1 non-critical string at the self-dual radius and certain correlation functions of half-BPS operators in N=4 super- Yang-Mills. It is dual to another complex matrix model where the couplings of the first model are encoded in the Kontsevich-like variables of the second. The duality between the theories is mirrored by the duality of their Feynman diagrams. Analogously to the Hermitian Kontsevich- Penner model, the correlation functions of the second model can be written as sums over discrete points in subspaces of the moduli space of punctured Riemann surfaces. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Brown, T.W.
2010-11-15
The same complex matrix model calculates both tachyon scattering for the c=1 non-critical string at the self-dual radius and certain correlation functions of half-BPS operators in N=4 super- Yang-Mills. It is dual to another complex matrix model where the couplings of the first model are encoded in the Kontsevich-like variables of the second. The duality between the theories is mirrored by the duality of their Feynman diagrams. Analogously to the Hermitian Kontsevich- Penner model, the correlation functions of the second model can be written as sums over discrete points in subspaces of the moduli space of punctured Riemann surfaces. (orig.)
Accurate and efficient spin integration for particle accelerators
International Nuclear Information System (INIS)
Abell, Dan T.; Meiser, Dominic; Ranjbar, Vahid H.; Barber, Desmond P.
2015-01-01
Accurate spin tracking is a valuable tool for understanding spin dynamics in particle accelerators and can help improve the performance of an accelerator. In this paper, we present a detailed discussion of the integrators in the spin tracking code GPUSPINTRACK. We have implemented orbital integrators based on drift-kick, bend-kick, and matrix-kick splits. On top of the orbital integrators, we have implemented various integrators for the spin motion. These integrators use quaternions and Romberg quadratures to accelerate both the computation and the convergence of spin rotations. We evaluate their performance and accuracy in quantitative detail for individual elements as well as for the entire RHIC lattice. We exploit the inherently data-parallel nature of spin tracking to accelerate our algorithms on graphics processing units.
Accurate and efficient spin integration for particle accelerators
Energy Technology Data Exchange (ETDEWEB)
Abell, Dan T.; Meiser, Dominic [Tech-X Corporation, Boulder, CO (United States); Ranjbar, Vahid H. [Brookhaven National Laboratory, Upton, NY (United States); Barber, Desmond P. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)
2015-01-15
Accurate spin tracking is a valuable tool for understanding spin dynamics in particle accelerators and can help improve the performance of an accelerator. In this paper, we present a detailed discussion of the integrators in the spin tracking code GPUSPINTRACK. We have implemented orbital integrators based on drift-kick, bend-kick, and matrix-kick splits. On top of the orbital integrators, we have implemented various integrators for the spin motion. These integrators use quaternions and Romberg quadratures to accelerate both the computation and the convergence of spin rotations. We evaluate their performance and accuracy in quantitative detail for individual elements as well as for the entire RHIC lattice. We exploit the inherently data-parallel nature of spin tracking to accelerate our algorithms on graphics processing units.
Accurate and efficient spin integration for particle accelerators
Directory of Open Access Journals (Sweden)
Dan T. Abell
2015-02-01
Full Text Available Accurate spin tracking is a valuable tool for understanding spin dynamics in particle accelerators and can help improve the performance of an accelerator. In this paper, we present a detailed discussion of the integrators in the spin tracking code gpuSpinTrack. We have implemented orbital integrators based on drift-kick, bend-kick, and matrix-kick splits. On top of the orbital integrators, we have implemented various integrators for the spin motion. These integrators use quaternions and Romberg quadratures to accelerate both the computation and the convergence of spin rotations. We evaluate their performance and accuracy in quantitative detail for individual elements as well as for the entire RHIC lattice. We exploit the inherently data-parallel nature of spin tracking to accelerate our algorithms on graphics processing units.
Equipment upgrade - Accurate positioning of ion chambers
International Nuclear Information System (INIS)
Doane, Harry J.; Nelson, George W.
1990-01-01
Five adjustable clamps were made to firmly support and accurately position the ion Chambers, that provide signals to the power channels for the University of Arizona TRIGA reactor. The design requirements, fabrication procedure and installation are described
The controversial nuclear matrix: a balanced point of view.
Martelli, A M; Falcieri, E; Zweyer, M; Bortul, R; Tabellini, G; Cappellini, A; Cocco, L; Manzoli, L
2002-10-01
The nuclear matrix is defined as the residual framework after the removal of the nuclear envelope, chromatin, and soluble components by sequential extractions. According to several investigators the nuclear matrix provides the structural basis for intranuclear order. However, the existence itself and the nature of this structure is still uncertain. Although the techniques used for the visualization of the nuclear matrix have improved over the years, it is still unclear to what extent the isolated nuclear matrix corresponds to an in vivo existing structure. Therefore, considerable skepticism continues to surround the nuclear matrix fraction as an accurate representation of the situation in living cells. Here, we summarize the experimental evidence in favor of, or against, the presence of a diffuse nucleoskeleton as a facilitating organizational nonchromatin structure of the nucleus.
Accurate computer simulation of a drift chamber
International Nuclear Information System (INIS)
Killian, T.J.
1980-01-01
A general purpose program for drift chamber studies is described. First the capacitance matrix is calculated using a Green's function technique. The matrix is used in a linear-least-squares fit to choose optimal operating voltages. Next the electric field is computed, and given knowledge of gas parameters and magnetic field environment, a family of electron trajectories is determined. These are finally used to make drift distance vs time curves which may be used directly by a track reconstruction program. Results are compared with data obtained from the cylindrical chamber in the Axial Field Magnet experiment at the CERN ISR
Accurate computer simulation of a drift chamber
Killian, T J
1980-01-01
The author describes a general purpose program for drift chamber studies. First the capacitance matrix is calculated using a Green's function technique. The matrix is used in a linear-least-squares fit to choose optimal operating voltages. Next the electric field is computed, and given knowledge of gas parameters and magnetic field environment, a family of electron trajectories is determined. These are finally used to make drift distance vs time curves which may be used directly by a track reconstruction program. The results are compared with data obtained from the cylindrical chamber in the Axial Field Magnet experiment at the CERN ISR. (1 refs).
Accurate mass and velocity functions of dark matter haloes
Comparat, Johan; Prada, Francisco; Yepes, Gustavo; Klypin, Anatoly
2017-08-01
N-body cosmological simulations are an essential tool to understand the observed distribution of galaxies. We use the MultiDark simulation suite, run with the Planck cosmological parameters, to revisit the mass and velocity functions. At redshift z = 0, the simulations cover four orders of magnitude in halo mass from ˜1011M⊙ with 8783 874 distinct haloes and 532 533 subhaloes. The total volume used is ˜515 Gpc3, more than eight times larger than in previous studies. We measure and model the halo mass function, its covariance matrix w.r.t halo mass and the large-scale halo bias. With the formalism of the excursion-set mass function, we explicit the tight interconnection between the covariance matrix, bias and halo mass function. We obtain a very accurate (function. We also model the subhalo mass function and its relation to the distinct halo mass function. The set of models obtained provides a complete and precise framework for the description of haloes in the concordance Planck cosmology. Finally, we provide precise analytical fits of the Vmax maximum velocity function up to redshift z publicly available in the Skies and Universes data base.
Mepham, B.; Kaiser, M.; Thorstensen, E.; Tomkins, S.; Millar, K.
2006-01-01
The ethical matrix is a conceptual tool designed to help decision-makers (as individuals or working in groups) reach sound judgements or decisions about the ethical acceptability and/or optimal regulatory controls for existing or prospective technologies in the field of food and agriculture.
Mitjana, Margarida
2018-01-01
This book contains the notes of the lectures delivered at an Advanced Course on Combinatorial Matrix Theory held at Centre de Recerca Matemàtica (CRM) in Barcelona. These notes correspond to five series of lectures. The first series is dedicated to the study of several matrix classes defined combinatorially, and was delivered by Richard A. Brualdi. The second one, given by Pauline van den Driessche, is concerned with the study of spectral properties of matrices with a given sign pattern. Dragan Stevanović delivered the third one, devoted to describing the spectral radius of a graph as a tool to provide bounds of parameters related with properties of a graph. The fourth lecture was delivered by Stephen Kirkland and is dedicated to the applications of the Group Inverse of the Laplacian matrix. The last one, given by Ángeles Carmona, focuses on boundary value problems on finite networks with special in-depth on the M-matrix inverse problem.
Visualizing Matrix Multiplication
Daugulis, Peteris; Sondore, Anita
2018-01-01
Efficient visualizations of computational algorithms are important tools for students, educators, and researchers. In this article, we point out an innovative visualization technique for matrix multiplication. This method differs from the standard, formal approach by using block matrices to make computations more visual. We find this method a…
DEFF Research Database (Denmark)
Jørnø, Rasmus Leth Vergmann; Gynther, Karsten; Christensen, Ove
2014-01-01
useful information, we question whether the axis of time and space comprising the matrix pertains to relevant defining properties of the tools, technology or learning environments to which they are applied. Subsequently we offer an example of an Adobe Connect e-learning session as an illustration...
Deformation, Failure, and Fatigue Life of SiC/Ti-15-3 Laminates Accurately Predicted by MAC/GMC
Bednarcyk, Brett A.; Arnold, Steven M.
2002-01-01
NASA Glenn Research Center's Micromechanics Analysis Code with Generalized Method of Cells (MAC/GMC) (ref.1) has been extended to enable fully coupled macro-micro deformation, failure, and fatigue life predictions for advanced metal matrix, ceramic matrix, and polymer matrix composites. Because of the multiaxial nature of the code's underlying micromechanics model, GMC--which allows the incorporation of complex local inelastic constitutive models--MAC/GMC finds its most important application in metal matrix composites, like the SiC/Ti-15-3 composite examined here. Furthermore, since GMC predicts the microscale fields within each constituent of the composite material, submodels for local effects such as fiber breakage, interfacial debonding, and matrix fatigue damage can and have been built into MAC/GMC. The present application of MAC/GMC highlights the combination of these features, which has enabled the accurate modeling of the deformation, failure, and life of titanium matrix composites.
Half a century of "the nuclear matrix".
Pederson, T
2000-03-01
A cell fraction that would today be termed "the nuclear matrix" was first described and patented in 1948 by Russian investigators. In 1974 this fraction was rediscovered and promoted as a fundamental organizing principle of eukaryotic gene expression. Yet, convincing evidence for this functional role of the nuclear matrix has been elusive and has recently been further challenged. What do we really know about the nonchromatin elements (if any) of internal nuclear structure? Are there objective reasons (as opposed to thinly veiled disdain) to question experiments that use harsh nuclear extraction steps and precipitation-prone conditions? Are the known biophysical properties of the nucleoplasm in vivo consistent with the existence of an extensive network of anastomosing filaments coursing dendritically throughout the interchromatin space? To what extent may the genome itself contribute information for its own quarternary structure in the interphase nucleus? These questions and recent work that bears on the mystique of the nuclear matrix are addressed in this essay. The degree to which gene expression literally depends on nonchromatin nuclear structure as a facilitating organizational format remains an intriguing but unsolved issue in eukaryotic cell biology, and considerable skepticism continues to surround the nuclear matrix fraction as an accurate representation of the in vivo situation.
System Matrix Analysis for Computed Tomography Imaging
Flores, Liubov; Vidal, Vicent; Verdú, Gumersindo
2015-01-01
In practical applications of computed tomography imaging (CT), it is often the case that the set of projection data is incomplete owing to the physical conditions of the data acquisition process. On the other hand, the high radiation dose imposed on patients is also undesired. These issues demand that high quality CT images can be reconstructed from limited projection data. For this reason, iterative methods of image reconstruction have become a topic of increased research interest. Several algorithms have been proposed for few-view CT. We consider that the accurate solution of the reconstruction problem also depends on the system matrix that simulates the scanning process. In this work, we analyze the application of the Siddon method to generate elements of the matrix and we present results based on real projection data. PMID:26575482
International Nuclear Information System (INIS)
Sasakawa, T.; Okuno, H.; Ishikawa, S.; Sawada, T.
1982-01-01
The off-shell t matrix is expressed as a sum of one nonseparable and one separable terms so that it is useful for applications to more-than-two body problems. All poles are involved in this one separable term. Both the nonseparable and the separable terms of the kernel G 0 t are regular at the origin. The nonseparable term of this kernel vanishes at large distances, while the separable term behaves asymptotically as the spherical Hankel function. These properties make our expression free from defects inherent in the Jost or the K-matrix expressions, and many applications are anticipated. As the application, a compact expression of the many-level formula is presented. Also the application is suggested to the breakup threebody problem based on the Faddeev equation. It is demonstrated that the breakup amplitude is expressed in a simple and physically interesting form and we can calculate it in coordinate space
Using Population Matrix Modeling to Predict AEGIS Fire Controlmen Community Structure
National Research Council Canada - National Science Library
McKeon, Thomas J
2007-01-01
.... A Population Matrix with Markov properties was used to develop the AEGIS FC aging model. The goal of this model was to provide an accurate predication of the future AEGIS FC community structure based upon variables...
International Nuclear Information System (INIS)
Raju Viswanathan, R.
1991-09-01
We study examples of one dimensional matrix models whose potentials possess an energy spectrum that can be explicitly determined. This allows for an exact solution in the continuum limit. Specifically, step-like potentials and the Morse potential are considered. The step-like potentials show no scaling behaviour and the Morse potential (which corresponds to a γ = -1 model) has the interesting feature that there are no quantum corrections to the scaling behaviour in the continuum limit. (author). 5 refs
Brenner, Barbara; Schlegelmilch, Bodo B.; Ambos, Björn
2013-01-01
This case describes how Nike, a consumer goods company with an ever expanding portfolio and a tremendous brand value, manages the tradeoff between local responsiveness and global integration. In particular, the case highlights Nike's organizational structure that consists of a global matrix organization that is replicated at a regional level for the European market. While this organizational structure allows Nike to respond to local consumer tastes it also ensures that the company benefits f...
Wilkinson, Michael; Grant, John
2018-03-01
We consider a stochastic process in which independent identically distributed random matrices are multiplied and where the Lyapunov exponent of the product is positive. We continue multiplying the random matrices as long as the norm, ɛ, of the product is less than unity. If the norm is greater than unity we reset the matrix to a multiple of the identity and then continue the multiplication. We address the problem of determining the probability density function of the norm, \
Dijkgraaf, R; Verlinde, Herman L
1997-01-01
Via compactification on a circle, the matrix model of M-theory proposed by Banks et al suggests a concrete identification between the large N limit of two-dimensional N=8 supersymmetric Yang-Mills theory and type IIA string theory. In this paper we collect evidence that supports this identification. We explicitly identify the perturbative string states and their interactions, and describe the appearance of D-particle and D-membrane states.
Matrix groups for undergraduates
Tapp, Kristopher
2016-01-01
Matrix groups touch an enormous spectrum of the mathematical arena. This textbook brings them into the undergraduate curriculum. It makes an excellent one-semester course for students familiar with linear and abstract algebra and prepares them for a graduate course on Lie groups. Matrix Groups for Undergraduates is concrete and example-driven, with geometric motivation and rigorous proofs. The story begins and ends with the rotations of a globe. In between, the author combines rigor and intuition to describe the basic objects of Lie theory: Lie algebras, matrix exponentiation, Lie brackets, maximal tori, homogeneous spaces, and roots. This second edition includes two new chapters that allow for an easier transition to the general theory of Lie groups. From reviews of the First Edition: This book could be used as an excellent textbook for a one semester course at university and it will prepare students for a graduate course on Lie groups, Lie algebras, etc. … The book combines an intuitive style of writing w...
Extracellular matrix structure.
Theocharis, Achilleas D; Skandalis, Spyros S; Gialeli, Chrysostomi; Karamanos, Nikos K
2016-02-01
Extracellular matrix (ECM) is a non-cellular three-dimensional macromolecular network composed of collagens, proteoglycans/glycosaminoglycans, elastin, fibronectin, laminins, and several other glycoproteins. Matrix components bind each other as well as cell adhesion receptors forming a complex network into which cells reside in all tissues and organs. Cell surface receptors transduce signals into cells from ECM, which regulate diverse cellular functions, such as survival, growth, migration, and differentiation, and are vital for maintaining normal homeostasis. ECM is a highly dynamic structural network that continuously undergoes remodeling mediated by several matrix-degrading enzymes during normal and pathological conditions. Deregulation of ECM composition and structure is associated with the development and progression of several pathologic conditions. This article emphasizes in the complex ECM structure as to provide a better understanding of its dynamic structural and functional multipotency. Where relevant, the implication of the various families of ECM macromolecules in health and disease is also presented. Copyright © 2015 Elsevier B.V. All rights reserved.
Dominquez, Jesus A.; Tate, Lanetra C.; Wright, M. Clara; Caraccio, Anne
2013-01-01
Accomplishing the best-performing composite matrix (resin) requires that not only the processing method but also the cure cycle generate low-void-content structures. If voids are present, the performance of the composite matrix will be significantly reduced. This is usually noticed by significant reductions in matrix-dominated properties, such as compression and shear strength. Voids in composite materials are areas that are absent of the composite components: matrix and fibers. The characteristics of the voids and their accurate estimation are critical to determine for high performance composite structures. One widely used method of performing void analysis on a composite structure sample is acquiring optical micrographs or Scanning Electron Microscope (SEM) images of lateral sides of the sample and retrieving the void areas within the micrographs/images using an image analysis technique. Segmentation for the retrieval and subsequent computation of void areas within the micrographs/images is challenging as the gray-scaled values of the void areas are close to the gray-scaled values of the matrix leading to the need of manually performing the segmentation based on the histogram of the micrographs/images to retrieve the void areas. The use of an algorithm developed by NASA and based on Fuzzy Reasoning (FR) proved to overcome the difficulty of suitably differentiate void and matrix image areas with similar gray-scaled values leading not only to a more accurate estimation of void areas on composite matrix micrographs but also to a faster void analysis process as the algorithm is fully autonomous.
More accurate picture of human body organs
International Nuclear Information System (INIS)
Kolar, J.
1985-01-01
Computerized tomography and nucler magnetic resonance tomography (NMRT) are revolutionary contributions to radiodiagnosis because they allow to obtain a more accurate image of human body organs. The principles are described of both methods. Attention is mainly devoted to NMRT which has clinically only been used for three years. It does not burden the organism with ionizing radiation. (Ha)
Fast and accurate methods for phylogenomic analyses
Directory of Open Access Journals (Sweden)
Warnow Tandy
2011-10-01
Full Text Available Abstract Background Species phylogenies are not estimated directly, but rather through phylogenetic analyses of different gene datasets. However, true gene trees can differ from the true species tree (and hence from one another due to biological processes such as horizontal gene transfer, incomplete lineage sorting, and gene duplication and loss, so that no single gene tree is a reliable estimate of the species tree. Several methods have been developed to estimate species trees from estimated gene trees, differing according to the specific algorithmic technique used and the biological model used to explain differences between species and gene trees. Relatively little is known about the relative performance of these methods. Results We report on a study evaluating several different methods for estimating species trees from sequence datasets, simulating sequence evolution under a complex model including indels (insertions and deletions, substitutions, and incomplete lineage sorting. The most important finding of our study is that some fast and simple methods are nearly as accurate as the most accurate methods, which employ sophisticated statistical methods and are computationally quite intensive. We also observe that methods that explicitly consider errors in the estimated gene trees produce more accurate trees than methods that assume the estimated gene trees are correct. Conclusions Our study shows that highly accurate estimations of species trees are achievable, even when gene trees differ from each other and from the species tree, and that these estimations can be obtained using fairly simple and computationally tractable methods.
Accurate overlaying for mobile augmented reality
Pasman, W; van der Schaaf, A; Lagendijk, RL; Jansen, F.W.
1999-01-01
Mobile augmented reality requires accurate alignment of virtual information with objects visible in the real world. We describe a system for mobile communications to be developed to meet these strict alignment criteria using a combination of computer vision. inertial tracking and low-latency
Accurate activity recognition in a home setting
van Kasteren, T.; Noulas, A.; Englebienne, G.; Kröse, B.
2008-01-01
A sensor system capable of automatically recognizing activities would allow many potential ubiquitous applications. In this paper, we present an easy to install sensor network and an accurate but inexpensive annotation method. A recorded dataset consisting of 28 days of sensor data and its
Highly accurate surface maps from profilometer measurements
Medicus, Kate M.; Nelson, Jessica D.; Mandina, Mike P.
2013-04-01
Many aspheres and free-form optical surfaces are measured using a single line trace profilometer which is limiting because accurate 3D corrections are not possible with the single trace. We show a method to produce an accurate fully 2.5D surface height map when measuring a surface with a profilometer using only 6 traces and without expensive hardware. The 6 traces are taken at varying angular positions of the lens, rotating the part between each trace. The output height map contains low form error only, the first 36 Zernikes. The accuracy of the height map is ±10% of the actual Zernike values and within ±3% of the actual peak to valley number. The calculated Zernike values are affected by errors in the angular positioning, by the centering of the lens, and to a small effect, choices made in the processing algorithm. We have found that the angular positioning of the part should be better than 1?, which is achievable with typical hardware. The centering of the lens is essential to achieving accurate measurements. The part must be centered to within 0.5% of the diameter to achieve accurate results. This value is achievable with care, with an indicator, but the part must be edged to a clean diameter.
Standard Errors for Matrix Correlations.
Ogasawara, Haruhiko
1999-01-01
Derives the asymptotic standard errors and intercorrelations for several matrix correlations assuming multivariate normality for manifest variables and derives the asymptotic standard errors of the matrix correlations for two factor-loading matrices. (SLD)
Stage-structured matrix models for organisms with non-geometric development times
Andrew Birt; Richard M. Feldman; David M. Cairns; Robert N. Coulson; Maria Tchakerian; Weimin Xi; James M. Guldin
2009-01-01
Matrix models have been used to model population growth of organisms for many decades. They are popular because of both their conceptual simplicity and their computational efficiency. For some types of organisms they are relatively accurate in predicting population growth; however, for others the matrix approach does not adequately model...
The cellulose resource matrix.
Keijsers, Edwin R P; Yılmaz, Gülden; van Dam, Jan E G
2013-03-01
The emerging biobased economy is causing shifts from mineral fossil oil based resources towards renewable resources. Because of market mechanisms, current and new industries utilising renewable commodities, will attempt to secure their supply of resources. Cellulose is among these commodities, where large scale competition can be expected and already is observed for the traditional industries such as the paper industry. Cellulose and lignocellulosic raw materials (like wood and non-wood fibre crops) are being utilised in many industrial sectors. Due to the initiated transition towards biobased economy, these raw materials are intensively investigated also for new applications such as 2nd generation biofuels and 'green' chemicals and materials production (Clark, 2007; Lange, 2007; Petrus & Noordermeer, 2006; Ragauskas et al., 2006; Regalbuto, 2009). As lignocellulosic raw materials are available in variable quantities and qualities, unnecessary competition can be avoided via the choice of suitable raw materials for a target application. For example, utilisation of cellulose as carbohydrate source for ethanol production (Kabir Kazi et al., 2010) avoids the discussed competition with easier digestible carbohydrates (sugars, starch) deprived from the food supply chain. Also for cellulose use as a biopolymer several different competing markets can be distinguished. It is clear that these applications and markets will be influenced by large volume shifts. The world will have to reckon with the increase of competition and feedstock shortage (land use/biodiversity) (van Dam, de Klerk-Engels, Struik, & Rabbinge, 2005). It is of interest - in the context of sustainable development of the bioeconomy - to categorize the already available and emerging lignocellulosic resources in a matrix structure. When composing such "cellulose resource matrix" attention should be given to the quality aspects as well as to the available quantities and practical possibilities of processing the
Deift, Percy
2009-01-01
This book features a unified derivation of the mathematical theory of the three classical types of invariant random matrix ensembles-orthogonal, unitary, and symplectic. The authors follow the approach of Tracy and Widom, but the exposition here contains a substantial amount of additional material, in particular, facts from functional analysis and the theory of Pfaffians. The main result in the book is a proof of universality for orthogonal and symplectic ensembles corresponding to generalized Gaussian type weights following the authors' prior work. New, quantitative error estimates are derive
Eisenman, Richard L
2005-01-01
This outstanding text and reference applies matrix ideas to vector methods, using physical ideas to illustrate and motivate mathematical concepts but employing a mathematical continuity of development rather than a physical approach. The author, who taught at the U.S. Air Force Academy, dispenses with the artificial barrier between vectors and matrices--and more generally, between pure and applied mathematics.Motivated examples introduce each idea, with interpretations of physical, algebraic, and geometric contexts, in addition to generalizations to theorems that reflect the essential structur
Directory of Open Access Journals (Sweden)
Abdelhakim Chillali
2017-05-01
Full Text Available In classical cryptography, the Hill cipher is a polygraphic substitution cipher based on linear algebra. In this work, we proposed a new problem applicable to the public key cryptography, based on the Matrices, called “Matrix discrete logarithm problem”, it uses certain elements formed by matrices whose coefficients are elements in a finite field. We have constructed an abelian group and, for the cryptographic part in this unreliable group, we then perform the computation corresponding to the algebraic equations, Returning the encrypted result to a receiver. Upon receipt of the result, the receiver can retrieve the sender’s clear message by performing the inverse calculation.
Matrix string partition function
Kostov, Ivan K; Kostov, Ivan K.; Vanhove, Pierre
1998-01-01
We evaluate quasiclassically the Ramond partition function of Euclidean D=10 U(N) super Yang-Mills theory reduced to a two-dimensional torus. The result can be interpreted in terms of free strings wrapping the space-time torus, as expected from the point of view of Matrix string theory. We demonstrate that, when extrapolated to the ultraviolet limit (small area of the torus), the quasiclassical expressions reproduce exactly the recently obtained expression for the partition of the completely reduced SYM theory, including the overall numerical factor. This is an evidence that our quasiclassical calculation might be exact.
Accurate guitar tuning by cochlear implant musicians.
Directory of Open Access Journals (Sweden)
Thomas Lu
Full Text Available Modern cochlear implant (CI users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼ 30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task.
Accurate estimation of indoor travel times
DEFF Research Database (Denmark)
Prentow, Thor Siiger; Blunck, Henrik; Stisen, Allan
2014-01-01
The ability to accurately estimate indoor travel times is crucial for enabling improvements within application areas such as indoor navigation, logistics for mobile workers, and facility management. In this paper, we study the challenges inherent in indoor travel time estimation, and we propose...... the InTraTime method for accurately estimating indoor travel times via mining of historical and real-time indoor position traces. The method learns during operation both travel routes, travel times and their respective likelihood---both for routes traveled as well as for sub-routes thereof. InTraTime...... allows to specify temporal and other query parameters, such as time-of-day, day-of-week or the identity of the traveling individual. As input the method is designed to take generic position traces and is thus interoperable with a variety of indoor positioning systems. The method's advantages include...
Matrix algebra for linear models
Gruber, Marvin H J
2013-01-01
Matrix methods have evolved from a tool for expressing statistical problems to an indispensable part of the development, understanding, and use of various types of complex statistical analyses. This evolution has made matrix methods a vital part of statistical education. Traditionally, matrix methods are taught in courses on everything from regression analysis to stochastic processes, thus creating a fractured view of the topic. Matrix Algebra for Linear Models offers readers a unique, unified view of matrix analysis theory (where and when necessary), methods, and their applications. Written f
On accurate determination of contact angle
Concus, P.; Finn, R.
1992-01-01
Methods are proposed that exploit a microgravity environment to obtain highly accurate measurement of contact angle. These methods, which are based on our earlier mathematical results, do not require detailed measurement of a liquid free-surface, as they incorporate discontinuous or nearly-discontinuous behavior of the liquid bulk in certain container geometries. Physical testing is planned in the forthcoming IML-2 space flight and in related preparatory ground-based experiments.
Software Estimation: Developing an Accurate, Reliable Method
2011-08-01
based and size-based estimates is able to accurately plan, launch, and execute on schedule. Bob Sinclair, NAWCWD Chris Rickets , NAWCWD Brad Hodgins...Office by Carnegie Mellon University. SMPSP and SMTSP are service marks of Carnegie Mellon University. 1. Rickets , Chris A, “A TSP Software Maintenance...Life Cycle”, CrossTalk, March, 2005. 2. Koch, Alan S, “TSP Can Be the Building blocks for CMMI”, CrossTalk, March, 2005. 3. Hodgins, Brad, Rickets
Highly Accurate Prediction of Jobs Runtime Classes
Reiner-Benaim, Anat; Grabarnick, Anna; Shmueli, Edi
2016-01-01
Separating the short jobs from the long is a known technique to improve scheduling performance. In this paper we describe a method we developed for accurately predicting the runtimes classes of the jobs to enable this separation. Our method uses the fact that the runtimes can be represented as a mixture of overlapping Gaussian distributions, in order to train a CART classifier to provide the prediction. The threshold that separates the short jobs from the long jobs is determined during the ev...
Accurate multiplicity scaling in isotopically conjugate reactions
International Nuclear Information System (INIS)
Golokhvastov, A.I.
1989-01-01
The generation of accurate scaling of mutiplicity distributions is presented. The distributions of π - mesons (negative particles) and π + mesons in different nucleon-nucleon interactions (PP, NP and NN) are described by the same universal function Ψ(z) and the same energy dependence of the scale parameter which determines the stretching factor for the unit function Ψ(z) to obtain the desired multiplicity distribution. 29 refs.; 6 figs
Mental models accurately predict emotion transitions.
Thornton, Mark A; Tamir, Diana I
2017-06-06
Successful social interactions depend on people's ability to predict others' future actions and emotions. People possess many mechanisms for perceiving others' current emotional states, but how might they use this information to predict others' future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others' emotional dynamics. People could then use these mental models of emotion transitions to predict others' future emotions from currently observable emotions. To test this hypothesis, studies 1-3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants' ratings of emotion transitions predicted others' experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation-valence, social impact, rationality, and human mind-inform participants' mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants' accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone.
Mental models accurately predict emotion transitions
Thornton, Mark A.; Tamir, Diana I.
2017-01-01
Successful social interactions depend on people’s ability to predict others’ future actions and emotions. People possess many mechanisms for perceiving others’ current emotional states, but how might they use this information to predict others’ future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others’ emotional dynamics. People could then use these mental models of emotion transitions to predict others’ future emotions from currently observable emotions. To test this hypothesis, studies 1–3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants’ ratings of emotion transitions predicted others’ experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation—valence, social impact, rationality, and human mind—inform participants’ mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants’ accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone. PMID:28533373
Matrix product states for lattice field theories
Energy Technology Data Exchange (ETDEWEB)
Banuls, M.C.; Cirac, J.I. [Max-Planck-Institut fuer Quantenoptik (MPQ), Garching (Germany); Cichy, K. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Poznan Univ. (Poland). Faculty of Physics; Jansen, K. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Saito, H. [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Tsukuba Univ., Ibaraki (Japan). Graduate School of Pure and Applied Sciences
2013-10-15
The term Tensor Network States (TNS) refers to a number of families of states that represent different ansaetze for the efficient description of the state of a quantum many-body system. Matrix Product States (MPS) are one particular case of TNS, and have become the most precise tool for the numerical study of one dimensional quantum many-body systems, as the basis of the Density Matrix Renormalization Group method. Lattice Gauge Theories (LGT), in their Hamiltonian version, offer a challenging scenario for these techniques. While the dimensions and sizes of the systems amenable to TNS studies are still far from those achievable by 4-dimensional LGT tools, Tensor Networks can be readily used for problems which more standard techniques, such as Markov chain Monte Carlo simulations, cannot easily tackle. Examples of such problems are the presence of a chemical potential or out-of-equilibrium dynamics. We have explored the performance of Matrix Product States in the case of the Schwinger model, as a widely used testbench for lattice techniques. Using finite-size, open boundary MPS, we are able to determine the low energy states of the model in a fully non-perturbativemanner. The precision achieved by the method allows for accurate finite size and continuum limit extrapolations of the ground state energy, but also of the chiral condensate and the mass gaps, thus showing the feasibility of these techniques for gauge theory problems.
Characterization of supercapacitors matrix
Energy Technology Data Exchange (ETDEWEB)
Sakka, Monzer Al, E-mail: Monzer.Al.Sakka@vub.ac.b [Vrije Universiteit Brussel, pleinlaan 2, B-1050 Brussels (Belgium); FEMTO-ST Institute, ENISYS Department, FCLAB, UFC-UTBM, bat.F, 90010 Belfort (France); Gualous, Hamid, E-mail: Hamid.Gualous@unicaen.f [Laboratoire LUSAC, Universite de Caen Basse Normandie, Rue Louis Aragon - BP 78, 50130 Cherbourg-Octeville (France); Van Mierlo, Joeri [Vrije Universiteit Brussel, pleinlaan 2, B-1050 Brussels (Belgium)
2010-10-30
This paper treats supercapacitors matrix characterization. In order to cut off transient power peaks and to compensate for the intrinsic limitations in embedded sources, the use of supercapacitors as a storage system is quite suitable, because of their appropriate electrical characteristics (huge capacitance, small series resistance, high specific energy, high specific power), direct storage (energy ready for use), and easy control by power electronic conversion. This use requires supercapacitors modules where several cells connected in serial and/or in parallel, thus a bypass system to balance the charging or the discharging of supercapacitors is required. In the matrix of supercapacitors, six elements of three parallel BCAP0350 supercapacitors in serial connections have been considered. This topology permits to reduce the number of the bypass circuits and it can work in degraded mode. Actually, it allows the system to have more reliability by providing power continually to the load even when there are one or more cells failed. Simulation and experimental results are presented and discussed.
Characterization of supercapacitors matrix
International Nuclear Information System (INIS)
Sakka, Monzer Al; Gualous, Hamid; Van Mierlo, Joeri
2010-01-01
This paper treats supercapacitors matrix characterization. In order to cut off transient power peaks and to compensate for the intrinsic limitations in embedded sources, the use of supercapacitors as a storage system is quite suitable, because of their appropriate electrical characteristics (huge capacitance, small series resistance, high specific energy, high specific power), direct storage (energy ready for use), and easy control by power electronic conversion. This use requires supercapacitors modules where several cells connected in serial and/or in parallel, thus a bypass system to balance the charging or the discharging of supercapacitors is required. In the matrix of supercapacitors, six elements of three parallel BCAP0350 supercapacitors in serial connections have been considered. This topology permits to reduce the number of the bypass circuits and it can work in degraded mode. Actually, it allows the system to have more reliability by providing power continually to the load even when there are one or more cells failed. Simulation and experimental results are presented and discussed.
Ceramic matrix and resin matrix composites - A comparison
Hurwitz, Frances I.
1987-01-01
The underlying theory of continuous fiber reinforcement of ceramic matrix and resin matrix composites, their fabrication, microstructure, physical and mechanical properties are contrasted. The growing use of organometallic polymers as precursors to ceramic matrices is discussed as a means of providing low temperature processing capability without the fiber degradation encountered with more conventional ceramic processing techniques. Examples of ceramic matrix composites derived from particulate-filled, high char yield polymers and silsesquioxane precursors are provided.
Ceramic matrix and resin matrix composites: A comparison
Hurwitz, Frances I.
1987-01-01
The underlying theory of continuous fiber reinforcement of ceramic matrix and resin matrix composites, their fabrication, microstructure, physical and mechanical properties are contrasted. The growing use of organometallic polymers as precursors to ceramic matrices is discussed as a means of providing low temperature processing capability without the fiber degradation encountered with more conventional ceramic processing techniques. Examples of ceramic matrix composites derived from particulate-filled, high char yield polymers and silsesquioxane precursors are provided.
International Nuclear Information System (INIS)
Craps, Ben; Sethi, Savdeep; Verlinde, Erik
2005-01-01
The light-like linear dilaton background represents a particularly simple time-dependent 1/2 BPS solution of critical type-IIA superstring theory in ten dimensions. Its lift to M-theory, as well as its Einstein frame metric, are singular in the sense that the geometry is geodesically incomplete and the Riemann tensor diverges along a light-like subspace of codimension one. We study this background as a model for a big bang type singularity in string theory/M-theory. We construct the dual Matrix theory description in terms of a (1+1)-d supersymmetric Yang-Mills theory on a time-dependent world-sheet given by the Milne orbifold of (1+1)-d Minkowski space. Our model provides a framework in which the physics of the singularity appears to be under control
Energy Technology Data Exchange (ETDEWEB)
Craps, Ben [Instituut voor Theoretische Fysica, Universiteit van Amsterdam, Valckenierstraat 65, 1018 XE Amsterdam (Netherlands); Sethi, Savdeep [Enrico Fermi Institute, University of Chicago, Chicago, IL 60637 (United States); Verlinde, Erik [Instituut voor Theoretische Fysica, Universiteit van Amsterdam, Valckenierstraat 65, 1018 XE Amsterdam (Netherlands)
2005-10-15
The light-like linear dilaton background represents a particularly simple time-dependent 1/2 BPS solution of critical type-IIA superstring theory in ten dimensions. Its lift to M-theory, as well as its Einstein frame metric, are singular in the sense that the geometry is geodesically incomplete and the Riemann tensor diverges along a light-like subspace of codimension one. We study this background as a model for a big bang type singularity in string theory/M-theory. We construct the dual Matrix theory description in terms of a (1+1)-d supersymmetric Yang-Mills theory on a time-dependent world-sheet given by the Milne orbifold of (1+1)-d Minkowski space. Our model provides a framework in which the physics of the singularity appears to be under control.
Robust and accurate vectorization of line drawings.
Hilaire, Xavier; Tombre, Karl
2006-06-01
This paper presents a method for vectorizing the graphical parts of paper-based line drawings. The method consists of separating the input binary image into layers of homogeneous thickness, skeletonizing each layer, segmenting the skeleton by a method based on random sampling, and simplifying the result. The segmentation method is robust with a best bound of 50 percent noise reached for indefinitely long primitives. Accurate estimation of the recognized vector's parameters is enabled by explicitly computing their feasibility domains. Theoretical performance analysis and expression of the complexity of the segmentation method are derived. Experimental results and comparisons with other vectorization systems are also provided.
The first accurate description of an aurora
Schröder, Wilfried
2006-12-01
As technology has advanced, the scientific study of auroral phenomena has increased by leaps and bounds. A look back at the earliest descriptions of aurorae offers an interesting look into how medieval scholars viewed the subjects that we study.Although there are earlier fragmentary references in the literature, the first accurate description of the aurora borealis appears to be that published by the German Catholic scholar Konrad von Megenberg (1309-1374) in his book Das Buch der Natur (The Book of Nature). The book was written between 1349 and 1350.
Accurate Charge Densities from Powder Diffraction
DEFF Research Database (Denmark)
Bindzus, Niels; Wahlberg, Nanna; Becker, Jacob
Synchrotron powder X-ray diffraction has in recent years advanced to a level, where it has become realistic to probe extremely subtle electronic features. Compared to single-crystal diffraction, it may be superior for simple, high-symmetry crystals owing to negligible extinction effects and minimal...... peak overlap. Additionally, it offers the opportunity for collecting data on a single scale. For charge densities studies, the critical task is to recover accurate and bias-free structure factors from the diffraction pattern. This is the focal point of the present study, scrutinizing the performance...
Arbitrarily accurate twin composite π -pulse sequences
Torosov, Boyan T.; Vitanov, Nikolay V.
2018-04-01
We present three classes of symmetric broadband composite pulse sequences. The composite phases are given by analytic formulas (rational fractions of π ) valid for any number of constituent pulses. The transition probability is expressed by simple analytic formulas and the order of pulse area error compensation grows linearly with the number of pulses. Therefore, any desired compensation order can be produced by an appropriate composite sequence; in this sense, they are arbitrarily accurate. These composite pulses perform equally well as or better than previously published ones. Moreover, the current sequences are more flexible as they allow total pulse areas of arbitrary integer multiples of π .
Systematization of Accurate Discrete Optimization Methods
Directory of Open Access Journals (Sweden)
V. A. Ovchinnikov
2015-01-01
Full Text Available The object of study of this paper is to define accurate methods for solving combinatorial optimization problems of structural synthesis. The aim of the work is to systemize the exact methods of discrete optimization and define their applicability to solve practical problems.The article presents the analysis, generalization and systematization of classical methods and algorithms described in the educational and scientific literature.As a result of research a systematic presentation of combinatorial methods for discrete optimization described in various sources is given, their capabilities are described and properties of the tasks to be solved using the appropriate methods are specified.
Matrix metalloproteinases outside vertebrates.
Marino-Puertas, Laura; Goulas, Theodoros; Gomis-Rüth, F Xavier
2017-11-01
The matrix metalloproteinase (MMP) family belongs to the metzincin clan of zinc-dependent metallopeptidases. Due to their enormous implications in physiology and disease, MMPs have mainly been studied in vertebrates. They are engaged in extracellular protein processing and degradation, and present extensive paralogy, with 23 forms in humans. One characteristic of MMPs is a ~165-residue catalytic domain (CD), which has been structurally studied for 14 MMPs from human, mouse, rat, pig and the oral-microbiome bacterium Tannerella forsythia. These studies revealed close overall coincidence and characteristic structural features, which distinguish MMPs from other metzincins and give rise to a sequence pattern for their identification. Here, we reviewed the literature available on MMPs outside vertebrates and performed database searches for potential MMP CDs in invertebrates, plants, fungi, viruses, protists, archaea and bacteria. These and previous results revealed that MMPs are widely present in several copies in Eumetazoa and higher plants (Tracheophyta), but have just token presence in eukaryotic algae. A few dozen sequences were found in Ascomycota (within fungi) and in double-stranded DNA viruses infecting invertebrates (within viruses). In contrast, a few hundred sequences were found in archaea and >1000 in bacteria, with several copies for some species. Most of the archaeal and bacterial phyla containing potential MMPs are present in human oral and gut microbiomes. Overall, MMP-like sequences are present across all kingdoms of life, but their asymmetric distribution contradicts the vertical descent model from a eubacterial or archaeal ancestor. This article is part of a Special Issue entitled: Matrix Metalloproteinases edited by Rafael Fridman. Copyright © 2017 Elsevier B.V. All rights reserved.
Accurate shear measurement with faint sources
International Nuclear Information System (INIS)
Zhang, Jun; Foucaud, Sebastien; Luo, Wentao
2015-01-01
For cosmic shear to become an accurate cosmological probe, systematic errors in the shear measurement method must be unambiguously identified and corrected for. Previous work of this series has demonstrated that cosmic shears can be measured accurately in Fourier space in the presence of background noise and finite pixel size, without assumptions on the morphologies of galaxy and PSF. The remaining major source of error is source Poisson noise, due to the finiteness of source photon number. This problem is particularly important for faint galaxies in space-based weak lensing measurements, and for ground-based images of short exposure times. In this work, we propose a simple and rigorous way of removing the shear bias from the source Poisson noise. Our noise treatment can be generalized for images made of multiple exposures through MultiDrizzle. This is demonstrated with the SDSS and COSMOS/ACS data. With a large ensemble of mock galaxy images of unrestricted morphologies, we show that our shear measurement method can achieve sub-percent level accuracy even for images of signal-to-noise ratio less than 5 in general, making it the most promising technique for cosmic shear measurement in the ongoing and upcoming large scale galaxy surveys
How Accurately can we Calculate Thermal Systems?
International Nuclear Information System (INIS)
Cullen, D; Blomquist, R N; Dean, C; Heinrichs, D; Kalugin, M A; Lee, M; Lee, Y; MacFarlan, R; Nagaya, Y; Trkov, A
2004-01-01
I would like to determine how accurately a variety of neutron transport code packages (code and cross section libraries) can calculate simple integral parameters, such as K eff , for systems that are sensitive to thermal neutron scattering. Since we will only consider theoretical systems, we cannot really determine absolute accuracy compared to any real system. Therefore rather than accuracy, it would be more precise to say that I would like to determine the spread in answers that we obtain from a variety of code packages. This spread should serve as an excellent indicator of how accurately we can really model and calculate such systems today. Hopefully, eventually this will lead to improvements in both our codes and the thermal scattering models that they use in the future. In order to accomplish this I propose a number of extremely simple systems that involve thermal neutron scattering that can be easily modeled and calculated by a variety of neutron transport codes. These are theoretical systems designed to emphasize the effects of thermal scattering, since that is what we are interested in studying. I have attempted to keep these systems very simple, and yet at the same time they include most, if not all, of the important thermal scattering effects encountered in a large, water-moderated, uranium fueled thermal system, i.e., our typical thermal reactors
Accurate control testing for clay liner permeability
Energy Technology Data Exchange (ETDEWEB)
Mitchell, R J
1991-08-01
Two series of centrifuge tests were carried out to evaluate the use of centrifuge modelling as a method of accurate control testing of clay liner permeability. The first series used a large 3 m radius geotechnical centrifuge and the second series a small 0.5 m radius machine built specifically for research on clay liners. Two permeability cells were fabricated in order to provide direct data comparisons between the two methods of permeability testing. In both cases, the centrifuge method proved to be effective and efficient, and was found to be free of both the technical difficulties and leakage risks normally associated with laboratory permeability testing of fine grained soils. Two materials were tested, a consolidated kaolin clay having an average permeability coefficient of 1.2{times}10{sup -9} m/s and a compacted illite clay having a permeability coefficient of 2.0{times}10{sup -11} m/s. Four additional tests were carried out to demonstrate that the 0.5 m radius centrifuge could be used for linear performance modelling to evaluate factors such as volumetric water content, compaction method and density, leachate compatibility and other construction effects on liner leakage. The main advantages of centrifuge testing of clay liners are rapid and accurate evaluation of hydraulic properties and realistic stress modelling for performance evaluations. 8 refs., 12 figs., 7 tabs.
DNA-nuclear matrix interactions and ionizing radiation sensitivity
International Nuclear Information System (INIS)
Schwartz, J.L.; Chicago Univ., IL; Vaughan, A.T.M.
1993-01-01
The association between inherent ionizing radiation sensitivity and DNA supercoil unwinding in mammalian cells suggests that the DNA-nuclear matrix attachment region (MAR) plays an important role in radiation response. In radioresistant cells, the MAR structure may exist in a more stable, open configuration, limiting DNA unwinding following strand break induction and maintaining DNA ends in close proximity for more rapid and accurate rejoining. In addition, the open configuration at these matrix attachment sites may serve to facilitate rapid DNA processing of breaks by providing (1) sites for repair proteins to collect and (2) energy to drive enzymatic reactions
Phenomenology of the CKM matrix
International Nuclear Information System (INIS)
Nir, Y.
1989-01-01
The way in which an exact determination of the CKM matrix elements tests the standard Model is demonstrated by a two-generation example. The determination of matrix elements from meson semileptonic decays is explained, with an emphasis on the respective reliability of quark level and meson level calculations. The assumptions involved in the use of loop processes are described. Finally, the state of the art of the knowledge of the CKM matrix is presented. 19 refs., 2 figs
On matrix fractional differential equations
Adem Kılıçman; Wasan Ajeel Ahmood
2017-01-01
The aim of this article is to study the matrix fractional differential equations and to find the exact solution for system of matrix fractional differential equations in terms of Riemann–Liouville using Laplace transform method and convolution product to the Riemann–Liouville fractional of matrices. Also, we show the theorem of non-homogeneous matrix fractional partial differential equation with some illustrative examples to demonstrate the effectiveness of the new methodology. The main objec...
Matrix transformations and sequence spaces
International Nuclear Information System (INIS)
Nanda, S.
1983-06-01
In most cases the most general linear operator from one sequence space into another is actually given by an infinite matrix and therefore the theory of matrix transformations has always been of great interest in the study of sequence spaces. The study of general theory of matrix transformations was motivated by the special results in summability theory. This paper is a review article which gives almost all known results on matrix transformations. This also suggests a number of open problems for further study and will be very useful for research workers. (author)
Multivariate Matrix-Exponential Distributions
DEFF Research Database (Denmark)
Bladt, Mogens; Nielsen, Bo Friis
2010-01-01
be written as linear combinations of the elements in the exponential of a matrix. For this reason we shall refer to multivariate distributions with rational Laplace transform as multivariate matrix-exponential distributions (MVME). The marginal distributions of an MVME are univariate matrix......-exponential distributions. We prove a characterization that states that a distribution is an MVME distribution if and only if all non-negative, non-null linear combinations of the coordinates have a univariate matrix-exponential distribution. This theorem is analog to a well-known characterization theorem...
International Nuclear Information System (INIS)
Dorey, Nick; Tong, David; Turner, Carl
2016-01-01
We study a U(N) gauged matrix quantum mechanics which, in the large N limit, is closely related to the chiral WZW conformal field theory. This manifests itself in two ways. First, we construct the left-moving Kac-Moody algebra from matrix degrees of freedom. Secondly, we compute the partition function of the matrix model in terms of Schur and Kostka polynomials and show that, in the large N limit, it coincides with the partition function of the WZW model. This same matrix model was recently shown to describe non-Abelian quantum Hall states and the relationship to the WZW model can be understood in this framework.
International Nuclear Information System (INIS)
Perdicakis, Michel
2012-01-01
Document available in extended abstract form only. In many countries, it is planned that the long life highly radioactive nuclear spent fuel will be stored in deep argillaceous rocks. The sites selected for this purpose are anoxic and satisfy several recommendations as mechanical stability, low permeability and low redox potential. Pyrite (FeS 2 ), iron(II) carbonate, iron(II) bearing clays and organic matter that are present in very small amounts (about 1% w:w) in soils play a major role in their reactivity and are considered today as responsible for the low redox potential values of these sites. In this communication, we describe an electrochemical technique derived from 'Salt matrix voltammetry' and allowing the almost in-situ voltammetric characterization of air-sensitive samples of soils after the only addition of the minimum humidity required for electrolytic conduction. Figure 1 shows the principle of the developed technique. It consists in the entrapment of the clay sample between a graphite working electrode and a silver counter/quasi-reference electrode. The sample was previously humidified by passing a water saturated inert gas through the electrochemical cell. The technique leads to well-defined voltammetric responses of the electro-active components of the clays. Figure 2 shows a typical voltammogram relative to a Callovo-Oxfordian argillite sample from Bure, the French place planned for the underground nuclear waste disposal. During the direct scan, one can clearly distinguish the anodic voltammetric signals for the oxidation of the iron (II) species associated with the clay and the oxidation of pyrite. The reverse scan displays a small cathodic signal for the reduction of iron (III) associated with the clay that demonstrates that the majority of the previously oxidized iron (II) species were transformed into iron (III) oxides reducible at lower potentials. When a second voltammetric cycle is performed, one can notice that the signal for iron (II
Accurate metacognition for visual sensory memory representations.
Vandenbroucke, Annelinde R E; Sligte, Ilja G; Barrett, Adam B; Seth, Anil K; Fahrenfort, Johannes J; Lamme, Victor A F
2014-04-01
The capacity to attend to multiple objects in the visual field is limited. However, introspectively, people feel that they see the whole visual world at once. Some scholars suggest that this introspective feeling is based on short-lived sensory memory representations, whereas others argue that the feeling of seeing more than can be attended to is illusory. Here, we investigated this phenomenon by combining objective memory performance with subjective confidence ratings during a change-detection task. This allowed us to compute a measure of metacognition--the degree of knowledge that subjects have about the correctness of their decisions--for different stages of memory. We show that subjects store more objects in sensory memory than they can attend to but, at the same time, have similar metacognition for sensory memory and working memory representations. This suggests that these subjective impressions are not an illusion but accurate reflections of the richness of visual perception.
An accurate nonlinear Monte Carlo collision operator
International Nuclear Information System (INIS)
Wang, W.X.; Okamoto, M.; Nakajima, N.; Murakami, S.
1995-03-01
A three dimensional nonlinear Monte Carlo collision model is developed based on Coulomb binary collisions with the emphasis both on the accuracy and implementation efficiency. The operator of simple form fulfills particle number, momentum and energy conservation laws, and is equivalent to exact Fokker-Planck operator by correctly reproducing the friction coefficient and diffusion tensor, in addition, can effectively assure small-angle collisions with a binary scattering angle distributed in a limited range near zero. Two highly vectorizable algorithms are designed for its fast implementation. Various test simulations regarding relaxation processes, electrical conductivity, etc. are carried out in velocity space. The test results, which is in good agreement with theory, and timing results on vector computers show that it is practically applicable. The operator may be used for accurately simulating collisional transport problems in magnetized and unmagnetized plasmas. (author)
Accurate predictions for the LHC made easy
CERN. Geneva
2014-01-01
The data recorded by the LHC experiments is of a very high quality. To get the most out of the data, precise theory predictions, including uncertainty estimates, are needed to reduce as much as possible theoretical bias in the experimental analyses. Recently, significant progress has been made in computing Next-to-Leading Order (NLO) computations, including matching to the parton shower, that allow for these accurate, hadron-level predictions. I shall discuss one of these efforts, the MadGraph5_aMC@NLO program, that aims at the complete automation of predictions at the NLO accuracy within the SM as well as New Physics theories. I’ll illustrate some of the theoretical ideas behind this program, show some selected applications to LHC physics, as well as describe the future plans.
Apparatus for accurately measuring high temperatures
Smith, D.D.
The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800/sup 0/ to 2700/sup 0/C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube and a chamber for containing a charge of high pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve shutter arrangement allows a pulse of the high gas to purge the sight tube of air-borne thermal radiation contaminants which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.
Accurate Modeling Method for Cu Interconnect
Yamada, Kenta; Kitahara, Hiroshi; Asai, Yoshihiko; Sakamoto, Hideo; Okada, Norio; Yasuda, Makoto; Oda, Noriaki; Sakurai, Michio; Hiroi, Masayuki; Takewaki, Toshiyuki; Ohnishi, Sadayuki; Iguchi, Manabu; Minda, Hiroyasu; Suzuki, Mieko
This paper proposes an accurate modeling method of the copper interconnect cross-section in which the width and thickness dependence on layout patterns and density caused by processes (CMP, etching, sputtering, lithography, and so on) are fully, incorporated and universally expressed. In addition, we have developed specific test patterns for the model parameters extraction, and an efficient extraction flow. We have extracted the model parameters for 0.15μm CMOS using this method and confirmed that 10%τpd error normally observed with conventional LPE (Layout Parameters Extraction) was completely dissolved. Moreover, it is verified that the model can be applied to more advanced technologies (90nm, 65nm and 55nm CMOS). Since the interconnect delay variations due to the processes constitute a significant part of what have conventionally been treated as random variations, use of the proposed model could enable one to greatly narrow the guardbands required to guarantee a desired yield, thereby facilitating design closure.
Ceramic matrix composite article and process of fabricating a ceramic matrix composite article
Cairo, Ronald Robert; DiMascio, Paul Stephen; Parolini, Jason Robert
2016-01-12
A ceramic matrix composite article and a process of fabricating a ceramic matrix composite are disclosed. The ceramic matrix composite article includes a matrix distribution pattern formed by a manifold and ceramic matrix composite plies laid up on the matrix distribution pattern, includes the manifold, or a combination thereof. The manifold includes one or more matrix distribution channels operably connected to a delivery interface, the delivery interface configured for providing matrix material to one or more of the ceramic matrix composite plies. The process includes providing the manifold, forming the matrix distribution pattern by transporting the matrix material through the manifold, and contacting the ceramic matrix composite plies with the matrix material.
Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi
2015-02-01
With the development of intelligent transport system, advanced information feedback strategies have been developed to reduce traffic congestion and enhance the capacity. However, previous strategies provide accurate information to travelers and our simulation results show that accurate information brings negative effects, especially in delay case. Because travelers prefer to the best condition route with accurate information, and delayed information cannot reflect current traffic condition but past. Then travelers make wrong routing decisions, causing the decrease of the capacity and the increase of oscillations and the system deviating from the equilibrium. To avoid the negative effect, bounded rationality is taken into account by introducing a boundedly rational threshold BR. When difference between two routes is less than the BR, routes have equal probability to be chosen. The bounded rationality is helpful to improve the efficiency in terms of capacity, oscillation and the gap deviating from the system equilibrium.
Strategy BMT Al-Ittihad Using Matrix IE, Matrix SWOT 8K, Matrix SPACE and Matrix TWOS
Directory of Open Access Journals (Sweden)
Nofrizal Nofrizal
2018-03-01
Full Text Available This research aims to formulate and select BMT Al-Ittihad Rumbai strategy to face the changing of business environment both from internal environment such as organization resources, finance, member and external business such as competitor, economy, politics and others. This research method used Analysis of EFAS, IFAS, IE Matrix, SWOT-8K Matrix, SPACE Matrix and TWOS Matrix. our hope from this research it can assist BMT Al-Ittihad in formulating and selecting strategies for the sustainability of BMT Al-Ittihad in the future. The sample in this research is using purposive sampling technique that is the manager and leader of BMT Al-IttihadRumbaiPekanbaru. The result of this research shows that the position of BMT Al-Ittihad using IE Matrix, SWOT-8K Matrix and SPACE Matrix is in growth position, stabilization and aggressive. The choice of strategy after using TWOS Matrix is market penetration, market development, vertical integration, horizontal integration, and stabilization (careful.
Jairam, Dharmananda; Kiewra, Kenneth A.; Kauffman, Douglas F.; Zhao, Ruomeng
2012-01-01
This study investigated how best to study a matrix. Fifty-three participants studied a matrix topically (1 column at a time), categorically (1 row at a time), or in a unified way (all at once). Results revealed that categorical and unified study produced higher: (a) performance on relationship and fact tests, (b) study material satisfaction, and…
Bulk metallic glass matrix composites
International Nuclear Information System (INIS)
Choi-Yim, H.; Johnson, W.L.
1997-01-01
Composites with a bulk metallic glass matrix were synthesized and characterized. This was made possible by the recent development of bulk metallic glasses that exhibit high resistance to crystallization in the undercooled liquid state. In this letter, experimental methods for processing metallic glass composites are introduced. Three different bulk metallic glass forming alloys were used as the matrix materials. Both ceramics and metals were introduced as reinforcement into the metallic glass. The metallic glass matrix remained amorphous after adding up to a 30 vol% fraction of particles or short wires. X-ray diffraction patterns of the composites show only peaks from the second phase particles superimposed on the broad diffuse maxima from the amorphous phase. Optical micrographs reveal uniformly distributed particles in the matrix. The glass transition of the amorphous matrix and the crystallization behavior of the composites were studied by calorimetric methods. copyright 1997 American Institute of Physics
Machining of Metal Matrix Composites
2012-01-01
Machining of Metal Matrix Composites provides the fundamentals and recent advances in the study of machining of metal matrix composites (MMCs). Each chapter is written by an international expert in this important field of research. Machining of Metal Matrix Composites gives the reader information on machining of MMCs with a special emphasis on aluminium matrix composites. Chapter 1 provides the mechanics and modelling of chip formation for traditional machining processes. Chapter 2 is dedicated to surface integrity when machining MMCs. Chapter 3 describes the machinability aspects of MMCs. Chapter 4 contains information on traditional machining processes and Chapter 5 is dedicated to the grinding of MMCs. Chapter 6 describes the dry cutting of MMCs with SiC particulate reinforcement. Finally, Chapter 7 is dedicated to computational methods and optimization in the machining of MMCs. Machining of Metal Matrix Composites can serve as a useful reference for academics, manufacturing and materials researchers, manu...
Quantum mechanics in matrix form
Ludyk, Günter
2018-01-01
This book gives an introduction to quantum mechanics with the matrix method. Heisenberg's matrix mechanics is described in detail. The fundamental equations are derived by algebraic methods using matrix calculus. Only a brief description of Schrödinger's wave mechanics is given (in most books exclusively treated), to show their equivalence to Heisenberg's matrix method. In the first part the historical development of Quantum theory by Planck, Bohr and Sommerfeld is sketched, followed by the ideas and methods of Heisenberg, Born and Jordan. Then Pauli's spin and exclusion principles are treated. Pauli's exclusion principle leads to the structure of atoms. Finally, Dirac´s relativistic quantum mechanics is shortly presented. Matrices and matrix equations are today easy to handle when implementing numerical algorithms using standard software as MAPLE and Mathematica.
Strong factor in the SO(2,3) S matrix
International Nuclear Information System (INIS)
Amado, R.D.; Sparrow, D.A.
1986-01-01
The group theoretic S matrix of Alhassid, Iachello, and Wu is factorable into a product of Coulomb and strong factors. The strong factor is examined with a view to relating it to more fa- miliar potential and phase shift descriptions. We find simple approximate expressions for the phase shifts which are very accurate for heavy-ion-type applications. For peripheral scattering it is possible to obtain simple expressions relating the strong factor to an effective potential
Accurate Holdup Calculations with Predictive Modeling & Data Integration
Energy Technology Data Exchange (ETDEWEB)
Azmy, Yousry [North Carolina State Univ., Raleigh, NC (United States). Dept. of Nuclear Engineering; Cacuci, Dan [Univ. of South Carolina, Columbia, SC (United States). Dept. of Mechanical Engineering
2017-04-03
Bayes’ Theorem, one must have a model y(x) that maps the state variables x (the solution in this case) to the measurements y. In this case, the unknown state variables are the configuration and composition of the heldup SNM. The measurements are the detector readings. Thus, the natural model is neutral-particle radiation transport where a wealth of computational tools exists for performing these simulations accurately and efficiently. The combination of predictive model and Bayesian inference forms the Data Integration with Modeled Predictions (DIMP) method that serves as foundation for this project. The cost functional describing the model-to-data misfit is computed via a norm created by the inverse of the covariance matrix of the model parameters and responses. Since the model y(x) for the holdup problem is nonlinear, a nonlinear optimization on Q is conducted via Newton-type iterative methods to find the optimal values of the model parameters x. This project comprised a collaboration between NC State University (NCSU), the University of South Carolina (USC), and Oak Ridge National Laboratory (ORNL). The project was originally proposed in seven main tasks with an eighth contingency task to be performed if time and funding permitted; in fact time did not permit commencement of the contingency task and it was not performed. The remaining tasks involved holdup analysis with gamma detection strategies and separately with neutrons based on coincidence counting. Early in the project, and upon consultation with experts in coincidence counting it became evident that this approach is not viable for holdup applications and this task was replaced with an alternative, but valuable investigation that was carried out by the USC partner. Nevertheless, the experimental 4 measurements at ORNL of both gamma and neutron sources for the purpose of constructing Detector Response Functions (DRFs) with the associated uncertainties were indeed completed.
Dynamics Analysis for Hydroturbine Regulating System Based on Matrix Model
Directory of Open Access Journals (Sweden)
Jiafu Wei
2017-01-01
Full Text Available The hydraulic turbine model is the key factor which affects the analysis precision of the hydraulic turbine governing system. This paper discusses the basic principle of the hydraulic turbine matrix model and gives two methods to realize. Using the characteristic matrix to describe unit flow and torque and their relationship with the opening and unit speed, it can accurately represent the nonlinear characteristics of the turbine, effectively improve the convergence of simulation process, and meet the needs of high precision real-time simulation of power system. Through the simulation of a number of power stations, it indicates that, by analyzing the dynamic process of the hydraulic turbine regulating with 5-order matrix model, the calculation results and field test data will have good consistency, and it can better meet the needs of power system dynamic simulation.
Comprehensive T-Matrix Reference Database: A 2012 - 2013 Update
Mishchenko, Michael I.; Videen, Gorden; Khlebtsov, Nikolai G.; Wriedt, Thomas
2013-01-01
The T-matrix method is one of the most versatile, efficient, and accurate theoretical techniques widely used for numerically exact computer calculations of electromagnetic scattering by single and composite particles, discrete random media, and particles imbedded in complex environments. This paper presents the fifth update to the comprehensive database of peer-reviewed T-matrix publications initiated by us in 2004 and includes relevant publications that have appeared since 2012. It also lists several earlier publications not incorporated in the original database, including Peter Waterman's reports from the 1960s illustrating the history of the T-matrix approach and demonstrating that John Fikioris and Peter Waterman were the true pioneers of the multi-sphere method otherwise known as the generalized Lorenz - Mie theory.
Quasinormal-Mode Expansion of the Scattering Matrix
Directory of Open Access Journals (Sweden)
Filippo Alpeggiani
2017-06-01
Full Text Available It is well known that the quasinormal modes (or resonant states of photonic structures can be associated with the poles of the scattering matrix of the system in the complex-frequency plane. In this work, the inverse problem, i.e., the reconstruction of the scattering matrix from the knowledge of the quasinormal modes, is addressed. We develop a general and scalable quasinormal-mode expansion of the scattering matrix, requiring only the complex eigenfrequencies and the far-field properties of the eigenmodes. The theory is validated by applying it to illustrative nanophotonic systems with multiple overlapping electromagnetic modes. The examples demonstrate that our theory provides an accurate first-principles prediction of the scattering properties, without the need for postulating ad hoc nonresonant channels.
Accurate measurements of neutron activation cross sections
International Nuclear Information System (INIS)
Semkova, V.
1999-01-01
The applications of some recent achievements of neutron activation method on high intensity neutron sources are considered from the view point of associated errors of cross sections data for neutron induced reaction. The important corrections in -y-spectrometry insuring precise determination of the induced radioactivity, methods for accurate determination of the energy and flux density of neutrons, produced by different sources, and investigations of deuterium beam composition are considered as factors determining the precision of the experimental data. The influence of the ion beam composition on the mean energy of neutrons has been investigated by measurement of the energy of neutrons induced by different magnetically analysed deuterium ion groups. Zr/Nb method for experimental determination of the neutron energy in the 13-15 MeV energy range allows to measure energy of neutrons from D-T reaction with uncertainty of 50 keV. Flux density spectra from D(d,n) E d = 9.53 MeV and Be(d,n) E d = 9.72 MeV are measured by PHRS and foil activation method. Future applications of the activation method on NG-12 are discussed. (author)
Implicit time accurate simulation of unsteady flow
van Buuren, René; Kuerten, Hans; Geurts, Bernard J.
2001-03-01
Implicit time integration was studied in the context of unsteady shock-boundary layer interaction flow. With an explicit second-order Runge-Kutta scheme, a reference solution to compare with the implicit second-order Crank-Nicolson scheme was determined. The time step in the explicit scheme is restricted by both temporal accuracy as well as stability requirements, whereas in the A-stable implicit scheme, the time step has to obey temporal resolution requirements and numerical convergence conditions. The non-linear discrete equations for each time step are solved iteratively by adding a pseudo-time derivative. The quasi-Newton approach is adopted and the linear systems that arise are approximately solved with a symmetric block Gauss-Seidel solver. As a guiding principle for properly setting numerical time integration parameters that yield an efficient time accurate capturing of the solution, the global error caused by the temporal integration is compared with the error resulting from the spatial discretization. Focus is on the sensitivity of properties of the solution in relation to the time step. Numerical simulations show that the time step needed for acceptable accuracy can be considerably larger than the explicit stability time step; typical ratios range from 20 to 80. At large time steps, convergence problems that are closely related to a highly complex structure of the basins of attraction of the iterative method may occur. Copyright
Spectrally accurate initial data in numerical relativity
Battista, Nicholas A.
Einstein's theory of general relativity has radically altered the way in which we perceive the universe. His breakthrough was to realize that the fabric of space is deformable in the presence of mass, and that space and time are linked into a continuum. Much evidence has been gathered in support of general relativity over the decades. Some of the indirect evidence for GR includes the phenomenon of gravitational lensing, the anomalous perihelion of mercury, and the gravitational redshift. One of the most striking predictions of GR, that has not yet been confirmed, is the existence of gravitational waves. The primary source of gravitational waves in the universe is thought to be produced during the merger of binary black hole systems, or by binary neutron stars. The starting point for computer simulations of black hole mergers requires highly accurate initial data for the space-time metric and for the curvature. The equations describing the initial space-time around the black hole(s) are non-linear, elliptic partial differential equations (PDE). We will discuss how to use a pseudo-spectral (collocation) method to calculate the initial puncture data corresponding to single black hole and binary black hole systems.
A stiffly accurate integrator for elastodynamic problems
Michels, Dominik L.
2017-07-21
We present a new integration algorithm for the accurate and efficient solution of stiff elastodynamic problems governed by the second-order ordinary differential equations of structural mechanics. Current methods have the shortcoming that their performance is highly dependent on the numerical stiffness of the underlying system that often leads to unrealistic behavior or a significant loss of efficiency. To overcome these limitations, we present a new integration method which is based on a mathematical reformulation of the underlying differential equations, an exponential treatment of the full nonlinear forcing operator as opposed to more standard partially implicit or exponential approaches, and the utilization of the concept of stiff accuracy which ensures that the efficiency of the simulations is significantly less sensitive to increased stiffness. As a consequence, we are able to tremendously accelerate the simulation of stiff systems compared to established integrators and significantly increase the overall accuracy. The advantageous behavior of this approach is demonstrated on a broad spectrum of complex examples like deformable bodies, textiles, bristles, and human hair. Our easily parallelizable integrator enables more complex and realistic models to be explored in visual computing without compromising efficiency.
Geodetic analysis of disputed accurate qibla direction
Saksono, Tono; Fulazzaky, Mohamad Ali; Sari, Zamah
2018-04-01
Muslims perform the prayers facing towards the correct qibla direction would be the only one of the practical issues in linking theoretical studies with practice. The concept of facing towards the Kaaba in Mecca during the prayers has long been the source of controversy among the muslim communities to not only in poor and developing countries but also in developed countries. The aims of this study were to analyse the geodetic azimuths of qibla calculated using three different models of the Earth. The use of ellipsoidal model of the Earth could be the best method for determining the accurate direction of Kaaba from anywhere on the Earth's surface. A muslim cannot direct himself towards the qibla correctly if he cannot see the Kaaba due to setting out process and certain motions during the prayer this can significantly shift the qibla direction from the actual position of the Kaaba. The requirement of muslim prayed facing towards the Kaaba is more as spiritual prerequisite rather than physical evidence.
Structured decomposition design of partial Mueller matrix polarimeters.
Alenin, Andrey S; Scott Tyo, J
2015-07-01
Partial Mueller matrix polarimeters (pMMPs) are active sensing instruments that probe a scattering process with a set of polarization states and analyze the scattered light with a second set of polarization states. Unlike conventional Mueller matrix polarimeters, pMMPs do not attempt to reconstruct the entire Mueller matrix. With proper choice of generator and analyzer states, a subset of the Mueller matrix space can be reconstructed with fewer measurements than that of the full Mueller matrix polarimeter. In this paper we consider the structure of the Mueller matrix and our ability to probe it using a reduced number of measurements. We develop analysis tools that allow us to relate the particular choice of generator and analyzer polarization states to the portion of Mueller matrix space that the instrument measures, as well as develop an optimization method that is based on balancing the signal-to-noise ratio of the resulting instrument with the ability of that instrument to accurately measure a particular set of desired polarization components with as few measurements as possible. In the process, we identify 10 classes of pMMP systems, for which the space coverage is immediately known. We demonstrate the theory with a numerical example that designs partial polarimeters for the task of monitoring the damage state of a material as presented earlier by Hoover and Tyo [Appl. Opt.46, 8364 (2007)10.1364/AO.46.008364APOPAI1559-128X]. We show that we can reduce the polarimeter to making eight measurements while still covering the Mueller matrix subspace spanned by the objects.
Containment Code Validation Matrix
International Nuclear Information System (INIS)
Chin, Yu-Shan; Mathew, P.M.; Glowa, Glenn; Dickson, Ray; Liang, Zhe; Leitch, Brian; Barber, Duncan; Vasic, Aleks; Bentaib, Ahmed; Journeau, Christophe; Malet, Jeanne; Studer, Etienne; Meynet, Nicolas; Piluso, Pascal; Gelain, Thomas; Michielsen, Nathalie; Peillon, Samuel; Porcheron, Emmanuel; Albiol, Thierry; Clement, Bernard; Sonnenkalb, Martin; Klein-Hessling, Walter; Arndt, Siegfried; Weber, Gunter; Yanez, Jorge; Kotchourko, Alexei; Kuznetsov, Mike; Sangiorgi, Marco; Fontanet, Joan; Herranz, Luis; Garcia De La Rua, Carmen; Santiago, Aleza Enciso; Andreani, Michele; Paladino, Domenico; Dreier, Joerg; Lee, Richard; Amri, Abdallah
2014-01-01
The Committee on the Safety of Nuclear Installations (CSNI) formed the CCVM (Containment Code Validation Matrix) task group in 2002. The objective of this group was to define a basic set of available experiments for code validation, covering the range of containment (ex-vessel) phenomena expected in the course of light and heavy water reactor design basis accidents and beyond design basis accidents/severe accidents. It was to consider phenomena relevant to pressurised heavy water reactor (PHWR), pressurised water reactor (PWR) and boiling water reactor (BWR) designs of Western origin as well as of Eastern European VVER types. This work would complement the two existing CSNI validation matrices for thermal hydraulic code validation (NEA/CSNI/R(1993)14) and In-vessel core degradation (NEA/CSNI/R(2001)21). The report initially provides a brief overview of the main features of a PWR, BWR, CANDU and VVER reactors. It also provides an overview of the ex-vessel corium retention (core catcher). It then provides a general overview of the accident progression for light water and heavy water reactors. The main focus is to capture most of the phenomena and safety systems employed in these reactor types and to highlight the differences. This CCVM contains a description of 127 phenomena, broken down into 6 categories: - Containment Thermal-hydraulics Phenomena; - Hydrogen Behaviour (Combustion, Mitigation and Generation) Phenomena; - Aerosol and Fission Product Behaviour Phenomena; - Iodine Chemistry Phenomena; - Core Melt Distribution and Behaviour in Containment Phenomena; - Systems Phenomena. A synopsis is provided for each phenomenon, including a description, references for further information, significance for DBA and SA/BDBA and a list of experiments that may be used for code validation. The report identified 213 experiments, broken down into the same six categories (as done for the phenomena). An experiment synopsis is provided for each test. Along with a test description
Oehlmann, Dietmar; Ohlmann, Odile M.; Danzebrink, Hans U.
2005-04-01
perform this exchange, as a matrix, understood as source, of new ideas.
Direct determination of scattering time delays using the R-matrix propagation method
International Nuclear Information System (INIS)
Walker, R.B.; Hayes, E.F.
1989-01-01
A direct method for determining time delays for scattering processes is developed using the R-matrix propagation method. The procedure involves the simultaneous generation of the global R matrix and its energy derivative. The necessary expressions to obtain the energy derivative of the S matrix are relatively simple and involve many of the same matrix elements required for the R-matrix propagation method. This method is applied to a simple model for a chemical reaction that displays sharp resonance features. The test results of the direct method are shown to be in excellent agreement with the traditional numerical differentiation method for scattering energies near the resonance energy. However, for sharp resonances the numerical differentiation method requires calculation of the S-matrix elements at many closely spaced energies. Since the direct method presented here involves calculations at only a single energy, one is able to generate accurate energy derivatives and time delays much more efficiently and reliably
Measuring methods of matrix diffusion
International Nuclear Information System (INIS)
Muurinen, A.; Valkiainen, M.
1988-03-01
In Finland the spent nuclear fuel is planned to be disposed of at large depths in crystalline bedrock. The radionuclides which are dissolved in the groundwater may be able to diffuse into the micropores of the porous rock matrix and thus be withdrawn from the flowing water in the fractures. This phenomenon is called matrix diffusion. A review over matrix diffusion is presented in the study. The main interest is directed to the diffusion of non-sorbing species. The review covers diffusion experiments and measurements of porosity, pore size, specific surface area and water permeability
Maximal quantum Fisher information matrix
International Nuclear Information System (INIS)
Chen, Yu; Yuan, Haidong
2017-01-01
We study the existence of the maximal quantum Fisher information matrix in the multi-parameter quantum estimation, which bounds the ultimate precision limit. We show that when the maximal quantum Fisher information matrix exists, it can be directly obtained from the underlying dynamics. Examples are then provided to demonstrate the usefulness of the maximal quantum Fisher information matrix by deriving various trade-off relations in multi-parameter quantum estimation and obtaining the bounds for the scalings of the precision limit. (paper)
Accurate deuterium spectroscopy for fundamental studies
Wcisło, P.; Thibault, F.; Zaborowski, M.; Wójtewicz, S.; Cygan, A.; Kowzan, G.; Masłowski, P.; Komasa, J.; Puchalski, M.; Pachucki, K.; Ciuryło, R.; Lisak, D.
2018-07-01
We present an accurate measurement of the weak quadrupole S(2) 2-0 line in self-perturbed D2 and theoretical ab initio calculations of both collisional line-shape effects and energy of this rovibrational transition. The spectra were collected at the 247-984 Torr pressure range with a frequency-stabilized cavity ring-down spectrometer linked to an optical frequency comb (OFC) referenced to a primary time standard. Our line-shape modeling employed quantum calculations of molecular scattering (the pressure broadening and shift and their speed dependencies were calculated, while the complex frequency of optical velocity-changing collisions was fitted to experimental spectra). The velocity-changing collisions are handled with the hard-sphere collisional kernel. The experimental and theoretical pressure broadening and shift are consistent within 5% and 27%, respectively (the discrepancy for shift is 8% when referred not to the speed averaged value, which is close to zero, but to the range of variability of the speed-dependent shift). We use our high pressure measurement to determine the energy, ν0, of the S(2) 2-0 transition. The ab initio line-shape calculations allowed us to mitigate the expected collisional systematics reaching the 410 kHz accuracy of ν0. We report theoretical determination of ν0 taking into account relativistic and QED corrections up to α5. Our estimation of the accuracy of the theoretical ν0 is 1.3 MHz. We observe 3.4σ discrepancy between experimental and theoretical ν0.
Towards Accurate Application Characterization for Exascale (APEX)
Energy Technology Data Exchange (ETDEWEB)
Hammond, Simon David [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)
2015-09-01
Sandia National Laboratories has been engaged in hardware and software codesign activities for a number of years, indeed, it might be argued that prototyping of clusters as far back as the CPLANT machines and many large capability resources including ASCI Red and RedStorm were examples of codesigned solutions. As the research supporting our codesign activities has moved closer to investigating on-node runtime behavior a nature hunger has grown for detailed analysis of both hardware and algorithm performance from the perspective of low-level operations. The Application Characterization for Exascale (APEX) LDRD was a project concieved of addressing some of these concerns. Primarily the research was to intended to focus on generating accurate and reproducible low-level performance metrics using tools that could scale to production-class code bases. Along side this research was an advocacy and analysis role associated with evaluating tools for production use, working with leading industry vendors to develop and refine solutions required by our code teams and to directly engage with production code developers to form a context for the application analysis and a bridge to the research community within Sandia. On each of these accounts significant progress has been made, particularly, as this report will cover, in the low-level analysis of operations for important classes of algorithms. This report summarizes the development of a collection of tools under the APEX research program and leaves to other SAND and L2 milestone reports the description of codesign progress with Sandia’s production users/developers.
How flatbed scanners upset accurate film dosimetry
van Battum, L. J.; Huizenga, H.; Verdaasdonk, R. M.; Heukelom, S.
2016-01-01
Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a scanner readout change over its lateral scan axis. Although anisotropic light scattering was presented as the origin of the LSE, this paper presents an alternative cause. Hereto, LSE for two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL), and Gafchromic film (EBT, EBT2, EBT3) was investigated, focused on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2-2.0). Linear polarizer sheets were used to investigate light polarization on the CCD signal in absence and presence of (un)irradiated Gafchromic film. Film dose values ranged between 0.2 to 9 Gy, i.e. an optical density range between 0.25 to 1.1. Measurements were performed in the scanner’s transmission mode, with red-green-blue channels. LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy increasing up to 14% at maximum lateral position. Cross talk was only significant in high contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes 3% for pixels in the extreme lateral position. Light polarization due to film and the scanner’s optical mirror system is the main contributor, different in magnitude for the red, green and blue channel. We concluded that any Gafchromic EBT type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry requires correction of LSE, therefore, determination of the LSE per color channel and dose delivered to the film.
Accurate hydrocarbon estimates attained with radioactive isotope
International Nuclear Information System (INIS)
Hubbard, G.
1983-01-01
To make accurate economic evaluations of new discoveries, an oil company needs to know how much gas and oil a reservoir contains. The porous rocks of these reservoirs are not completely filled with gas or oil, but contain a mixture of gas, oil and water. It is extremely important to know what volume percentage of this water--called connate water--is contained in the reservoir rock. The percentage of connate water can be calculated from electrical resistivity measurements made downhole. The accuracy of this method can be improved if a pure sample of connate water can be analyzed or if the chemistry of the water can be determined by conventional logging methods. Because of the similarity of the mud filtrate--the water in a water-based drilling fluid--and the connate water, this is not always possible. If the oil company cannot distinguish between connate water and mud filtrate, its oil-in-place calculations could be incorrect by ten percent or more. It is clear that unless an oil company can be sure that a sample of connate water is pure, or at the very least knows exactly how much mud filtrate it contains, its assessment of the reservoir's water content--and consequently its oil or gas content--will be distorted. The oil companies have opted for the Repeat Formation Tester (RFT) method. Label the drilling fluid with small doses of tritium--a radioactive isotope of hydrogen--and it will be easy to detect and quantify in the sample
How flatbed scanners upset accurate film dosimetry
International Nuclear Information System (INIS)
Van Battum, L J; Verdaasdonk, R M; Heukelom, S; Huizenga, H
2016-01-01
Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a scanner readout change over its lateral scan axis. Although anisotropic light scattering was presented as the origin of the LSE, this paper presents an alternative cause. Hereto, LSE for two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL), and Gafchromic film (EBT, EBT2, EBT3) was investigated, focused on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2–2.0). Linear polarizer sheets were used to investigate light polarization on the CCD signal in absence and presence of (un)irradiated Gafchromic film. Film dose values ranged between 0.2 to 9 Gy, i.e. an optical density range between 0.25 to 1.1. Measurements were performed in the scanner’s transmission mode, with red–green–blue channels. LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy increasing up to 14% at maximum lateral position. Cross talk was only significant in high contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes 3% for pixels in the extreme lateral position. Light polarization due to film and the scanner’s optical mirror system is the main contributor, different in magnitude for the red, green and blue channel. We concluded that any Gafchromic EBT type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry requires correction of LSE, therefore, determination of the LSE per color channel and dose delivered to the film. (paper)
Energy Technology Data Exchange (ETDEWEB)
Huang, P-C; Hsu, C-H [Department of Biomedical Engineering and Environmental Sciences, National Tsing Hua University, Hsinchu, Taiwan (China); Hsiao, I-T [Department Medical Imaging and Radiological Sciences, Chang Gung University, Tao-Yuan, Taiwan (China); Lin, K M [Medical Engineering Research Division, National Health Research Institutes, Zhunan Town, Miaoli County, Taiwan (China)], E-mail: cghsu@mx.nthu.edu.tw
2009-06-15
Accurate modeling of the photon acquisition process in pinhole SPECT is essential for optimizing resolution. In this work, the authors develop an accurate system model in which pinhole finite aperture and depth-dependent geometric sensitivity are explicitly included. To achieve high-resolution pinhole SPECT, the voxel size is usually set in the range of sub-millimeter so that the total number of image voxels increase accordingly. It is inevitably that a system matrix that models a variety of favorable physical factors will become extremely sophisticated. An efficient implementation for such an accurate system model is proposed in this research. We first use the geometric symmetries to reduce redundant entries in the matrix. Due to the sparseness of the matrix, only non-zero terms are stored. A novel center-to-radius recording rule is also developed to effectively describe the relation between a voxel and its related detectors at every projection angle. The proposed system matrix is also suitable for multi-threaded computing. Finally, the accuracy and effectiveness of the proposed system model is evaluated in a workstation equipped with two Quad-Core Intel X eon processors.
Chan, Garnet Kin-Lic; Keselman, Anna; Nakatani, Naoki; Li, Zhendong; White, Steven R.
2016-07-01
Current descriptions of the ab initio density matrix renormalization group (DMRG) algorithm use two superficially different languages: an older language of the renormalization group and renormalized operators, and a more recent language of matrix product states and matrix product operators. The same algorithm can appear dramatically different when written in the two different vocabularies. In this work, we carefully describe the translation between the two languages in several contexts. First, we describe how to efficiently implement the ab initio DMRG sweep using a matrix product operator based code, and the equivalence to the original renormalized operator implementation. Next we describe how to implement the general matrix product operator/matrix product state algebra within a pure renormalized operator-based DMRG code. Finally, we discuss two improvements of the ab initio DMRG sweep algorithm motivated by matrix product operator language: Hamiltonian compression, and a sum over operators representation that allows for perfect computational parallelism. The connections and correspondences described here serve to link the future developments with the past and are important in the efficient implementation of continuing advances in ab initio DMRG and related algorithms.
National Oceanic and Atmospheric Administration, Department of Commerce — This data set was taken from CRD 08-18 at the NEFSC. Specifically, the Gulf of Maine diet matrix was developed for the EMAX exercise described in that center...
On matrix fractional differential equations
Directory of Open Access Journals (Sweden)
Adem Kılıçman
2017-01-01
Full Text Available The aim of this article is to study the matrix fractional differential equations and to find the exact solution for system of matrix fractional differential equations in terms of Riemann–Liouville using Laplace transform method and convolution product to the Riemann–Liouville fractional of matrices. Also, we show the theorem of non-homogeneous matrix fractional partial differential equation with some illustrative examples to demonstrate the effectiveness of the new methodology. The main objective of this article is to discuss the Laplace transform method based on operational matrices of fractional derivatives for solving several kinds of linear fractional differential equations. Moreover, we present the operational matrices of fractional derivatives with Laplace transform in many applications of various engineering systems as control system. We present the analytical technique for solving fractional-order, multi-term fractional differential equation. In other words, we propose an efficient algorithm for solving fractional matrix equation.
Electromagnetic matrix elements in baryons
International Nuclear Information System (INIS)
Lipkin, H.J.; Moinester, M.A.
1992-01-01
Some simple symmetry relations between matrix elements of electromagnetic operators are investigated. The implications are discussed for experiments to study hyperon radiative transitions and polarizabilities and form factors. (orig.)
International Nuclear Information System (INIS)
Descouvemont, P; Baye, D
2010-01-01
The different facets of the R-matrix method are presented pedagogically in a general framework. Two variants have been developed over the years: (i) The 'calculable' R-matrix method is a calculational tool to derive scattering properties from the Schroedinger equation in a large variety of physical problems. It was developed rather independently in atomic and nuclear physics with too little mutual influence. (ii) The 'phenomenological' R-matrix method is a technique to parametrize various types of cross sections. It was mainly (or uniquely) used in nuclear physics. Both directions are explained by starting from the simple problem of scattering by a potential. They are illustrated by simple examples in nuclear and atomic physics. In addition to elastic scattering, the R-matrix formalism is applied to inelastic and radiative-capture reactions. We also present more recent and more ambitious applications of the theory in nuclear physics.
Random matrix improved subspace clustering
Couillet, Romain; Kammoun, Abla
2017-01-01
This article introduces a spectral method for statistical subspace clustering. The method is built upon standard kernel spectral clustering techniques, however carefully tuned by theoretical understanding arising from random matrix findings. We show
Matrix Effects in XRF Measurements
International Nuclear Information System (INIS)
Kandil, A.T.; Gabr, N.A.; El-Aryan, S.M.
2015-01-01
This research treats the matrix effect on XRF measurements. The problem is treated by preparing general oxide program, which contains many samples that represent all materials in cement factories, then by using T rail Lachance m ethod to correct errors of matrix effect. This work compares the effect of using lithium tetraborate or sodium tetraborate as a fluxing agent in terms of accuracy and economic cost
Matrix analysis of electrical machinery
Hancock, N N
2013-01-01
Matrix Analysis of Electrical Machinery, Second Edition is a 14-chapter edition that covers the systematic analysis of electrical machinery performance. This edition discusses the principles of various mathematical operations and their application to electrical machinery performance calculations. The introductory chapters deal with the matrix representation of algebraic equations and their application to static electrical networks. The following chapters describe the fundamentals of different transformers and rotating machines and present torque analysis in terms of the currents based on the p
Staggered chiral random matrix theory
International Nuclear Information System (INIS)
Osborn, James C.
2011-01-01
We present a random matrix theory for the staggered lattice QCD Dirac operator. The staggered random matrix theory is equivalent to the zero-momentum limit of the staggered chiral Lagrangian and includes all taste breaking terms at their leading order. This is an extension of previous work which only included some of the taste breaking terms. We will also present some results for the taste breaking contributions to the partition function and the Dirac eigenvalues.
An accurate solution of point reactor neutron kinetics equations of multi-group of delayed neutrons
International Nuclear Information System (INIS)
Yamoah, S.; Akaho, E.H.K.; Nyarko, B.J.B.
2013-01-01
Highlights: ► Analytical solution is proposed to solve the point reactor kinetics equations (PRKE). ► The method is based on formulating a coefficient matrix of the PRKE. ► The method was applied to solve the PRKE for six groups of delayed neutrons. ► Results shows good agreement with other traditional methods in literature. ► The method is accurate and efficient for solving the point reactor kinetics equations. - Abstract: The understanding of the time-dependent behaviour of the neutron population in a nuclear reactor in response to either a planned or unplanned change in the reactor conditions is of great importance to the safe and reliable operation of the reactor. In this study, an accurate analytical solution of point reactor kinetics equations with multi-group of delayed neutrons for specified reactivity changes is proposed to calculate the change in neutron density. The method is based on formulating a coefficient matrix of the homogenous differential equations of the point reactor kinetics equations and calculating the eigenvalues and the corresponding eigenvectors of the coefficient matrix. A small time interval is chosen within which reactivity relatively stays constant. The analytical method was applied to solve the point reactor kinetics equations with six-groups delayed neutrons for a representative thermal reactor. The problems of step, ramp and temperature feedback reactivities are computed and the results compared with other traditional methods. The comparison shows that the method presented in this study is accurate and efficient for solving the point reactor kinetics equations of multi-group of delayed neutrons
Directory of Open Access Journals (Sweden)
F. Coccetti
2003-01-01
Full Text Available In this contribution we present an accurate investigation of three different techniques for the modeling of complex planar circuits. The em analysis is performed by means of different electromagnetic full-wave solvers in the timedomain and in the frequency-domain. The first one is the Transmission Line Matrix (TLM method. In the second one the TLM method is combined with the Integral Equation (IE method. The latter is based on the Generalized Transverse Resonance Diffraction (GTRD. In order to test the methods we model different structures and compare the calculated Sparameters to measured results, with good agreement.
PRODUCT PORTFOLIO ANALYSIS - ARTHUR D. LITTLE MATRIX
Directory of Open Access Journals (Sweden)
Curmei Catalin Valeriu
2011-07-01
Full Text Available In recent decades we have witnessed an unseen dynamism among companies, which is explained by their desire to engage in more activities that provide a high level of development and diversification. Thus, as companies are diversifying more and more, their managers confront a number of challenges arising from the management of resources for the product portfolio and the low level of resources with which companies can identify, at a time. Responding to these challenges, over time were developed a series of analytical product portfolio methods through which managers can balance the sources of cash flows from the multiple products and also can identify the place and role of products, in strategic terms, within the product portfolio. In order to identify these methods the authors of the present paper have conducted a desk research in order to analyze the strategic marketing and management literature of the last 2 decades. Widely were studied a series of methods that are presented in the marketing and management literature as the main instruments used within the product portfolio strategic planning process. Among these methods we focused on the Arthur D. Little matrix. Thus the present paper has the purpose to outline the characteristics and strategic implications of the ADL matrix within a company’s product portfolio. After conducting this analysis we have found that restricting the product portfolio analysis to the A.D.L. matrix is not a very wise decision. The A.D.L. matrix among with other marketing tools of product portfolio analysis have some advantages and disadvantages and is trying to provide, at a time, a specific diagnosis of a company’s product portfolio. Therefore, the recommendation for the Romanian managers consists in a combined use of a wide range of tools and techniques for product portfolio analysis. This leads to a better understanding of the whole mix of product markets, included in portfolio analysis, the strategic position
EISPACK, Subroutines for Eigenvalues, Eigenvectors, Matrix Operations
International Nuclear Information System (INIS)
Garbow, Burton S.; Cline, A.K.; Meyering, J.
1993-01-01
1 - Description of problem or function: EISPACK3 is a collection of 75 FORTRAN subroutines, both single- and double-precision, that compute the eigenvalues and eigenvectors of nine classes of matrices. The package can determine the Eigen-system of complex general, complex Hermitian, real general, real symmetric, real symmetric band, real symmetric tridiagonal, special real tridiagonal, generalized real, and generalized real symmetric matrices. In addition, there are two routines which use the singular value decomposition to solve certain least squares problem. The individual subroutines are - Identification/Description: BAKVEC: Back transform vectors of matrix formed by FIGI; BALANC: Balance a real general matrix; BALBAK: Back transform vectors of matrix formed by BALANC; BANDR: Reduce sym. band matrix to sym. tridiag. matrix; BANDV: Find some vectors of sym. band matrix; BISECT: Find some values of sym. tridiag. matrix; BQR: Find some values of sym. band matrix; CBABK2: Back transform vectors of matrix formed by CBAL; CBAL: Balance a complex general matrix; CDIV: Perform division of two complex quantities; CG: Driver subroutine for a complex general matrix; CH: Driver subroutine for a complex Hermitian matrix; CINVIT: Find some vectors of complex Hess. matrix; COMBAK: Back transform vectors of matrix formed by COMHES; COMHES: Reduce complex matrix to complex Hess. (elementary); COMLR: Find all values of complex Hess. matrix (LR); COMLR2: Find all values/vectors of cmplx Hess. matrix (LR); CCMQR: Find all values of complex Hessenberg matrix (QR); COMQR2: Find all values/vectors of cmplx Hess. matrix (QR); CORTB: Back transform vectors of matrix formed by CORTH; CORTH: Reduce complex matrix to complex Hess. (unitary); CSROOT: Find square root of complex quantity; ELMBAK: Back transform vectors of matrix formed by ELMHES; ELMHES: Reduce real matrix to real Hess. (elementary); ELTRAN: Accumulate transformations from ELMHES (for HQR2); EPSLON: Estimate unit roundoff
An efficient and accurate 3D displacements tracking strategy for digital volume correlation
Pan, Bing
2014-07-01
Owing to its inherent computational complexity, practical implementation of digital volume correlation (DVC) for internal displacement and strain mapping faces important challenges in improving its computational efficiency. In this work, an efficient and accurate 3D displacement tracking strategy is proposed for fast DVC calculation. The efficiency advantage is achieved by using three improvements. First, to eliminate the need of updating Hessian matrix in each iteration, an efficient 3D inverse compositional Gauss-Newton (3D IC-GN) algorithm is introduced to replace existing forward additive algorithms for accurate sub-voxel displacement registration. Second, to ensure the 3D IC-GN algorithm that converges accurately and rapidly and avoid time-consuming integer-voxel displacement searching, a generalized reliability-guided displacement tracking strategy is designed to transfer accurate and complete initial guess of deformation for each calculation point from its computed neighbors. Third, to avoid the repeated computation of sub-voxel intensity interpolation coefficients, an interpolation coefficient lookup table is established for tricubic interpolation. The computational complexity of the proposed fast DVC and the existing typical DVC algorithms are first analyzed quantitatively according to necessary arithmetic operations. Then, numerical tests are performed to verify the performance of the fast DVC algorithm in terms of measurement accuracy and computational efficiency. The experimental results indicate that, compared with the existing DVC algorithm, the presented fast DVC algorithm produces similar precision and slightly higher accuracy at a substantially reduced computational cost. © 2014 Elsevier Ltd.
An efficient and accurate 3D displacements tracking strategy for digital volume correlation
Pan, Bing; Wang, Bo; Wu, Dafang; Lubineau, Gilles
2014-07-01
Owing to its inherent computational complexity, practical implementation of digital volume correlation (DVC) for internal displacement and strain mapping faces important challenges in improving its computational efficiency. In this work, an efficient and accurate 3D displacement tracking strategy is proposed for fast DVC calculation. The efficiency advantage is achieved by using three improvements. First, to eliminate the need of updating Hessian matrix in each iteration, an efficient 3D inverse compositional Gauss-Newton (3D IC-GN) algorithm is introduced to replace existing forward additive algorithms for accurate sub-voxel displacement registration. Second, to ensure the 3D IC-GN algorithm that converges accurately and rapidly and avoid time-consuming integer-voxel displacement searching, a generalized reliability-guided displacement tracking strategy is designed to transfer accurate and complete initial guess of deformation for each calculation point from its computed neighbors. Third, to avoid the repeated computation of sub-voxel intensity interpolation coefficients, an interpolation coefficient lookup table is established for tricubic interpolation. The computational complexity of the proposed fast DVC and the existing typical DVC algorithms are first analyzed quantitatively according to necessary arithmetic operations. Then, numerical tests are performed to verify the performance of the fast DVC algorithm in terms of measurement accuracy and computational efficiency. The experimental results indicate that, compared with the existing DVC algorithm, the presented fast DVC algorithm produces similar precision and slightly higher accuracy at a substantially reduced computational cost.
A survey of matrix theory and matrix inequalities
Marcus, Marvin
2010-01-01
Written for advanced undergraduate students, this highly regarded book presents an enormous amount of information in a concise and accessible format. Beginning with the assumption that the reader has never seen a matrix before, the authors go on to provide a survey of a substantial part of the field, including many areas of modern research interest.Part One of the book covers not only the standard ideas of matrix theory, but ones, as the authors state, ""that reflect our own prejudices,"" among them Kronecker products, compound and induced matrices, quadratic relations, permanents, incidence
Octonionic matrix representation and electromagnetism
Energy Technology Data Exchange (ETDEWEB)
Chanyal, B. C. [Kumaun University, S. S. J. Campus, Almora (India)
2014-12-15
Keeping in mind the important role of octonion algebra, we have obtained the electromagnetic field equations of dyons with an octonionic 8 x 8 matrix representation. In this paper, we consider the eight - dimensional octonionic space as a combination of two (external and internal) four-dimensional spaces for the existence of magnetic monopoles (dyons) in a higher-dimensional formalism. As such, we describe the octonion wave equations in terms of eight components from the 8 x 8 matrix representation. The octonion forms of the generalized potential, fields and current source of dyons in terms of 8 x 8 matrix are discussed in a consistent manner. Thus, we have obtained the generalized Dirac-Maxwell equations of dyons from an 8x8 matrix representation of the octonion wave equations in a compact and consistent manner. The generalized Dirac-Maxwell equations are fully symmetric Maxwell equations and allow for the possibility of magnetic charges and currents, analogous to electric charges and currents. Accordingly, we have obtained the octonionic Dirac wave equations in an external field from the matrix representation of the octonion-valued potentials of dyons.
Legendre Wavelet Operational Matrix Method for Solution of Riccati Differential Equation
Directory of Open Access Journals (Sweden)
S. Balaji
2014-01-01
Full Text Available A Legendre wavelet operational matrix method (LWM is presented for the solution of nonlinear fractional-order Riccati differential equations, having variety of applications in quantum chemistry and quantum mechanics. The fractional-order Riccati differential equations converted into a system of algebraic equations using Legendre wavelet operational matrix. Solutions given by the proposed scheme are more accurate and reliable and they are compared with recently developed numerical, analytical, and stochastic approaches. Comparison shows that the proposed LWM approach has a greater performance and less computational effort for getting accurate solutions. Further existence and uniqueness of the proposed problem are given and moreover the condition of convergence is verified.
International Nuclear Information System (INIS)
Heggarty, J.W.
1999-06-01
For almost thirty years, sequential R-matrix computation has been used by atomic physics research groups, from around the world, to model collision phenomena involving the scattering of electrons or positrons with atomic or molecular targets. As considerable progress has been made in the understanding of fundamental scattering processes, new data, obtained from more complex calculations, is of current interest to experimentalists. Performing such calculations, however, places considerable demands on the computational resources to be provided by the target machine, in terms of both processor speed and memory requirement. Indeed, in some instances the computational requirements are so great that the proposed R-matrix calculations are intractable, even when utilising contemporary classic supercomputers. Historically, increases in the computational requirements of R-matrix computation were accommodated by porting the problem codes to a more powerful classic supercomputer. Although this approach has been successful in the past, it is no longer considered to be a satisfactory solution due to the limitations of current (and future) Von Neumann machines. As a consequence, there has been considerable interest in the high performance multicomputers, that have emerged over the last decade which appear to offer the computational resources required by contemporary R-matrix research. Unfortunately, developing codes for these machines is not as simple a task as it was to develop codes for successive classic supercomputers. The difficulty arises from the considerable differences in the computing models that exist between the two types of machine and results in the programming of multicomputers to be widely acknowledged as a difficult, time consuming and error-prone task. Nevertheless, unless parallel R-matrix computation is realised, important theoretical and experimental atomic physics research will continue to be hindered. This thesis describes work that was undertaken in
Numerical methods in matrix computations
Björck, Åke
2015-01-01
Matrix algorithms are at the core of scientific computing and are indispensable tools in most applications in engineering. This book offers a comprehensive and up-to-date treatment of modern methods in matrix computation. It uses a unified approach to direct and iterative methods for linear systems, least squares and eigenvalue problems. A thorough analysis of the stability, accuracy, and complexity of the treated methods is given. Numerical Methods in Matrix Computations is suitable for use in courses on scientific computing and applied technical areas at advanced undergraduate and graduate level. A large bibliography is provided, which includes both historical and review papers as well as recent research papers. This makes the book useful also as a reference and guide to further study and research work. Åke Björck is a professor emeritus at the Department of Mathematics, Linköping University. He is a Fellow of the Society of Industrial and Applied Mathematics.
Lectures on matrix field theory
Ydri, Badis
2017-01-01
These lecture notes provide a systematic introduction to matrix models of quantum field theories with non-commutative and fuzzy geometries. The book initially focuses on the matrix formulation of non-commutative and fuzzy spaces, followed by a description of the non-perturbative treatment of the corresponding field theories. As an example, the phase structure of non-commutative phi-four theory is treated in great detail, with a separate chapter on the multitrace approach. The last chapter offers a general introduction to non-commutative gauge theories, while two appendices round out the text. Primarily written as a self-study guide for postgraduate students – with the aim of pedagogically introducing them to key analytical and numerical tools, as well as useful physical models in applications – these lecture notes will also benefit experienced researchers by providing a reference guide to the fundamentals of non-commutative field theory with an emphasis on matrix models and fuzzy geometries.
Supersymmetry in random matrix theory
International Nuclear Information System (INIS)
Kieburg, Mario
2010-01-01
I study the applications of supersymmetry in random matrix theory. I generalize the supersymmetry method and develop three new approaches to calculate eigenvalue correlation functions. These correlation functions are averages over ratios of characteristic polynomials. In the first part of this thesis, I derive a relation between integrals over anti-commuting variables (Grassmann variables) and differential operators with respect to commuting variables. With this relation I rederive Cauchy- like integral theorems. As a new application I trace the supermatrix Bessel function back to a product of two ordinary matrix Bessel functions. In the second part, I apply the generalized Hubbard-Stratonovich transformation to arbitrary rotation invariant ensembles of real symmetric and Hermitian self-dual matrices. This extends the approach for unitarily rotation invariant matrix ensembles. For the k-point correlation functions I derive supersymmetric integral expressions in a unifying way. I prove the equivalence between the generalized Hubbard-Stratonovich transformation and the superbosonization formula. Moreover, I develop an alternative mapping from ordinary space to superspace. After comparing the results of this approach with the other two supersymmetry methods, I obtain explicit functional expressions for the probability densities in superspace. If the probability density of the matrix ensemble factorizes, then the generating functions exhibit determinantal and Pfaffian structures. For some matrix ensembles this was already shown with help of other approaches. I show that these structures appear by a purely algebraic manipulation. In this new approach I use structures naturally appearing in superspace. I derive determinantal and Pfaffian structures for three types of integrals without actually mapping onto superspace. These three types of integrals are quite general and, thus, they are applicable to a broad class of matrix ensembles. (orig.)
Supersymmetry in random matrix theory
Energy Technology Data Exchange (ETDEWEB)
Kieburg, Mario
2010-05-04
I study the applications of supersymmetry in random matrix theory. I generalize the supersymmetry method and develop three new approaches to calculate eigenvalue correlation functions. These correlation functions are averages over ratios of characteristic polynomials. In the first part of this thesis, I derive a relation between integrals over anti-commuting variables (Grassmann variables) and differential operators with respect to commuting variables. With this relation I rederive Cauchy- like integral theorems. As a new application I trace the supermatrix Bessel function back to a product of two ordinary matrix Bessel functions. In the second part, I apply the generalized Hubbard-Stratonovich transformation to arbitrary rotation invariant ensembles of real symmetric and Hermitian self-dual matrices. This extends the approach for unitarily rotation invariant matrix ensembles. For the k-point correlation functions I derive supersymmetric integral expressions in a unifying way. I prove the equivalence between the generalized Hubbard-Stratonovich transformation and the superbosonization formula. Moreover, I develop an alternative mapping from ordinary space to superspace. After comparing the results of this approach with the other two supersymmetry methods, I obtain explicit functional expressions for the probability densities in superspace. If the probability density of the matrix ensemble factorizes, then the generating functions exhibit determinantal and Pfaffian structures. For some matrix ensembles this was already shown with help of other approaches. I show that these structures appear by a purely algebraic manipulation. In this new approach I use structures naturally appearing in superspace. I derive determinantal and Pfaffian structures for three types of integrals without actually mapping onto superspace. These three types of integrals are quite general and, thus, they are applicable to a broad class of matrix ensembles. (orig.)
Polychoric/Tetrachoric Matrix or Pearson Matrix? A methodological study
Directory of Open Access Journals (Sweden)
Dominguez Lara, Sergio Alexis
2014-04-01
Full Text Available The use of product-moment correlation of Pearson is common in most studies in factor analysis in psychology, but it is known that this statistic is only applicable when the variables related are in interval scale and normally distributed, and when are used in ordinal data may to produce a distorted correlation matrix . Thus is a suitable option using polychoric/tetrachoric matrices in item-level factor analysis when the items are in level measurement nominal or ordinal. The aim of this study was to show the differences in the KMO, Bartlett`s Test and Determinant of the Matrix, percentage of variance explained and factor loadings in depression trait scale of Depression Inventory Trait - State and the Neuroticism dimension of the short form of the Eysenck Personality Questionnaire -Revised, regarding the use of matrices polychoric/tetrachoric matrices and Pearson. These instruments was analyzed with different extraction methods (Maximum Likelihood, Minimum Rank Factor Analysis, Unweighted Least Squares and Principal Components, keeping constant the rotation method Promin were analyzed. Were observed differences regarding sample adequacy measures, as well as with respect to the explained variance and the factor loadings, for solutions having as polychoric/tetrachoric matrix. So it can be concluded that the polychoric / tetrachoric matrix give better results than Pearson matrices when it comes to item-level factor analysis using different methods.
Towards Google matrix of brain
Energy Technology Data Exchange (ETDEWEB)
Shepelyansky, D.L., E-mail: dima@irsamc.ups-tlse.f [Laboratoire de Physique Theorique (IRSAMC), Universite de Toulouse, UPS, F-31062 Toulouse (France); LPT - IRSAMC, CNRS, F-31062 Toulouse (France); Zhirov, O.V. [Budker Institute of Nuclear Physics, 630090 Novosibirsk (Russian Federation)
2010-07-12
We apply the approach of the Google matrix, used in computer science and World Wide Web, to description of properties of neuronal networks. The Google matrix G is constructed on the basis of neuronal network of a brain model discussed in PNAS 105 (2008) 3593. We show that the spectrum of eigenvalues of G has a gapless structure with long living relaxation modes. The PageRank of the network becomes delocalized for certain values of the Google damping factor {alpha}. The properties of other eigenstates are also analyzed. We discuss further parallels and similarities between the World Wide Web and neuronal networks.
Towards Google matrix of brain
International Nuclear Information System (INIS)
Shepelyansky, D.L.; Zhirov, O.V.
2010-01-01
We apply the approach of the Google matrix, used in computer science and World Wide Web, to description of properties of neuronal networks. The Google matrix G is constructed on the basis of neuronal network of a brain model discussed in PNAS 105 (2008) 3593. We show that the spectrum of eigenvalues of G has a gapless structure with long living relaxation modes. The PageRank of the network becomes delocalized for certain values of the Google damping factor α. The properties of other eigenstates are also analyzed. We discuss further parallels and similarities between the World Wide Web and neuronal networks.
Inverse Interval Matrix: A Survey
Czech Academy of Sciences Publication Activity Database
Rohn, Jiří; Farhadsefat, R.
2011-01-01
Roč. 22, - (2011), s. 704-719 E-ISSN 1081-3810 R&D Projects: GA ČR GA201/09/1957; GA ČR GC201/08/J020 Institutional research plan: CEZ:AV0Z10300504 Keywords : interval matrix * inverse interval matrix * NP-hardness * enclosure * unit midpoint * inverse sign stability * nonnegative invertibility * absolute value equation * algorithm Subject RIV: BA - General Mathematics Impact factor: 0.808, year: 2010 http://www.math.technion.ac.il/iic/ ela / ela -articles/articles/vol22_pp704-719.pdf
Symmetries and Interactions in Matrix String Theory
Hacquebord, F.H.
1999-01-01
This PhD-thesis reviews matrix string theory and recent developments therein. The emphasis is put on symmetries, interactions and scattering processes in the matrix model. We start with an introduction to matrix string theory and a review of the orbifold model that flows out of matrix string theory
Liu, C; Liu, J; Yao, Y X; Wu, P; Wang, C Z; Ho, K M
2016-10-11
We recently proposed the correlation matrix renormalization (CMR) theory to treat the electronic correlation effects [Phys. Rev. B 2014, 89, 045131 and Sci. Rep. 2015, 5, 13478] in ground state total energy calculations of molecular systems using the Gutzwiller variational wave function (GWF). By adopting a number of approximations, the computational effort of the CMR can be reduced to a level similar to Hartree-Fock calculations. This paper reports our recent progress in minimizing the error originating from some of these approximations. We introduce a novel sum-rule correction to obtain a more accurate description of the intersite electron correlation effects in total energy calculations. Benchmark calculations are performed on a set of molecules to show the reasonable accuracy of the method.
Well screening for matrix stimulation treatments
International Nuclear Information System (INIS)
Saavedra, N; Solano, R; Gidley, J; Reyes, C.A; Rodriguez; Kondo, F; Hernandez, J
1998-01-01
Matrix acidizing is a stimulation technique only applicable to wells with surrounding damage. It is therefore very important to differentiate the real formation damage from the damage caused by flow Ni dynamic effects. The mechanical damage corresponds to flow restrictions caused by partial penetration, poor perforation as well as to reduce diameters of the production tubing. The dynamic effects are generated by inertia caused by high flow rates and high-pressure differentials. A common practice in our oil fields is to use a general formulation as acid treatment, most of the times without previous lab studies that guarantee the applicability of the treatment in the formation. Additionally, stimulation is randomly applied even treating undamaged wells with negative results and in the best of the cases, loss of the treatment. The selection of the well for matrix stimulation is an essential factor for the success of the treatment. Selection is done through the evaluation of the skin factor (S) and of the economic benefits of reducing the skin in comparison to the cost of the work. The most appropriate tool for skin evaluation is a good pressure test where the radial flow period can be identified. Nevertheless, we normally find-outdated tests most of the times taken with inaccurate tools. The interpretation problem is worsened by completions in which there is simultaneous production from several sand packages and it is difficult to individually differentiate damage factors. This works states a procedure for the selection of wells appropriate for stimulation; it also proposes a method to evaluate the skin factor when there are no accurate interpretations of the pressure tests. A new and increasingly applied methodology to treat wells with high water cuts, which are usually discarded due to the risk of stimulating water zones, is also mentioned
Petrović, Jelena; Ibrić, Svetlana; Betz, Gabriele; Đurić, Zorica
2012-05-30
The main objective of the study was to develop artificial intelligence methods for optimization of drug release from matrix tablets regardless of the matrix type. Static and dynamic artificial neural networks of the same topology were developed to model dissolution profiles of different matrix tablets types (hydrophilic/lipid) using formulation composition, compression force used for tableting and tablets porosity and tensile strength as input data. Potential application of decision trees in discovering knowledge from experimental data was also investigated. Polyethylene oxide polymer and glyceryl palmitostearate were used as matrix forming materials for hydrophilic and lipid matrix tablets, respectively whereas selected model drugs were diclofenac sodium and caffeine. Matrix tablets were prepared by direct compression method and tested for in vitro dissolution profiles. Optimization of static and dynamic neural networks used for modeling of drug release was performed using Monte Carlo simulations or genetic algorithms optimizer. Decision trees were constructed following discretization of data. Calculated difference (f(1)) and similarity (f(2)) factors for predicted and experimentally obtained dissolution profiles of test matrix tablets formulations indicate that Elman dynamic neural networks as well as decision trees are capable of accurate predictions of both hydrophilic and lipid matrix tablets dissolution profiles. Elman neural networks were compared to most frequently used static network, Multi-layered perceptron, and superiority of Elman networks have been demonstrated. Developed methods allow simple, yet very precise way of drug release predictions for both hydrophilic and lipid matrix tablets having controlled drug release. Copyright © 2012 Elsevier B.V. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Saha, Krishnendu [Ohio Medical Physics Consulting, Dublin, Ohio 43017 (United States); Straus, Kenneth J.; Glick, Stephen J. [Department of Radiology, University of Massachusetts Medical School, Worcester, Massachusetts 01655 (United States); Chen, Yu. [Department of Radiation Oncology, Columbia University, New York, New York 10032 (United States)
2014-08-28
To maximize sensitivity, it is desirable that ring Positron Emission Tomography (PET) systems dedicated for imaging the breast have a small bore. Unfortunately, due to parallax error this causes substantial degradation in spatial resolution for objects near the periphery of the breast. In this work, a framework for computing and incorporating an accurate system matrix into iterative reconstruction is presented in an effort to reduce spatial resolution degradation towards the periphery of the breast. The GATE Monte Carlo Simulation software was utilized to accurately model the system matrix for a breast PET system. A strategy for increasing the count statistics in the system matrix computation and for reducing the system element storage space was used by calculating only a subset of matrix elements and then estimating the rest of the elements by using the geometric symmetry of the cylindrical scanner. To implement this strategy, polar voxel basis functions were used to represent the object, resulting in a block-circulant system matrix. Simulation studies using a breast PET scanner model with ring geometry demonstrated improved contrast at 45% reduced noise level and 1.5 to 3 times resolution performance improvement when compared to MLEM reconstruction using a simple line-integral model. The GATE based system matrix reconstruction technique promises to improve resolution and noise performance and reduce image distortion at FOV periphery compared to line-integral based system matrix reconstruction.
Obena, Rofeamor P; Lin, Po-Chiao; Lu, Ying-Wei; Li, I-Che; del Mundo, Florian; Arco, Susan dR; Nuesca, Guillermo M; Lin, Chung-Chen; Chen, Yu-Ju
2011-12-15
The significance and epidemiological effects of metals to life necessitate the development of direct, efficient, and rapid method of analysis. Taking advantage of its simple, fast, and high-throughput features, we present a novel approach to metal ion detection by matrix-functionalized magnetic nanoparticle (matrix@MNP)-assisted MALDI-MS. Utilizing 21 biologically and environmentally relevant metal ion solutions, the performance of core and matrix@MNP against conventional matrixes in MALDI-MS and laser desorption ionization (LDI) MS were systemically tested to evaluate the versatility of matrix@MNP as ionization element. The matrix@MNPs provided 20- to >100-fold enhancement on detection sensitivity of metal ions and unambiguous identification through characteristic isotope patterns and accurate mass (<5 ppm), which may be attributed to its multifunctional role as metal chelator, preconcentrator, absorber, and reservoir of energy. Together with the comparison on the ionization behaviors of various metals having different ionization potentials (IP), we formulated a metal ionization mechanism model, alluding to the role of exciton pooling in matrix@MNP-assisted MALDI-MS. Moreover, the detection of Cu in spiked tap water demonstrated the practicability of this new approach as an efficient and direct alternative tool for fast, sensitive, and accurate determination of trace metal ions in real samples.
Matrix theory selected topics and useful results
Mehta, Madan Lal
1989-01-01
Matrices and operations on matrices ; determinants ; elementary operations on matrices (continued) ; eigenvalues and eigenvectors, diagonalization of normal matrices ; functions of a matrix ; positive definiteness, various polar forms of a matrix ; special matrices ; matrices with quaternion elements ; inequalities ; generalised inverse of a matrix ; domain of values of a matrix, location and dispersion of eigenvalues ; symmetric functions ; integration over matrix variables ; permanents of doubly stochastic matrices ; infinite matrices ; Alexander matrices, knot polynomials, torsion numbers.
Charge Resolution of the Silicon Matrix of the ATIC Experiment
Zatsepin, V. I.; Adams, J. H., Jr.; Ahn, H. S.; Bashindzhagyan, G. L.; Batkov, K. E.; Case, G.; Christl, M.; Ganel, O.; Fazely, A. R.; Ganel, O.;
2002-01-01
ATIC (Advanced Thin Ionization Calorimeter) is a balloon borne experiment designed to measure the cosmic ray composition for elements from hydrogen to iron and their energy spectra from approx.50 GeV to near 100 TeV. It consists of a Si-matrix detector to determine the charge of a CRT particle, a scintillator hodoscope for tracking, carbon interaction targets and a fully active BGO calorimeter. ATIC had its first flight from McMurdo, Antarctica from 28/12/2000 to 13/01/2001. The ATIC flight collected approximately 25 million events. The silicon matrix of the ATIC spectrometer is designed to resolve individual elements from proton to iron. To provide this resolution careful calibration of each pixel of the silicon matrix is required. Firstly, for each electronic channel of the matrix the pedestal value was subtracted taking into account its drift during the flight. The muon calibration made before the flight was used then to convert electric signals (in ADC channel number) to energy deposits in each pixel. However, the preflight muon calibration was not accurate enough for the purpose, because of lack of statistics in each pixel. To improve charge resolution the correction was done for the position of Helium peak in each pixel during the flight . The other way to set electric signals in electronics channels of the Si-matrix to one scale was correction for electric channel gains accurately measured in laboratory. In these measurements it was found that small different nonlinearities for different channels are present in the region of charge Z > 20. The correction for these non-linearities was not done yet. In linear approximation the method provides practically the same resolution as muon calibration plus He-peak correction. For searching a pixel with the signal of primary particle an indication from the cascade in the calorimeter was used. For this purpose a trajectory was reconstructed using weight centers of energy deposits in BGO layers. The point of intersection
Comparison between phase shift derived and exactly calculated nucleon--nucleon interaction matrix elements
International Nuclear Information System (INIS)
Gregersen, A.W.
1977-01-01
A comparison is made between matrix elements calculated using the uncoupled channel Sussex approach to second order in DWBA and matrix elements calculated using a square well potential. The square well potential illustrated the problem of the determining parameter independence balanced with the concept of phase shift difference. The super-soft core potential was used to discuss the systematics of the Sussex approach as a function of angular momentum as well as the relation between Sussex generated and effective interaction matrix elements. In the uncoupled channels the original Sussex method of extracting effective interaction matrix elements was found to be satisfactory. In the coupled channels emphasis was placed upon the 3 S 1 -- 3 D 1 coupled channel matrix elements. Comparison is made between exactly calculated matrix elements, and matrix elements derived using an extended formulation of the coupled channel Sussex method. For simplicity the potential used is a nonseparable cut-off oscillator. The eigenphases of this potential can be made to approximate the realistic nucleon--nucleon phase shifts at low energies. By using the cut-off oscillator test potential, the original coupled channel Sussex method of determining parameter independence was shown to be incapable of accurately reproducing the exact cut-off oscillator matrix elements. The extended Sussex method was found to be accurate to within 10 percent. The extended method is based upon more general coupled channel DWBA and a noninfinite oscillator wave function solution to the cut-off oscillator auxiliary potential. A comparison is made in the coupled channels between matrix elements generated using the original Sussex method and the extended method. Tables of matrix elements generated using the original uncoupled channel Sussex method and the extended coupled channel Sussex method are presented for all necessary angular momentum channels
Parallel Sparse Matrix - Vector Product
DEFF Research Database (Denmark)
Alexandersen, Joe; Lazarov, Boyan Stefanov; Dammann, Bernd
This technical report contains a case study of a sparse matrix-vector product routine, implemented for parallel execution on a compute cluster with both pure MPI and hybrid MPI-OpenMP solutions. C++ classes for sparse data types were developed and the report shows how these class can be used...
Unravelling the nuclear matrix proteome
DEFF Research Database (Denmark)
Albrethsen, Jakob; Knol, Jaco C; Jimenez, Connie R
2009-01-01
The nuclear matrix (NM) model posits the presence of a protein/RNA scaffold that spans the mammalian nucleus. The NM proteins are involved in basic nuclear function and are a promising source of protein biomarkers for cancer. Importantly, the NM proteome is operationally defined as the proteins...
Amorphous metal matrix composite ribbons
International Nuclear Information System (INIS)
Barczy, P.; Szigeti, F.
1998-01-01
Composite ribbons with amorphous matrix and ceramic (SiC, WC, MoB) particles were produced by modified planar melt flow casting methods. Weldability, abrasive wear and wood sanding examinations were carried out in order to find optimal material and technology for elevated wear resistance and sanding durability. The correlation between structure and composite properties is discussed. (author)
Hyper-systolic matrix multiplication
Lippert, Th.; Petkov, N.; Palazzari, P.; Schilling, K.
A novel parallel algorithm for matrix multiplication is presented. It is based on a 1-D hyper-systolic processor abstraction. The procedure can be implemented on all types of parallel systems. (C) 2001 Elsevier Science B,V. All rights reserved.
Matrix Metalloproteinases in Myasthenia Gravis
Helgeland, G.; Petzold, A.F.S.; Luckman, S.P.; Gilhus, N.E.; Plant, G.T.; Romi, F.R.
2011-01-01
Introduction: Myasthenia gravis (MG) is an autoimmune disease with weakness in striated musculature due to anti-acetylcholine receptor (AChR) antibodies or muscle specific kinase at the neuromuscular junction. A subgroup of patients has periocular symptoms only; ocular MG (OMG). Matrix
Concept for Energy Security Matrix
International Nuclear Information System (INIS)
Kisel, Einari; Hamburg, Arvi; Härm, Mihkel; Leppiman, Ando; Ots, Märt
2016-01-01
The following paper presents a discussion of short- and long-term energy security assessment methods and indicators. The aim of the current paper is to describe diversity of approaches to energy security, to structure energy security indicators used by different institutions and papers, and to discuss several indicators that also play important role in the design of energy policy of a state. Based on this analysis the paper presents a novel Energy Security Matrix that structures relevant energy security indicators from the aspects of Technical Resilience and Vulnerability, Economic Dependence and Political Affectability for electricity, heat and transport fuel sectors. Earlier publications by different authors have presented energy security assessment methodologies that use publicly available indicators from different databases. Current paper challenges viability of some of these indicators and introduces new indicators that would deliver stronger energy security policy assessments. Energy Security Matrix and its indicators are based on experiences that the authors have gathered as high-level energy policymakers in Estonia, where all different aspects of energy security can be observed. - Highlights: •Energy security should be analysed in technical, economic and political terms; •Energy Security Matrix provides a framework for energy security analyses; •Applicability of Matrix is limited due to the lack of statistical data and sensitivity of output.
The COMPADRE Plant Matrix Database
DEFF Research Database (Denmark)
2014-01-01
COMPADRE contains demographic information on hundreds of plant species. The data in COMPADRE are in the form of matrix population models and our goal is to make these publicly available to facilitate their use for research and teaching purposes. COMPADRE is an open-access database. We only request...
Czech Academy of Sciences Publication Activity Database
Rohn, Jiří
2013-01-01
Roč. 26, 15 December (2013), s. 836-841 ISSN 1537-9582 Institutional support: RVO:67985807 Keywords : two-matrix alternative * solution * algorithm Subject RIV: BA - General Mathematics Impact factor: 0.514, year: 2013 http://www.math.technion.ac.il/iic/ ela / ela -articles/articles/vol26_pp836-841.pdf
Regularization in Matrix Relevance Learning
Schneider, Petra; Bunte, Kerstin; Stiekema, Han; Hammer, Barbara; Villmann, Thomas; Biehl, Michael
A In this paper, we present a regularization technique to extend recently proposed matrix learning schemes in learning vector quantization (LVQ). These learning algorithms extend the concept of adaptive distance measures in LVQ to the use of relevance matrices. In general, metric learning can
Reactive solute transport in an asymmetrical fracture-rock matrix system
Zhou, Renjie; Zhan, Hongbin
2018-02-01
The understanding of reactive solute transport in a single fracture-rock matrix system is the foundation of studying transport behavior in the complex fractured porous media. When transport properties are asymmetrically distributed in the adjacent rock matrixes, reactive solute transport has to be considered as a coupled three-domain problem, which is more complex than the symmetric case with identical transport properties in the adjacent rock matrixes. This study deals with the transport problem in a single fracture-rock matrix system with asymmetrical distribution of transport properties in the rock matrixes. Mathematical models are developed for such a problem under the first-type and the third-type boundary conditions to analyze the spatio-temporal concentration and mass distribution in the fracture and rock matrix with the help of Laplace transform technique and de Hoog numerical inverse Laplace algorithm. The newly acquired solutions are then tested extensively against previous analytical and numerical solutions and are proven to be robust and accurate. Furthermore, a water flushing phase is imposed on the left boundary of system after a certain time. The diffusive mass exchange along the fracture/rock matrixes interfaces and the relative masses stored in each of three domains (fracture, upper rock matrix, and lower rock matrix) after the water flushing provide great insights of transport with asymmetric distribution of transport properties. This study has the following findings: 1) Asymmetric distribution of transport properties imposes greater controls on solute transport in the rock matrixes. However, transport in the fracture is mildly influenced. 2) The mass stored in the fracture responses quickly to water flushing, while the mass stored in the rock matrix is much less sensitive to the water flushing. 3) The diffusive mass exchange during the water flushing phase has similar patterns under symmetric and asymmetric cases. 4) The characteristic distance
Matrix of regularity for improving the quality of ECGs
International Nuclear Information System (INIS)
Xia, Henian; Garcia, Gabriel A; Zhao, Xiaopeng; Bains, Jujhar; Wortham, Dale C
2012-01-01
The 12-lead electrocardiography (ECG) is the gold standard for diagnosis of abnormalities of the heart. However, the ECG is susceptible to artifacts, which may lead to wrong diagnosis and thus mistreatment. It is a clinical challenge of great significance differentiating ECG artifacts from patterns of diseases. We propose a computational framework, called the matrix of regularity, to evaluate the quality of ECGs. The matrix of regularity is a novel mechanism to fuse results from multiple tests of signal quality. Moreover, this method can produce a continuous grade, which can more accurately represent the quality of an ECG. When tested on a dataset from the Computing in Cardiology/PhysioNet Challenge 2011, the algorithm achieves up to 95% accuracy. The area under the receiver operating characteristic curve is 0.97. The developed framework and computer program have the potential to improve the quality of ECGs collected using conventional and portable devices. (paper)
Effect of matrix cracking and material uncertainty on composite plates
International Nuclear Information System (INIS)
Gayathri, P.; Umesh, K.; Ganguli, R.
2010-01-01
A laminated composite plate model based on first order shear deformation theory is implemented using the finite element method. Matrix cracks are introduced into the finite element model by considering changes in the A, B and D matrices of composites. The effects of different boundary conditions, laminate types and ply angles on the behavior of composite plates with matrix cracks are studied. Finally, the effect of material property uncertainty, which is important for composite material on the composite plate, is investigated using Monte Carlo simulations. Probabilistic estimates of damage detection reliability in composite plates are made for static and dynamic measurements. It is found that the effect of uncertainty must be considered for accurate damage detection in composite structures. The estimates of variance obtained for observable system properties due to uncertainty can be used for developing more robust damage detection algorithms.
Omentin-1 prevents cartilage matrix destruction by regulating matrix metalloproteinases.
Li, Zhigang; Liu, Baoyi; Zhao, Dewei; Wang, BenJie; Liu, Yupeng; Zhang, Yao; Li, Borui; Tian, Fengde
2017-08-01
Matrix metalloproteinases (MMPs) play a crucial role in the degradation of the extracellular matrix and pathological progression of osteoarthritis (OA). Omentin-1 is a newly identified anti-inflammatory adipokine. Little information regarding the protective effects of omentin-1 in OA has been reported before. In the current study, our results indicated that omentin-1 suppressed expression of MMP-1, MMP-3, and MMP-13 induced by the proinflammatory cytokine interleukin-1β (IL-1β) at both the mRNA and protein levels in human chondrocytes. Importantly, administration of omentin-1 abolished IL-1β-induced degradation of type II collagen (Col II) and aggrecan, the two major extracellular matrix components in articular cartilage, in a dose-dependent manner. Mechanistically, omentin-1 ameliorated the expression of interferon regulatory factor 1 (IRF-1) by blocking the JAK-2/STAT3 pathway. Our results indicate that omentin-1 may have a potential chondroprotective therapeutic capacity. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
R-matrix parameters in reactor applications
International Nuclear Information System (INIS)
Hwang, R.N.
1992-01-01
The key role of the resonance phenomena in reactor applications manifests through the self-shielding effect. The basic issue involves the application of the microscopic cross sections in the macroscopic reactor lattices consisting of many nuclides that exhibit resonance behavior. To preserve the fidelity of such a effect requires the accurate calculations of the cross sections and the neutron flux in great detail. This clearly not possible without viable resonance data. Recently released ENDF/B VI resonance data in the resolved range especially reflect the dramatic improvement in two important areas; namely, the significant extension of the resolved resonance ranges accompanied by the availability of the R-matrix parameters of the Reich-Moore type. Aside from the obvious increase in computing time required for the significantly greater number of resonances, the main concern is the compatibility of the Riech-Moore representation to the existing reactor processing codes which, until now, are based on the traditional cross section formalisms. This purpose of this paper is to summarize our recent efforts to facilitate implementation of the proposed methods into the production codes at ANL
q-Virasoro constraints in matrix models
Energy Technology Data Exchange (ETDEWEB)
Nedelin, Anton [Dipartimento di Fisica, Università di Milano-Bicocca and INFN, sezione di Milano-Bicocca, Piazza della Scienza 3, I-20126 Milano (Italy); Department of Physics and Astronomy, Uppsala university,Box 516, SE-75120 Uppsala (Sweden); Zabzine, Maxim [Department of Physics and Astronomy, Uppsala university,Box 516, SE-75120 Uppsala (Sweden)
2017-03-20
The Virasoro constraints play the important role in the study of matrix models and in understanding of the relation between matrix models and CFTs. Recently the localization calculations in supersymmetric gauge theories produced new families of matrix models and we have very limited knowledge about these matrix models. We concentrate on elliptic generalization of hermitian matrix model which corresponds to calculation of partition function on S{sup 3}×S{sup 1} for vector multiplet. We derive the q-Virasoro constraints for this matrix model. We also observe some interesting algebraic properties of the q-Virasoro algebra.
Immobilization of cellulase using porous polymer matrix
International Nuclear Information System (INIS)
Kumakura, M.; Kaetsu, I.
1984-01-01
A new method is discussed for the immobilization of cellulase using porous polymer matrices, which were obtained by radiation polymerization of hydrophilic monomers. In this method, the immobilized enzyme matrix was prepared by enzyme absorbtion in the porous polymer matrix and drying treatment. The enzyme activity of the immobilized enzyme matrix varied with monomer concentration, cooling rate of the monomer solution, and hydrophilicity of the polymer matrix, takinn the change of the nature of the porous structure in the polymer matrix. The leakage of the enzymes from the polymer matrix was not observed in the repeated batch enzyme reactions
BJUT at TREC 2015 Microblog Track: Real-Time Filtering Using Non-negative Matrix Factorization
2015-11-20
query accurate ambiguity intergration Tweets Vector Preprocessing W-d matrix Feature vector Similarity ranking Recommended twittres Get...recommendation tech- nique based on product category attributes[J]. Expert Systems with Applications, 2009, 36(9): 11480-11488. [5] Sobecki J, Babiak E,Sanina M
Mixed Analog/Digital Matrix-Vector Multiplier for Neural Network Synapses
DEFF Research Database (Denmark)
Lehmann, Torsten; Bruun, Erik; Dietrich, Casper
1996-01-01
In this work we present a hardware efficient matrix-vector multiplier architecture for artificial neural networks with digitally stored synapse strengths. We present a novel technique for manipulating bipolar inputs based on an analog two's complements method and an accurate current rectifier...
Minimal solution for inconsistent singular fuzzy matrix equations
Directory of Open Access Journals (Sweden)
M. Nikuie
2013-10-01
Full Text Available The fuzzy matrix equations $Ailde{X}=ilde{Y}$ is called a singular fuzzy matrix equations while the coefficients matrix of its equivalent crisp matrix equations be a singular matrix. The singular fuzzy matrix equations are divided into two parts: consistent singular matrix equations and inconsistent fuzzy matrix equations. In this paper, the inconsistent singular fuzzy matrix equations is studied and the effect of generalized inverses in finding minimal solution of an inconsistent singular fuzzy matrix equations are investigated.
Giddings, Steven B
2010-01-01
We investigate the hypothesized existence of an S-matrix for gravity, and some of its expected general properties. We first discuss basic questions regarding existence of such a matrix, including those of infrared divergences and description of asymptotic states. Distinct scattering behavior occurs in the Born, eikonal, and strong gravity regimes, and we describe aspects of both the partial wave and momentum space amplitudes, and their analytic properties, from these regimes. Classically the strong gravity region would be dominated by formation of black holes, and we assume its unitary quantum dynamics is described by corresponding resonances. Masslessness limits some powerful methods and results that apply to massive theories, though a continuation path implying crossing symmetry plausibly still exists. Physical properties of gravity suggest nonpolynomial amplitudes, although crossing and causality constrain (with modest assumptions) this nonpolynomial behavior, particularly requiring a polynomial bound in c...
Matrix metalloproteinases in lung biology
Directory of Open Access Journals (Sweden)
Parks William C
2000-12-01
Full Text Available Abstract Despite much information on their catalytic properties and gene regulation, we actually know very little of what matrix metalloproteinases (MMPs do in tissues. The catalytic activity of these enzymes has been implicated to function in normal lung biology by participating in branching morphogenesis, homeostasis, and repair, among other events. Overexpression of MMPs, however, has also been blamed for much of the tissue destruction associated with lung inflammation and disease. Beyond their role in the turnover and degradation of extracellular matrix proteins, MMPs also process, activate, and deactivate a variety of soluble factors, and seldom is it readily apparent by presence alone if a specific proteinase in an inflammatory setting is contributing to a reparative or disease process. An important goal of MMP research will be to identify the actual substrates upon which specific enzymes act. This information, in turn, will lead to a clearer understanding of how these extracellular proteinases function in lung development, repair, and disease.
Structural properties of matrix metalloproteinases.
Bode, W; Fernandez-Catalan, C; Tschesche, H; Grams, F; Nagase, H; Maskos, K
1999-04-01
Matrix metalloproteinases (MMPs) are involved in extracellular matrix degradation. Their proteolytic activity must be precisely regulated by their endogenous protein inhibitors, the tissue inhibitors of metalloproteinases (TIMPs). Disruption of this balance results in serious diseases such as arthritis, tumour growth and metastasis. Knowledge of the tertiary structures of the proteins involved is crucial for understanding their functional properties and interference with associated dysfunctions. Within the last few years, several three-dimensional MMP and MMP-TIMP structures became available, showing the domain organization, polypeptide fold and main specificity determinants. Complexes of the catalytic MMP domains with various synthetic inhibitors enabled the structure-based design and improvement of high-affinity ligands, which might be elaborated into drugs. A multitude of reviews surveying work done on all aspects of MMPs have appeared in recent years, but none of them has focused on the three-dimensional structures. This review was written to close the gap.
Accurate formulas for the penalty caused by interferometric crosstalk
DEFF Research Database (Denmark)
Rasmussen, Christian Jørgen; Liu, Fenghai; Jeppesen, Palle
2000-01-01
New simple formulas for the penalty caused by interferometric crosstalk in PIN receiver systems and optically preamplified receiver systems are presented. They are more accurate than existing formulas.......New simple formulas for the penalty caused by interferometric crosstalk in PIN receiver systems and optically preamplified receiver systems are presented. They are more accurate than existing formulas....
A new, accurate predictive model for incident hypertension
DEFF Research Database (Denmark)
Völzke, Henry; Fung, Glenn; Ittermann, Till
2013-01-01
Data mining represents an alternative approach to identify new predictors of multifactorial diseases. This work aimed at building an accurate predictive model for incident hypertension using data mining procedures.......Data mining represents an alternative approach to identify new predictors of multifactorial diseases. This work aimed at building an accurate predictive model for incident hypertension using data mining procedures....
Accurate and Simple Calibration of DLP Projector Systems
DEFF Research Database (Denmark)
Wilm, Jakob; Olesen, Oline Vinter; Larsen, Rasmus
2014-01-01
does not rely on an initial camera calibration, and so does not carry over the error into projector calibration. A radial interpolation scheme is used to convert features coordinates into projector space, thereby allowing for a very accurate procedure. This allows for highly accurate determination...
Accurate Compton scattering measurements for N{sub 2} molecules
Energy Technology Data Exchange (ETDEWEB)
Kobayashi, Kohjiro [Advanced Technology Research Center, Gunma University, 1-5-1 Tenjin-cho, Kiryu, Gunma 376-8515 (Japan); Itou, Masayoshi; Tsuji, Naruki; Sakurai, Yoshiharu [Japan Synchrotron Radiation Research Institute (JASRI), 1-1-1 Kouto, Sayo-cho, Sayo-gun, Hyogo 679-5198 (Japan); Hosoya, Tetsuo; Sakurai, Hiroshi, E-mail: sakuraih@gunma-u.ac.jp [Department of Production Science and Technology, Gunma University, 29-1 Hon-cho, Ota, Gunma 373-0057 (Japan)
2011-06-14
The accurate Compton profiles of N{sub 2} gas were measured using 121.7 keV synchrotron x-rays. The present accurate measurement proves the better agreement of the CI (configuration interaction) calculation than the Hartree-Fock calculation and suggests the importance of multi-excitation in the CI calculations for the accuracy of wavefunctions in ground states.
Random matrix improved subspace clustering
Couillet, Romain
2017-03-06
This article introduces a spectral method for statistical subspace clustering. The method is built upon standard kernel spectral clustering techniques, however carefully tuned by theoretical understanding arising from random matrix findings. We show in particular that our method provides high clustering performance while standard kernel choices provably fail. An application to user grouping based on vector channel observations in the context of massive MIMO wireless communication networks is provided.
Coherence matrix of plasmonic beams
DEFF Research Database (Denmark)
Novitsky, Andrey; Lavrinenko, Andrei
2013-01-01
We consider monochromatic electromagnetic beams of surface plasmon-polaritons created at interfaces between dielectric media and metals. We theoretically study non-coherent superpositions of elementary surface waves and discuss their spectral degree of polarization, Stokes parameters, and the for...... of the spectral coherence matrix. We compare the polarization properties of the surface plasmonspolaritons as three-dimensional and two-dimensional fields concluding that the latter is superior....
The Biblical Matrix of Economics
Grigore PIROŞCĂ; Angela ROGOJANU
2012-01-01
The rationale of this paper is a prime pattern of history of economic thought in the previous ages of classic ancient times of Greek and Roman civilizations using a methodological matrix able to capture the mainstream ideas from social, political and religious events within the pages of Bible. The economic perspective of these events follows the evolution of the seeds of economic thinking within the Fertile Crescent, focused on the Biblical patriarchic heroes’ actions, but a...
The Euclid Statistical Matrix Tool
Directory of Open Access Journals (Sweden)
Curtis Tilves
2017-06-01
Full Text Available Stataphobia, a term used to describe the fear of statistics and research methods, can result from a lack of improper training in statistical methods. Poor statistical methods training can have an effect on health policy decision making and may play a role in the low research productivity seen in developing countries. One way to reduce Stataphobia is to intervene in the teaching of statistics in the classroom; however, such an intervention must tackle several obstacles, including student interest in the material, multiple ways of learning materials, and language barriers. We present here the Euclid Statistical Matrix, a tool for combatting Stataphobia on a global scale. This free tool is comprised of popular statistical YouTube channels and web sources that teach and demonstrate statistical concepts in a variety of presentation methods. Working with international teams in Iran, Japan, Egypt, Russia, and the United States, we have also developed the Statistical Matrix in multiple languages to address language barriers to learning statistics. By utilizing already-established large networks, we are able to disseminate our tool to thousands of Farsi-speaking university faculty and students in Iran and the United States. Future dissemination of the Euclid Statistical Matrix throughout the Central Asia and support from local universities may help to combat low research productivity in this region.
Appendices 1-3 - the effects of combustion on ash and deposits from low rank coals
Energy Technology Data Exchange (ETDEWEB)
Ledger, R.C.; Ottrey, A.L.; Mackay, G.H.
1985-12-01
Thermomechanical analyses (TMA) of ashes derived from combustion of fourteen coal samples from Victorian and South Australian coalfields are presented in the results volumes of this report (Volume 2-4). This appendix describes the analytical equipment used, the modifications that were incorporated and the technique developed for analysis and interpretation of the data. To aid identification, limited numbers of analyses were performed on reference materials, the results of which are presented in this appendix. Analyses were performed on a modified Stanton Redcroft 790 series thermomechanical analyser. The aim was to identify components in the ashes and to gain an understanding of the sintering and fusion behaviour of the ashes up to temperatures encountered in large scale boilers. As part of the main project, ashes were also submitted to simultaneous Differential Thermal Analysis and Thermogravimetry (DTA-TG). For each coal burnt in this investigation the Test Bank 1 and precipitator ashes produced at a flame temperature of 1200/sup o/C and 3% excess oxygen were examined by TMA, as were ashes from tests at other flame temperatures and at 3% excess oxygen for four of the coals. This was to investigate the effects of variation in combustion conditions on ash properties. The results are presented in Volume 2-4 of this report as tables, giving details of events and assignments and as a formalised TMA pattern for each ash tested.
Low rank factorization of the Coulomb integrals for periodic coupled cluster theory.
Hummel, Felix; Tsatsoulis, Theodoros; Grüneis, Andreas
2017-03-28
We study a tensor hypercontraction decomposition of the Coulomb integrals of periodic systems where the integrals are factorized into a contraction of six matrices of which only two are distinct. We find that the Coulomb integrals can be well approximated in this form already with small matrices compared to the number of real space grid points. The cost of computing the matrices scales as O(N 4 ) using a regularized form of the alternating least squares algorithm. The studied factorization of the Coulomb integrals can be exploited to reduce the scaling of the computational cost of expensive tensor contractions appearing in the amplitude equations of coupled cluster methods with respect to system size. We apply the developed methodologies to calculate the adsorption energy of a single water molecule on a hexagonal boron nitride monolayer in a plane wave basis set and periodic boundary conditions.
The economic case for industrial application of low-rank coal technology
International Nuclear Information System (INIS)
Irwin, W.
1991-01-01
The World Coal Institute estimates coal should overtake oil as the world's largest source of primary energy by the turn of the century. Current world coal production of 3.6 billion tons in 1990 is predicted to rise to 4 billion tons by the year 2000. It is conceded that a major environmental problem with burning coal is the so-called greenhouse effect. The question is how do you use the new technologies that have been developed which now allow coal to be burned with minimum damage to the environment. Despite their technical merits, acceptance of these new technologies is slow because they appear uncompetitive when compared with historic energy costs. Unless economic comparisons include some form of environmental evaluation, this issue will continue to be a barrier to progress. To avoid stagnation and provide the necessary incentive for implementing badly needed change, structural changes in energy economics need to be made which take into account the environmental cost element of these emerging new technologies. The paper discusses coal trade and quality and then describes the three main areas of development of clean coal technologies: coal preparation, combustion, and flue gas treatment
Low-rank coal study: national needs for resource development. Volume 6. Peat
Energy Technology Data Exchange (ETDEWEB)
1980-11-01
The requirements and potential for development of US peat resources for energy use are reviewed. Factors analyzed include the occurrence and properties of major peat deposits; technologies for extraction, dewatering, preparation, combustion, and conversion of peat to solid, liquid, or gaseous fuels; environmental, regulatory, and market constraints; and research, development, and demonstration (RD and D) needs. Based on a review of existing research efforts, recommendations are made for a comprehensive national RD and D program to enhance the use of peat as an energy source.
A Study of Recognition of the Lesser Achievements of Low Ranking Enlisted Men
1975-06-06
Abraham Maslow are physiological, safety, belongingness and love, self - esteem and self -actualization.2 Considering them In order, the first two...LITERATURE BEHAVIORAL CONSIDERATIONS Self -Esteepu Abraham Maslow lists man’s basic needs as being: physiological, safety, belonglngness and love... Esteem needs are felt to be of central importance by psychoanalysts and clinical psychologists. In the words of Maslow , «. . .satisfaction of the self
Advanced Acid Gas Separation Technology for the Utilization of Low Rank Coals
Energy Technology Data Exchange (ETDEWEB)
Kloosterman, Jeff
2012-12-31
Air Products has developed a potentially ground-breaking technology – Sour Pressure Swing Adsorption (PSA) – to replace the solvent-based acid gas removal (AGR) systems currently employed to separate sulfur containing species, along with CO{sub 2} and other impurities, from gasifier syngas streams. The Sour PSA technology is based on adsorption processes that utilize pressure swing or temperature swing regeneration methods. Sour PSA technology has already been shown with higher rank coals to provide a significant reduction in the cost of CO{sub 2} capture for power generation, which should translate to a reduction in cost of electricity (COE), compared to baseline CO{sub 2} capture plant design. The objective of this project is to test the performance and capability of the adsorbents in handling tar and other impurities using a gaseous mixture generated from the gasification of lower rank, lignite coal. The results of this testing are used to generate a high-level pilot process design, and to prepare a techno-economic assessment evaluating the applicability of the technology to plants utilizing these coals.
Co-combustion of low rank coal/waste biomass blends using dry air or oxygen
International Nuclear Information System (INIS)
Haykiri-Acma, H.; Yaman, S.; Kucukbayrak, S.
2013-01-01
Biomass species such as the rice husk and the olive milling residue, and a low quality Turkish coal, Soma Denis lignite, were burned in a thermal analyzer under pure oxygen and dry air up to 900 °C, and differential thermal analysis (DTA) and derivative thermogravimetric (DTG) analysis profiles were obtained. Co-combustion experiments of lignite/biomass blends containing 5–20 wt% of biomass were also performed. The effects of the oxidizer type and the blending ratio of biomass were evaluated considering some thermal reactivity indicators such as the maximum burning rate and its temperature, the maximum heat flow temperature, and the burnout levels. FTIR (Fourier transform infrared) spectroscopy and SEM (scanning electron microscopy) were used to characterize the samples, and the variations in the combustion characteristics of the samples were interpreted based on the differences in the intrinsic properties of the samples. - Highlights: ► Co-combustion of lignite/biomass blends. ► The effects of the oxidizer type and the blending ratio. ► Effects of intrinsic properties on combustion characteristics.
Conversion of Low-Rank Wyoming Coals into Gasoline by Direct Liquefaction
Energy Technology Data Exchange (ETDEWEB)
Polyakov, Oleg
2013-12-31
Under the cooperative agreement program of DOE and funding from Wyoming State’s Clean Coal Task Force, Western Research Institute and Thermosolv LLC studied the direct conversion of Wyoming coals and coal-lignin mixed feeds into liquid fuels in conditions highly relevant to practice. During the Phase I, catalytic direct liquefaction of sub-bituminous Wyoming coals was investigated. The process conditions and catalysts were identified that lead to a significant increase of desirable oil fraction in the products. The Phase II work focused on systematic study of solvothermal depolymerization (STD) and direct liquefaction (DCL) of carbonaceous feedstocks. The effect of the reaction conditions (the nature of solvent, solvent/lignin ratio, temperature, pressure, heating rate, and residence time) on STD was investigated. The effect of a number of various additives (including lignin, model lignin compounds, lignin-derivable chemicals, and inorganic radical initiators), solvents, and catalysts on DCL has been studied. Although a significant progress has been achieved in developing solvothermal depolymerization, the side reactions – formation of considerable amounts of char and gaseous products – as well as other drawbacks do not render aqueous media as the most appropriate choice for commercial implementation of STD for processing coals and lignins. The trends and effects discovered in DCL point at the specific features of liquefaction mechanism that are currently underutilized yet could be exploited to intensify the process. A judicious choice of catalysts, solvents, and additives might enable practical and economically efficient direct conversion of Wyoming coals into liquid fuels.
CSIR Research Space (South Africa)
Oboirien, BO
2013-02-01
Full Text Available Coal biosolubilisation was investigated in stirred tank reactor, fluidised bed and fixed bed bioreactors with a view to highlight the advantages and shortcomings of each of these reactor configurations. The stirred aerated bioreactor and fluidised...
Research on Improving Low Rank Coal Caking Ability by Moderate Hydrogenation
Huang, Peng
2017-12-01
The hydrogenation test of low metamorphic coal was carried out by using a continuous hydrogen reactor at the temperature of (350-400)°C and the initial hydrogen pressure of 3 ~ 6Mpa. The purpose of the experiment was to increase the caking property, and the heating time was controlled from 30 to 50min. The test results show that the mild hydrogenation test, no adhesion of low metamorphic coal can be transformed into a product having adhesion, oxygen elements in coal have good removal, the calorific value of the product has been improved significantly and coal particles during pyrolysis, swelling, catalyst, hydrogenation, structural changes and the combined effects of particles a new component formed between financial and is a major cause of coal caking enhancement and lithofacies change, coal blending test showed that the product can be used effectively in the coking industry.
Liquid CO_{2}/Coal Slurry for Feeding Low Rank Coal to Gasifiers
Energy Technology Data Exchange (ETDEWEB)
Marasigan, Jose [Electric Power Research Institute, Inc., Palo Alto, CA (United States); Goldstein, Harvey [Electric Power Research Institute, Inc., Palo Alto, CA (United States); Dooher, John [Electric Power Research Institute, Inc., Palo Alto, CA (United States)
2013-09-30
This study investigates the practicality of using a liquid CO_{2}/coal slurry preparation and feed system for the E-Gas™ gasifier in an integrated gasification combined cycle (IGCC) electric power generation plant configuration. Liquid CO_{2} has several property differences from water that make it attractive for the coal slurries used in coal gasification-based power plants. First, the viscosity of liquid CO_{2} is much lower than water. This means it should take less energy to pump liquid CO_{2} through a pipe compared to water. This also means that a higher solids concentration can be fed to the gasifier, which should decrease the heat requirement needed to vaporize the slurry. Second, the heat of vaporization of liquid CO_{2} is about 80% lower than water. This means that less heat from the gasification reactions is needed to vaporize the slurry. This should result in less oxygen needed to achieve a given gasifier temperature. And third, the surface tension of liquid CO_{2} is about 2 orders of magnitude lower than water, which should result in finer atomization of the liquid CO_{2} slurry, faster reaction times between the oxygen and coal particles, and better carbon conversion at the same gasifier temperature. EPRI and others have recognized the potential that liquid CO_{2} has in improving the performance of an IGCC plant and have previously conducted systemslevel analyses to evaluate this concept. These past studies have shown that a significant increase in IGCC performance can be achieved with liquid CO_{2} over water with certain gasifiers. Although these previous analyses had produced some positive results, they were still based on various assumptions for liquid CO_{2}/coal slurry properties.
Low-rank coal research semiannual report, January 1992--June 1992
Energy Technology Data Exchange (ETDEWEB)
1992-12-31
This semiannual report is a compilation of seventeen reports on ongoing coal research at the University of North Dakota. The following research areas are covered: control technology and coal preparation; advanced research and technology development; combustion; liquefaction and gasification. Individual papers have been processed separately for inclusion in the Energy Science and Technology Database.
Poortvliet, P. Marijn; Janssen, Onne; Van Yperen, N.W.; Van de Vliert, E.
This investigation tested the joint effect of achievement goals and ranking information on information exchange intentions with a commensurate exchange partner. Results showed that individuals with performance goals were less inclined to cooperate with an exchange partner when they had low or high
Opportunities in low-rank coal applications for synfuels and power industries in Mexico
International Nuclear Information System (INIS)
Winch, R.A.; Alejandro, I.; Hernandez, G.
1992-01-01
The utilization of domestic coal is an important ingredient in the generation strategy of electricity in Mexico. The relative ranking of the MICARE and Sabinas coals, compared to other coals tested at the Energy and Environmental Research Center (EERC) pilot test facility at Grand Forks is an important factor for future economic fuel studies. A test comparison between US and Mexican coals was made and observations are listed
Modeling of pseudoacoustic P-waves in orthorhombic media with a low-rank approximation
Song, Xiaolei; Alkhalifah, Tariq Ali
2013-01-01
Wavefield extrapolation in pseudoacoustic orthorhombic anisotropic media suffers from wave-mode coupling and stability limitations in the parameter range. We use the dispersion relation for scalar wave propagation in pseudoacoustic orthorhombic
Numerical solution of quadratic matrix equations for free vibration analysis of structures
Gupta, K. K.
1975-01-01
This paper is concerned with the efficient and accurate solution of the eigenvalue problem represented by quadratic matrix equations. Such matrix forms are obtained in connection with the free vibration analysis of structures, discretized by finite 'dynamic' elements, resulting in frequency-dependent stiffness and inertia matrices. The paper presents a new numerical solution procedure of the quadratic matrix equations, based on a combined Sturm sequence and inverse iteration technique enabling economical and accurate determination of a few required eigenvalues and associated vectors. An alternative procedure based on a simultaneous iteration procedure is also described when only the first few modes are the usual requirement. The employment of finite dynamic elements in conjunction with the presently developed eigenvalue routines results in a most significant economy in the dynamic analysis of structures.
Analytical method comparisons for the accurate determination of PCBs in sediments
Energy Technology Data Exchange (ETDEWEB)
Numata, M.; Yarita, T.; Aoyagi, Y.; Yamazaki, M.; Takatsu, A. [National Metrology Institute of Japan, Tsukuba (Japan)
2004-09-15
National Metrology Institute of Japan in National Institute of Advanced Industrial Science and Technology (NMIJ/AIST) has been developing several matrix reference materials, for example, sediments, water and biological tissues, for the determinations of heavy metals and organometallic compounds. The matrix compositions of those certified reference materials (CRMs) are similar to compositions of actual samples, and those are useful for validating analytical procedures. ''Primary methods of measurements'' are essential to obtain accurate and SI-traceable certified values in the reference materials, because the methods have the highest quality of measurement. However, inappropriate analytical operations, such as incomplete extraction of analytes or crosscontamination during analytical procedures, will cause error of analytical results, even if one of the primary methods, isotope-dilution, is utilized. To avoid possible procedural bias for the certification of reference materials, we employ more than two analytical methods which have been optimized beforehand. Because the accurate determination of trace POPs in the environment is important to evaluate their risk, reliable CRMs are required by environmental chemists. Therefore, we have also been preparing matrix CRMs for the determination of POPs. To establish accurate analytical procedures for the certification of POPs, extraction is one of the critical steps as described above. In general, conventional extraction techniques for the determination of POPs, such as Soxhlet extraction (SOX) and saponification (SAP), have been characterized well, and introduced as official methods for environmental analysis. On the other hand, emerging techniques, such as microwave-assisted extraction (MAE), pressurized fluid extraction (PFE) and supercritical fluid extraction (SFE), give higher recovery yields of analytes with relatively short extraction time and small amount of solvent, by reasons of the high
Redesigning Triangular Dense Matrix Computations on GPUs
Charara, Ali; Ltaief, Hatem; Keyes, David E.
2016-01-01
A new implementation of the triangular matrix-matrix multiplication (TRMM) and the triangular solve (TRSM) kernels are described on GPU hardware accelerators. Although part of the Level 3 BLAS family, these highly computationally intensive kernels
Analytic matrix elements with shifted correlated Gaussians
DEFF Research Database (Denmark)
Fedorov, D. V.
2017-01-01
Matrix elements between shifted correlated Gaussians of various potentials with several form-factors are calculated analytically. Analytic matrix elements are of importance for the correlated Gaussian method in quantum few-body physics.......Matrix elements between shifted correlated Gaussians of various potentials with several form-factors are calculated analytically. Analytic matrix elements are of importance for the correlated Gaussian method in quantum few-body physics....
A quenched c = 1 critical matrix model
International Nuclear Information System (INIS)
Qiu, Zongan; Rey, Soo-Jong.
1990-12-01
We study a variant of the Penner-Distler-Vafa model, proposed as a c = 1 quantum gravity: 'quenched' matrix model with logarithmic potential. The model is exactly soluble, and exhibits a two-cut branching as observed in multicritical unitary matrix models and multicut Hermitian matrix models. Using analytic continuation of the power in the conventional polynomial potential, we also show that both the Penner-Distler-Vafa model and our 'quenched' matrix model satisfy Virasoro algebra constraints
Accurate localization of intracavitary brachytherapy applicators from 3D CT imaging studies
International Nuclear Information System (INIS)
Lerma, F.A.; Williamson, J.F.
2002-01-01
Purpose: To present an accurate method to identify the positions and orientations of intracavitary (ICT) brachytherapy applicators imaged in 3D CT scans, in support of Monte Carlo photon-transport simulations, enabling accurate dose modeling in the presence of applicator shielding and interapplicator attenuation. Materials and methods: The method consists of finding the transformation that maximizes the coincidence between the known 3D shapes of each applicator component (colpostats and tandem) with the volume defined by contours of the corresponding surface on each CT slice. We use this technique to localize Fletcher-Suit CT-compatible applicators for three cervix cancer patients using post-implant CT examinations (3 mm slice thickness and separation). Dose distributions in 1-to-1 registration with the underlying CT anatomy are derived from 3D Monte Carlo photon-transport simulations incorporating each applicator's internal geometry (source encapsulation, high-density shields, and applicator body) oriented in relation to the dose matrix according to the measured localization transformations. The precision and accuracy of our localization method are assessed using CT scans, in which the positions and orientations of dense rods and spheres (in a precision-machined phantom) were measured at various orientations relative to the gantry. Results: Using this method, we register 3D Monte Carlo dose calculations directly onto post insertion patient CT studies. Using CT studies of a precisely machined phantom, the absolute accuracy of the method was found to be ±0.2 mm in plane, and ±0.3 mm in the axial direction while its precision was ±0.2 mm in plane, and ±0.2 mm axially. Conclusion: We have developed a novel, and accurate technique to localize intracavitary brachytherapy applicators in 3D CT imaging studies, which supports 3D dose planning involving detailed 3D Monte Carlo dose calculations, modeling source positions, shielding and interapplicator shielding
Confocal microscopy imaging of the biofilm matrix
DEFF Research Database (Denmark)
Schlafer, Sebastian; Meyer, Rikke L
2017-01-01
The extracellular matrix is an integral part of microbial biofilms and an important field of research. Confocal laser scanning microscopy is a valuable tool for the study of biofilms, and in particular of the biofilm matrix, as it allows real-time visualization of fully hydrated, living specimens...... the concentration of solutes and the diffusive properties of the biofilm matrix....
Matrix algebra for higher order moments
Meijer, Erik
2005-01-01
A large part of statistics is devoted to the estimation of models from the sample covariance matrix. The development of the statistical theory and estimators has been greatly facilitated by the introduction of special matrices, such as the commutation matrix and the duplication matrix, and the
MatrixPlot: visualizing sequence constraints
DEFF Research Database (Denmark)
Gorodkin, Jan; Stærfeldt, Hans Henrik; Lund, Ole
1999-01-01
MatrixPlot: visualizing sequence constraints. Sub-title Abstract Summary : MatrixPlot is a program for making high-quality matrix plots, such as mutual information plots of sequence alignments and distance matrices of sequences with known three-dimensional coordinates. The user can add information...
Ellipsoids and matrix-valued valuations
Ludwig, Monika
2003-01-01
We obtain a classification of Borel measurable, GL(n) covariant, symmetric-matrix-valued valuations on the space of n-dimensional convex polytopes. The only ones turn out to be the moment matrix corresponding to the classical Legendre ellipsoid and the matrix corresponding to the ellipsoid recently discovered by E. Lutwak, D. Yang, and G. Zhang.
Construction of covariance matrix for experimental data
International Nuclear Information System (INIS)
Liu Tingjin; Zhang Jianhua
1992-01-01
For evaluators and experimenters, the information is complete only in the case when the covariance matrix is given. The covariance matrix of the indirectly measured data has been constructed and discussed. As an example, the covariance matrix of 23 Na(n, 2n) cross section is constructed. A reasonable result is obtained
Accurately bearing measurement in non-cooperative passive location system
International Nuclear Information System (INIS)
Liu Zhiqiang; Ma Hongguang; Yang Lifeng
2007-01-01
The system of non-cooperative passive location based on array is proposed. In the system, target is detected by beamforming and Doppler matched filtering; and bearing is measured by a long-base-ling interferometer which is composed of long distance sub-arrays. For the interferometer with long-base-line, the bearing is measured accurately but ambiguously. To realize unambiguous accurately bearing measurement, beam width and multiple constraint adoptive beamforming technique is used to resolve azimuth ambiguous. Theory and simulation result shows this method is effective to realize accurately bearing measurement in no-cooperate passive location system. (authors)
The COMPADRE Plant Matrix Database
DEFF Research Database (Denmark)
Salguero-Gomez, Roberto; Jones, Owen; Archer, C. Ruth
2015-01-01
growth or decline, such data furthermore help us understand how different biomes shape plant ecology, how plant populations and communities respond to global change, and how to develop successful management tools for endangered or invasive species. 2. Matrix population models summarize the life cycle......1. Schedules of survival, growth and reproduction are key life history traits. Data on how these traits vary among species and populations are fundamental to our understanding of the ecological conditions that have shaped plant evolution. Because these demographic schedules determine population...
Hexagonal response matrix using symmetries
International Nuclear Information System (INIS)
Gotoh, Y.
1991-01-01
A response matrix for use in core calculations for nuclear reactors with hexagonal fuel assemblies is presented. It is based on the incoming currents averaged over the half-surface of a hexagonal node by applying symmetry theory. The boundary conditions of the incoming currents on the half-surface of the node are expressed by a complete set of orthogonal vectors which are constructed from symmetrized functions. The expansion coefficients of the functions are determined by the boundary conditions of incoming currents. (author)
Distributively generated matrix near rings
International Nuclear Information System (INIS)
Abbasi, S.J.
1993-04-01
It is known that if R is a near ring with identity then (I,+) is abelian if (I + ,+) is abelian and (I,+) is abelian if (I*,+) is abelian [S.J. Abbasi, J.D.P. Meldrum, 1991]. This paper extends these results. We show that if R is a distributively generated near ring with identity then (I,+) is included in Z(R), the center of R, if (I + ,+) is included in Z(M n (R)), the center of matrix near ring M n (R). Furthermore (I,+) is included in Z(R) if (I*,+) is included in Z(M n (R)). (author). 5 refs
Geometric phase from dielectric matrix
International Nuclear Information System (INIS)
Banerjee, D.
2005-10-01
The dielectric property of the anisotropic optical medium is found by considering the polarized photon as two component spinor of spherical harmonics. The Geometric Phase of a polarized photon has been evaluated in two ways: the phase two-form of the dielectric matrix through a twist and the Pancharatnam phase (GP) by changing the angular momentum of the incident polarized photon over a closed triangular path on the extended Poincare sphere. The helicity in connection with the spin angular momentum of the chiral photon plays the key role in developing these phase holonomies. (author)
Matrix regularization of 4-manifolds
Trzetrzelewski, M.
2012-01-01
We consider products of two 2-manifolds such as S^2 x S^2, embedded in Euclidean space and show that the corresponding 4-volume preserving diffeomorphism algebra can be approximated by a tensor product SU(N)xSU(N) i.e. functions on a manifold are approximated by the Kronecker product of two SU(N) matrices. A regularization of the 4-sphere is also performed by constructing N^2 x N^2 matrix representations of the 4-algebra (and as a byproduct of the 3-algebra which makes the regularization of S...
Random Matrix Theory and Econophysics
Rosenow, Bernd
2000-03-01
Random Matrix Theory (RMT) [1] is used in many branches of physics as a ``zero information hypothesis''. It describes generic behavior of different classes of systems, while deviations from its universal predictions allow to identify system specific properties. We use methods of RMT to analyze the cross-correlation matrix C of stock price changes [2] of the largest 1000 US companies. In addition to its scientific interest, the study of correlations between the returns of different stocks is also of practical relevance in quantifying the risk of a given stock portfolio. We find [3,4] that the statistics of most of the eigenvalues of the spectrum of C agree with the predictions of RMT, while there are deviations for some of the largest eigenvalues. We interpret these deviations as a system specific property, e.g. containing genuine information about correlations in the stock market. We demonstrate that C shares universal properties with the Gaussian orthogonal ensemble of random matrices. Furthermore, we analyze the eigenvectors of C through their inverse participation ratio and find eigenvectors with large ratios at both edges of the eigenvalue spectrum - a situation reminiscent of localization theory results. This work was done in collaboration with V. Plerou, P. Gopikrishnan, T. Guhr, L.A.N. Amaral, and H.E Stanley and is related to recent work of Laloux et al.. 1. T. Guhr, A. Müller Groeling, and H.A. Weidenmüller, ``Random Matrix Theories in Quantum Physics: Common Concepts'', Phys. Rep. 299, 190 (1998). 2. See, e.g. R.N. Mantegna and H.E. Stanley, Econophysics: Correlations and Complexity in Finance (Cambridge University Press, Cambridge, England, 1999). 3. V. Plerou, P. Gopikrishnan, B. Rosenow, L.A.N. Amaral, and H.E. Stanley, ``Universal and Nonuniversal Properties of Cross Correlations in Financial Time Series'', Phys. Rev. Lett. 83, 1471 (1999). 4. V. Plerou, P. Gopikrishnan, T. Guhr, B. Rosenow, L.A.N. Amaral, and H.E. Stanley, ``Random Matrix Theory
DEFF Research Database (Denmark)
Eklund, Aron Charles; Friis, Pia; Wernersson, Rasmus
2010-01-01
BLASTN accuracy by modifying the substitution matrix and gap penalties. We generated gene expression microarray data for samples in which 1 or 10% of the target mass was an exogenous spike of known sequence. We found that the 10% spike induced 2-fold intensity changes in 3% of the probes, two......-third of which were decreases in intensity likely caused by bulk-hybridization. These changes were correlated with similarity between the spike and probe sequences. Interestingly, even very weak similarities tended to induce a change in probe intensity with the 10% spike. Using this data, we optimized the BLASTN...... substitution matrix to more accurately identify probes susceptible to non-specific hybridization with the spike. Relative to the default substitution matrix, the optimized matrix features a decreased score for A–T base pairs relative to G–C base pairs, resulting in a 5–15% increase in area under the ROC curve...
Matrix Approach of Seismic Wave Imaging: Application to Erebus Volcano
Blondel, T.; Chaput, J.; Derode, A.; Campillo, M.; Aubry, A.
2017-12-01
This work aims at extending to seismic imaging a matrix approach of wave propagation in heterogeneous media, previously developed in acoustics and optics. More specifically, we will apply this approach to the imaging of the Erebus volcano in Antarctica. Volcanoes are actually among the most challenging media to explore seismically in light of highly localized and abrupt variations in density and wave velocity, extreme topography, extensive fractures, and the presence of magma. In this strongly scattering regime, conventional imaging methods suffer from the multiple scattering of waves. Our approach experimentally relies on the measurement of a reflection matrix associated with an array of geophones located at the surface of the volcano. Although these sensors are purely passive, a set of Green's functions can be measured between all pairs of geophones from ice-quake coda cross-correlations (1-10 Hz) and forms the reflection matrix. A set of matrix operations can then be applied for imaging purposes. First, the reflection matrix is projected, at each time of flight, in the ballistic focal plane by applying adaptive focusing at emission and reception. It yields a response matrix associated with an array of virtual geophones located at the ballistic depth. This basis allows us to get rid of most of the multiple scattering contribution by applying a confocal filter to seismic data. Iterative time reversal is then applied to detect and image the strongest scatterers. Mathematically, it consists in performing a singular value decomposition of the reflection matrix. The presence of a potential target is assessed from a statistical analysis of the singular values, while the corresponding eigenvectors yield the corresponding target images. When stacked, the results obtained at each depth give a three-dimensional image of the volcano. While conventional imaging methods lead to a speckle image with no connection to the actual medium's reflectivity, our method enables to
Bad Clade Deletion Supertrees: A Fast and Accurate Supertree Algorithm.
Fleischauer, Markus; Böcker, Sebastian
2017-09-01
Supertree methods merge a set of overlapping phylogenetic trees into a supertree containing all taxa of the input trees. The challenge in supertree reconstruction is the way of dealing with conflicting information in the input trees. Many different algorithms for different objective functions have been suggested to resolve these conflicts. In particular, there exist methods based on encoding the source trees in a matrix, where the supertree is constructed applying a local search heuristic to optimize the respective objective function. We present a novel heuristic supertree algorithm called Bad Clade Deletion (BCD) supertrees. It uses minimum cuts to delete a locally minimal number of columns from such a matrix representation so that it is compatible. This is the complement problem to Matrix Representation with Compatibility (Maximum Split Fit). Our algorithm has guaranteed polynomial worst-case running time and performs swiftly in practice. Different from local search heuristics, it guarantees to return the directed perfect phylogeny for the input matrix, corresponding to the parent tree of the input trees, if one exists. Comparing supertrees to model trees for simulated data, BCD shows a better accuracy (F1 score) than the state-of-the-art algorithms SuperFine (up to 3%) and Matrix Representation with Parsimony (up to 7%); at the same time, BCD is up to 7 times faster than SuperFine, and up to 600 times faster than Matrix Representation with Parsimony. Finally, using the BCD supertree as a starting tree for a combined Maximum Likelihood analysis using RAxML, we reach significantly improved accuracy (1% higher F1 score) and running time (1.7-fold speedup). © The Author 2017. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
A Time--Independent Born--Oppenheimer Approximation with Exponentially Accurate Error Estimates
Hagedorn, G A
2004-01-01
We consider a simple molecular--type quantum system in which the nuclei have one degree of freedom and the electrons have two levels. The Hamiltonian has the form \\[ H(\\epsilon)\\ =\\ -\\,\\frac{\\epsilon^4}2\\, \\frac{\\partial^2\\phantom{i}}{\\partial y^2}\\ +\\ h(y), \\] where $h(y)$ is a $2\\times 2$ real symmetric matrix. Near a local minimum of an electron level ${\\cal E}(y)$ that is not at a level crossing, we construct quasimodes that are exponentially accurate in the square of the Born--Oppenheimer parameter $\\epsilon$ by optimal truncation of the Rayleigh--Schr\\"odinger series. That is, we construct $E_\\epsilon$ and $\\Psi_\\epsilon$, such that $\\|\\Psi_\\epsilon\\|\\,=\\,O(1)$ and \\[ \\|\\,(H(\\epsilon)\\,-\\,E_\\epsilon))\\,\\Psi_\\epsilon\\,\\|\\ 0. \\
Accurate modeling and maximum power point detection of ...
African Journals Online (AJOL)
Accurate modeling and maximum power point detection of photovoltaic ... Determination of MPP enables the PV system to deliver maximum available power. ..... adaptive artificial neural network: Proposition for a new sizing procedure.
Accurate determination of light elements by charged particle activation analysis
International Nuclear Information System (INIS)
Shikano, K.; Shigematsu, T.
1989-01-01
To develop accurate determination of light elements by CPAA, accurate and practical standardization methods and uniform chemical etching are studied based on determination of carbon in gallium arsenide using the 12 C(d,n) 13 N reaction and the following results are obtained: (1)Average stopping power method with thick target yield is useful as an accurate and practical standardization method. (2)Front surface of sample has to be etched for accurate estimate of incident energy. (3)CPAA is utilized for calibration of light element analysis by physical method. (4)Calibration factor of carbon analysis in gallium arsenide using the IR method is determined to be (9.2±0.3) x 10 15 cm -1 . (author)
ACCURATE ESTIMATES OF CHARACTERISTIC EXPONENTS FOR SECOND ORDER DIFFERENTIAL EQUATION
Institute of Scientific and Technical Information of China (English)
无
2009-01-01
In this paper, a second order linear differential equation is considered, and an accurate estimate method of characteristic exponent for it is presented. Finally, we give some examples to verify the feasibility of our result.
Importance of molecular diagnosis in the accurate diagnosis of ...
Indian Academy of Sciences (India)
1Department of Health and Environmental Sciences, Kyoto University Graduate School of Medicine, Yoshida Konoecho, ... of molecular diagnosis in the accurate diagnosis of systemic carnitine deficiency. .... 'affecting protein function' by SIFT.