WorldWideScience

Sample records for graph kernel functions

  1. Exploiting graph kernels for high performance biomedical relation extraction.

    Science.gov (United States)

    Panyam, Nagesh C; Verspoor, Karin; Cohn, Trevor; Ramamohanarao, Kotagiri

    2018-01-30

    Relation extraction from biomedical publications is an important task in the area of semantic mining of text. Kernel methods for supervised relation extraction are often preferred over manual feature engineering methods when classifying highly ordered structures such as trees and graphs obtained from syntactic parsing of a sentence. Tree kernels such as the Subset Tree Kernel and Partial Tree Kernel have been shown to be effective for classifying constituency parse trees and basic dependency parse graphs of a sentence. Graph kernels such as the All Path Graph (APG) kernel and the Approximate Subgraph Matching (ASM) kernel have been shown to be suitable for classifying general graphs with cycles, such as the enhanced dependency parse graph of a sentence. In this work, we present a high-performance Chemical-Induced Disease (CID) relation extraction system. We present a comparative study of kernel methods for the CID task and also extend our study to the Protein-Protein Interaction (PPI) extraction task, an important biomedical relation extraction task. We discuss novel modifications to the ASM kernel to boost its performance and a method to apply graph kernels for extracting relations expressed in multiple sentences. Our system for CID relation extraction attains an F-score of 60%, without using external knowledge sources or task-specific heuristics or rules. In comparison, the state-of-the-art Chemical-Disease Relation Extraction system achieves an F-score of 56% using an ensemble of multiple machine learning methods, which is then boosted to 61% with a rule-based system employing task-specific post-processing rules. For the CID task, graph kernels outperform tree kernels substantially, and the best performance is obtained with the APG kernel, which attains an F-score of 60%, followed by the ASM kernel at 57%. The performance difference between the ASM and APG kernels for CID sentence-level relation extraction is not significant. In our evaluation of ASM for the PPI task, ASM

  2. Feature selection and multi-kernel learning for adaptive graph regularized nonnegative matrix factorization

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-09-20

    Nonnegative matrix factorization (NMF), a popular part-based representation technique, does not capture the intrinsic local geometric structure of the data space. Graph regularized NMF (GNMF) was recently proposed to avoid this limitation by regularizing NMF with a nearest neighbor graph constructed from the input data set. However, GNMF has two main bottlenecks. First, using the original feature space directly to construct the graph is not necessarily optimal because of the noisy and irrelevant features and nonlinear distributions of data samples. Second, one possible way to handle the nonlinear distribution of data samples is by kernel embedding. However, it is often difficult to choose the most suitable kernel. To solve these bottlenecks, we propose two novel graph-regularized NMF methods, AGNMF-FS and AGNMF-MK, by introducing feature selection and multiple-kernel learning to the graph regularized NMF, respectively. Instead of using a fixed graph as in GNMF, the two proposed methods learn the nearest neighbor graph that is adaptive to the selected features and learned multiple kernels, respectively. For each method, we propose a unified objective function to conduct feature selection/multi-kernel learning, NMF and adaptive graph regularization simultaneously. We further develop two iterative algorithms to solve the two optimization problems. Experimental results on two challenging pattern classification tasks demonstrate that the proposed methods significantly outperform state-of-the-art data representation methods.
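
    The record above builds on the standard graph-regularized NMF (GNMF) objective with a fixed affinity graph. As a point of reference only, here is a minimal NumPy sketch of plain GNMF with the usual multiplicative updates; it does not implement the paper's adaptive AGNMF-FS/AGNMF-MK variants, and the function name gnmf, the toy data and all parameter defaults are illustrative assumptions.

    ```python
    import numpy as np

    def gnmf(X, W, k=5, lam=1.0, n_iter=200, eps=1e-9, seed=0):
        """Plain graph-regularized NMF via multiplicative updates.

        X : (m, n) nonnegative data matrix (columns are samples).
        W : (n, n) symmetric nonnegative affinity graph over the samples.
        Returns U (m, k), V (n, k) with X ~ U @ V.T.
        """
        rng = np.random.default_rng(seed)
        m, n = X.shape
        U = rng.random((m, k))
        V = rng.random((n, k))
        D = np.diag(W.sum(axis=1))          # degree matrix of the affinity graph
        for _ in range(n_iter):
            U *= (X @ V) / (U @ (V.T @ V) + eps)
            # The lam * (W @ V) and lam * (D @ V) terms pull graph neighbours
            # towards similar coefficient vectors.
            V *= (X.T @ U + lam * (W @ V)) / (V @ (U.T @ U) + lam * (D @ V) + eps)
        return U, V

    # Toy usage: random nonnegative data and a thresholded-correlation affinity graph.
    X = np.abs(np.random.default_rng(1).normal(size=(20, 30)))
    W = (np.corrcoef(X.T) > 0.2).astype(float)
    np.fill_diagonal(W, 0.0)
    U, V = gnmf(X, W, k=5)
    print("reconstruction error:", np.linalg.norm(X - U @ V.T))
    ```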

  3. Aligning Biomolecular Networks Using Modular Graph Kernels

    Science.gov (United States)

    Towfic, Fadi; Greenlee, M. Heather West; Honavar, Vasant

    Comparative analysis of biomolecular networks constructed using measurements from different conditions, tissues, and organisms offers a powerful approach to understanding the structure, function, dynamics, and evolution of complex biological systems. We explore a class of algorithms for aligning large biomolecular networks by breaking down such networks into subgraphs and computing the alignment of the networks based on the alignment of their subgraphs. The resulting subnetworks are compared using graph kernels as scoring functions. We provide implementations of the resulting algorithms as part of BiNA, an open source biomolecular network alignment toolkit. Our experiments using Drosophila melanogaster, Saccharomyces cerevisiae, Mus musculus and Homo sapiens protein-protein interaction networks extracted from the DIP repository of protein-protein interaction data demonstrate that the performance of the proposed algorithms (as measured by % GO term enrichment of subnetworks identified by the alignment) is competitive with some of the state-of-the-art algorithms for pairwise alignment of large protein-protein interaction networks. Our results also show that the inter-species similarity scores computed based on graph kernels can be used to cluster the species into a species tree that is consistent with the known phylogenetic relationships among the species.

  4. A Fast and Simple Graph Kernel for RDF

    NARCIS (Netherlands)

    de Vries, G.K.D.; de Rooij, S.

    2013-01-01

    In this paper we study a graph kernel for RDF based on constructing a tree for each instance and counting the number of paths in that tree. In our experiments this kernel shows comparable classification performance to the previously introduced intersection subtree kernel, but is significantly faster
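
    The kernel above classifies RDF instances by counting the paths in a tree extracted around each instance. The Python sketch below illustrates the idea on a toy labelled graph; the data layout (a dict of (edge label, neighbour) lists), the depth bound and the intersection-style comparison are simplifying assumptions rather than the authors' exact construction.

    ```python
    from collections import Counter

    def path_counts(graph, root, depth):
        """Count labelled paths of length <= depth starting at `root`.

        graph : dict mapping a node to a list of (edge_label, neighbour) pairs.
        Returns a Counter over path label sequences (tuples).
        """
        counts = Counter()
        frontier = [((), root)]
        for _ in range(depth):
            next_frontier = []
            for path, node in frontier:
                for label, nbr in graph.get(node, []):
                    new_path = path + (label, nbr)
                    counts[new_path] += 1
                    next_frontier.append((new_path, nbr))
            frontier = next_frontier
        return counts

    def path_kernel(graph, a, b, depth=2):
        """Kernel value = number of matching labelled paths below the two instances."""
        ca, cb = path_counts(graph, a, depth), path_counts(graph, b, depth)
        return sum(ca[p] * cb[p] for p in ca.keys() & cb.keys())

    # Tiny toy RDF-like graph.
    g = {"inst1": [("type", "Person"), ("knows", "inst2")],
         "inst2": [("type", "Person")],
         "Person": []}
    print(path_kernel(g, "inst1", "inst2", depth=2))
    ```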

  5. The depression of a graph and k-kernels

    Directory of Open Access Journals (Sweden)

    Schurch Mark

    2014-05-01

    An edge ordering of a graph G is an injection f : E(G) → R, the set of real numbers. A path in G for which the edge ordering f increases along its edge sequence is called an f-ascent; an f-ascent is maximal if it is not contained in a longer f-ascent. The depression of G is the smallest integer k such that any edge ordering f has a maximal f-ascent of length at most k. A k-kernel of a graph G is a set of vertices U ⊆ V(G) such that for any edge ordering f of G there exists a maximal f-ascent of length at most k which neither starts nor ends in U. Identifying a k-kernel of a graph G enables one to construct an infinite family of graphs from G which have depression at most k. We discuss various results related to the concept of k-kernels, including an improved upper bound for the depression of trees.

  6. On defining and computing fuzzy kernels on L-valued simple graphs

    International Nuclear Information System (INIS)

    Bisdorff, R.; Roubens, M.

    1996-01-01

    In this paper we introduce the concept of fuzzy kernels defined on L-valued finite simple graphs, in a sense close to fuzzy preference modelling. First we recall the classic concept of a kernel associated with a crisp binary relation defined on a finite set. In a second part, we introduce fuzzy binary relations. In a third part, we generalize the crisp kernel concept to such fuzzy binary relations, and in a last part, we present an application to fuzzy choice functions on fuzzy outranking relations.

  7. Text categorization of biomedical data sets using graph kernels and a controlled vocabulary.

    Science.gov (United States)

    Bleik, Said; Mishra, Meenakshi; Huan, Jun; Song, Min

    2013-01-01

    Recently, graph representations of text have been showing improved performance over conventional bag-of-words representations in text categorization applications. In this paper, we present a graph-based representation for biomedical articles and use graph kernels to classify those articles into high-level categories. In our representation, common biomedical concepts and semantic relationships are identified with the help of an existing ontology and are used to build a rich graph structure that provides a consistent feature set and preserves additional semantic information that could improve a classifier's performance. We attempt to classify the graphs using both a set-based graph kernel that is capable of dealing with the disconnected nature of the graphs and a simple linear kernel. Finally, we report the results comparing the classification performance of the kernel classifiers to common text-based classifiers.

  8. Contextual Weisfeiler-Lehman Graph Kernel For Malware Detection

    OpenAIRE

    Narayanan, Annamalai; Meng, Guozhu; Yang, Liu; Liu, Jinliang; Chen, Lihui

    2016-01-01

    In this paper, we propose a novel graph kernel specifically to address a challenging problem in the field of cyber-security, namely, malware detection. Previous research has revealed the following: (1) Graph representations of programs are ideally suited for malware detection as they are robust against several attacks, (2) Besides capturing topological neighbourhoods (i.e., structural information) from these graphs it is important to capture the context under which the neighbourhoods are reac...

  9. Multiple Kernel Learning for adaptive graph regularized nonnegative matrix factorization

    KAUST Repository

    Wang, Jim Jing-Yan; AbdulJabbar, Mustafa Abdulmajeed

    2012-01-01

    Nonnegative Matrix Factorization (NMF) has been continuously evolving in several areas, such as pattern recognition and information retrieval. It factorizes a matrix into a product of two low-rank non-negative matrices that define a parts-based, linear representation of non-negative data. Recently, graph regularized NMF (GrNMF) was proposed to find a compact representation that uncovers the hidden semantics while respecting the intrinsic geometric structure. In GrNMF, an affinity graph is constructed from the original data space to encode the geometrical information. In this paper, we propose a novel idea that engages a Multiple Kernel Learning approach to refine the graph structure so that it reflects the factorization of the matrix and the new data space. The GrNMF is improved by utilizing the graph refined by the kernel learning, and then a novel kernel learning method is introduced under the GrNMF framework. Our approach shows encouraging results in comparison to state-of-the-art clustering algorithms such as NMF, GrNMF, and SVD.

  10. The heat kernel as the pagerank of a graph

    Science.gov (United States)

    Chung, Fan

    2007-01-01

    The concept of pagerank was first introduced as a way of ranking Web pages for Web search engines. Based on relations in interconnected networks, pagerank has become a major tool for addressing fundamental problems arising in general graphs, especially for large information networks with hundreds of thousands of nodes. A notable notion of pagerank, introduced by Brin and Page and denoted by PageRank, is based on random walks as a geometric sum. In this paper, we consider a notion of pagerank that is based on the (discrete) heat kernel and can be expressed as an exponential sum of random walks. The heat kernel satisfies the heat equation and can be used to analyze many useful properties of random walks in a graph. A local Cheeger inequality is established, which implies that, by focusing on cuts determined by linear orderings of vertices using the heat kernel pageranks, the resulting partition is within a quadratic factor of the optimum. This is true even if we restrict the volume of the small part separated by the cut to be close to some specified target value. This leads to a graph partitioning algorithm for which the running time is proportional to the size of the targeted volume (instead of the size of the whole graph).
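
    The exponential-sum form described above, rho_{t,f} = e^{-t} * sum_k (t^k / k!) f P^k with P = D^{-1} A, can be evaluated by truncating the series. Below is a minimal NumPy sketch under the assumptions of an undirected, connected toy graph and a fixed number of series terms; the function name and defaults are illustrative, not part of the original work.

    ```python
    import numpy as np

    def heat_kernel_pagerank(A, seed_vec, t=5.0, n_terms=50):
        """Heat kernel pagerank rho_{t,f} = exp(-t) * sum_k (t^k / k!) * f P^k.

        A        : (n, n) adjacency matrix of an undirected graph.
        seed_vec : (n,) preference (row) vector f, e.g. an indicator of a seed set.
        The series is truncated after n_terms terms, which is accurate for
        moderate t because the Poisson weights decay quickly.
        """
        deg = A.sum(axis=1)
        P = A / deg[:, None]                  # random-walk transition matrix D^{-1} A
        rho = np.zeros_like(seed_vec, dtype=float)
        term = seed_vec.astype(float)         # f P^0
        weight = np.exp(-t)                   # e^{-t} t^0 / 0!
        for k in range(n_terms):
            rho += weight * term
            term = term @ P                   # advance to f P^{k+1}
            weight *= t / (k + 1)             # next Poisson weight e^{-t} t^{k+1} / (k+1)!
        return rho

    # Toy usage on a 4-cycle, seeded at vertex 0.
    A = np.array([[0, 1, 0, 1],
                  [1, 0, 1, 0],
                  [0, 1, 0, 1],
                  [1, 0, 1, 0]], float)
    f = np.array([1.0, 0.0, 0.0, 0.0])
    print(heat_kernel_pagerank(A, f, t=2.0))
    ```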

  11. Learning molecular energies using localized graph kernels

    Science.gov (United States)

    Ferré, Grégoire; Haut, Terry; Barros, Kipton

    2017-03-01

    Recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. We benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.
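
    GRAPE's core similarity measure is a random walk graph kernel between adjacency matrices of local atomic environments. The sketch below shows the standard geometric random-walk kernel computed on the direct product graph; GRAPE's weighting of the adjacency matrices and its localization scheme are not reproduced, so treat this only as an illustration of the underlying kernel (the decay parameter lam and the toy matrices are assumptions).

    ```python
    import numpy as np

    def random_walk_kernel(A1, A2, lam=0.05):
        """Geometric random-walk kernel between two (unlabelled) adjacency matrices.

        k(G1, G2) = 1^T (I - lam * Ax)^{-1} 1, where Ax = A1 (x) A2 is the
        adjacency matrix of the direct product graph.  lam must be smaller than
        1 / (largest eigenvalue of Ax) for the geometric series to converge.
        """
        Ax = np.kron(A1, A2)                        # direct product graph
        n = Ax.shape[0]
        ones = np.ones(n)
        return ones @ np.linalg.solve(np.eye(n) - lam * Ax, ones)

    # Two small local environments encoded as adjacency matrices.
    A1 = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], float)
    A2 = np.array([[0, 1], [1, 0]], float)
    print(random_walk_kernel(A1, A2, lam=0.05))
    ```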
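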

  12. A shortest-path graph kernel for estimating gene product semantic similarity

    Directory of Open Access Journals (Sweden)

    Alvarez Marco A

    2011-07-01

    Background: Existing methods for calculating semantic similarity between gene products using the Gene Ontology (GO) often rely on external resources, which are not part of the ontology. Consequently, changes in these external resources, such as biased term distributions caused by shifts in popular research topics, will affect the calculation of semantic similarity. One way to avoid this problem is to use semantic methods that are "intrinsic" to the ontology, i.e. independent of external knowledge. Results: We present a shortest-path graph kernel (spgk) method that relies exclusively on the GO and its structure. In spgk, a gene product is represented by an induced subgraph of the GO, which consists of all the GO terms annotating it. Then a shortest-path graph kernel is used to compute the similarity between two graphs. In a comprehensive evaluation using a benchmark dataset, spgk compares favorably with other methods that depend on external resources. Compared with simUI, a method that is also intrinsic to GO, spgk achieves slightly better results on the benchmark dataset. Statistical tests show that the improvement is significant when the resolution and EC similarity correlation coefficient are used to measure the performance, but is insignificant when the Pfam similarity correlation coefficient is used. Conclusions: Spgk uses a graph kernel method in polynomial time to exploit the structure of the GO to calculate semantic similarity between gene products. It provides an alternative to both methods that use external resources and "intrinsic" methods with comparable performance.
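
    For illustration, the sketch below computes a bare-bones shortest-path kernel on unlabelled adjacency matrices: all-pairs shortest paths via Floyd-Warshall, followed by a comparison of the multisets of path lengths. The GO-term vertex labels that spgk actually uses are omitted, so this is only a sketch of the kernel's skeleton under those simplifying assumptions.

    ```python
    import numpy as np
    from collections import Counter

    def all_pairs_shortest_paths(A):
        """Floyd-Warshall distances on an unweighted adjacency matrix."""
        n = A.shape[0]
        D = np.where(A > 0, 1.0, np.inf)
        np.fill_diagonal(D, 0.0)
        for k in range(n):
            D = np.minimum(D, D[:, [k]] + D[[k], :])
        return D

    def shortest_path_kernel(A1, A2):
        """Compare two unlabelled graphs by their multisets of shortest-path lengths."""
        def length_histogram(A):
            D = all_pairs_shortest_paths(A)
            iu = np.triu_indices_from(D, k=1)
            return Counter(d for d in D[iu] if np.isfinite(d))
        h1, h2 = length_histogram(A1), length_histogram(A2)
        return sum(h1[d] * h2[d] for d in h1.keys() & h2.keys())

    A1 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)   # path on 3 vertices
    A2 = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)   # triangle
    print(shortest_path_kernel(A1, A2))
    ```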

  13. graphkernels: R and Python packages for graph comparison.

    Science.gov (United States)

    Sugiyama, Mahito; Ghisu, M Elisabetta; Llinares-López, Felipe; Borgwardt, Karsten

    2018-02-01

    Measuring the similarity of graphs is a fundamental step in the analysis of graph-structured data, which is omnipresent in computational biology. Graph kernels have been proposed as a powerful and efficient approach to this problem of graph comparison. Here we provide graphkernels, the first R and Python graph kernel libraries including baseline kernels such as label histogram based kernels, classic graph kernels such as random walk based kernels, and the state-of-the-art Weisfeiler-Lehman graph kernel. The core of all graph kernels is implemented in C++ for efficiency. Using the kernel matrices computed by the package, we can easily perform tasks such as classification, regression and clustering on graph-structured samples. The R and Python packages including source code are available at https://CRAN.R-project.org/package=graphkernels and https://pypi.python.org/pypi/graphkernels. Contact: mahito@nii.ac.jp or elisabetta.ghisu@bsse.ethz.ch. Supplementary data are available online at Bioinformatics. © The Author(s) 2017. Published by Oxford University Press.
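
    A minimal usage sketch for the Python package follows. The module path graphkernels.kernels, the CalculateWLKernel entry point and the 'label' vertex attribute follow the package documentation as we recall it; treat them as assumptions and check them against the installed version.

    ```python
    # pip install graphkernels   (the package also needs python-igraph)
    import igraph as ig
    import graphkernels.kernels as gk   # module layout assumed; see the package docs

    # Two toy labelled graphs as igraph objects with integer vertex labels.
    g1 = ig.Graph(edges=[(0, 1), (1, 2)])
    g1.vs["label"] = [1, 2, 1]
    g2 = ig.Graph(edges=[(0, 1), (1, 2), (2, 0)])
    g2.vs["label"] = [1, 1, 2]

    # Weisfeiler-Lehman kernel with 5 refinement iterations; the call is assumed
    # to return the full kernel (Gram) matrix over the list of graphs.
    K = gk.CalculateWLKernel([g1, g2], par=5)
    print(K)
    ```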

  14. A comparison of graph- and kernel-based -omics data integration algorithms for classifying complex traits.

    Science.gov (United States)

    Yan, Kang K; Zhao, Hongyu; Pang, Herbert

    2017-12-06

    High-throughput sequencing data are widely collected and analyzed in the study of complex diseases in quest of improving human health. Well-studied algorithms mostly deal with a single data source and cannot fully utilize the potential of these multi-omics data sources. In order to provide a holistic understanding of human health and diseases, it is necessary to integrate multiple data sources. Several algorithms have been proposed so far; however, a comprehensive comparison of data integration algorithms for classification of binary traits is currently lacking. In this paper, we focus on two common classes of integration algorithms: graph-based algorithms, which depict relationships with subjects denoted by nodes and relationships denoted by edges, and kernel-based algorithms, which can generate a classifier in feature space. Our paper provides a comprehensive comparison of their performance in terms of various measurements of classification accuracy and computation time. Seven different integration algorithms, including graph-based semi-supervised learning, graph sharpening integration, composite association network, Bayesian network, semi-definite programming-support vector machine (SDP-SVM), relevance vector machine (RVM) and Ada-boost relevance vector machine, are compared and evaluated with hypertension and two cancer data sets in our study. In general, kernel-based algorithms create more complex models and require longer computation time, but they tend to perform better than graph-based algorithms; graph-based algorithms have the advantage of being computationally faster. The empirical results demonstrate that composite association network, relevance vector machine, and Ada-boost RVM are the better performers. We provide recommendations on how to choose an appropriate algorithm for integrating data from multiple sources.

  15. An Extreme Learning Machine Based on the Mixed Kernel Function of Triangular Kernel and Generalized Hermite Dirichlet Kernel

    Directory of Open Access Journals (Sweden)

    Senyue Zhang

    2016-01-01

    According to the characteristic that the kernel function of the extreme learning machine (ELM) and its performance have a strong correlation, a novel extreme learning machine based on a generalized triangular Hermite kernel function is proposed in this paper. First, the generalized triangular Hermite kernel function was constructed by using the product of the triangular kernel and the generalized Hermite Dirichlet kernel, and the proposed kernel function was proved to be a valid kernel function for the extreme learning machine. Then, the learning methodology of the extreme learning machine based on the proposed kernel function is presented. The biggest advantage of the proposed kernel is that its parameter values are chosen only from the natural numbers, which greatly shortens the computational time of parameter optimization and retains more of the structural information of the sample data. Experiments were performed on a number of binary classification, multiclassification, and regression datasets from the UCI benchmark repository. The experimental results demonstrate that the proposed method outperforms other extreme learning machines with different kernels in robustness and generalization performance. Furthermore, the learning speed of the proposed method is faster than that of support vector machine (SVM) methods.

  16. Wigner functions defined with Laplace transform kernels.

    Science.gov (United States)

    Oh, Se Baek; Petruccelli, Jonathan C; Tian, Lei; Barbastathis, George

    2011-10-24

    We propose a new Wigner-type phase-space function using Laplace transform kernels: the Laplace kernel Wigner function. Whereas momentum variables are real in the traditional Wigner function, the Laplace kernel Wigner function may have complex momentum variables. Due to the property of the Laplace transform, a broader range of signals can be represented in complex phase space. We show that the Laplace kernel Wigner function exhibits marginal properties similar to those of the traditional Wigner function. As an example, we use the Laplace kernel Wigner function to analyze evanescent waves supported by surface plasmon polaritons. © 2011 Optical Society of America

  17. What Would a Graph Look Like in this Layout? A Machine Learning Approach to Large Graph Visualization.

    Science.gov (United States)

    Kwon, Oh-Hyun; Crnovrsanin, Tarik; Ma, Kwan-Liu

    2018-01-01

    Using different methods for laying out a graph can lead to very different visual appearances, with which the viewer perceives different information. Selecting a "good" layout method is thus important for visualizing a graph. The selection can be highly subjective and dependent on the given task. A common approach to selecting a good layout is to use aesthetic criteria and visual inspection. However, fully calculating various layouts and their associated aesthetic metrics is computationally expensive. In this paper, we present a machine learning approach to large graph visualization based on computing the topological similarity of graphs using graph kernels. For a given graph, our approach can show what the graph would look like in different layouts and estimate their corresponding aesthetic metrics. An important contribution of our work is the development of a new framework to design graph kernels. Our experimental study shows that our estimation calculation is considerably faster than computing the actual layouts and their aesthetic metrics. Also, our graph kernels outperform the state-of-the-art ones in both time and accuracy. In addition, we conducted a user study to demonstrate that the topological similarity computed with our graph kernel matches perceptual similarity assessed by human users.

  18. Graph Quasicontinuous Functions and Densely Continuous Forms

    Directory of Open Access Journals (Sweden)

    Lubica Hola

    2017-07-01

    Let $X, Y$ be topological spaces. A function $f: X \to Y$ is said to be graph quasicontinuous if there is a quasicontinuous function $g: X \to Y$ with the graph of $g$ contained in the closure of the graph of $f$. There is a close relation between the notions of graph quasicontinuous functions and minimal usco maps as well as the notions of graph quasicontinuous functions and densely continuous forms. Every function with values in a compact Hausdorff space is graph quasicontinuous; more generally, every locally compact function is graph quasicontinuous.

  19. Functions and graphs

    CERN Document Server

    Gelfand, I M; Shnol, E E

    1969-01-01

    The second in a series of systematic studies by the celebrated mathematician I. M. Gelfand and colleagues, this volume presents students with a well-illustrated sequence of problems and exercises designed to illuminate the properties of functions and graphs. Since readers do not have the benefit of a blackboard on which a teacher constructs a graph, the authors abandoned the customary use of diagrams in which only the final form of the graph appears; instead, the book's margins feature step-by-step diagrams for the complete construction of each graph. The first part of the book employs simple fu

  20. Asymptote Misconception on Graphing Functions: Does Graphing Software Resolve It?

    Directory of Open Access Journals (Sweden)

    Mehmet Fatih Öçal

    2017-01-01

    Graphing functions is an important issue in mathematics education due to its use in various areas of mathematics and its potential role in helping students learn mathematics. The use of graphing software assists students' learning when graphing functions. However, the graphs of functions that students sketch by hand may differ from the correct forms sketched using graphing software. The possible misleading effects of this situation raise the issue of a misconception (the asymptote misconception) in graphing functions. The purpose of this study is twofold. First, this study investigated whether using graphing software (GeoGebra in this case) helps students to detect and resolve this misconception in calculus classrooms. Second, the reasons for this misconception are sought. A multiple case study was utilized. University students in two calculus classrooms who received instruction with GeoGebra assistance (35 students) or without it (32 students) were compared according to whether they fell into this misconception when graphing basic functions (1/x, ln x, e^x). In addition, students were interviewed to reveal the reasons behind this misconception. Data were analyzed by means of descriptive and content analysis methods. The findings indicated that those who received GeoGebra-assisted instruction were better at resolving it. In addition, the reasons behind this misconception were found to be teacher-based, exam-based and some other factors.

  1. RATGRAPH: Computer Graphing of Rational Functions.

    Science.gov (United States)

    Minch, Bradley A.

    1987-01-01

    Presents an easy-to-use Applesoft BASIC program that graphs rational functions and any asymptotes that the functions might have. Discusses the nature of rational functions, graphing them manually, employing a computer to graph rational functions, and describes how the program works. (TW)

  2. Proving relations between modular graph functions

    International Nuclear Information System (INIS)

    Basu, Anirban

    2016-01-01

    We consider modular graph functions that arise in the low energy expansion of the four graviton amplitude in type II string theory. The vertices of these graphs are the positions of insertions of vertex operators on the toroidal worldsheet, while the links are the scalar Green functions connecting the vertices. Graphs with four and five links satisfy several non-trivial relations, which have been proved recently. We prove these relations by using elementary properties of Green functions and the details of the graphs. We also prove a relation between modular graph functions with six links. (paper)

  3. Influence Function and Robust Variant of Kernel Canonical Correlation Analysis

    OpenAIRE

    Alam, Md. Ashad; Fukumizu, Kenji; Wang, Yu-Ping

    2017-01-01

    Many unsupervised kernel methods rely on the estimation of the kernel covariance operator (kernel CO) or kernel cross-covariance operator (kernel CCO). Both kernel CO and kernel CCO are sensitive to contaminated data, even when bounded positive definite kernels are used. To the best of our knowledge, there are few well-founded robust kernel methods for statistical unsupervised learning. In addition, while the influence function (IF) of an estimator can characterize its robustness, asymptotic ...

  4. Feature selection and multi-kernel learning for sparse representation on a manifold

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-03-01

    Sparse representation has been widely studied as a part-based data representation method and applied in many scientific and engineering fields, such as bioinformatics and medical imaging. It seeks to represent a data sample as a sparse linear combination of some basic items in a dictionary. Gao et al. (2013) recently proposed Laplacian sparse coding by regularizing the sparse codes with an affinity graph. However, due to the noisy features and nonlinear distribution of the data samples, the affinity graph constructed directly from the original feature space is not necessarily a reliable reflection of the intrinsic manifold of the data samples. To overcome this problem, we integrate feature selection and multiple kernel learning into the sparse coding on the manifold. To this end, unified objectives are defined for feature selection, multiple kernel learning, sparse coding, and graph regularization. By optimizing the objective functions iteratively, we develop novel data representation algorithms with feature selection and multiple kernel learning respectively. Experimental results on two challenging tasks, N-linked glycosylation prediction and mammogram retrieval, demonstrate that the proposed algorithms outperform the traditional sparse coding methods. © 2013 Elsevier Ltd.

  5. Feature selection and multi-kernel learning for sparse representation on a manifold.

    Science.gov (United States)

    Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin

    2014-03-01

    Sparse representation has been widely studied as a part-based data representation method and applied in many scientific and engineering fields, such as bioinformatics and medical imaging. It seeks to represent a data sample as a sparse linear combination of some basic items in a dictionary. Gao et al. (2013) recently proposed Laplacian sparse coding by regularizing the sparse codes with an affinity graph. However, due to the noisy features and nonlinear distribution of the data samples, the affinity graph constructed directly from the original feature space is not necessarily a reliable reflection of the intrinsic manifold of the data samples. To overcome this problem, we integrate feature selection and multiple kernel learning into the sparse coding on the manifold. To this end, unified objectives are defined for feature selection, multiple kernel learning, sparse coding, and graph regularization. By optimizing the objective functions iteratively, we develop novel data representation algorithms with feature selection and multiple kernel learning respectively. Experimental results on two challenging tasks, N-linked glycosylation prediction and mammogram retrieval, demonstrate that the proposed algorithms outperform the traditional sparse coding methods. Copyright © 2013 Elsevier Ltd. All rights reserved.

  6. Kernel regression with functional response

    OpenAIRE

    Ferraty, Frédéric; Laksaci, Ali; Tadj, Amel; Vieu, Philippe

    2011-01-01

    We consider the kernel regression estimate when both the response variable and the explanatory one are functional. The rates of uniform almost complete convergence are stated as a function of the small ball probability of the predictor and as a function of the entropy of the set on which uniformity is obtained.

  7. Kernels for structured data

    CERN Document Server

    Gärtner, Thomas

    2009-01-01

    This book provides a unique treatment of an important area of machine learning and answers the question of how kernel methods can be applied to structured data. Kernel methods are a class of state-of-the-art learning algorithms that exhibit excellent learning results in several application domains. Originally, kernel methods were developed with data in mind that can easily be embedded in a Euclidean vector space. Much real-world data does not have this property but is inherently structured. An example of such data, often consulted in the book, is the (2D) graph structure of molecules formed by

  8. Mathematical Minute: Rotating a Function Graph

    Science.gov (United States)

    Bravo, Daniel; Fera, Joseph

    2013-01-01

    Using calculus only, we find the angles you can rotate the graph of a differentiable function about the origin and still obtain a function graph. We then apply the solution to odd and even degree polynomials.

  9. Windows Vista Kernel-Mode: Functions, Security Enhancements and Flaws

    Directory of Open Access Journals (Sweden)

    Mohammed D. ABDULMALIK

    2008-06-01

    Microsoft has made substantial enhancements to the kernel of the Microsoft Windows Vista operating system. Kernel improvements are significant because the kernel provides low-level operating system functions, including thread scheduling, interrupt and exception dispatching, multiprocessor synchronization, and a set of routines and basic objects. This paper describes some of the kernel security enhancements for the 64-bit edition of Windows Vista. We also point out some weak areas (flaws) that can be attacked by malicious software, leading to compromise of the kernel.

  10. Mixed kernel function support vector regression for global sensitivity analysis

    Science.gov (United States)

    Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng

    2017-11-01

    Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Among the many sensitivity analysis methods in the literature, the Sobol indices have attracted much attention since they can provide accurate information for most models. In this paper, a mixed kernel function (MKF) based support vector regression (SVR) model is employed to evaluate the Sobol indices at low computational cost. With the proposed derivation, the estimates of the Sobol indices can be obtained by post-processing the coefficients of the SVR meta-model. The MKF is constructed from the orthogonal polynomial kernel function and the Gaussian radial basis kernel function, so the MKF combines the global characteristics of the polynomial kernel function with the local characteristics of the Gaussian radial basis kernel function. The proposed approach is suitable for high-dimensional and non-linear problems. The performance of the proposed approach is validated on various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.
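
    The key construction above is a mixed kernel that adds a global polynomial term to a local Gaussian RBF term. The sketch below uses an ordinary (non-orthogonal) polynomial kernel and illustrative weights, so it only shows the mixed-kernel form; the resulting Gram matrix could be handed to any kernel regressor that accepts a precomputed kernel, and the paper's Sobol post-processing is not reproduced.

    ```python
    import numpy as np

    def mixed_kernel(X, Y, w=0.5, degree=3, gamma=1.0):
        """Mixed kernel: convex combination of a polynomial and a Gaussian RBF kernel.

        K(x, y) = w * (x.y + 1)^degree + (1 - w) * exp(-gamma * ||x - y||^2)
        The polynomial part captures global trends, the RBF part local behaviour.
        """
        poly = (X @ Y.T + 1.0) ** degree
        sq_dists = (np.sum(X**2, axis=1)[:, None]
                    + np.sum(Y**2, axis=1)[None, :]
                    - 2.0 * X @ Y.T)
        rbf = np.exp(-gamma * sq_dists)
        return w * poly + (1.0 - w) * rbf

    # Gram matrix for a small design; it can be passed to a kernel regressor
    # (e.g. an SVR with kernel='precomputed') acting as a surrogate model.
    X = np.random.default_rng(0).uniform(size=(10, 3))
    K = mixed_kernel(X, X, w=0.4)
    print(K.shape)
    ```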

  11. Kernel Function Tuning for Single-Layer Neural Networks

    Czech Academy of Sciences Publication Activity Database

    Vidnerová, Petra; Neruda, Roman

    Accepted 28.11.2017 (2018). ISSN 2278-0149. R&D Projects: GA ČR GA15-18108S. Institutional support: RVO:67985807. Keywords: single-layer neural networks; kernel methods; kernel function; optimisation. Subject RIV: IN - Informatics, Computer Science. http://www.ijmerr.com/

  12. Relating zeta functions of discrete and quantum graphs

    Science.gov (United States)

    Harrison, Jonathan; Weyand, Tracy

    2018-02-01

    We write the spectral zeta function of the Laplace operator on an equilateral metric graph in terms of the spectral zeta function of the normalized Laplace operator on the corresponding discrete graph. To do this, we apply a relation between the spectrum of the Laplacian on a discrete graph and that of the Laplacian on an equilateral metric graph. As a by-product, we determine how the multiplicity of eigenvalues of the quantum graph, that are also in the spectrum of the graph with Dirichlet conditions at the vertices, depends on the graph geometry. Finally we apply the result to calculate the vacuum energy and spectral determinant of a complete bipartite graph and compare our results with those for a star graph, a graph in which all vertices are connected to a central vertex by a single edge.

  13. Kuramoto model for infinite graphs with kernels

    KAUST Repository

    Canale, Eduardo

    2015-01-07

    In this paper we study the Kuramoto model of weakly coupled oscillators for the case of a non-trivial network with a large number of nodes. We approximate such configurations by a McKean-Vlasov stochastic differential equation based on an infinite graph. We focus on circulant graphs which have enough symmetries to make the computations easier. We then focus on the asymptotic regime where an integro-partial differential equation is derived. Numerical analysis and convergence proofs of the Fokker-Planck-Kolmogorov equation are conducted. Finally, we provide numerical examples that illustrate the convergence of our method.
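
    As a concrete illustration of the finite-N model being approximated, the sketch below integrates the Kuramoto dynamics d theta_i / dt = omega_i + (K/N) * sum_j A_ij * sin(theta_j - theta_i) on a circulant coupling graph with a plain Euler scheme. The McKean-Vlasov limit and the Fokker-Planck analysis from the record are not part of the sketch, and all numerical values are illustrative.

    ```python
    import numpy as np

    def simulate_kuramoto(A, omega, K=1.0, dt=0.01, steps=2000, seed=0):
        """Euler simulation of the Kuramoto model on a finite coupling graph.

        d theta_i / dt = omega_i + (K / N) * sum_j A_ij * sin(theta_j - theta_i)
        Returns the final phases and the order parameter r = |mean(exp(i theta))|.
        """
        rng = np.random.default_rng(seed)
        n = len(omega)
        theta = rng.uniform(0.0, 2.0 * np.pi, size=n)
        for _ in range(steps):
            coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
            theta += dt * (omega + (K / n) * coupling)
        r = np.abs(np.exp(1j * theta).mean())
        return theta, r

    # Circulant coupling graph: a ring where each node talks to 2 neighbours per side.
    n = 50
    A = np.zeros((n, n))
    for i in range(n):
        for d in (1, 2):
            A[i, (i + d) % n] = A[i, (i - d) % n] = 1.0
    omega = np.random.default_rng(1).normal(0.0, 0.1, size=n)
    theta, r = simulate_kuramoto(A, omega, K=2.0)
    print("order parameter:", round(r, 3))
    ```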

  14. Construction of phylogenetic trees by kernel-based comparative analysis of metabolic networks.

    Science.gov (United States)

    Oh, S June; Joung, Je-Gun; Chang, Jeong-Ho; Zhang, Byoung-Tak

    2006-06-06

    To infer the tree of life requires knowledge of the common characteristics of each species descended from a common ancestor as the measuring criteria and a method to calculate the distance between the resulting values of each measure. Conventional phylogenetic analysis based on genomic sequences provides information about the genetic relationships between different organisms. In contrast, comparative analysis of metabolic pathways in different organisms can yield insights into their functional relationships under different physiological conditions. However, evaluating the similarities or differences between metabolic networks is a computationally challenging problem, and systematic methods of doing this are desirable. Here we introduce a graph-kernel method for computing the similarity between metabolic networks in polynomial time, and use it to profile metabolic pathways and to construct phylogenetic trees. To compare the structures of metabolic networks in organisms, we adopted the exponential graph kernel, which is a kernel-based approach with a labeled graph that includes a label matrix and an adjacency matrix. To construct the phylogenetic trees, we used an unweighted pair-group method with arithmetic mean, i.e., a hierarchical clustering algorithm. We applied the kernel-based network profiling method in a comparative analysis of nine carbohydrate metabolic networks from 81 biological species encompassing Archaea, Eukaryota, and Eubacteria. The resulting phylogenetic hierarchies generally support the tripartite scheme of three domains rather than the two domains of prokaryotes and eukaryotes. By combining the kernel machines with metabolic information, the method infers the context of biosphere development that covers physiological events required for adaptation by genetic reconstruction. The results show that one may obtain a global view of the tree of life by comparing the metabolic pathway structures using meta-level information rather than sequence
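
    To make the pipeline concrete, the sketch below computes an exponential graph kernel on the direct product of two adjacency matrices and then clusters a handful of toy networks with average linkage (UPGMA). The label matrix used in the paper's labelled-graph kernel is omitted, and the toy networks and the value of beta are invented for illustration.

    ```python
    import numpy as np
    from scipy.linalg import expm
    from scipy.cluster.hierarchy import linkage

    def exponential_graph_kernel(A1, A2, beta=0.3):
        """Exponential graph kernel on the direct product graph:
        k(G1, G2) = sum of the entries of exp(beta * (A1 kron A2))."""
        Ax = np.kron(A1, A2)
        return expm(beta * Ax).sum()

    # Toy stand-ins for the metabolic networks of three species (adjacency matrices).
    nets = [np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float),
            np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float),
            np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], float)]

    n = len(nets)
    K = np.array([[exponential_graph_kernel(a, b) for b in nets] for a in nets])
    # Turn kernel similarities into distances and cluster with average linkage (UPGMA).
    D = np.sqrt(np.maximum(np.diag(K)[:, None] + np.diag(K)[None, :] - 2.0 * K, 0.0))
    iu = np.triu_indices(n, k=1)
    print(linkage(D[iu], method="average"))
    ```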

  15. Construction of phylogenetic trees by kernel-based comparative analysis of metabolic networks

    Directory of Open Access Journals (Sweden)

    Chang Jeong-Ho

    2006-06-01

    Background: To infer the tree of life requires knowledge of the common characteristics of each species descended from a common ancestor as the measuring criteria and a method to calculate the distance between the resulting values of each measure. Conventional phylogenetic analysis based on genomic sequences provides information about the genetic relationships between different organisms. In contrast, comparative analysis of metabolic pathways in different organisms can yield insights into their functional relationships under different physiological conditions. However, evaluating the similarities or differences between metabolic networks is a computationally challenging problem, and systematic methods of doing this are desirable. Here we introduce a graph-kernel method for computing the similarity between metabolic networks in polynomial time, and use it to profile metabolic pathways and to construct phylogenetic trees. Results: To compare the structures of metabolic networks in organisms, we adopted the exponential graph kernel, which is a kernel-based approach with a labeled graph that includes a label matrix and an adjacency matrix. To construct the phylogenetic trees, we used an unweighted pair-group method with arithmetic mean, i.e., a hierarchical clustering algorithm. We applied the kernel-based network profiling method in a comparative analysis of nine carbohydrate metabolic networks from 81 biological species encompassing Archaea, Eukaryota, and Eubacteria. The resulting phylogenetic hierarchies generally support the tripartite scheme of three domains rather than the two domains of prokaryotes and eukaryotes. Conclusion: By combining the kernel machines with metabolic information, the method infers the context of biosphere development that covers physiological events required for adaptation by genetic reconstruction. The results show that one may obtain a global view of the tree of life by comparing the metabolic pathway

  16. Mapping Between Semantic Graphs and Sentences in Grammar Induction System

    Directory of Open Access Journals (Sweden)

    Laszlo Kovacs

    2010-06-01

    The proposed transformation module performs mapping between two different knowledge representation forms used in grammar induction systems. The kernel knowledge representation form is a special predicate-centered conceptual graph called ECG. The ECG provides a semantic-based, language-independent description of the environment. The other base representation form is some kind of language. The sentences of the language should meet the corresponding grammatical rules. The pilot project demonstrates the functionality of a translator module using this transformation engine between the ECG graph and the Hungarian language.

  17. Compactly Supported Basis Functions as Support Vector Kernels for Classification.

    Science.gov (United States)

    Wittek, Peter; Tan, Chew Lim

    2011-10-01

    Wavelet kernels have been introduced for both support vector regression and classification. Most of these wavelet kernels do not use the inner product of the embedding space, but use wavelets in a similar fashion to radial basis function kernels. Wavelet analysis is typically carried out on data with a temporal or spatial relation between consecutive data points. We argue that it is possible to order the features of a general data set so that consecutive features are statistically related to each other, thus enabling us to interpret the vector representation of an object as a series of equally or randomly spaced observations of a hypothetical continuous signal. By approximating the signal with compactly supported basis functions and employing the inner product of the embedding L2 space, we gain a new family of wavelet kernels. Empirical results show a clear advantage in favor of these kernels.

  18. Minimal Function Graphs are not Instrumented

    DEFF Research Database (Denmark)

    Mycroft, Alan; Rosendahl, Mads

    1992-01-01

    The minimal function graph semantics of Jones and Mycroft is a standard denotational semantics modified to include only `reachable' parts of a program. We show that it may be expressed directly in terms of the standard semantics without the need for instrumentation at the expression level and, in doing so, bring out a connection with strictness. This also makes it possible to prove a stronger theorem of correctness for the minimal function graph semantics.

  19. Hilbertian kernels and spline functions

    CERN Document Server

    Atteia, M

    1992-01-01

    In this monograph, which is an extensive study of Hilbertian approximation, the emphasis is placed on spline function theory. The origin of the book was an effort to show that spline theory parallels Hilbertian kernel theory, not only for splines derived from the minimization of a quadratic functional but more generally for splines considered as piecewise functions. Being as far as possible self-contained, the book may be used as a reference, with information about developments in linear approximation, convex optimization, mechanics and partial differential equations.

  20. Multiple kernel learning using single stage function approximation for binary classification problems

    Science.gov (United States)

    Shiju, S.; Sumitra, S.

    2017-12-01

    In this paper, multiple kernel learning (MKL) is formulated as a supervised classification problem. We deal with binary classification data, and hence the data modelling problem involves the computation of two decision boundaries, one related to kernel learning and the other to the input data. In our approach, they are found with the aid of a single cost function by constructing a global reproducing kernel Hilbert space (RKHS) as the direct sum of the RKHSs corresponding to the decision boundaries of kernel learning and input data, and by searching for the function in the global RKHS that can be represented as the direct sum of the decision boundaries under consideration. In our experimental analysis, the proposed model showed superior performance in comparison with the existing two-stage function approximation formulation of MKL, where the decision functions of kernel learning and input data are found separately using two different cost functions. This is due to the fact that the single-stage representation helps transfer knowledge between the computation procedures for finding the decision boundaries of kernel learning and input data, which in turn boosts the generalisation capacity of the model.
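
    The starting point for any MKL formulation is the combined kernel k(x, z) = sum_m mu_m * k_m(x, z) with nonnegative weights. The sketch below only shows that combination with fixed weights over a few RBF bandwidths; the paper's single-stage joint learning of the weights and of the decision function in a direct-sum RKHS is not reproduced, and all names and values are illustrative.

    ```python
    import numpy as np

    def rbf_kernel(X, Y, gamma):
        """Gaussian RBF kernel matrix between the rows of X and Y."""
        sq = (np.sum(X**2, axis=1)[:, None]
              + np.sum(Y**2, axis=1)[None, :]
              - 2.0 * X @ Y.T)
        return np.exp(-gamma * sq)

    def combined_kernel(X, Y, mus=(0.5, 0.3, 0.2), gammas=(0.1, 1.0, 10.0)):
        """MKL-style combination K = sum_m mu_m * K_m with fixed (not learned)
        weights mu_m >= 0 summing to one."""
        return sum(mu * rbf_kernel(X, Y, g) for mu, g in zip(mus, gammas))

    X = np.random.default_rng(0).normal(size=(8, 4))
    K = combined_kernel(X, X)
    print(K.shape)   # (8, 8) Gram matrix ready for a kernel classifier
    ```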

  1. Isotropic covariance functions on graphs and their edges

    DEFF Research Database (Denmark)

    Anderes, E.; Møller, Jesper; Rasmussen, Jakob Gulddahl

    We develop parametric classes of covariance functions on linear networks and their extension to graphs with Euclidean edges, i.e., graphs with edges viewed as line segments or more general sets with a coordinate system allowing us to consider points on the graph which are vertices or points on an edge. Our covariance functions are defined on the vertices and edge points of these graphs and are isotropic in the sense that they depend only on the geodesic distance or on a new metric called the resistance metric (which extends the classical resistance metric developed in electrical network theory). Several well-known covariance functions in the spatial statistics literature (the power exponential, Matérn, generalized Cauchy, and Dagum classes) are shown to be valid with respect to the resistance metric for any graph with Euclidean edges, whilst they are only valid with respect to the geodesic metric in more special cases.

  2. Test bank for precalculus functions & graphs

    CERN Document Server

    Kolman, Bernard; Levitan, Michael L

    1984-01-01

    Test Bank for Precalculus: Functions & Graphs is a supplementary material for the text, Precalculus: Functions & Graphs. The book is intended for use by mathematics teachers.The book contains standard tests for each chapter in the textbook. Each set of test focuses on gauging the level of knowledge the student has achieved during the course. The answers for each chapter test and the final exam are found at the end of the book.Mathematics teachers teaching calculus will find the book extremely useful.

  3. Higher-Order Minimal Functional Graphs

    DEFF Research Database (Denmark)

    Jones, Neil D; Rosendahl, Mads

    1994-01-01

    We present a minimal function graph semantics for a higher-order functional language with applicative evaluation order. The semantics captures the intermediate calls performed during the evaluation of a program. This information may be used in abstract interpretation as a basis for proving...

  4. Parameter Selection Method for Support Vector Regression Based on Adaptive Fusion of the Mixed Kernel Function

    Directory of Open Access Journals (Sweden)

    Hailun Wang

    2017-01-01

    The support vector regression algorithm is widely used in fault diagnosis of rolling bearings. A new model parameter selection method for support vector regression, based on adaptive fusion of the mixed kernel function, is proposed in this paper. We choose the mixed kernel function as the kernel function of support vector regression. The fusion coefficients of the mixed kernel function, the kernel function parameters, and the regression parameters are combined together as the state vector. Thus, the model selection problem is transformed into a nonlinear system state estimation problem. We use a 5th-degree cubature Kalman filter to estimate the parameters. In this way, we realize the adaptive selection of the mixed kernel function weighting coefficients, the kernel parameters, and the regression parameters. Compared with a single kernel function, unscented Kalman filter (UKF) support vector regression algorithms, and genetic algorithms, the decision regression function obtained by the proposed method has better generalization ability and higher prediction accuracy.

  5. Optimal Bandwidth Selection for Kernel Density Functionals Estimation

    Directory of Open Access Journals (Sweden)

    Su Chen

    2015-01-01

    The choice of bandwidth is crucial to kernel density estimation (KDE) and kernel-based regression. Various bandwidth selection methods for KDE and local least squares regression have been developed in the past decade. It is known that scale and location parameters are proportional to density functionals ∫γ(x)f^2(x)dx with an appropriate choice of γ(x), and furthermore that equality of scale and location tests can be transformed into comparisons of these density functionals among populations. ∫γ(x)f^2(x)dx can be estimated nonparametrically via kernel density functionals estimation (KDFE). However, optimal bandwidth selection for the KDFE of ∫γ(x)f^2(x)dx has not been examined. We propose a method to select the optimal bandwidth for the KDFE. The idea underlying this method is to search for the optimal bandwidth by minimizing the mean square error (MSE) of the KDFE. Two main practical bandwidth selection techniques for the KDFE of ∫γ(x)f^2(x)dx are provided: normal scale bandwidth selection (namely, the "rule of thumb") and direct plug-in bandwidth selection. Simulation studies show that our proposed bandwidth selection methods are superior to existing density estimation bandwidth selection methods in estimating density functionals.
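
    With γ(x) = 1 the functional is ∫f^2(x)dx, whose standard leave-one-out kernel estimator is sketched below. Silverman's normal scale bandwidth is used purely as a stand-in default; the record's point is precisely that a functional-specific bandwidth choice does better than such density-oriented rules, so the helper name and defaults here are illustrative only.

    ```python
    import numpy as np

    def estimate_int_f_squared(x, h=None):
        """Kernel estimate of the density functional  integral of f(x)^2 dx :

            theta_hat = 1 / (n * (n - 1) * h) * sum_{i != j} K((x_i - x_j) / h)

        with a Gaussian kernel K.  If no bandwidth is given, Silverman's normal
        scale rule h = 1.06 * sigma * n^(-1/5) is used as a stand-in.
        """
        x = np.asarray(x, float)
        n = len(x)
        if h is None:
            h = 1.06 * x.std(ddof=1) * n ** (-0.2)
        diffs = (x[:, None] - x[None, :]) / h
        K = np.exp(-0.5 * diffs**2) / np.sqrt(2.0 * np.pi)
        np.fill_diagonal(K, 0.0)                      # drop the i == j terms
        return K.sum() / (n * (n - 1) * h)

    # For a standard normal, the integral of f^2 is 1 / (2 * sqrt(pi)) ~ 0.2821.
    x = np.random.default_rng(0).normal(size=2000)
    print(estimate_int_f_squared(x))
    ```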

  6. Soft Sensing of Key State Variables in Fermentation Process Based on Relevance Vector Machine with Hybrid Kernel Function

    Directory of Open Access Journals (Sweden)

    Xianglin ZHU

    2014-06-01

    To address the difficulty of online detection of some important state variables in the fermentation process with traditional instruments, a soft sensing modeling method based on a relevance vector machine (RVM) with a hybrid kernel function is presented. Based on a characteristic analysis of two commonly used kernel functions, namely the local Gaussian kernel function and the global polynomial kernel function, a hybrid kernel function combining the merits of the Gaussian and polynomial kernel functions is constructed. To design optimal parameters for this kernel function, the particle swarm optimization (PSO) algorithm is applied. The proposed modeling method is used to predict the cell concentration in the lysine fermentation process. Simulation results show that the presented hybrid-kernel RVM model has better accuracy and performance than the single-kernel RVM model.

  7. Kuramoto model for infinite graphs with kernels

    KAUST Repository

    Canale, Eduardo; Tembine, Hamidou; Tempone, Raul; Zouraris, Georgios E.

    2015-01-01

    We focus on circulant graphs which have enough symmetries to make the computations easier. We then focus on the asymptotic regime where an integro-partial differential equation is derived. Numerical analysis and convergence proofs of the Fokker

  8. A fully-automated multiscale kernel graph cuts based particle localization scheme for temporal focusing two-photon microscopy

    Science.gov (United States)

    Huang, Xia; Li, Chunqiang; Xiao, Chuan; Sun, Wenqing; Qian, Wei

    2017-03-01

    The temporal focusing two-photon microscope (TFM) is developed to perform depth-resolved wide-field fluorescence imaging by capturing frames sequentially. However, due to strong, non-ignorable noise and diffraction rings surrounding the particles, further research is extremely difficult without a precise particle localization technique. In this paper, we developed a fully automated scheme to locate particle positions with high noise tolerance. Our scheme includes the following procedures: noise reduction using a hybrid Kalman filter method, particle segmentation based on a multiscale kernel graph cuts global and local segmentation algorithm, and a kinematic-estimation-based particle tracking method. Both isolated and partially overlapped particles can be accurately identified with removal of unrelated pixels. Based on our quantitative analysis, 96.22% of isolated particles and 84.19% of partially overlapped particles were successfully detected.

  9. Graph approach to the gradient expansion of density functionals

    International Nuclear Information System (INIS)

    Kozlowski, P.M.; Nalewajski, R.F.

    1986-01-01

    A graph representation of terms in the gradient expansion of the kinetic energy density functional is presented. The authors briefly discuss the implications of the virial theorem for the graph structure and relations between possible graphs at a given order of the expansion.

  10. Generic primal-dual interior point methods based on a new kernel function

    NARCIS (Netherlands)

    EL Ghami, M.; Roos, C.

    2008-01-01

    In this paper we present generic primal-dual interior point methods (IPMs) for linear optimization in which the search direction depends on a univariate kernel function which is also used as a proximity measure in the analysis of the algorithm. The proposed kernel function does not satisfy all the

  11. Heat kernels and zeta functions on fractals

    International Nuclear Information System (INIS)

    Dunne, Gerald V

    2012-01-01

    On fractals, spectral functions such as heat kernels and zeta functions exhibit novel features, very different from their behaviour on regular smooth manifolds, and these can have important physical consequences for both classical and quantum physics in systems having fractal properties. This article is part of a special issue of Journal of Physics A: Mathematical and Theoretical in honour of Stuart Dowker's 75th birthday devoted to ‘Applications of zeta functions and other spectral functions in mathematics and physics’. (paper)

  12. A Walk-based Semantically Enriched Tree Kernel Over Distributed Word Representations

    DEFF Research Database (Denmark)

    Srivastava, Shashank; Hovy, Dirk

    2013-01-01

    We propose a walk-based graph kernel that generalizes the notion of tree kernels to continuous spaces. Our proposed approach subsumes a general framework for word similarity and, in particular, provides a flexible way to incorporate distributed representations. Using vector representations, such an approach captures both distributional semantic similarities among words as well as the structural relations between them (encoded as the structure of the parse tree). We show an efficient formulation to compute this kernel using simple matrix multiplication operations. We present our results on three

  13. Graphs with not all possible path-kernels

    DEFF Research Database (Denmark)

    Aldred, Robert; Thomassen, Carsten

    2004-01-01

    The Path Partition Conjecture states that the vertices of a graph G with longest path of length c may be partitioned into two parts X and Y such that the longest path in the subgraph of G induced by X has length at most a and the longest path in the subgraph of G induced by Y has length at most b...

  14. A graph kernel approach for alignment-free domain-peptide interaction prediction with an application to human SH3 domains.

    Science.gov (United States)

    Kundu, Kousik; Costa, Fabrizio; Backofen, Rolf

    2013-07-01

    State-of-the-art experimental data for determining binding specificities of peptide recognition modules (PRMs) is obtained by high-throughput approaches like peptide arrays. Most prediction tools applicable to this kind of data are based on an initial multiple alignment of the peptide ligands. Building an initial alignment can be error-prone, especially in the case of the proline-rich peptides bound by the SH3 domains. Here, we present a machine-learning approach based on an efficient graph-kernel technique to predict the specificity of a large set of 70 human SH3 domains, which are an important class of PRMs. The graph-kernel strategy allows us to (i) integrate several types of physico-chemical information for each amino acid, (ii) consider high-order correlations between these features and (iii) eliminate the need for an initial peptide alignment. We build specialized models for each human SH3 domain and achieve competitive predictive performance of 0.73 area under precision-recall curve, compared with 0.27 area under precision-recall curve for state-of-the-art methods based on position weight matrices. We show that better models can be obtained when we use information on the noninteracting peptides (negative examples), which is currently not used by the state-of-the-art approaches based on position weight matrices. To this end, we analyze two strategies to identify subsets of high confidence negative data. The techniques introduced here are more general and hence can also be used for any other protein domains, which interact with short peptides (i.e. other PRMs). The program with the predictive models can be found at http://www.bioinf.uni-freiburg.de/Software/SH3PepInt/SH3PepInt.tar.gz. We also provide a genome-wide prediction for all 70 human SH3 domains, which can be found under http://www.bioinf.uni-freiburg.de/Software/SH3PepInt/Genome-Wide-Predictions.tar.gz. Supplementary data are available at Bioinformatics online.

  16. Higher-order predictions for splitting functions and coefficient functions from physical evolution kernels

    International Nuclear Information System (INIS)

    Vogt, A; Soar, G.; Vermaseren, J.A.M.

    2010-01-01

    We have studied the physical evolution kernels for nine non-singlet observables in deep-inelastic scattering (DIS), semi-inclusive e⁺e⁻ annihilation and the Drell-Yan (DY) process, and for the flavour-singlet case of the photon- and heavy-top Higgs-exchange structure functions (F_2, F_φ) in DIS. All known contributions to these kernels show only a single-logarithmic large-x enhancement at all powers of (1-x). Conjecturing that this behaviour persists to (all) higher orders, we have predicted the highest three (DY: two) double logarithms of the higher-order non-singlet coefficient functions and of the four-loop singlet splitting functions. The coefficient-function predictions can be written as exponentiations of 1/N-suppressed contributions in Mellin-N space which, however, are less predictive than the well-known exponentiation of the ln^k N terms. (orig.)

  17. AptRank: an adaptive PageRank model for protein function prediction on bi-relational graphs.

    Science.gov (United States)

    Jiang, Biaobin; Kloster, Kyle; Gleich, David F; Gribskov, Michael

    2017-06-15

    Diffusion-based network models are widely used for protein function prediction using protein network data and have been shown to outperform neighborhood-based and module-based methods. Recent studies have shown that integrating the hierarchical structure of the Gene Ontology (GO) data dramatically improves prediction accuracy. However, previous methods usually either used the GO hierarchy to refine the prediction results of multiple classifiers, or flattened the hierarchy into a function-function similarity kernel. No study has taken the GO hierarchy into account together with the protein network as a two-layer network model. We first construct a Bi-relational graph (Birg) model comprised of both protein-protein association and function-function hierarchical networks. We then propose two diffusion-based methods, BirgRank and AptRank, both of which use PageRank to diffuse information on this two-layer graph model. BirgRank is a direct application of traditional PageRank with fixed decay parameters. In contrast, AptRank utilizes an adaptive diffusion mechanism to improve the performance of BirgRank. We evaluate the ability of both methods to predict protein function on yeast, fly and human protein datasets, and compare with four previous methods: GeneMANIA, TMC, ProteinRank and clusDCA. We design four different validation strategies: missing function prediction, de novo function prediction, guided function prediction and newly discovered function prediction to comprehensively evaluate predictability of all six methods. We find that both BirgRank and AptRank outperform the previous methods, especially in missing function prediction when using only 10% of the data for training. The MATLAB code is available at https://github.rcac.purdue.edu/mgribsko/aptrank . gribskov@purdue.edu. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com
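
    A minimal sketch of fixed-decay PageRank diffusion on a two-layer graph, in the spirit of BirgRank as summarized above but not the authors' MATLAB implementation; the block layout, the toy networks and the restart parameter are assumptions.

        import numpy as np

        def pagerank_diffusion(W, seed, alpha=0.85, iters=100):
            """Personalized PageRank by power iteration on a combined graph.

            W    : nonnegative adjacency matrix of the (two-layer) graph
            seed : restart vector, nonnegative and summing to one
            """
            col = W.sum(axis=0, keepdims=True)
            col[col == 0] = 1.0
            P = W / col                                  # column-stochastic transition matrix
            r = seed.copy()
            for _ in range(iters):
                r = alpha * (P @ r) + (1 - alpha) * seed
            return r

        # toy bi-relational graph: 4 proteins (PPI network A), 3 GO terms (hierarchy H),
        # and protein-to-term annotations R stacked into one block matrix
        A = np.array([[0, 1, 1, 0], [1, 0, 1, 0], [1, 1, 0, 1], [0, 0, 1, 0]], float)
        H = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
        R = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 0], [0, 0, 1]], float)
        W = np.block([[A, R], [R.T, H]])

        seed = np.zeros(7); seed[0] = 1.0                # diffuse from protein 0
        scores = pagerank_diffusion(W, seed)
        print(scores[4:])                                # affinity of protein 0 to the 3 GO terms

    An adaptive scheme in the spirit of AptRank would additionally adjust how much weight each diffusion step receives instead of keeping the decay fixed.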

  18. A Qualitative Approach to Sketch the Graph of a Function.

    Science.gov (United States)

    Alson, Pedro

    1992-01-01

    Presents a qualitative and global method of graphing functions that involves transformations of the graph of a known function in the cartesian coordinate system referred to as graphic operators. Explains how the method has been taught to students and some comments about the results obtained. (MDH)

  19. Functionals of Brownian motion, localization and metric graphs

    International Nuclear Information System (INIS)

    Comtet, Alain; Desbois, Jean; Texier, Christophe

    2005-01-01

    We review several results related to the problem of a quantum particle in a random environment. In an introductory part, we recall how several functionals of Brownian motion arise in the study of electronic transport in weakly disordered metals (weak localization). Two aspects of the physics of the one-dimensional strong localization are reviewed: some properties of the scattering by a random potential (time delay distribution) and a study of the spectrum of a random potential on a bounded domain (the extreme value statistics of the eigenvalues). Then we mention several results concerning the diffusion on graphs, and more generally the spectral properties of the Schroedinger operator on graphs. The interest of spectral determinants as generating functions characterizing the diffusion on graphs is illustrated. Finally, we consider a two-dimensional model of a charged particle coupled to the random magnetic field due to magnetic vortices. We recall the connection between spectral properties of this model and winding functionals of planar Brownian motion. (topical review)

  20. Kernel Machine SNP-set Testing under Multiple Candidate Kernels

    Science.gov (United States)

    Wu, Michael C.; Maity, Arnab; Lee, Seunggeun; Simmons, Elizabeth M.; Harmon, Quaker E.; Lin, Xinyi; Engel, Stephanie M.; Molldrem, Jeffrey J.; Armistead, Paul M.

    2013-01-01

    Joint testing for the cumulative effect of multiple single nucleotide polymorphisms grouped on the basis of prior biological knowledge has become a popular and powerful strategy for the analysis of large scale genetic association studies. The kernel machine (KM) testing framework is a useful approach that has been proposed for testing associations between multiple genetic variants and many different types of complex traits by comparing pairwise similarity in phenotype between subjects to pairwise similarity in genotype, with similarity in genotype defined via a kernel function. An advantage of the KM framework is its flexibility: choosing different kernel functions allows for different assumptions concerning the underlying model and can allow for improved power. In practice, it is difficult to know which kernel to use a priori since this depends on the unknown underlying trait architecture and selecting the kernel which gives the lowest p-value can lead to inflated type I error. Therefore, we propose practical strategies for KM testing when multiple candidate kernels are present based on constructing composite kernels and based on efficient perturbation procedures. We demonstrate through simulations and real data applications that the procedures protect the type I error rate and can lead to substantially improved power over poor choices of kernels and only modest differences in power versus using the best candidate kernel. PMID:23471868
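
    One building block mentioned above, forming a composite kernel from several candidates, can be sketched as a nonnegative weighted sum of kernel matrices (itself a valid kernel); the candidate kernels and weights below are illustrative assumptions, and the perturbation-based testing procedure of the paper is not reproduced.

        import numpy as np

        def candidate_kernels(G):
            """A few candidate genotype kernels on an n x p genotype matrix G."""
            lin = G @ G.T                                      # linear kernel
            quad = (1.0 + G @ G.T) ** 2                        # quadratic kernel
            sq = (np.sum(G**2, 1)[:, None] + np.sum(G**2, 1)[None, :] - 2 * G @ G.T)
            gauss = np.exp(-sq / G.shape[1])                   # Gaussian kernel
            return [lin, quad, gauss]

        def composite_kernel(kernels, weights=None):
            """Nonnegative weighted sum of candidate kernel matrices."""
            if weights is None:
                weights = np.ones(len(kernels)) / len(kernels)
            return sum(w * K for w, K in zip(weights, kernels))

        G = np.random.default_rng(1).integers(0, 3, size=(20, 8)).astype(float)  # toy SNP matrix
        print(composite_kernel(candidate_kernels(G)).shape)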

  1. Deep Restricted Kernel Machines Using Conjugate Feature Duality.

    Science.gov (United States)

    Suykens, Johan A K

    2017-08-01

    The aim of this letter is to propose a theory of deep restricted kernel machines offering new foundations for deep learning with kernel machines. From the viewpoint of deep learning, it is partially related to restricted Boltzmann machines, which are characterized by visible and hidden units in a bipartite graph without hidden-to-hidden connections, and to deep learning extensions such as deep belief networks and deep Boltzmann machines. From the viewpoint of kernel machines, it includes least squares support vector machines for classification and regression, kernel principal component analysis (PCA), matrix singular value decomposition, and Parzen-type models. A key element is to first characterize these kernel machines in terms of so-called conjugate feature duality, yielding a representation with visible and hidden units. It is shown how this is related to the energy form in restricted Boltzmann machines, with continuous variables in a nonprobabilistic setting. In this new framework of so-called restricted kernel machine (RKM) representations, the dual variables correspond to hidden features. Deep RKMs are obtained by coupling the RKMs. The method is illustrated for a deep RKM consisting of three levels with a least squares support vector machine regression level and two kernel PCA levels. In its primal form, deep feedforward neural networks can also be trained within this framework.

  2. Sequential Optimization of Paths in Directed Graphs Relative to Different Cost Functions

    KAUST Repository

    Mahayni, Malek A.

    2011-07-01

    Finding optimal paths in directed graphs is a wide area of research that has received much attention in theoretical computer science due to its importance in many applications (e.g., computer networks and road maps). Many algorithms have been developed to solve the optimal paths problem for different kinds of graphs. An algorithm that solves the problem of path optimization in directed graphs relative to different cost functions is described in [1]. It follows an approach extended from the dynamic programming approach, as it solves the problem sequentially and works on directed graphs with positive weights and no loop edges. The aim of this thesis is to implement and evaluate that algorithm to find the optimal paths in directed graphs relative to two different cost functions. A possible interpretation of a directed graph is a network of roads, so the weights for the first cost function represent the lengths of roads, whereas the weights for the second cost function represent a constraint on the width or weight of a vehicle. The optimization aim for these two functions is to minimize the cost relative to the first function and maximize the constraint value associated with the second function. This thesis also includes finding and proving the relation between the two cost functions: when given a value of one function, we can find the best possible value for the other function. This relation is proven theoretically and also implemented and experimented with using Matlab® [2].
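
    The thesis' sequential algorithm itself is not reproduced here, but the interplay of the two cost functions can be illustrated with a simpler standard approach: fix a minimum admissible width, drop the edges that violate it, and run Dijkstra on road length; sweeping the threshold traces the trade-off between the two criteria. The edge data below are made up for the example.

        import heapq

        def shortest_length(n, edges, src, dst):
            """Dijkstra on (u, v, length, width) edges, ignoring the width attribute."""
            adj = [[] for _ in range(n)]
            for u, v, length, _ in edges:
                adj[u].append((v, length))
            dist = [float('inf')] * n
            dist[src] = 0.0
            pq = [(0.0, src)]
            while pq:
                d, u = heapq.heappop(pq)
                if d > dist[u]:
                    continue
                for v, w in adj[u]:
                    if d + w < dist[v]:
                        dist[v] = d + w
                        heapq.heappush(pq, (dist[v], v))
            return dist[dst]

        def length_width_tradeoff(n, edges, src, dst):
            """For each width threshold, the shortest length using only wide-enough edges."""
            return {w_min: shortest_length(n, [e for e in edges if e[3] >= w_min], src, dst)
                    for w_min in sorted({e[3] for e in edges})}

        # edges: (from, to, road length, admissible vehicle width)
        edges = [(0, 1, 2.0, 3.0), (1, 3, 2.0, 2.0), (0, 2, 5.0, 4.0), (2, 3, 1.0, 4.0)]
        print(length_width_tradeoff(4, edges, 0, 3))   # {2.0: 4.0, 3.0: 6.0, 4.0: 6.0}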

  3. Graphs in kinematics—a need for adherence to principles of algebraic functions

    Science.gov (United States)

    Sokolowski, Andrzej

    2017-11-01

    Graphs in physics are central to the analysis of phenomena and to learning about a system’s behavior. The ways students handle graphs are frequently researched. Students’ misconceptions are highlighted, and methods of improvement suggested. While kinematics graphs are meant to represent real motion, they are also algebraic entities that must satisfy the conditions for being algebraic functions. To be algebraic functions, they must pass certain tests before they can be used to infer more about the motion. A preliminary survey of some physics resources has revealed that little attention is paid to verifying whether the position, velocity and acceleration versus time graphs that are meant to depict real motion satisfy the most critical condition for being an algebraic function: the vertical line test. The lack of attention to this adherence shows up as vertical segments in piecewise graphs. Such graphs generate unrealistic interpretations and may confuse students. A group of 25 college physics students was provided with such a graph and asked to analyse its adherence to reality. The majority of the students (N = 16, 64%) questioned the graph’s validity. It is inferred that such graphs might not only jeopardize the function principles studied in mathematics but also undermine the purpose of studying these principles. The aim of this study was to bring this idea forth and suggest a better alignment of physics and mathematics methods.

  4. Methods of filtering the graph images of the functions

    Directory of Open Access Journals (Sweden)

    Олександр Григорович Бурса

    2017-06-01

    Full Text Available The theoretical aspects of cleaning raster images of scanned graphs of functions from digital, chromatic and luminance distortions by means of computer graphics techniques are considered. The basic types of distortion characteristic of graph images of functions are stated. To suppress the distortions, several methods are suggested that provide high quality of the resulting images and preserve their topological features. The paper describes the techniques developed and improved by the authors: a method of cleaning the image of distortions by iterative contrasting, based on a step-by-step increase of the image contrast in the graph by 1%; a method of restoring distorted small entities, based on thinning of the known contrast-increase filter matrix (the allowable dilution radius of the convolution kernel that still retains the graph lines has been established); and a technique integrating the contrast-based noise reduction method and the small-entity restoration method with the known σ-filter. Each method in the complex is theoretically substantiated. The developed methods treat the graph image both as a whole (global processing) and as fragments (local processing). Metrics assessing the quality of the resulting image under global and local processing are chosen, the choice is substantiated and the formulas are given. The proposed complex of methods for cleaning graph images of functions from grayscale distortions is adaptive to the form of the image carrier and to the level and distribution of distortion in the image. The presented results of testing the developed complex of methods on a representative sample of images confirm its effectiveness.

  5. Signal Processing for Time-Series Functions on a Graph

    Science.gov (United States)

    2018-02-01

    [US Army Research Laboratory report ARL-TR-8276, February 2018, by Humberto Muñoz-Barona, Jean Vettel, and co-authors; approved for public release, distribution unlimited. The available record text consists only of front-matter and figure-list fragments, including the caption of Fig. 1, "Time-series function on a fixed graph".]

  6. Scaling up graph-based semisupervised learning via prototype vector machines.

    Science.gov (United States)

    Zhang, Kai; Lan, Liang; Kwok, James T; Vucetic, Slobodan; Parvin, Bahram

    2015-03-01

    When the amount of labeled data is limited, semisupervised learning can improve the learner's performance by also using the often easily available unlabeled data. In particular, a popular approach requires the learned function to be smooth on the underlying data manifold. By approximating this manifold as a weighted graph, such graph-based techniques can often achieve state-of-the-art performance. However, their high time and space complexities make them less attractive on large data sets. In this paper, we propose to scale up graph-based semisupervised learning using a set of sparse prototypes derived from the data. These prototypes serve as a small set of data representatives, which can be used to approximate the graph-based regularizer and to control model complexity. Consequently, both training and testing become much more efficient. Moreover, when the Gaussian kernel is used to define the graph affinity, a simple and principled method to select the prototypes can be obtained. Experiments on a number of real-world data sets demonstrate encouraging performance and scaling properties of the proposed approach. It also compares favorably with models learned via l1-regularization at the same level of model sparsity. These results demonstrate the efficacy of the proposed approach in producing highly parsimonious and accurate models for semisupervised learning.

  7. a Comparison Study of Different Kernel Functions for Svm-Based Classification of Multi-Temporal Polarimetry SAR Data

    Science.gov (United States)

    Yekkehkhany, B.; Safari, A.; Homayouni, S.; Hasanlou, M.

    2014-10-01

    In this paper, a framework is developed based on Support Vector Machines (SVM) for crop classification using polarimetric features extracted from multi-temporal Synthetic Aperture Radar (SAR) imagery. The multi-temporal integration of data not only improves the overall retrieval accuracy but also provides more reliable estimates with respect to single-date data. Several kernel functions are employed and compared in this study for mapping the input space to a higher-dimensional Hilbert space. These kernel functions include the linear, polynomial and Radial Basis Function (RBF) kernels. The method is applied to several UAVSAR L-band SAR images acquired over an agricultural area near Winnipeg, Manitoba, Canada. In this research, the temporal alpha features of the H/A/α decomposition method are used in classification. The experimental tests show that an SVM classifier with the RBF kernel applied to three dates of data increases the Overall Accuracy (OA) by up to 3% in comparison to the linear kernel function, and by up to 1% in comparison to a 3rd-degree polynomial kernel function.
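
    A hedged sketch of this kind of kernel comparison using scikit-learn, with synthetic stand-in features rather than the UAVSAR polarimetric data (which is not available here):

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_score
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # stand-in for multi-temporal polarimetric features (e.g. alpha angles per date)
        X, y = make_classification(n_samples=400, n_features=9, n_informative=6,
                                   n_classes=4, n_clusters_per_class=1, random_state=0)

        for name, svc in [("linear", SVC(kernel="linear", C=10)),
                          ("poly-3", SVC(kernel="poly", degree=3, C=10)),
                          ("RBF",    SVC(kernel="rbf", gamma="scale", C=10))]:
            clf = make_pipeline(StandardScaler(), svc)
            acc = cross_val_score(clf, X, y, cv=5).mean()
            print(f"{name:7s} overall accuracy = {acc:.3f}")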

  8. An Xdata Architecture for Federated Graph Models and Multi-tier Asymmetric Computing

    Science.gov (United States)

    2014-01-01

    ... Wikipedia, a scale-free random graph (kron), Akamai trace route data, Bitcoin transaction data, and a Twitter follower network. We present results for ... 3x (SSSP on a random graph) and nearly 300x (Akamai and Bitcoin) over the CPU performance of a well-known and widely deployed CPU-based graph ... provided better throughput for smaller frontiers such as roadmaps or the Bitcoin data set. In our work, we have focused on two-phase kernels, but it ...

  9. Kernel versions of some orthogonal transformations

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    Kernel versions of orthogonal transformations such as principal components are based on a dual formulation also termed Q-mode analysis in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products of the original data are replaced...... by inner products between nonlinear mappings into higher dimensional feature space. Via kernel substitution also known as the kernel trick these inner products between the mappings are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of this kernel...... function. This means that we need not know the nonlinear mappings explicitly. Kernel principal component analysis (PCA) and kernel minimum noise fraction (MNF) analyses handle nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function...
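
    A minimal sketch of the kernel trick described in this record: every quantity is expressed through the Gram matrix of a kernel function, so the nonlinear mapping never has to be written down. The example below is plain kernel PCA on random data with a Gaussian kernel; kernel MNF is not covered, and the data and kernel width are assumptions.

        import numpy as np

        def kernel_pca(X, kernel, n_components=2):
            """Kernel PCA: eigendecompose the doubly centered Gram matrix K_ij = k(x_i, x_j)."""
            n = X.shape[0]
            K = np.array([[kernel(xi, xj) for xj in X] for xi in X])
            J = np.eye(n) - np.ones((n, n)) / n
            Kc = J @ K @ J                               # centering in feature space
            vals, vecs = np.linalg.eigh(Kc)
            idx = np.argsort(vals)[::-1][:n_components]  # leading eigenpairs
            vals, vecs = vals[idx], vecs[:, idx]
            return vecs * np.sqrt(np.maximum(vals, 0))   # projections onto the kernel PCs

        rbf = lambda a, b, s=1.0: np.exp(-np.sum((a - b) ** 2) / (2 * s ** 2))
        X = np.random.default_rng(0).normal(size=(50, 3))
        print(kernel_pca(X, rbf).shape)                  # (50, 2)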

  10. Data-variant kernel analysis

    CERN Document Server

    Motai, Yuichi

    2015-01-01

    Describes and discusses the variants of kernel analysis methods for data types that have been intensely studied in recent years This book covers kernel analysis topics ranging from the fundamental theory of kernel functions to its applications. The book surveys the current status, popular trends, and developments in kernel analysis studies. The author discusses multiple kernel learning algorithms and how to choose the appropriate kernels during the learning phase. Data-Variant Kernel Analysis is a new pattern analysis framework for different types of data configurations. The chapters include

  11. Depth-time interpolation of feature trends extracted from mobile microelectrode data with kernel functions.

    Science.gov (United States)

    Wong, Stephen; Hargreaves, Eric L; Baltuch, Gordon H; Jaggi, Jurg L; Danish, Shabbar F

    2012-01-01

    Microelectrode recording (MER) is necessary for precision localization of target structures such as the subthalamic nucleus during deep brain stimulation (DBS) surgery. Attempts to automate this process have produced quantitative temporal trends (feature activity vs. time) extracted from mobile MER data. Our goal was to evaluate computational methods of generating spatial profiles (feature activity vs. depth) from temporal trends that would decouple automated MER localization from the clinical procedure and enhance functional localization in DBS surgery. We evaluated two methods of interpolation (standard vs. kernel) that generated spatial profiles from temporal trends. We compared interpolated spatial profiles to true spatial profiles that were calculated with depth windows, using correlation coefficient analysis. Excellent approximation of true spatial profiles is achieved by interpolation. Kernel-interpolated spatial profiles produced superior correlation coefficient values at optimal kernel widths (r = 0.932-0.940) compared to standard interpolation (r = 0.891). The choice of kernel function and kernel width resulted in trade-offs in smoothing and resolution. Interpolation of feature activity to create spatial profiles from temporal trends is accurate and can standardize and facilitate MER functional localization of subcortical structures. The methods are computationally efficient, enhancing localization without imposing additional constraints on the MER clinical procedure during DBS surgery. Copyright © 2012 S. Karger AG, Basel.
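
    The specific kernels and clinical pipeline of the study are not reproduced here, but kernel-weighted interpolation of a depth profile can be sketched as a Nadaraya-Watson style estimate; the Gaussian kernel, its width and the toy depth/activity values are assumptions.

        import numpy as np

        def kernel_interpolate(depths, values, grid, width=0.25):
            """Gaussian-kernel-weighted estimate of feature activity on a depth grid."""
            d = grid[:, None] - depths[None, :]          # pairwise depth differences
            w = np.exp(-0.5 * (d / width) ** 2)          # kernel weights
            return (w @ values) / w.sum(axis=1)

        # irregularly sampled feature activity along the electrode track (arbitrary units)
        depths = np.array([-4.8, -3.9, -3.1, -2.0, -1.2, -0.4, 0.3, 1.1])
        activity = np.array([0.2, 0.3, 0.9, 1.4, 1.3, 0.8, 0.4, 0.2])
        grid = np.linspace(-5.0, 1.5, 14)                # regular depth grid (mm)
        print(np.round(kernel_interpolate(depths, activity, grid), 2))

    Increasing the kernel width smooths the estimated profile at the cost of depth resolution, mirroring the smoothing-versus-resolution trade-off noted in the abstract.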

  12. Free energy on a cycle graph and trigonometric deformation of heat kernel traces on odd spheres

    Science.gov (United States)

    Kan, Nahomi; Shiraishi, Kiyoshi

    2018-01-01

    We consider a possible ‘deformation’ of the trace of the heat kernel on odd dimensional spheres, motivated by the calculation of the free energy of a scalar field on a discretized circle. By using an expansion in terms of the modified Bessel functions, we obtain the values of the free energies after a suitable regularization.

  13. Random geometric graphs with general connection functions

    Science.gov (United States)

    Dettmann, Carl P.; Georgiou, Orestis

    2016-03-01

    In the original (1961) Gilbert model of random geometric graphs, nodes are placed according to a Poisson point process, and links formed between those within a fixed range. Motivated by wireless ad hoc networks, "soft" or "probabilistic" connection models have recently been introduced, involving a "connection function" H(r) that gives the probability that two nodes at distance r are linked (directly connected). In many applications (not only wireless networks), it is desirable that the graph is connected; that is, every node is linked to every other node in a multihop fashion. Here the connection probability of a dense network in a convex domain in two or three dimensions is expressed in terms of contributions from boundary components for a very general class of connection functions. It turns out that only a few quantities such as moments of the connection function appear. Good agreement is found with special cases from previous studies and with numerical simulations.
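
    A small sketch of a soft random geometric graph of the kind discussed above: nodes from a Poisson process in the unit square, links drawn independently with probability H(r). The particular connection function H(r) = exp(-(r/r0)^2) and the intensity are assumptions chosen for illustration.

        import numpy as np

        def soft_rgg(intensity=200.0, r0=0.1, seed=0):
            """Soft random geometric graph in the unit square with connection function H(r)."""
            rng = np.random.default_rng(seed)
            n = rng.poisson(intensity)                   # Poisson number of nodes
            pts = rng.random((n, 2))
            edges = []
            for i in range(n):
                for j in range(i + 1, n):
                    r = np.linalg.norm(pts[i] - pts[j])
                    if rng.random() < np.exp(-(r / r0) ** 2):   # link with probability H(r)
                        edges.append((i, j))
            return pts, edges

        pts, edges = soft_rgg()
        print(len(pts), "nodes,", len(edges), "links")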

  14. The global kernel k-means algorithm for clustering in feature space.

    Science.gov (United States)

    Tzortzis, Grigorios F; Likas, Aristidis C

    2009-07-01

    Kernel k-means is an extension of the standard k-means clustering algorithm that identifies nonlinearly separable clusters. In order to overcome the cluster initialization problem associated with this method, we propose the global kernel k-means algorithm, a deterministic and incremental approach to kernel-based clustering. Our method adds one cluster at each stage, through a global search procedure consisting of several executions of kernel k-means from suitable initializations. This algorithm does not depend on cluster initialization, identifies nonlinearly separable clusters, and, due to its incremental nature and search procedure, locates near-optimal solutions avoiding poor local minima. Furthermore, two modifications are developed to reduce the computational cost that do not significantly affect the solution quality. The proposed methods are extended to handle weighted data points, which enables their application to graph partitioning. We experiment with several data sets and the proposed approach compares favorably to kernel k-means with random restarts.
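
    Plain kernel k-means with random initialization (not the global incremental variant proposed in the paper) can be sketched directly from the kernel matrix, since squared feature-space distances to cluster centres involve only kernel evaluations; the toy data and RBF kernel are assumptions.

        import numpy as np

        def kernel_kmeans(K, n_clusters, n_iter=50, seed=0):
            """Kernel k-means on a precomputed kernel matrix K.

            Uses ||phi(x_i) - mu_c||^2 = K_ii - 2*mean_{j in c} K_ij + mean_{j,l in c} K_jl.
            """
            n = K.shape[0]
            labels = np.random.default_rng(seed).integers(n_clusters, size=n)
            for _ in range(n_iter):
                dist = np.zeros((n, n_clusters))
                for c in range(n_clusters):
                    idx = np.where(labels == c)[0]
                    if len(idx) == 0:
                        dist[:, c] = np.inf
                        continue
                    dist[:, c] = (np.diag(K)
                                  - 2.0 * K[:, idx].mean(axis=1)
                                  + K[np.ix_(idx, idx)].mean())
                new_labels = dist.argmin(axis=1)
                if np.array_equal(new_labels, labels):
                    break
                labels = new_labels
            return labels

        # three Gaussian blobs, clustered through an RBF kernel matrix
        X = np.vstack([np.random.default_rng(1).normal(m, 0.3, size=(30, 2)) for m in (0, 2, 4)])
        sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        K = np.exp(-sq / 0.5)
        print(np.bincount(kernel_kmeans(K, 3)))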

  15. Partition function expansion on region graphs and message-passing equations

    International Nuclear Information System (INIS)

    Zhou, Haijun; Wang, Chuang; Xiao, Jing-Qing; Bi, Zedong

    2011-01-01

    Disordered and frustrated graphical systems are ubiquitous in physics, biology, and information science. For models on complete graphs or random graphs, deep understanding has been achieved through the mean-field replica and cavity methods. But finite-dimensional 'real' systems remain very challenging because of the abundance of short loops and strong local correlations. A statistical mechanics theory is constructed in this paper for finite-dimensional models based on the mathematical framework of the partition function expansion and the concept of region graphs. Rigorous expressions for the free energy and grand free energy are derived. Message-passing equations on the region graph, such as belief propagation and survey propagation, are also derived rigorously. (letter)

  16. Model Selection in Kernel Ridge Regression

    DEFF Research Database (Denmark)

    Exterkate, Peter

    Kernel ridge regression is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts. This paper investigates the influence of the choice of kernel and the setting of tuning parameters on forecast accuracy. We review several popular kernels......, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. We interpret the latter two kernels in terms of their smoothing properties, and we relate the tuning parameters associated to all these kernels to smoothness measures of the prediction function and to the signal-to-noise ratio. Based...... on these interpretations, we provide guidelines for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study confirms the practical usefulness of these rules of thumb. Finally, the flexible and smooth functional forms provided by the Gaussian and Sinc kernels makes them widely...
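
    A hedged sketch of the kind of tuning discussed above: kernel ridge regression with a Gaussian kernel, with the penalty and kernel width selected from small grids by cross-validation. The data and grids are illustrative, and the Sinc kernel is omitted since it is not built into scikit-learn.

        import numpy as np
        from sklearn.kernel_ridge import KernelRidge
        from sklearn.model_selection import GridSearchCV

        # toy nonlinear regression data standing in for a forecasting problem
        rng = np.random.default_rng(0)
        X = rng.uniform(-3, 3, size=(200, 2))
        y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=200)

        grid = {"alpha": [1e-3, 1e-2, 1e-1, 1.0],        # ridge penalty (noise-related)
                "gamma": [0.01, 0.1, 1.0, 10.0]}         # Gaussian kernel width (smoothness)
        search = GridSearchCV(KernelRidge(kernel="rbf"), grid, cv=5,
                              scoring="neg_mean_squared_error")
        search.fit(X, y)
        print(search.best_params_, round(-search.best_score_, 4))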

  18. The construction of a two-dimensional reproducing kernel function and its application in a biomedical model.

    Science.gov (United States)

    Guo, Qi; Shen, Shu-Ting

    2016-04-29

    There are two major classes of cardiac tissue models: the ionic model and the FitzHugh-Nagumo model. During computer simulation, each model entails solving a system of complex ordinary differential equations and a partial differential equation with non-flux boundary conditions. The reproducing kernel method possesses significant applications in solving partial differential equations. The derivative of the reproducing kernel function is a wavelet function, which has local properties and sensitivity to singularities. Therefore, studying the application of the reproducing kernel would be advantageous. The aim is to apply new mathematical theory to the numerical solution of the ventricular muscle model so as to improve its precision in comparison with other methods currently available. A two-dimensional reproducing kernel function in space is constructed and applied in computing the solution of the two-dimensional cardiac tissue model by means of the difference method through time and the reproducing kernel method through space. Compared with other methods, this method holds several advantages, such as high accuracy in computing solutions, insensitivity to different time steps and a slow propagation speed of error. It is suitable for disorderly scattered node systems without meshing, and can arbitrarily change the location and density of the solution on different time layers. The reproducing kernel method has higher solution accuracy and stability in the solutions of the two-dimensional cardiac tissue model.

  19. Reproducibility of graph metrics of human brain functional networks.

    Science.gov (United States)

    Deuker, Lorena; Bullmore, Edward T; Smith, Marie; Christensen, Soren; Nathan, Pradeep J; Rockstroh, Brigitte; Bassett, Danielle S

    2009-10-01

    Graph theory provides many metrics of complex network organization that can be applied to analysis of brain networks derived from neuroimaging data. Here we investigated the test-retest reliability of graph metrics of functional networks derived from magnetoencephalography (MEG) data recorded in two sessions from 16 healthy volunteers who were studied at rest and during performance of the n-back working memory task in each session. For each subject's data at each session, we used a wavelet filter to estimate the mutual information (MI) between each pair of MEG sensors in each of the classical frequency intervals from gamma to low delta in the overall range 1-60 Hz. Undirected binary graphs were generated by thresholding the MI matrix and 8 global network metrics were estimated: the clustering coefficient, path length, small-worldness, efficiency, cost-efficiency, assortativity, hierarchy, and synchronizability. Reliability of each graph metric was assessed using the intraclass correlation (ICC). Good reliability was demonstrated for most metrics applied to the n-back data (mean ICC=0.62). Reliability was greater for metrics in lower frequency networks. Higher frequency gamma- and beta-band networks were less reliable at a global level but demonstrated high reliability of nodal metrics in frontal and parietal regions. Performance of the n-back task was associated with greater reliability than measurements on resting state data. Task practice was also associated with greater reliability. Collectively these results suggest that graph metrics are sufficiently reliable to be considered for future longitudinal studies of functional brain network changes.

  20. Wave functions, evolution equations and evolution kernels form light-ray operators of QCD

    International Nuclear Information System (INIS)

    Mueller, D.; Robaschik, D.; Geyer, B.; Dittes, F.M.; Horejsi, J.

    1994-01-01

    The widely used nonperturbative wave functions and distribution functions of QCD are determined as matrix elements of light-ray operators. These operators appear as the large-momentum limit of non-local hadron operators or as summed-up local operators in light-cone expansions. Nonforward one-particle matrix elements of such operators lead to new distribution amplitudes describing both hadrons simultaneously. These distribution functions depend, besides other variables, on two scaling variables. They are applied to the description of exclusive virtual Compton scattering in the Bjorken region near the forward direction and to the two-meson production process. The evolution equations for these distribution amplitudes are derived on the basis of the renormalization group equation of the considered operators. This implies that the evolution kernels also follow from the anomalous dimensions of these operators. Relations between different evolution kernels (especially the Altarelli-Parisi and the Brodsky-Lepage kernels) are derived and explicitly checked against the existing two-loop calculations of QCD. The technical basis of these results are the support and analyticity properties of the anomalous dimensions of light-ray operators obtained with the help of the α-representation of Green's functions. (orig.)

  1. Compilation of functional languages using flow graph analysis

    NARCIS (Netherlands)

    Hartel, Pieter H.; Glaser, Hugh; Wild, John M.

    A system based on the notion of a flow graph is used to specify formally and to implement a compiler for a lazy functional language. The compiler takes a simple functional language as input and generates C. The generated C program can then be compiled, and loaded with an extensive run-time system to

  2. A tree-decomposed transfer matrix for computing exact Potts model partition functions for arbitrary graphs, with applications to planar graph colourings

    International Nuclear Information System (INIS)

    Bedini, Andrea; Jacobsen, Jesper Lykke

    2010-01-01

    Combining tree decomposition and transfer matrix techniques provides a very general algorithm for computing exact partition functions of statistical models defined on arbitrary graphs. The algorithm is particularly efficient in the case of planar graphs. We illustrate it by computing the Potts model partition functions and chromatic polynomials (the number of proper vertex colourings using Q colours) for large samples of random planar graphs with up to N = 100 vertices. In the latter case, our algorithm yields a sub-exponential average running time of ∼ exp(1.516√N), a substantial improvement over the exponential running time ∼ exp(0.245N) provided by the hitherto best-known algorithm. We study the statistics of chromatic roots of random planar graphs in some detail, comparing the findings with results for finite pieces of a regular lattice.

  3. Fracture and Fragmentation of Simplicial Finite Elements Meshes using Graphs

    Energy Technology Data Exchange (ETDEWEB)

    Mota, A; Knap, J; Ortiz, M

    2006-10-18

    An approach for the topological representation of simplicial finite element meshes as graphs is presented. It is shown that by using a graph, the topological changes induced by fracture reduce to a few, local kernel operations. The performance of the graph representation is demonstrated and analyzed, using as reference the 3D fracture algorithm by Pandolfi and Ortiz [22]. It is shown that the graph representation initializes in O(N_E^1.1) time and fractures in O(N_I^1.0) time, while the reference implementation requires O(N_E^2.1) time to initialize and O(N_I^1.9) time to fracture, where N_E is the number of elements in the mesh and N_I is the number of interfaces to fracture.

  4. Classification With Truncated Distance Kernel.

    Science.gov (United States)

    Huang, Xiaolin; Suykens, Johan A K; Wang, Shuning; Hornegger, Joachim; Maier, Andreas

    2018-05-01

    This brief proposes a truncated distance (TL1) kernel, which results in a classifier that is nonlinear in the global region but is linear in each subregion. With this kernel, the subregion structure can be trained using all the training data and local linear classifiers can be established simultaneously. The TL1 kernel has good adaptiveness to nonlinearity and is suitable for problems which require different nonlinearities in different areas. Though the TL1 kernel is not positive semidefinite, some classical kernel learning methods are still applicable, which means that the TL1 kernel can be directly used in standard toolboxes by replacing the kernel evaluation. In numerical experiments, the TL1 kernel with a pregiven parameter achieves similar or better performance than the radial basis function kernel with the parameter tuned by cross validation, implying that the TL1 kernel is a promising nonlinear kernel for classification tasks.
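
    Assuming the truncated distance kernel has the form k(x, y) = max(rho - ||x - y||_1, 0) (the reading taken here; both this form and the choice rho = 0.7*d should be treated as assumptions), it can be plugged into a standard SVM through a precomputed Gram matrix even though positive semidefiniteness is not guaranteed:

        import numpy as np
        from sklearn.datasets import make_moons
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC

        def tl1_kernel(A, B, rho):
            """Truncated l1-distance kernel: k(x, y) = max(rho - ||x - y||_1, 0)."""
            d = np.abs(A[:, None, :] - B[None, :, :]).sum(-1)
            return np.maximum(rho - d, 0.0)

        X, y = make_moons(n_samples=300, noise=0.2, random_state=0)
        Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

        rho = 0.7 * X.shape[1]                       # pregiven parameter tied to the dimension
        clf = SVC(kernel="precomputed", C=10.0)
        clf.fit(tl1_kernel(Xtr, Xtr, rho), ytr)      # Gram matrix between training points
        print(clf.score(tl1_kernel(Xte, Xtr, rho), yte))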

  5. Graph Theory Roots of Spatial Operators for Kinematics and Dynamics

    Science.gov (United States)

    Jain, Abhinandan

    2011-01-01

    Spatial operators have been used to analyze the dynamics of robotic multibody systems and to develop novel computational dynamics algorithms. Mass matrix factorization, inversion, diagonalization, and linearization are among several new insights obtained using such operators. While initially developed for serial rigid body manipulators, the spatial operators and the related mathematical analysis have been shown to extend very broadly, including to tree and closed topology systems and to systems with flexible joints, links, etc. This work uses concepts from graph theory to explore the mathematical foundations of spatial operators. The goal is to study and characterize the properties of the spatial operators at an abstract level so that they can be applied to a broader range of dynamics problems. The rich mathematical properties of the kinematics and dynamics of robotic multibody systems have been an area of strong research interest for several decades. These properties are important to understand the inherent physical behavior of systems, for stability and control analysis, for the development of computational algorithms, and for the development of faithful models. Recurring patterns in spatial operators lead one to ask the more abstract question about the properties and characteristics of spatial operators that make them so broadly applicable. The idea is to step back from the specific application systems, and understand more deeply the generic requirements and properties of spatial operators, so that the insights and techniques are readily available across different kinematics and dynamics problems. In this work, techniques from graph theory were used to explore the abstract basis for the spatial operators. The close relationship between the mathematical properties of adjacency matrices for graphs and those of spatial operators and their kernels was established. The connections hold across very basic requirements on the system topology, the nature of the component

  6. Scatter kernel estimation with an edge-spread function method for cone-beam computed tomography imaging

    International Nuclear Information System (INIS)

    Li Heng; Mohan, Radhe; Zhu, X Ronald

    2008-01-01

    The clinical applications of kilovoltage x-ray cone-beam computed tomography (CBCT) have been compromised by the limited quality of CBCT images, which typically is due to a substantial scatter component in the projection data. In this paper, we describe an experimental method of deriving the scatter kernel of a CBCT imaging system. The estimated scatter kernel can be used to remove the scatter component from the CBCT projection images, thus improving the quality of the reconstructed image. The scattered radiation was approximated as depth-dependent, pencil-beam kernels, which were derived using an edge-spread function (ESF) method. The ESF geometry was achieved with a half-beam block created by a 3 mm thick lead sheet placed on a stack of slab solid-water phantoms. Measurements for ten water-equivalent thicknesses (WET) ranging from 0 cm to 41 cm were taken with (half-blocked) and without (unblocked) the lead sheet, and corresponding pencil-beam scatter kernels or point-spread functions (PSFs) were then derived without assuming any empirical trial function. The derived scatter kernels were verified with phantom studies. Scatter correction was then incorporated into the reconstruction process to improve image quality. For a 32 cm diameter cylinder phantom, the flatness of the reconstructed image was improved from 22% to 5%. When the method was applied to CBCT images for patients undergoing image-guided therapy of the pelvis and lung, the variation in selected regions of interest (ROIs) was reduced from >300 HU to <100 HU. We conclude that the scatter reduction technique utilizing the scatter kernel effectively suppresses the artifact caused by scatter in CBCT.

  8. Range-separated time-dependent density-functional theory with a frequency-dependent second-order Bethe-Salpeter correlation kernel

    Energy Technology Data Exchange (ETDEWEB)

    Rebolini, Elisa, E-mail: elisa.rebolini@kjemi.uio.no; Toulouse, Julien, E-mail: julien.toulouse@upmc.fr [Laboratoire de Chimie Théorique, Sorbonne Universités, UPMC Univ Paris 06, CNRS, 4 place Jussieu, F-75005 Paris (France)

    2016-03-07

    We present a range-separated linear-response time-dependent density-functional theory (TDDFT) which combines a density-functional approximation for the short-range response kernel and a frequency-dependent second-order Bethe-Salpeter approximation for the long-range response kernel. This approach goes beyond the adiabatic approximation usually used in linear-response TDDFT and aims at improving the accuracy of calculations of electronic excitation energies of molecular systems. A detailed derivation of the frequency-dependent second-order Bethe-Salpeter correlation kernel is given using many-body Green-function theory. Preliminary tests of this range-separated TDDFT method are presented for the calculation of excitation energies of the He and Be atoms and small molecules (H₂, N₂, CO₂, H₂CO, and C₂H₄). The results suggest that the addition of the long-range second-order Bethe-Salpeter correlation kernel overall slightly improves the excitation energies.

  9. Function spaces with uniform, fine and graph topologies

    CERN Document Server

    McCoy, Robert A; Jindal, Varun

    2018-01-01

    This book presents a comprehensive account of the theory of spaces of continuous functions under uniform, fine and graph topologies. Besides giving full details of known results, an attempt is made to give generalizations wherever possible, enriching the existing literature. The goal of this monograph is to provide an extensive study of the uniform, fine and graph topologies on the space C(X,Y) of all continuous functions from a Tychonoff space X to a metric space (Y,d); and the uniform and fine topologies on the space H(X) of all self-homeomorphisms on a metric space (X,d). The subject matter of this monograph is significant from the theoretical viewpoint, but also has applications in areas such as analysis, approximation theory and differential topology. Written in an accessible style, this book will be of interest to researchers as well as graduate students in this vibrant research area.

  10. Kernel methods for deep learning

    OpenAIRE

    Cho, Youngmin

    2012-01-01

    We introduce a new family of positive-definite kernels that mimic the computation in large neural networks. We derive the different members of this family by considering neural networks with different activation functions. Using these kernels as building blocks, we also show how to construct other positive-definite kernels by operations such as composition, multiplication, and averaging. We explore the use of these kernels in standard models of supervised learning, such as support vector mach...

  11. Using Cochran's Z Statistic to Test the Kernel-Smoothed Item Response Function Differences between Focal and Reference Groups

    Science.gov (United States)

    Zheng, Yinggan; Gierl, Mark J.; Cui, Ying

    2010-01-01

    This study combined the kernel smoothing procedure and a nonparametric differential item functioning statistic--Cochran's Z--to statistically test the difference between the kernel-smoothed item response functions for reference and focal groups. Simulation studies were conducted to investigate the Type I error and power of the proposed…

  12. Discrimination Power of Polynomial-Based Descriptors for Graphs by Using Functional Matrices.

    Science.gov (United States)

    Dehmer, Matthias; Emmert-Streib, Frank; Shi, Yongtang; Stefu, Monica; Tripathi, Shailesh

    2015-01-01

    In this paper, we study the discrimination power of graph measures that are based on graph-theoretical matrices. The paper generalizes the work of [M. Dehmer, M. Moosbrugger, Y. Shi, Encoding structural information uniquely with polynomial-based descriptors by employing the Randić matrix, Applied Mathematics and Computation, 268 (2015), 164-168]. We demonstrate that by using the new functional matrix approach, exhaustively generated graphs can be discriminated more uniquely than shown in the mentioned previous work.

  13. An SVM model with hybrid kernels for hydrological time series

    Science.gov (United States)

    Wang, C.; Wang, H.; Zhao, X.; Xie, Q.

    2017-12-01

    Support Vector Machine (SVM) models have been widely applied to the forecast of climate/weather and its impact on other environmental variables such as hydrologic response to climate/weather. When using SVM, the choice of the kernel function plays the key role. Conventional SVM models mostly use one single type of kernel function, e.g., radial basis kernel function. Provided that there are several featured kernel functions available, each having its own advantages and drawbacks, a combination of these kernel functions may give more flexibility and robustness to SVM approach, making it suitable for a wide range of application scenarios. This paper presents such a linear combination of radial basis kernel and polynomial kernel for the forecast of monthly flowrate in two gaging stations using SVM approach. The results indicate significant improvement in the accuracy of predicted series compared to the approach with either individual kernel function, thus demonstrating the feasibility and advantages of such hybrid kernel approach for SVM applications.
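
    A sketch of such a hybrid kernel as a convex combination of an RBF and a polynomial kernel, passed to scikit-learn's SVR as a callable; the mixing weight, kernel parameters and synthetic stand-in predictors are assumptions rather than the configuration used in the study.

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVR

        def hybrid_kernel(w=0.6, gamma=0.5, degree=2, coef0=1.0):
            """Convex combination of an RBF and a polynomial kernel (still a valid kernel)."""
            def k(A, B):
                sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
                return w * np.exp(-gamma * sq) + (1.0 - w) * (A @ B.T + coef0) ** degree
            return k

        # stand-in for lagged predictors of monthly flow (e.g. previous flows, rainfall)
        rng = np.random.default_rng(0)
        X = rng.normal(size=(180, 4))
        y = np.sin(X[:, 0]) + 0.3 * X[:, 1] * X[:, 2] + 0.1 * rng.normal(size=180)

        model = SVR(kernel=hybrid_kernel(w=0.6), C=10.0, epsilon=0.05)
        print(cross_val_score(model, X, y, cv=5, scoring="r2").mean())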

  14. Discriminating between brain rest and attention states using fMRI connectivity graphs and subtree SVM

    Science.gov (United States)

    Mokhtari, Fatemeh; Bakhtiari, Shahab K.; Hossein-Zadeh, Gholam Ali; Soltanian-Zadeh, Hamid

    2012-02-01

    Decoding techniques have opened new windows to explore the brain function and information encoding in brain activity. In the current study, we design a recursive support vector machine which is enriched by a subtree graph kernel. We apply the classifier to discriminate between attentional cueing task and resting state from a block design fMRI dataset. The classifier is trained using weighted fMRI graphs constructed from activated regions during the two mentioned states. The proposed method leads to classification accuracy of 1. It is also able to elicit discriminative regions and connectivities between the two states using a backward edge elimination algorithm. This algorithm shows the importance of regions including cerebellum, insula, left middle superior frontal gyrus, post cingulate cortex, and connectivities between them to enhance the correct classification rate.

  15. Transduction on Directed Graphs via Absorbing Random Walks.

    Science.gov (United States)

    De, Jaydeep; Zhang, Xiaowei; Lin, Feng; Cheng, Li

    2017-08-11

    In this paper we consider the problem of graph-based transductive classification, and we are particularly interested in the directed graph scenario, which is a natural form for many real world applications. Different from existing research efforts that either only deal with undirected graphs or circumvent directionality by means of symmetrization, we propose a novel random walk approach on directed graphs using absorbing Markov chains, which can be regarded as maximizing the accumulated expected number of visits from the unlabeled transient states. Our algorithm is simple, easy to implement, and works with large-scale graphs on binary, multiclass, and multi-label prediction problems. Moreover, it is capable of preserving the graph structure even when the input graph is sparse and changes over time, as well as retaining weak signals presented in the directed edges. We present its intimate connections to a number of existing methods, including graph kernels, graph Laplacian based methods, and, interestingly, spanning forest of graphs. Its computational complexity and the generalization error are also studied. Empirically our algorithm is systematically evaluated on a wide range of applications, where it has been shown to perform competitively compared with a suite of state-of-the-art methods. In particular, our algorithm is shown to work exceptionally well with large sparse directed graphs with, e.g., millions of nodes and tens of millions of edges, where it significantly outperforms other state-of-the-art methods. In the dynamic graph setting involving insertion or deletion of nodes and edge-weight changes over time, it also allows efficient online updates that produce the same results as the batch update counterparts.
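
    A minimal version of the core idea, treating labeled nodes as absorbing states and classifying each unlabeled node by where its random walk is absorbed, can be written with one linear solve; the paper's refinements for sparse and time-varying graphs are not reflected in this sketch, and the toy graph is an assumption.

        import numpy as np

        def absorbing_rw_transduction(W, labels):
            """Label unlabeled nodes of a directed graph via absorption probabilities.

            W      : nonnegative weighted adjacency matrix (rows = outgoing edges)
            labels : class id for labeled nodes, -1 for unlabeled nodes
            """
            P = W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)   # row-stochastic
            U = np.where(labels < 0)[0]                               # transient (unlabeled)
            L = np.where(labels >= 0)[0]                              # absorbing (labeled)
            Q, R = P[np.ix_(U, U)], P[np.ix_(U, L)]
            B = np.linalg.solve(np.eye(len(U)) - Q, R)                # absorption probabilities
            classes = np.unique(labels[L])
            scores = np.stack([B[:, labels[L] == c].sum(axis=1) for c in classes], axis=1)
            out = labels.copy()
            out[U] = classes[scores.argmax(axis=1)]
            return out

        # tiny chain graph: nodes 0 and 4 are labeled (classes 0 and 1), the rest unlabeled
        W = np.array([[0, 1, 0, 0, 0],
                      [1, 0, 1, 0, 0],
                      [0, 1, 0, 1, 0],
                      [0, 0, 1, 0, 1],
                      [0, 0, 0, 1, 0]], float)
        print(absorbing_rw_transduction(W, np.array([0, -1, -1, -1, 1])))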

  16. Viscosity kernel of molecular fluids

    DEFF Research Database (Denmark)

    Puscasu, Ruslan; Todd, Billy; Daivis, Peter

    2010-01-01

    , temperature, and chain length dependencies of the reciprocal and real-space viscosity kernels are presented. We find that the density has a major effect on the shape of the kernel. The temperature range and chain lengths considered here have by contrast less impact on the overall normalized shape. Functional...... forms that fit the wave-vector-dependent kernel data over a large density and wave-vector range have also been tested. Finally, a structural normalization of the kernels in physical space is considered. Overall, the real-space viscosity kernel has a width of roughly 3–6 atomic diameters, which means...

  17. Robust Kernel (Cross-) Covariance Operators in Reproducing Kernel Hilbert Space toward Kernel Methods

    OpenAIRE

    Alam, Md. Ashad; Fukumizu, Kenji; Wang, Yu-Ping

    2016-01-01

    To the best of our knowledge, there are no general well-founded robust methods for statistical unsupervised learning. Most of the unsupervised methods explicitly or implicitly depend on the kernel covariance operator (kernel CO) or kernel cross-covariance operator (kernel CCO). They are sensitive to contaminated data, even when using bounded positive definite kernels. First, we propose robust kernel covariance operator (robust kernel CO) and robust kernel crosscovariance operator (robust kern...

  18. MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions

    Science.gov (United States)

    Novosad, Philip; Reader, Andrew J.

    2016-06-01

    Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [18F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. Furthermore, we demonstrate that a joint spectral/kernel

  20. Graph-based network analysis of resting-state functional MRI.

    Science.gov (United States)

    Wang, Jinhui; Zuo, Xinian; He, Yong

    2010-01-01

    In the past decade, resting-state functional MRI (R-fMRI) measures of brain activity have attracted considerable attention. Based on changes in the blood oxygen level-dependent signal, R-fMRI offers a novel way to assess the brain's spontaneous or intrinsic (i.e., task-free) activity with both high spatial and temporal resolutions. The properties of both the intra- and inter-regional connectivity of resting-state brain activity have been well documented, promoting our understanding of the brain as a complex network. Specifically, the topological organization of brain networks has been recently studied with graph theory. In this review, we will summarize the recent advances in graph-based brain network analyses of R-fMRI signals, both in typical and atypical populations. Application of these approaches to R-fMRI data has demonstrated non-trivial topological properties of functional networks in the human brain. Among these is the knowledge that the brain's intrinsic activity is organized as a small-world, highly efficient network, with significant modularity and highly connected hub regions. These network properties have also been found to change throughout normal development, aging, and in various pathological conditions. The literature reviewed here suggests that graph-based network analyses are capable of uncovering system-level changes associated with different processes in the resting brain, which could provide novel insights into the understanding of the underlying physiological mechanisms of brain function. We also highlight several potential research topics in the future.

  1. Mechanical system reliability analysis using a combination of graph theory and Boolean function

    International Nuclear Information System (INIS)

    Tang, J.

    2001-01-01

    A new method based on graph theory and Boolean functions for assessing the reliability of mechanical systems is proposed. The procedure for this approach consists of two parts. By using graph theory, the formula for the reliability of a mechanical system that considers the interrelations of subsystems or components is generated. By using the Boolean function to examine the failure interactions of two particular elements of the system, and demonstrating how to incorporate such failure dependencies into the analysis of larger systems, a constructive algorithm for quantifying the genuine interconnections between the subsystems or components is provided. The combination of graph theory and the Boolean function provides an effective way to evaluate the reliability of a large, complex mechanical system. A numerical example demonstrates that this method is an effective approach to system reliability analysis.

  2. A kernel version of spatial factor analysis

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2009-01-01

    Schölkopf et al. introduce kernel PCA. Shawe-Taylor and Cristianini is an excellent reference for kernel methods in general. Bishop and Press et al. describe kernel methods among many other subjects. Nielsen and Canty use kernel PCA to detect change in univariate airborne digital camera images. The kernel version of PCA handles nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. In this paper we shall apply kernel versions of PCA, maximum autocorrelation factor (MAF) analysis...
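
    A minimal numpy sketch of the kernel PCA step referred to in this record (Gaussian kernel; parameter values are arbitrary illustrations):

      import numpy as np

      def kernel_pca(X, n_components=2, sigma=1.0):
          # Gaussian Gram matrix, double centring in feature space, leading eigenvectors
          n = X.shape[0]
          d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
          K = np.exp(-d2 / (2.0 * sigma ** 2))
          J = np.eye(n) - np.ones((n, n)) / n
          Kc = J @ K @ J
          vals, vecs = np.linalg.eigh(Kc)
          order = np.argsort(vals)[::-1][:n_components]
          vals, vecs = vals[order], vecs[:, order]
          return vecs * np.sqrt(np.maximum(vals, 0.0))       # scores of the training samples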

  3. Notes on the gamma kernel

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole E.

    The density function of the gamma distribution is used as shift kernel in Brownian semistationary processes modelling the timewise behaviour of the velocity in turbulent regimes. This report presents exact and asymptotic properties of the second order structure function under such a model, and relates these to results of von Karmann and Horwath. But first it is shown that the gamma kernel is interpretable as a Green’s function...

  4. Graph 500 on OpenSHMEM: Using a Practical Survey of Past Work to Motivate Novel Algorithmic Developments

    Energy Technology Data Exchange (ETDEWEB)

    Grossman, Max [Rice Univ., Houston, TX (United States); Pritchard Jr., Howard Porter [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Budimlic, Zoran [Rice Univ., Houston, TX (United States); Sarkar, Vivek [Rice Univ., Houston, TX (United States)

    2016-12-22

    Graph500 [14] is an effort to offer a standardized benchmark across large-scale distributed platforms which captures the behavior of common communication-bound graph algorithms. Graph500 differs from other large-scale benchmarking efforts (such as HPL [6] or HPGMG [7]) primarily in the irregularity of its computation and data access patterns. The core computational kernel of Graph500 is a breadth-first search (BFS) implemented on an undirected graph. The output of Graph500 is a spanning tree of the input graph, usually represented by a predecessor mapping for every node in the graph. The Graph500 benchmark defines several input sizes for implementers to test against. This report summarizes an investigation into implementing the Graph500 benchmark on OpenSHMEM, and focuses on first building a strong and practical understanding of the strengths and limitations of past work before proposing and developing novel extensions.
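
    In a single-process setting the BFS kernel reduces to computing a predecessor map from a chosen root; the plain-Python sketch below (adjacency list input, no OpenSHMEM or distribution) shows only the shape of that computation:

      from collections import deque

      def bfs_predecessor_map(adj, root):
          # adj: dict {vertex: iterable of neighbours} for an undirected graph
          pred = {root: root}
          frontier = deque([root])
          while frontier:
              v = frontier.popleft()
              for w in adj.get(v, ()):
                  if w not in pred:
                      pred[w] = v            # v becomes w's parent in the BFS spanning tree
                      frontier.append(w)
          return pred

      # example: bfs_predecessor_map({0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}, root=0)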

  5. Improved modeling of clinical data with kernel methods.

    Science.gov (United States)

    Daemen, Anneleen; Timmerman, Dirk; Van den Bosch, Thierry; Bottomley, Cecilia; Kirk, Emma; Van Holsbeke, Caroline; Valentin, Lil; Bourne, Tom; De Moor, Bart

    2012-02-01

    Despite the rise of high-throughput technologies, clinical data such as age, gender and medical history guide clinical management for most diseases and examinations. To improve clinical management, available patient information should be fully exploited. This requires appropriate modeling of relevant parameters. When kernel methods are used, traditional kernel functions such as the linear kernel are often applied to the set of clinical parameters. These kernel functions, however, have their disadvantages due to the specific characteristics of clinical data, being a mix of variable types, each with its own range. We propose a new kernel function specifically adapted to the characteristics of clinical data. The clinical kernel function provides a better representation of patients' similarity by equalizing the influence of all variables and taking into account the range r of the variables. Moreover, it is robust with respect to changes in r. Incorporated in a least squares support vector machine, the new kernel function results in significantly improved diagnosis, prognosis and prediction of therapy response. This is illustrated on four clinical data sets within gynecology, with an average increase in test area under the ROC curve (AUC) of 0.023, 0.021, 0.122 and 0.019, respectively. Moreover, when combining clinical parameters and expression data in three case studies on breast cancer, results improved overall with use of the new kernel function and when considering both data types in a weighted fashion, with a larger weight assigned to the clinical parameters. The increase in AUC with respect to a standard kernel function and/or unweighted data combination was at most 0.127, 0.042 and 0.118 for the three case studies. For clinical data consisting of variables of different types, the proposed kernel function--which takes into account the type and range of each variable--has been shown to be a better alternative for linear and non-linear classification problems.
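
    One common reading of such a range-aware clinical kernel is a per-variable similarity scaled by that variable's observed range and averaged over variables; the sketch below follows that reading, and its inputs (ranges, is_nominal) are illustrative rather than the paper's interface:

      import numpy as np

      def clinical_kernel(x, z, ranges, is_nominal):
          # x, z: two patients' clinical variables; ranges: observed range r per variable;
          # is_nominal: True for categorical variables, False for continuous/ordinal ones
          sims = []
          for xi, zi, r, nominal in zip(x, z, ranges, is_nominal):
              if nominal:
                  sims.append(1.0 if xi == zi else 0.0)
              else:
                  sims.append((r - abs(xi - zi)) / r)
          return float(np.mean(sims))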

  6. Model selection in kernel ridge regression

    DEFF Research Database (Denmark)

    Exterkate, Peter

    2013-01-01

    Kernel ridge regression is a technique to perform ridge regression with a potentially infinite number of nonlinear transformations of the independent variables as regressors. This method is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts. The influence of the choice of kernel and the setting of tuning parameters on forecast accuracy is investigated. Several popular kernels are reviewed, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. The latter two kernels are interpreted in terms of their smoothing properties, and the tuning parameters associated to all these kernels are related to smoothness measures of the prediction function and to the signal-to-noise ratio. Based on these interpretations, guidelines are provided for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study...
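
    Small-grid cross-validation of the kind recommended here can be sketched with scikit-learn (grid values and data are placeholders, not those of the paper):

      import numpy as np
      from sklearn.kernel_ridge import KernelRidge
      from sklearn.model_selection import GridSearchCV

      X, y = np.random.randn(200, 5), np.random.randn(200)
      param_grid = {"alpha": [1e-3, 1e-2, 1e-1, 1.0],        # ridge penalty
                    "gamma": [1e-2, 1e-1, 1.0]}              # Gaussian kernel width
      search = GridSearchCV(KernelRidge(kernel="rbf"), param_grid, cv=5)
      search.fit(X, y)
      print(search.best_params_)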

  7. The H0 function, a new index for detecting structural/topological complexity information in undirected graphs

    Science.gov (United States)

    Buscema, Massimo; Asadi-Zeydabadi, Masoud; Lodwick, Weldon; Breda, Marco

    2016-04-01

    Significant applications such as the analysis of Alzheimer's disease differentiated from dementia, or in data mining of social media, or in extracting information on drug cartel structural composition, are often modeled as graphs. The structural or topological complexity, or lack of it, in a graph is quite often useful in understanding and, more importantly, resolving the problem. We propose a new index, which we call the H0 function, to measure the structural/topological complexity of a graph. To do this, we introduce the concept of graph pruning and its associated algorithm that is used in the development of our measure. We illustrate the behavior of our measure, the H0 function, through different examples found in the appendix. These examples indicate that the H0 function captures useful and important characteristics of a graph. Here, we restrict ourselves to undirected graphs.

  8. Dimensionality Reduction of Hyperspectral Image with Graph-Based Discriminant Analysis Considering Spectral Similarity

    Directory of Open Access Journals (Sweden)

    Fubiao Feng

    2017-03-01

    Full Text Available Recently, graph embedding has drawn great attention for dimensionality reduction in hyperspectral imagery. For example, locality preserving projection (LPP) utilizes typical Euclidean distance in a heat kernel to create an affinity matrix and projects the high-dimensional data into a lower-dimensional space. However, the Euclidean distance is not sufficiently correlated with intrinsic spectral variation of a material, which may result in inappropriate graph representation. In this work, a graph-based discriminant analysis with spectral similarity measurement (denoted as GDA-SS) is proposed, which fully considers the description of curve changes among spectral bands. Experimental results based on real hyperspectral images demonstrate that the proposed method is superior to traditional methods, such as supervised LPP, and the state-of-the-art sparse graph-based discriminant analysis (SGDA).

  9. Counting surface-kernel epimorphisms from a co-compact Fuchsian group to a cyclic group with motivations from string theory and QFT

    Directory of Open Access Journals (Sweden)

    Khodakhast Bibak

    2016-09-01

    Full Text Available Graphs embedded into surfaces have many important applications, in particular, in combinatorics, geometry, and physics. For example, ribbon graphs and their counting is of great interest in string theory and quantum field theory (QFT). Recently, Koch et al. (2013) [12] gave a refined formula for counting ribbon graphs and discussed its applications to several physics problems. An important factor in this formula is the number of surface-kernel epimorphisms from a co-compact Fuchsian group to a cyclic group. The aim of this paper is to give an explicit and practical formula for the number of such epimorphisms. As a consequence, we obtain an ‘equivalent’ form of Harvey's famous theorem on the cyclic groups of automorphisms of compact Riemann surfaces. Our main tool is an explicit formula for the number of solutions of restricted linear congruence recently proved by Bibak et al. using properties of Ramanujan sums and of the finite Fourier transform of arithmetic functions.

  10. Counting surface-kernel epimorphisms from a co-compact Fuchsian group to a cyclic group with motivations from string theory and QFT

    Energy Technology Data Exchange (ETDEWEB)

    Bibak, Khodakhast, E-mail: kbibak@uvic.ca; Kapron, Bruce M., E-mail: bmkapron@uvic.ca; Srinivasan, Venkatesh, E-mail: srinivas@uvic.ca

    2016-09-15

    Graphs embedded into surfaces have many important applications, in particular, in combinatorics, geometry, and physics. For example, ribbon graphs and their counting is of great interest in string theory and quantum field theory (QFT). Recently, Koch et al. (2013) gave a refined formula for counting ribbon graphs and discussed its applications to several physics problems. An important factor in this formula is the number of surface-kernel epimorphisms from a co-compact Fuchsian group to a cyclic group. The aim of this paper is to give an explicit and practical formula for the number of such epimorphisms. As a consequence, we obtain an ‘equivalent’ form of Harvey's famous theorem on the cyclic groups of automorphisms of compact Riemann surfaces. Our main tool is an explicit formula for the number of solutions of restricted linear congruence recently proved by Bibak et al. using properties of Ramanujan sums and of the finite Fourier transform of arithmetic functions.

  11. Sequential Optimization of Paths in Directed Graphs Relative to Different Cost Functions

    KAUST Repository

    Abubeker, Jewahir Ali

    2011-05-14

    This paper is devoted to the consideration of an algorithm for sequential optimization of paths in directed graphs relative to different cost functions. The considered algorithm is based on an extension of dynamic programming which allows us to represent the initial set of paths and the set of optimal paths after each application of the optimization procedure in the form of a directed acyclic graph.

  12. Model selection for Gaussian kernel PCA denoising

    DEFF Research Database (Denmark)

    Jørgensen, Kasper Winther; Hansen, Lars Kai

    2012-01-01

    We propose kernel Parallel Analysis (kPA) for automatic kernel scale and model order selection in Gaussian kernel PCA. Parallel Analysis [1] is based on a permutation test for covariance and has previously been applied for model order selection in linear PCA; we here augment the procedure to also tune the Gaussian kernel scale of radial basis function based kernel PCA. We evaluate kPA for denoising of simulated data and the US Postal data set of handwritten digits. We find that kPA outperforms other heuristics to choose the model order and kernel scale in terms of signal-to-noise ratio (SNR)...

  13. The definition of kernel Oz

    OpenAIRE

    Smolka, Gert

    1994-01-01

    Oz is a concurrent language providing for functional, object-oriented, and constraint programming. This paper defines Kernel Oz, a semantically complete sublanguage of Oz. It was an important design requirement that Oz be definable by reduction to a lean kernel language. The definition of Kernel Oz introduces three essential abstractions: the Oz universe, the Oz calculus, and the actor model. The Oz universe is a first-order structure defining the values and constraints Oz computes with. The ...

  14. Quantum vacuum energy in graphs and billiards

    International Nuclear Information System (INIS)

    Kaplan, L.

    2010-01-01

    The vacuum (Casimir) energy in quantum field theory is a problem relevant both to new nanotechnology devices and to dark energy in cosmology. The crucial question is the dependence of the energy on the system geometry. Despite much progress since the first prediction of the Casimir effect in 1948 and its subsequent experimental verification in simple geometries, even the sign of the force in nontrivial situations is still a matter of controversy. Mathematically, vacuum energy fits squarely into the spectral theory of second-order self-adjoint elliptic linear differential operators. Specifically one promising approach is based on the small-t asymptotics of the cylinder kernel e^{-t√H}, where H is the self-adjoint operator under study. In contrast with the well-studied heat kernel e^{-tH}, the cylinder kernel depends in a non-local way on the geometry of the problem. We discuss some results by the Louisiana-Oklahoma-Texas collaboration on vacuum energy in model systems, including quantum graphs and two-dimensional cavities. The results may shed light on general questions, including the relationship between vacuum energy and periodic or closed classical orbits, and the contribution to vacuum energy of boundaries, edges, and corners.

  15. Pressure Prediction of Coal Slurry Transportation Pipeline Based on Particle Swarm Optimization Kernel Function Extreme Learning Machine

    Directory of Open Access Journals (Sweden)

    Xue-cun Yang

    2015-01-01

    Full Text Available For the coal slurry pipeline blockage prediction problem, through the analysis of the actual scene, it is determined that the pressure prediction from each measuring point is the premise of pipeline blockage prediction. The kernel function of the support vector machine is introduced into the extreme learning machine, the parameters are optimized by the particle swarm algorithm, and a blockage prediction method based on particle swarm optimization kernel function extreme learning machine (PSOKELM) is put forward. The actual test data from HuangLing coal gangue power plant are used for simulation experiments and compared with the support vector machine prediction model optimized by the particle swarm algorithm (PSOSVM) and the kernel function extreme learning machine prediction model (KELM). The results prove that the mean square error (MSE) for the prediction model based on PSOKELM is 0.0038 and the correlation coefficient is 0.9955, which is superior to the prediction model based on PSOSVM in speed and accuracy and superior to the KELM prediction model in accuracy.
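
    The kernel extreme learning machine component has a compact closed-form solution for its output weights; the sketch below shows that component only, with an assumed RBF kernel, and omits the particle swarm tuning of C and gamma:

      import numpy as np

      def rbf(A, B, gamma=0.1):
          d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
          return np.exp(-gamma * d2)

      def kelm_fit_predict(X_train, y_train, X_test, C=100.0, gamma=0.1):
          # closed-form output weights: beta = (Omega + I/C)^(-1) y
          Omega = rbf(X_train, X_train, gamma)
          beta = np.linalg.solve(Omega + np.eye(len(X_train)) / C, y_train)
          return rbf(X_test, X_train, gamma) @ beta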

  16. Kernel parameter dependence in spatial factor analysis

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2010-01-01

    kernel PCA. Shawe-Taylor and Cristianini [4] is an excellent reference for kernel methods in general. Bishop [5] and Press et al. [6] describe kernel methods among many other subjects. The kernel version of PCA handles nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. In this paper we shall apply a kernel version of maximum autocorrelation factor (MAF) [7, 8] analysis to irregularly sampled stream sediment geochemistry data from South Greenland and illustrate the dependence of the results on the kernel width. The 2,097 samples, each covering on average 5 km2, are analyzed chemically for the content of 41 elements.

  17. From protein interactions to functional annotation: graph alignment in Herpes

    Czech Academy of Sciences Publication Activity Database

    Kolář, Michal; Lassig, M.; Berg, J.

    2008-01-01

    Vol. 2, No. 90 (2008), ISSN 1752-0509. Institutional research plan: CEZ:AV0Z50520514. Keywords: graph alignment; functional annotation; protein orthology. Subject RIV: EB - Genetics; Molecular Biology. Impact factor: 3.706, year: 2008

  18. Improved microarray-based decision support with graph encoded interactome data.

    Directory of Open Access Journals (Sweden)

    Anneleen Daemen

    Full Text Available In the past, microarray studies have been criticized due to noise and the limited overlap between gene signatures. Prior biological knowledge should therefore be incorporated as side information in models based on gene expression data to improve the accuracy of diagnosis and prognosis in cancer. As prior knowledge, we investigated interaction and pathway information from the human interactome on different aspects of biological systems. By exploiting the properties of kernel methods, relations between genes with similar functions but active in alternative pathways could be incorporated in a support vector machine classifier based on spectral graph theory. Using 10 microarray data sets, we first reduced the number of data sources relevant for multiple cancer types and outcomes. Three sources on metabolic pathway information (KEGG), protein-protein interactions (OPHID) and miRNA-gene targeting (microRNA.org) outperformed the other sources with regard to the considered class of models. Both fixed and adaptive approaches were subsequently considered to combine the three corresponding classifiers. Averaging the predictions of these classifiers performed best and was significantly better than the model based on microarray data only. These results were confirmed on 6 validation microarray sets, with a significantly improved performance in 4 of them. Integrating interactome data thus improves classification of cancer outcome for the investigated microarray technologies and cancer types. Moreover, this strategy can be incorporated in any kernel method or non-linear version of a non-kernel method.

  19. Discrete Morse functions for graph configuration spaces

    International Nuclear Information System (INIS)

    Sawicki, A

    2012-01-01

    We present an alternative application of discrete Morse theory for two-particle graph configuration spaces. In contrast to previous constructions, which are based on discrete Morse vector fields, our approach is through Morse functions, which have a nice physical interpretation as two-body potentials constructed from one-body potentials. We also give a brief introduction to discrete Morse theory. Our motivation comes from the problem of quantum statistics for particles on networks, for which generalized versions of anyon statistics can appear. (paper)

  20. Semantic Drift in Espresso-style Bootstrapping: Graph-theoretic Analysis and Evaluation in Word Sense Disambiguation

    Science.gov (United States)

    Komachi, Mamoru; Kudo, Taku; Shimbo, Masashi; Matsumoto, Yuji

    Bootstrapping has a tendency, called semantic drift, to select instances unrelated to the seed instances as the iteration proceeds. We demonstrate that the semantic drift of Espresso-style bootstrapping has the same root as the topic drift of Kleinberg's HITS, using a simplified graph-based reformulation of bootstrapping. We confirm that two graph-based algorithms, the von Neumann kernels and the regularized Laplacian, can reduce the effect of semantic drift in the task of word sense disambiguation (WSD) on the Senseval-3 English Lexical Sample Task. The proposed algorithms achieve superior performance to Espresso and previous graph-based WSD methods, even though the proposed algorithms have fewer parameters and are easy to calibrate.
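
    In one standard formulation (normalisation details may differ from the paper's), both graph kernels can be computed directly from the adjacency matrix:

      import numpy as np

      def regularized_laplacian_kernel(A, beta=0.1):
          # K = (I + beta * L)^(-1), with the combinatorial Laplacian L = D - A
          L = np.diag(A.sum(axis=1)) - A
          return np.linalg.inv(np.eye(len(A)) + beta * L)

      def von_neumann_kernel(A, alpha=0.05):
          # K = A (I - alpha * A)^(-1); requires alpha < 1 / spectral radius of A
          return A @ np.linalg.inv(np.eye(len(A)) - alpha * A)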

  1. A multi-label learning based kernel automatic recommendation method for support vector machine.

    Science.gov (United States)

    Zhang, Xueying; Song, Qinbao

    2015-01-01

    Choosing an appropriate kernel is very important and critical when classifying a new problem with a Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function, but less to kernel selection. Furthermore, most current kernel selection methods focus on seeking the best kernel with the highest classification accuracy via cross-validation; they are time-consuming and ignore the differences among the number of support vectors and the CPU time of SVM with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select those appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on the data characteristics. For each data set, the meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge data base with the multi-label classification method. Finally, the appropriate kernel functions are recommended to a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with the existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance.

  2. GDM: A New Graph-Based Data Model Using Functional Abstraction

    Institute of Scientific and Technical Information of China (English)

    Sankhayan Choudhury; Nabendu Chaki; Swapan Bhattacharya

    2006-01-01

    In this paper, a Graph-based semantic Data Model (GDM) is proposed with the primary objective of bridging the gap between the human perception of an enterprise and the needs of computing infrastructure to organize information in some particular manner for efficient storage and retrieval. The Graph Data Model (GDM) has been proposed as an alternative data model to combine the advantages of the relational model with the positive features of semantic data models. The proposed GDM offers a structural representation to the designer, making it always easy to comprehend the complex relations amongst basic data items. GDM allows an entire database to be viewed as a Graph (V, E) in a layered organization. Here, a graph is created in a bottom-up fashion where V represents the basic instances of data or a functionally abstracted module, called primary semantic group (PSG) and secondary semantic group (SSG). An edge in the model implies the relationship among the secondary semantic groups. The contents of the lowest layer are the semantically grouped data values in the form of primary semantic groups. The SSGs are nothing but higher-level abstractions and are created by the method of encapsulation of various PSGs, SSGs and basic data elements. This encapsulation methodology to provide a higher-level abstraction continues generating various secondary semantic groups until the designer thinks that it is sufficient to declare the actual problem domain. GDM, thus, uses standard abstractions available in a semantic data model with a structural representation in terms of a graph. The operations on the data model are formalized in the proposed graph algebra. A Graph Query Language (GQL) is also developed, maintaining similarity with the widely accepted user-friendly SQL. Finally, the paper also presents the methodology to make this GDM compatible with the distributed environment, and a corresponding query processing technique for distributed environment is also

  3. Kernel methods in orthogonalization of multi- and hypervariate data

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2009-01-01

    A kernel version of maximum autocorrelation factor (MAF) analysis is described very briefly and applied to change detection in remotely sensed hyperspectral image (HyMap) data. The kernel version is based on a dual formulation also termed Q-mode analysis in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products are replaced by inner products between nonlinear mappings into higher dimensional feature space of the original data. Via kernel substitution also known as the kernel trick these inner products between the mappings are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of this kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel PCA and MAF analysis handle nonlinearities by implicitly transforming data into high (even infinite...

  4. Short-term traffic flow prediction model using particle swarm optimization–based combined kernel function-least squares support vector machine combined with chaos theory

    Directory of Open Access Journals (Sweden)

    Qiang Shang

    2016-08-01

    Full Text Available Short-term traffic flow prediction is an important part of intelligent transportation systems research and applications. For further improving the accuracy of short-time traffic flow prediction, a novel hybrid prediction model (multivariate phase space reconstruction–combined kernel function-least squares support vector machine) based on multivariate phase space reconstruction and combined kernel function-least squares support vector machine is proposed. The C-C method is used to determine the optimal time delay and the optimal embedding dimension of traffic variables’ (flow, speed, and occupancy) time series for phase space reconstruction. The G-P method is selected to calculate the correlation dimension of the attractor, which is an important index for judging chaotic characteristics of the traffic variables’ series. The optimal input form of the combined kernel function-least squares support vector machine model is determined by multivariate phase space reconstruction, and the model’s parameters are optimized by the particle swarm optimization algorithm. Finally, case validation is carried out using the measured data of an expressway in Xiamen, China. The experimental results suggest that the new proposed model yields better predictions compared with similar models (combined kernel function-least squares support vector machine, multivariate phase space reconstruction–generalized kernel function-least squares support vector machine, and phase space reconstruction–combined kernel function-least squares support vector machine), which indicates that the new proposed model exhibits stronger prediction ability and robustness.

  5. Spectral fluctuations of quantum graphs

    International Nuclear Information System (INIS)

    Pluhař, Z.; Weidenmüller, H. A.

    2014-01-01

    We prove the Bohigas-Giannoni-Schmit conjecture in its most general form for completely connected simple graphs with incommensurate bond lengths. We show that for graphs that are classically mixing (i.e., graphs for which the spectrum of the classical Perron-Frobenius operator possesses a finite gap), the generating functions for all (P,Q) correlation functions for both closed and open graphs coincide (in the limit of infinite graph size) with the corresponding expressions of random-matrix theory, both for orthogonal and for unitary symmetry

  6. Kernel based orthogonalization for change detection in hyperspectral images

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    function and all quantities needed in the analysis are expressed in terms of this kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel PCA and MNF analyses handle nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via ... In the analysis all 126 spectral bands of the HyMap are included. Changes on the ground are most likely due to harvest having taken place between the two acquisitions and solar effects (both solar elevation and azimuth have changed). Both types of kernel analysis emphasize change and unlike kernel PCA, kernel MNF...

  7. On the Asymptotic Behavior of the Kernel Function in the Generalized Langevin Equation: A One-Dimensional Lattice Model

    Science.gov (United States)

    Chu, Weiqi; Li, Xiantao

    2018-01-01

    We present some estimates for the memory kernel function in the generalized Langevin equation, derived using the Mori-Zwanzig formalism from a one-dimensional lattice model in which the particle interactions are through nearest and second-nearest neighbors. The kernel function can be explicitly expressed in a matrix form. The analysis focuses on the decay properties, both spatially and temporally, revealing a power-law behavior in both cases. The dependence on the level of coarse-graining is also studied.
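
    For reference, a common one-dimensional form of the generalized Langevin equation whose memory kernel K(t) is being estimated (generic notation, not necessarily the authors' conventions) is

      \[
        m\,\dot v(t) = -\int_0^t K(t-s)\, v(s)\, \mathrm{d}s + R(t),
        \qquad
        \langle R(t)\, R(s) \rangle = k_B T\, K(t-s),
      \]

    where R(t) is the fluctuating force and the second relation expresses the fluctuation-dissipation condition.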

  8. Crossed products for interactions and graph algebras

    DEFF Research Database (Denmark)

    Kwasniewski, Bartosz

    2014-01-01

    We consider Exel’s interaction (V,H) over a unital C*-algebra A, such that V(A) and H(A) are hereditary subalgebras of A. For the associated crossed product, we obtain a uniqueness theorem, ideal lattice description, simplicity criterion and a version of Pimsner–Voiculescu exact sequence. These results cover the case of crossed products by endomorphisms with hereditary ranges and complemented kernels. As model examples of interactions not coming from endomorphisms we introduce and study in detail interactions arising from finite graphs. The interaction (V,H) associated to a graph E acts on the core F_E of the graph algebra C*(E). By describing a partial homeomorphism dual to (V,H) we find the fundamental structure theorems for C*(E), such as Cuntz–Krieger uniqueness theorem, as results concerning reversible noncommutative dynamics on F_E. We also provide a new approach to calculation of K...

  9. Sparse Event Modeling with Hierarchical Bayesian Kernel Methods

    Science.gov (United States)

    2016-01-05

    The research objective of this proposal was to develop a predictive Bayesian kernel approach to model count data based on ... several predictive variables. Such an approach, which we refer to as the Poisson Bayesian kernel model, is able to model the rate of occurrence of ... kernel methods made use of: (i) the Bayesian property of improving predictive accuracy as data are dynamically obtained, and (ii) the kernel function ...

  10. LZW-Kernel: fast kernel utilizing variable length code blocks from LZW compressors for protein sequence classification.

    Science.gov (United States)

    Filatov, Gleb; Bauwens, Bruno; Kertész-Farkas, Attila

    2018-05-07

    Bioinformatics studies often rely on similarity measures between sequence pairs, which often pose a bottleneck in large-scale sequence analysis. Here, we present a new convolutional kernel function for protein sequences called the LZW-Kernel. It is based on code words identified with the Lempel-Ziv-Welch (LZW) universal text compressor. The LZW-Kernel is an alignment-free method, it is always symmetric and positive, always provides 1.0 for self-similarity, and it can directly be used with Support Vector Machines (SVMs) in classification problems, contrary to normalized compression distance (NCD), which often violates the distance metric properties in practice and requires further techniques to be used with SVMs. The LZW-Kernel is a one-pass algorithm, which makes it particularly suitable for big data applications. Our experimental studies on remote protein homology detection and protein classification tasks reveal that the LZW-Kernel closely approaches the performance of the Local Alignment Kernel (LAK) and the SVM-pairwise method combined with Smith-Waterman (SW) scoring at a fraction of the time. Moreover, the LZW-Kernel outperforms the SVM-pairwise method when combined with BLAST scores, which indicates that the LZW code words might be a better basis for similarity measures than local alignment approximations found with BLAST. In addition, the LZW-Kernel outperforms n-gram based mismatch kernels, hidden Markov model based SAM and Fisher kernel, and protein family based PSI-BLAST, among others. Further advantages include the LZW-Kernel's reliance on a simple idea, its ease of implementation, and its high speed, three times faster than BLAST and several orders of magnitude faster than SW or LAK in our tests. LZW-Kernel is implemented as standalone C code and is a free open-source program distributed under the GPLv3 license and can be downloaded from https://github.com/kfattila/LZW-Kernel. akerteszfarkas@hse.ru. Supplementary data are available at Bioinformatics Online.
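
    The ingredients of such a kernel can be illustrated with a toy sketch: extract the LZW code words of each sequence and compare the resulting sets. The Jaccard overlap below is a stand-in for, not a reproduction of, the paper's kernel:

      def lzw_codewords(seq):
          # phrases emitted while LZW scans `seq` (a protein sequence as a string)
          dictionary = set(seq)
          words, current = set(), ""
          for ch in seq:
              if current + ch in dictionary:
                  current += ch
              else:
                  dictionary.add(current + ch)
                  words.add(current)
                  current = ch
          if current:
              words.add(current)
          return words

      def lzw_similarity(s1, s2):
          w1, w2 = lzw_codewords(s1), lzw_codewords(s2)
          return len(w1 & w2) / len(w1 | w2)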

  11. Linear and kernel methods for multi- and hypervariate change detection

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Canty, Morton J.

    2010-01-01

    Principal component analysis (PCA) as well as maximum autocorrelation factor (MAF) and minimum noise fraction (MNF) analyses of IR-MAD images, both linear and kernel-based (which are nonlinear), may further enhance change signals relative to no-change background. The kernel versions are based on a dual formulation, also termed Q-mode analysis, in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products of the original data are replaced by inner products between nonlinear mappings into higher dimensional feature space. Via kernel substitution, also known as the kernel trick, these inner products between the mappings are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of the kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel principal component...

  12. Feynman amplitude and the Meijer's function. A unified representation for divergent and convergent graphs

    Energy Technology Data Exchange (ETDEWEB)

    Kucheryavyi, V I

    1974-12-31

    A parametric alpha-representation of the Feynman amplitude for any spinor graph, which is expressed in terms of Meijer's G functions, is obtained. This representation is valid both for divergent and convergent graphs. The available Chisholm-Nakanishi-Symanzik alpha-representation for a convergent scalar graph turns out to be a special case of the formula obtained. Besides that, the expression has a number of useful features. This representation automatically removes the infrared divergences connected with zero photon mass. The expression has a form in which the scale-invariant terms are explicitly separated from the terms breaking the invariance. It is shown by considering the simplest graphs of quantum electrodynamics that this representation keeps gauge invariance and Ward's identity for renormalized amplitudes. (auth)

  13. Integral equations with contrasting kernels

    Directory of Open Access Journals (Sweden)

    Theodore Burton

    2008-01-01

    Full Text Available In this paper we study integral equations of the form $x(t)=a(t)-\int^t_0 C(t,s)x(s)\,ds$ with sharply contrasting kernels typified by $C^*(t,s)=\ln(e+(t-s))$ and $D^*(t,s)=[1+(t-s)]^{-1}$. The kernel assigns a weight to $x(s)$ and these kernels have exactly opposite effects of weighting. Each type is well represented in the literature. Our first project is to show that for $a\in L^2[0,\infty)$, solutions are largely indistinguishable regardless of which kernel is used. This is a surprise and it leads us to study the essential differences. In fact, those differences become large as the magnitude of $a(t)$ increases. The form of the kernel alone projects necessary conditions concerning the magnitude of $a(t)$ which could result in bounded solutions. Thus, the next project is to determine how close we can come to proving that the necessary conditions are also sufficient. The third project is to show that solutions will be bounded for given conditions on $C$ regardless of whether $a$ is chosen large or small; this is important in real-world problems since we would like to have $a(t)$ as the sum of a bounded, but badly behaved function, and a large well behaved function.

  14. Two-Phase Iteration for Value Function Approximation and Hyperparameter Optimization in Gaussian-Kernel-Based Adaptive Critic Design

    Directory of Open Access Journals (Sweden)

    Xin Chen

    2015-01-01

    Full Text Available Adaptive Dynamic Programming (ADP) with critic-actor architecture is an effective way to perform online learning control. To avoid the subjectivity in the design of a neural network that serves as a critic network, kernel-based adaptive critic design (ACD) was developed recently. There are two essential issues for a static kernel-based model: how to determine proper hyperparameters in advance and how to select right samples to describe the value function. They all rely on the assessment of sample values. Based on the theoretical analysis, this paper presents a two-phase simultaneous learning method for a Gaussian-kernel-based critic network. It is able to estimate the values of samples without infinitely revisiting them. And the hyperparameters of the kernel model are optimized simultaneously. Based on the estimated sample values, the sample set can be refined by adding alternatives or deleting redundancies. Combining this critic design with an actor network, we present a Gaussian-kernel-based Adaptive Dynamic Programming (GK-ADP) approach. Simulations are used to verify its feasibility, particularly the necessity of two-phase learning, the convergence characteristics, and the improvement of the system performance by using a varying sample set.

  15. Fuzzy-based multi-kernel spherical support vector machine for ...

    Indian Academy of Sciences (India)

    In the proposed classifier, we design a new multi-kernel function based on the fuzzy triangular membership function. Finally, a newly developed multi-kernel function is incorporated into the spherical support vector machine to enhance the performance significantly. The experimental results are evaluated and performance is ...

  16. Discrete non-parametric kernel estimation for global sensitivity analysis

    International Nuclear Information System (INIS)

    Senga Kiessé, Tristan; Ventura, Anne

    2016-01-01

    This work investigates the discrete kernel approach for evaluating the contribution of the variance of discrete input variables to the variance of model output, via analysis of variance (ANOVA) decomposition. Until recently only the continuous kernel approach has been applied as a metamodeling approach within the sensitivity analysis framework, for both discrete and continuous input variables. Now the discrete kernel estimation is known to be suitable for smoothing discrete functions. We present a discrete non-parametric kernel estimator of the ANOVA decomposition of a given model. An estimator of sensitivity indices is also presented with its asymptotic convergence rate. Some simulations on a test function analysis and a real case study from agriculture have shown that the discrete kernel approach outperforms the continuous kernel one for evaluating the contribution of moderate or most influential discrete parameters to the model output. - Highlights: • We study a discrete kernel estimation for sensitivity analysis of a model. • A discrete kernel estimator of the ANOVA decomposition of the model is presented. • Sensitivity indices are calculated for discrete input parameters. • An estimator of sensitivity indices is also presented with its convergence rate. • An application is realized for improving the reliability of environmental models.

  17. A Fourier-series-based kernel-independent fast multipole method

    International Nuclear Information System (INIS)

    Zhang Bo; Huang Jingfang; Pitsianis, Nikos P.; Sun Xiaobai

    2011-01-01

    We present in this paper a new kernel-independent fast multipole method (FMM), named FKI-FMM, for pairwise particle interactions with translation-invariant kernel functions. FKI-FMM creates, using numerical techniques, sufficiently accurate and compressive representations of a given kernel function over multi-scale interaction regions in the form of a truncated Fourier series. It also provides economical operators for the multipole-to-multipole, multipole-to-local, and local-to-local translations that are typical and essential in FMM algorithms. The multipole-to-local translation operator, in particular, is readily diagonal and does not dominate in arithmetic operations. FKI-FMM provides an alternative and competitive option, among other kernel-independent FMM algorithms, for an efficient application of the FMM, especially for applications where the kernel function consists of multi-physics and multi-scale components as those arising in recent studies of biological systems. We present the complexity analysis and demonstrate with experimental results the FKI-FMM performance in accuracy and efficiency.

  18. Locally linear approximation for Kernel methods : the Railway Kernel

    OpenAIRE

    Muñoz, Alberto; González, Javier

    2008-01-01

    In this paper we present a new kernel, the Railway Kernel, that works properly for general (nonlinear) classification problems, with the interesting property that it acts locally as a linear kernel. In this way, we avoid potential problems due to the use of a general purpose kernel, like the RBF kernel, such as the high dimension of the induced feature space. As a consequence, following our methodology the number of support vectors is much lower and, therefore, the generalization capab...

  19. Study on the Calculation of Pebble-Bed Reactor Multiplication Factor As a Function of Fuel Kernel Radius at Various Enrichments

    International Nuclear Information System (INIS)

    Zuhair; Suwoto

    2009-01-01

    The main characteristics of a PBR come from the utilization of coated particle fuels dispersed in pebble fuels. Because of vibration, fuel kernels can be grouped into clusters and, in these cases, the neutronic characteristics of the pebble fuel change significantly. In this study, a cluster is modeled as a structural form consisting of uniform cubic cells with eight neighboring TRISO particles. Neutronic characteristics were investigated by calculating the pebble-bed reactor multiplication factor as a function of fuel kernel radius at various enrichments. The calculation results using the MCNP5 code with the ENDF/B-VI neutron library show that the k_eff value depends on the average fuel kernel radius and reaches its minimum when all kernels have the same radius, i.e. 0.0280 cm. With this radius, the total kernel surface area achieves its maximum value. The dependence of k_eff on fuel kernel radius decreases as the uranium enrichment increases. However, the k_eff value is not affected by fuel kernel radius when the uranium is 100% enriched. From these results, it can be concluded that, except in the case of fully enriched uranium, the selection of fuel kernel radius should be considered thoroughly in designing a PBR, since this parameter has a significant influence on the neutronic characteristics of the reactor. (author)

  20. A framework for optimal kernel-based manifold embedding of medical image data.

    Science.gov (United States)

    Zimmer, Veronika A; Lekadir, Karim; Hoogendoorn, Corné; Frangi, Alejandro F; Piella, Gemma

    2015-04-01

    Kernel-based dimensionality reduction is a widely used technique in medical image analysis. To fully unravel the underlying nonlinear manifold, the selection of an adequate kernel function and of its free parameters is critical. In practice, however, the kernel function is generally chosen as Gaussian or polynomial, and such standard kernels might not always be optimal for a given image dataset or application. In this paper, we present a study on the effect of the kernel functions in nonlinear manifold embedding of medical image data. To this end, we first carry out a literature review on existing advanced kernels developed in the statistics, machine learning, and signal processing communities. In addition, we implement kernel-based formulations of well-known nonlinear dimensionality reduction techniques such as Isomap and Locally Linear Embedding, thus obtaining a unified framework for manifold embedding using kernels. Subsequently, we present a method to automatically choose a kernel function and its associated parameters from a pool of kernel candidates, with the aim of generating the most optimal manifold embeddings. Furthermore, we show how the calculated selection measures can be extended to take into account the spatial relationships in images, or used to combine several kernels to further improve the embedding results. Experiments are then carried out on various synthetic and phantom datasets for numerical assessment of the methods. Furthermore, the workflow is applied to real data that include brain manifolds and multispectral images to demonstrate the importance of the kernel selection in the analysis of high-dimensional medical images. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Two-Phase Iteration for Value Function Approximation and Hyperparameter Optimization in Gaussian-Kernel-Based Adaptive Critic Design

    OpenAIRE

    Chen, Xin; Xie, Penghuan; Xiong, Yonghua; He, Yong; Wu, Min

    2015-01-01

    Adaptive Dynamic Programming (ADP) with critic-actor architecture is an effective way to perform online learning control. To avoid the subjectivity in the design of a neural network that serves as a critic network, kernel-based adaptive critic design (ACD) was developed recently. There are two essential issues for a static kernel-based model: how to determine proper hyperparameters in advance and how to select right samples to describe the value function. They all rely on the assessment of sa...

  2. Hierarchy of modular graph identities

    Energy Technology Data Exchange (ETDEWEB)

    D’Hoker, Eric; Kaidi, Justin [Mani L. Bhaumik Institute for Theoretical Physics, Department of Physics and Astronomy,University of California,Los Angeles, CA 90095 (United States)

    2016-11-09

    The low energy expansion of Type II superstring amplitudes at genus one is organized in terms of modular graph functions associated with Feynman graphs of a conformal scalar field on the torus. In earlier work, surprising identities between two-loop graphs at all weights, and between higher-loop graphs of weights four and five were constructed. In the present paper, these results are generalized in two complementary directions. First, all identities at weight six and all dihedral identities at weight seven are obtained and proven. Whenever the Laurent polynomial at the cusp is available, the form of these identities confirms the pattern by which the vanishing of the Laurent polynomial governs the full modular identity. Second, the family of modular graph functions is extended to include all graphs with derivative couplings and worldsheet fermions. These extended families of modular graph functions are shown to obey a hierarchy of inhomogeneous Laplace eigenvalue equations. The eigenvalues are calculated analytically for the simplest infinite sub-families and obtained by Maple for successively more complicated sub-families. The spectrum is shown to consist solely of eigenvalues s(s−1) for positive integers s bounded by the weight, with multiplicities which exhibit rich representation-theoretic patterns.

  3. Hierarchy of modular graph identities

    International Nuclear Information System (INIS)

    D’Hoker, Eric; Kaidi, Justin

    2016-01-01

    The low energy expansion of Type II superstring amplitudes at genus one is organized in terms of modular graph functions associated with Feynman graphs of a conformal scalar field on the torus. In earlier work, surprising identities between two-loop graphs at all weights, and between higher-loop graphs of weights four and five were constructed. In the present paper, these results are generalized in two complementary directions. First, all identities at weight six and all dihedral identities at weight seven are obtained and proven. Whenever the Laurent polynomial at the cusp is available, the form of these identities confirms the pattern by which the vanishing of the Laurent polynomial governs the full modular identity. Second, the family of modular graph functions is extended to include all graphs with derivative couplings and worldsheet fermions. These extended families of modular graph functions are shown to obey a hierarchy of inhomogeneous Laplace eigenvalue equations. The eigenvalues are calculated analytically for the simplest infinite sub-families and obtained by Maple for successively more complicated sub-families. The spectrum is shown to consist solely of eigenvalues s(s−1) for positive integers s bounded by the weight, with multiplicities which exhibit rich representation-theoretic patterns.

  4. Kernel Temporal Differences for Neural Decoding

    Science.gov (United States)

    Bae, Jihye; Sanchez Giraldo, Luis G.; Pohlmeyer, Eric A.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.

    2015-01-01

    We study the feasibility and capability of the kernel temporal difference (KTD)(λ) algorithm for neural decoding. KTD(λ) is an online, kernel-based learning algorithm, which has been introduced to estimate value functions in reinforcement learning. This algorithm combines kernel-based representations with the temporal difference approach to learning. One of our key observations is that by using strictly positive definite kernels, the algorithm's convergence can be guaranteed for policy evaluation. The algorithm's nonlinear functional approximation capabilities are shown in both simulations of policy evaluation and neural decoding problems (policy improvement). KTD can handle high-dimensional neural states containing spatial-temporal information at a reasonable computational complexity, allowing real-time applications. When the algorithm seeks a proper mapping between a monkey's neural states and desired positions of a computer cursor or a robot arm, in both open-loop and closed-loop experiments, it can effectively learn the neural state to action mapping. Finally, a visualization of the coadaptation process between the decoder and the subject shows the algorithm's capabilities in reinforcement learning brain machine interfaces. PMID:25866504

  5. Quantum chaos on discrete graphs

    International Nuclear Information System (INIS)

    Smilansky, Uzy

    2007-01-01

    Adapting a method developed for the study of quantum chaos on quantum (metric) graphs (Kottos and Smilansky 1997 Phys. Rev. Lett. 79 4794, Kottos and Smilansky 1999 Ann. Phys., NY 274 76), spectral ζ functions and trace formulae for discrete Laplacians on graphs are derived. This is achieved by expressing the spectral secular equation in terms of the periodic orbits of the graph and obtaining functions which belong to the class of ζ functions proposed originally by Ihara (1966 J. Mat. Soc. Japan 18 219) and expanded by subsequent authors (Stark and Terras 1996 Adv. Math. 121 124, Kotani and Sunada 2000 J. Math. Sci. Univ. Tokyo 7 7). Finally, a model of 'classical dynamics' on the discrete graph is proposed. It is analogous to the corresponding classical dynamics derived for quantum graphs (Kottos and Smilansky 1997 Phys. Rev. Lett. 79 4794, Kottos and Smilansky 1999 Ann. Phys., NY 274 76). (fast track communication)

  6. Ranking Support Vector Machine with Kernel Approximation.

    Science.gov (United States)

    Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi

    2017-01-01

    Learning to rank algorithm has become important in recent years due to its successful application in information retrieval, recommender system, and computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. The primal truncated Newton method is used to optimize the pairwise L2-loss (squared Hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method gets a much faster training speed than kernel RankSVM and achieves comparable or better performance over state-of-the-art ranking algorithms.

  7. Ranking Support Vector Machine with Kernel Approximation

    Directory of Open Access Journals (Sweden)

    Kai Chen

    2017-01-01

    Full Text Available Learning to rank algorithm has become important in recent years due to its successful application in information retrieval, recommender system, and computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. The primal truncated Newton method is used to optimize the pairwise L2-loss (squared Hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method gets a much faster training speed than kernel RankSVM and achieves comparable or better performance over state-of-the-art ranking algorithms.
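
    Of the two approximation routes mentioned, random Fourier features are the simpler to sketch: draw random projections whose explicit feature map approximates an RBF kernel, then train any linear ranker on the mapped data (the dimensions and bandwidth below are arbitrary illustrations):

      import numpy as np

      def random_fourier_features(X, n_features=200, sigma=1.0, seed=0):
          # phi(x) . phi(z) approximates exp(-||x - z||^2 / (2 sigma^2))
          rng = np.random.default_rng(seed)
          W = rng.normal(scale=1.0 / sigma, size=(X.shape[1], n_features))
          b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
          return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

      # a linear RankSVM trained on random_fourier_features(X) then stands in
      # for the nonlinear kernel RankSVM at much lower training cost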

  8. Heat kernel analysis for Bessel operators on symmetric cones

    DEFF Research Database (Denmark)

    Möllers, Jan

    2014-01-01

    The heat kernel is explicitly given in terms of a multivariable $I$-Bessel function on $Ω$. Its corresponding heat kernel transform defines a continuous linear operator between $L^p$-spaces. The unitary image of the $L^2$-space under the heat kernel transform is characterized as a weighted Bergmann space...

  9. A kernel adaptive algorithm for quaternion-valued inputs.

    Science.gov (United States)

    Paul, Thomas K; Ogunfunmi, Tokunbo

    2015-10-01

The use of quaternion data can provide benefits in applications like robotics and image recognition, and particularly for performing transforms in 3-D space. Here, we describe a kernel adaptive algorithm for quaternions. A least mean square (LMS)-based method was used, resulting in the derivation of the quaternion kernel LMS (Quat-KLMS) algorithm. Deriving this algorithm required describing the idea of a quaternion reproducing kernel Hilbert space (RKHS), as well as kernel functions suitable for quaternions. A modified HR calculus for Hilbert spaces was used to find the gradient of cost functions defined on a quaternion RKHS. In addition, the use of widely linear (or augmented) filtering is proposed to improve performance. The benefits of the Quat-KLMS and widely linear forms in learning nonlinear transformations of quaternion data are illustrated with simulations.

  10. Sequential Optimization of Paths in Directed Graphs Relative to Different Cost Functions

    KAUST Repository

    Abubeker, Jewahir Ali; Chikalov, Igor; Hussain, Shahid; Moshkov, Mikhail

    2011-01-01

    This paper is devoted to the consideration of an algorithm for sequential optimization of paths in directed graphs relative to di_erent cost functions. The considered algorithm is based on an extension of dynamic programming which allows

  11. Panel data specifications in nonparametric kernel regression

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard; Henningsen, Arne

    parametric panel data estimators to analyse the production technology of Polish crop farms. The results of our nonparametric kernel regressions generally differ from the estimates of the parametric models but they only slightly depend on the choice of the kernel functions. Based on economic reasoning, we...

  12. A survey of kernel-type estimators for copula and their applications

    Science.gov (United States)

    Sumarjaya, I. W.

    2017-10-01

Copulas have been widely used to model nonlinear dependence structures. Main applications of copulas include areas such as finance, insurance, hydrology and rainfall, to name but a few. The flexibility of copulas allows researchers to model dependence structures beyond the Gaussian distribution. Basically, a copula is a function that couples a multivariate distribution function to its one-dimensional marginal distribution functions. In general, there are three methods to estimate copulas: parametric, nonparametric, and semiparametric. In this article we survey kernel-type estimators for copulas such as the mirror reflection kernel, the beta kernel, the transformation method and the local likelihood transformation method. Then, we apply these kernel methods to three stock indexes in Asia. The results of our analysis suggest that, despite variation in information criterion values, the local likelihood transformation method performs better than the other kernel methods.

  13. Rare variant testing across methods and thresholds using the multi-kernel sequence kernel association test (MK-SKAT).

    Science.gov (United States)

    Urrutia, Eugene; Lee, Seunggeun; Maity, Arnab; Zhao, Ni; Shen, Judong; Li, Yun; Wu, Michael C

Analysis of rare genetic variants has focused on region-based analysis, wherein a subset of the variants within a genomic region is tested for association with a complex trait. Two important practical challenges have emerged. First, it is difficult to choose which test to use. Second, it is unclear which group of variants within a region should be tested. Both depend on the unknown true state of nature. Therefore, we develop the multi-kernel SKAT (MK-SKAT), which tests across a range of rare variant tests and groupings. Specifically, we demonstrate that several popular rare variant tests are special cases of the sequence kernel association test, which compares pairwise similarity in trait value to similarity in the rare variant genotypes between subjects as measured through a kernel function. Choosing a particular test is equivalent to choosing a kernel. Similarly, choosing which group of variants to test also reduces to choosing a kernel. Thus, MK-SKAT uses perturbation to test across a range of kernels. Simulations and real data analyses show that our framework controls type I error while maintaining high power across settings: MK-SKAT loses some power compared with the optimal kernel for a particular scenario, but has much greater power than poor kernel choices.
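
To make the kernel choice concrete, the sketch below builds the weighted linear kernel commonly used with SKAT, K = G W Gᵀ, in which rarer variants receive larger weights; the Beta(1, 25) weighting scheme and the toy genotypes are assumptions about the usual convention, not details taken from this record:

```python
import numpy as np
from scipy.stats import beta

def skat_style_kernel(G, maf):
    """Weighted linear kernel K = G W G^T on a genotype matrix G (subjects x variants).
    Weights up-weight rarer variants via a Beta(1, 25) density on minor allele frequency."""
    w = beta.pdf(maf, 1, 25) ** 2          # commonly used weighting; an assumption here
    return (G * w) @ G.T                   # equivalent to G @ np.diag(w) @ G.T

G = np.random.binomial(2, 0.02, size=(50, 30))   # toy genotypes: 50 subjects, 30 rare variants
maf = np.clip(G.mean(axis=0) / 2.0, 1e-4, None)
K = skat_style_kernel(G, maf)
```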

  14. Semi-Supervised Kernel PCA

    DEFF Research Database (Denmark)

    Walder, Christian; Henao, Ricardo; Mørup, Morten

We present three generalisations of Kernel Principal Components Analysis (KPCA) which incorporate knowledge of the class labels of a subset of the data points. The first, MV-KPCA, penalises within-class variances similar to Fisher discriminant analysis. The second, LSKPCA, is a hybrid of least squares regression and kernel PCA. The final, LR-KPCA, is an iteratively reweighted version of the previous which achieves a sigmoid loss function on the labeled points. We provide a theoretical risk bound as well as illustrative experiments on real and toy data sets....

  15. Unitarity or asymptotic completeness equations and analytic structure of the S matrix and Green functions

    International Nuclear Information System (INIS)

    Iagolnitzer, D.

    1983-11-01

Recent axiomatic results on the (non-holonomic) analytic structure of the multiparticle S matrix and Green functions are reviewed and related general conjectures are described: (i) formal expansions of Green functions in terms of (holonomic) Feynman-type integrals in which each vertex represents an irreducible kernel, and (ii) 'graph by graph unitarity' and other discontinuity formulae of the latter. These conjectures are closely linked with unitarity or asymptotic completeness equations, which they yield in a formal sense. In constructive field theory, a direct proof of the first conjecture (together with an independent proof of the second) would thus imply, as a first step, asymptotic completeness in that sense.

  16. Parsimonious Wavelet Kernel Extreme Learning Machine

    Directory of Open Access Journals (Sweden)

    Wang Qin

    2015-11-01

Full Text Available In this study, a parsimonious scheme for the wavelet kernel extreme learning machine (named PWKELM) was introduced by combining wavelet theory and a parsimonious algorithm into the kernel extreme learning machine (KELM). In the wavelet analysis, bases that were localized in time and frequency to represent various signals effectively were used. The wavelet kernel extreme learning machine (WELM) maximized its capability to capture the essential features in “frequency-rich” signals. The proposed parsimonious algorithm also incorporated significant wavelet kernel functions via iteration by virtue of the Householder matrix, thus producing a sparse solution that eased the computational burden and improved numerical stability. The experimental results achieved on a synthetic dataset and a gas furnace instance demonstrated that the proposed PWKELM is efficient and feasible in terms of improving generalization accuracy and real-time performance.

  17. Proteome analysis of the almond kernel (Prunus dulcis).

    Science.gov (United States)

    Li, Shugang; Geng, Fang; Wang, Ping; Lu, Jiankang; Ma, Meihu

    2016-08-01

Almond (Prunus dulcis) is a popular tree nut worldwide and offers many benefits to human health. However, the nutritional and functional importance of almond kernel proteins in human health requires further evaluation. The present study presents a systematic evaluation of the proteins in the almond kernel using proteomic analysis. The nutrient and amino acid content in almond kernels from Xinjiang is similar to that of American varieties; however, Xinjiang varieties have a higher protein content. Two-dimensional electrophoresis analysis demonstrated a wide distribution of molecular weights and isoelectric points of almond kernel proteins. A total of 434 proteins were identified by LC-MS/MS, and most were proteins that were experimentally confirmed for the first time. Gene ontology (GO) analysis of the 434 proteins indicated that they are mainly involved in primary biological processes including metabolic processes (67.5%), cellular processes (54.1%), and single-organism processes (43.4%); the main molecular functions of almond kernel proteins are catalytic activity (48.0%), binding (45.4%) and structural molecule activity (11.9%); and the proteins are primarily distributed in the cell (59.9%), organelle (44.9%), and membrane (22.8%). The almond kernel is a source of a wide variety of proteins. This study provides important information contributing to the screening and identification of almond proteins, the understanding of almond protein function, and the development of almond protein products. © 2015 Society of Chemical Industry.

  18. Graph-based Operational Semantics of a Lazy Functional Language

    DEFF Research Database (Denmark)

    Rose, Kristoffer Høgsbro

    1992-01-01

    Presents Graph Operational Semantics (GOS): a semantic specification formalism based on structural operational semantics and term graph rewriting. Demonstrates the method by specifying the dynamic ...

  19. Weighted Feature Gaussian Kernel SVM for Emotion Recognition.

    Science.gov (United States)

    Wei, Wei; Jia, Qingxuan

    2016-01-01

Emotion recognition from facial expressions with weighted features is a challenging research topic and has attracted great attention in the past few years. This paper presents a novel method that utilizes the subregion recognition rate to weight the kernel function. First, we divide the facial expression image into uniform subregions and calculate the corresponding recognition rates and weights. Then, we obtain a weighted-feature Gaussian kernel function and construct a classifier based on the Support Vector Machine (SVM). Finally, the experimental results suggest that the approach based on the weighted-feature Gaussian kernel function achieves a good recognition rate in emotion recognition. The experiments on the extended Cohn-Kanade (CK+) dataset show that our method has achieved encouraging recognition results compared to state-of-the-art methods.
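
A small sketch, under stated assumptions, of a subregion-weighted Gaussian kernel plugged into an SVM with a precomputed Gram matrix; the way the weights enter the exponent and the toy feature layout are illustrative choices, not the authors' exact formulation:

```python
import numpy as np
from sklearn.svm import SVC

def weighted_gaussian_kernel(A, B, weights, gamma=0.01):
    """k(a, b) = exp(-gamma * sum_r w_r * ||a_r - b_r||^2), features grouped by subregion.
    `weights` holds one entry per feature (its subregion's recognition-rate weight)."""
    Aw = A * np.sqrt(weights)
    Bw = B * np.sqrt(weights)
    sq = (Aw ** 2).sum(1)[:, None] + (Bw ** 2).sum(1)[None, :] - 2.0 * Aw @ Bw.T
    return np.exp(-gamma * sq)

# Toy data: 120 samples, 64 features (e.g. 8 subregions x 8 features), 3 emotion classes.
X, y = np.random.randn(120, 64), np.random.randint(0, 3, 120)
weights = np.repeat(np.random.uniform(0.5, 1.0, 8), 8)   # hypothetical subregion weights
K = weighted_gaussian_kernel(X, X, weights)
clf = SVC(kernel="precomputed").fit(K, y)
preds = clf.predict(K)
```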

  20. Calculation of the thermal neutron scattering kernel using the synthetic model. Pt. 2. Zero-order energy transfer kernel

    International Nuclear Information System (INIS)

    Drozdowicz, K.

    1995-01-01

A comprehensive unified description of the application of Granada's Synthetic Model to slow-neutron scattering by molecular systems is continued. Detailed formulae for the zero-order energy transfer kernel are presented, based on the general formalism of the model. An explicit analytical formula for the total scattering cross section as a function of the incident neutron energy is also obtained. Expressions of the free gas model for the zero-order scattering kernel and for the total scattering kernel are considered as a sub-case of the Synthetic Model. (author). 10 refs

  1. Multiple kernel boosting framework based on information measure for classification

    International Nuclear Information System (INIS)

    Qi, Chengming; Wang, Yuping; Tian, Wenjie; Wang, Qun

    2016-01-01

The performance of kernel-based methods, such as the support vector machine (SVM), is greatly affected by the choice of kernel function. Multiple kernel learning (MKL) is a promising family of machine learning algorithms and has attracted much attention in recent years. MKL combines multiple sub-kernels to seek better results than single kernel learning. In order to improve the efficiency of SVM and MKL, in this paper the Kullback–Leibler kernel function is derived to develop the SVM. The proposed method employs an improved ensemble learning framework, named KLMKB, which applies AdaBoost to learn multiple kernel-based classifiers. In the experiment on hyperspectral remote sensing image classification, we employ features selected through the Optional Index Factor (OIF) to classify the satellite image. We extensively examine the performance of our approach in comparison to some relevant and state-of-the-art algorithms on a number of benchmark classification data sets and a hyperspectral remote sensing image data set. Experimental results show that our method has stable behavior and noticeable accuracy for different data sets.

  2. Improving the Bandwidth Selection in Kernel Equating

    Science.gov (United States)

    Andersson, Björn; von Davier, Alina A.

    2014-01-01

    We investigate the current bandwidth selection methods in kernel equating and propose a method based on Silverman's rule of thumb for selecting the bandwidth parameters. In kernel equating, the bandwidth parameters have previously been obtained by minimizing a penalty function. This minimization process has been criticized by practitioners…

  3. Kernel bundle EPDiff

    DEFF Research Database (Denmark)

    Sommer, Stefan Horst; Lauze, Francois Bernard; Nielsen, Mads

    2011-01-01

    In the LDDMM framework, optimal warps for image registration are found as end-points of critical paths for an energy functional, and the EPDiff equations describe the evolution along such paths. The Large Deformation Diffeomorphic Kernel Bundle Mapping (LDDKBM) extension of LDDMM allows scale space...

  4. Using the Intel Math Kernel Library on Peregrine | High-Performance

    Science.gov (United States)

Computing | NREL. Learn how to use the Intel Math Kernel Library (MKL) with Peregrine system software. Core math functions in MKL include BLAS, LAPACK, ScaLAPACK, sparse solvers, and fast Fourier transforms.

  5. Graphs on Surfaces and the Partition Function of String Theory

    OpenAIRE

    Garcia-Islas, J. Manuel

    2007-01-01

The study of graphs on surfaces is an active topic of pure mathematics belonging to graph theory. It has also been applied to physics and relates discrete and continuous mathematics. In this paper we present a formal mathematical description of the relation between graph theory and the mathematical physics of discrete string theory. In this description we present problems of the combinatorial world of real importance for graph theorists. The mathematical details of the paper are as follows: There is a com...

  6. Graph theoretical analysis of EEG functional connectivity during music perception.

    Science.gov (United States)

    Wu, Junjie; Zhang, Junsong; Liu, Chu; Liu, Dongwei; Ding, Xiaojun; Zhou, Changle

    2012-11-05

The present study evaluated the effect of music on the large-scale structure of functional brain networks using graph theoretical concepts. While most studies on music perception have used Western music as an acoustic stimulus, Guqin music, representative of Eastern music, was selected for this experiment to increase our knowledge of music perception. Electroencephalography (EEG) was recorded from non-musician volunteers in three conditions: Guqin music, noise and silence backgrounds. Phase coherence was calculated in the alpha band between all pairs of EEG channels to construct correlation matrices. Each resulting matrix was converted into a weighted graph using a threshold, and two network measures, the clustering coefficient and the characteristic path length, were calculated. Music perception was found to display a higher mean level of phase coherence. Over the whole range of thresholds, the clustering coefficient was larger while listening to music, whereas the path length was smaller. Networks in the music background still had a shorter characteristic path length even after correction for differences in mean synchronization level among background conditions. This topological change indicated a more optimal structure under music perception. Thus, prominent small-world properties are confirmed in functional brain networks. Furthermore, music perception shows an increase of functional connectivity and an enhancement of small-world network organization. Copyright © 2012 Elsevier B.V. All rights reserved.
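
A brief sketch of the graph construction and the two network measures mentioned above, with a random matrix standing in for the alpha-band phase-coherence data and an arbitrary percentile threshold:

```python
import numpy as np
import networkx as nx

# Placeholder for an alpha-band phase-coherence matrix between 32 EEG channels.
n = 32
C = np.abs(np.corrcoef(np.random.randn(n, 200)))
np.fill_diagonal(C, 0.0)

# Keep the strongest 30% of pairwise couplings (one example threshold choice).
threshold = np.percentile(C[np.triu_indices(n, 1)], 70)
G = nx.from_numpy_array(np.where(C >= threshold, C, 0.0))

clustering = nx.average_clustering(G, weight="weight")
# Characteristic path length on the largest connected component, using 1/weight as distance.
H = G.subgraph(max(nx.connected_components(G), key=len)).copy()
for u, v, d in H.edges(data=True):
    d["distance"] = 1.0 / d["weight"]
path_length = nx.average_shortest_path_length(H, weight="distance")
```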

  7. Examining Potential Boundary Bias Effects in Kernel Smoothing on Equating: An Introduction for the Adaptive and Epanechnikov Kernels.

    Science.gov (United States)

    Cid, Jaime A; von Davier, Alina A

    2015-05-01

    Test equating is a method of making the test scores from different test forms of the same assessment comparable. In the equating process, an important step involves continuizing the discrete score distributions. In traditional observed-score equating, this step is achieved using linear interpolation (or an unscaled uniform kernel). In the kernel equating (KE) process, this continuization process involves Gaussian kernel smoothing. It has been suggested that the choice of bandwidth in kernel smoothing controls the trade-off between variance and bias. In the literature on estimating density functions using kernels, it has also been suggested that the weight of the kernel depends on the sample size, and therefore, the resulting continuous distribution exhibits bias at the endpoints, where the samples are usually smaller. The purpose of this article is (a) to explore the potential effects of atypical scores (spikes) at the extreme ends (high and low) on the KE method in distributions with different degrees of asymmetry using the randomly equivalent groups equating design (Study I), and (b) to introduce the Epanechnikov and adaptive kernels as potential alternative approaches to reducing boundary bias in smoothing (Study II). The beta-binomial model is used to simulate observed scores reflecting a range of different skewed shapes.
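
For illustration, a generic sketch of Epanechnikov kernel smoothing of a discrete score distribution, as an alternative to a Gaussian kernel; it is not the kernel equating machinery of the article, and the test length, score probabilities and bandwidth are placeholders:

```python
import numpy as np

def epanechnikov(u):
    """Epanechnikov kernel: 0.75 * (1 - u^2) on |u| <= 1, zero outside."""
    return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)

def smooth_scores(scores, probs, grid, bandwidth):
    """Continuize a discrete score distribution by kernel smoothing its density."""
    u = (grid[:, None] - scores[None, :]) / bandwidth
    return (epanechnikov(u) * probs[None, :]).sum(axis=1) / bandwidth

scores = np.arange(0, 41)                          # a hypothetical 40-item test
probs = np.random.dirichlet(np.ones_like(scores, dtype=float))
grid = np.linspace(-2, 42, 500)
density = smooth_scores(scores, probs, grid, bandwidth=1.5)
```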

  8. Gaussian processes with optimal kernel construction for neuro-degenerative clinical onset prediction

    Science.gov (United States)

    Canas, Liane S.; Yvernault, Benjamin; Cash, David M.; Molteni, Erika; Veale, Tom; Benzinger, Tammie; Ourselin, Sébastien; Mead, Simon; Modat, Marc

    2018-02-01

Gaussian Processes (GP) are a powerful tool to capture the complex time-variations of a dataset. In the context of medical imaging analysis, they allow robust modelling even in the case of highly uncertain or incomplete datasets. Predictions from a GP depend on the covariance kernel function selected to explain the data variance. To overcome this limitation, we propose a framework to identify the optimal covariance kernel function to model the data. The optimal kernel is defined as a composition of base kernel functions used to identify correlation patterns between data points. Our approach includes a modified version of the Compositional Kernel Learning (CKL) algorithm, in which we score the kernel families using a new energy function that depends on both the Bayesian Information Criterion (BIC) and the explained variance score. We applied the proposed framework to model the progression of neurodegenerative diseases over time, in particular the progression of autosomal dominantly inherited Alzheimer's disease, and use it to predict the time to clinical onset for subjects carrying the genetic mutation.
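
A hedged sketch of composing base covariance kernels and scoring the candidate compositions, loosely in the spirit of compositional kernel search; the candidate set and the simple penalized-likelihood score below are placeholders, not the authors' CKL variant or energy function:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, DotProduct, WhiteKernel

X = np.sort(np.random.uniform(0, 10, 60))[:, None]     # toy "time since baseline"
y = 0.3 * X.ravel() + np.sin(X.ravel()) + 0.1 * np.random.randn(60)

# Candidate compositions of base kernels (sum/product), each with a noise term.
candidates = {
    "RBF":          1.0 * RBF() + WhiteKernel(),
    "Linear":       DotProduct() + WhiteKernel(),
    "Linear + RBF": DotProduct() + 1.0 * RBF() + WhiteKernel(),
    "Linear * RBF": DotProduct() * RBF() + WhiteKernel(),
}

scores = {}
for name, kernel in candidates.items():
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
    # Proxy score: log marginal likelihood penalized by the number of hyperparameters.
    k = gp.kernel_.theta.size
    scores[name] = gp.log_marginal_likelihood_value_ - 0.5 * k * np.log(len(y))
best = max(scores, key=scores.get)
```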

  9. Gradient-based adaptation of general gaussian kernels.

    Science.gov (United States)

    Glasmachers, Tobias; Igel, Christian

    2005-10-01

Gradient-based optimization of Gaussian kernel functions is considered. The gradient for the adaptation of scaling and rotation of the input space is computed to achieve invariance against linear transformations. This is done by using the exponential map as a parameterization of the kernel parameter manifold. By restricting the optimization to a constant trace subspace, the kernel size can be controlled. This is, for example, useful to prevent overfitting when minimizing radius-margin generalization performance measures. The concepts are demonstrated by training hard margin support vector machines on toy data.

  10. Graph Theory-Based Brain Connectivity for Automatic Classification of Multiple Sclerosis Clinical Courses

    Directory of Open Access Journals (Sweden)

    Gabriel Kocevar

    2016-10-01

Full Text Available Purpose: In this work, we introduce a method to classify Multiple Sclerosis (MS) patients into four clinical profiles using structural connectivity information. For the first time, we try to solve this question in a fully automated way using a computer-based method. The main goal is to show how the combination of graph-derived metrics with machine learning techniques constitutes a powerful tool for a better characterization and classification of MS clinical profiles. Materials and methods: Sixty-four MS patients (12 Clinically Isolated Syndrome (CIS), 24 Relapsing Remitting (RR), 24 Secondary Progressive (SP), and 17 Primary Progressive (PP)) along with 26 healthy controls (HC) underwent MR examination. T1 and diffusion tensor imaging (DTI) were used to obtain structural connectivity matrices for each subject. Global graph metrics, such as density and modularity, were estimated and compared between subjects' groups. These metrics were further used to classify patients using a tuned Support Vector Machine (SVM) combined with a Radial Basis Function (RBF) kernel. Results: When comparing MS patients to HC subjects, a greater assortativity, transitivity and characteristic path length as well as a lower global efficiency were found. Using all graph metrics, the best F-measures (91.8%, 91.8%, 75.6% and 70.6%) were obtained for the binary (HC-CIS, CIS-RR, RR-PP) and multi-class (CIS-RR-SP) classification tasks, respectively. When using only one graph metric, the best F-measures (83.6%, 88.9% and 70.7%) were achieved for modularity with the previous binary classification tasks. Conclusion: Based on a simple DTI acquisition associated with structural brain connectivity analysis, this automatic method allowed an accurate classification of different MS patients' clinical profiles.

  11. Graph Creation, Visualisation and Transformation

    Directory of Open Access Journals (Sweden)

    Maribel Fernández

    2010-03-01

    Full Text Available We describe a tool to create, edit, visualise and compute with interaction nets - a form of graph rewriting systems. The editor, called GraphPaper, allows users to create and edit graphs and their transformation rules using an intuitive user interface. The editor uses the functionalities of the TULIP system, which gives us access to a wealth of visualisation algorithms. Interaction nets are not only a formalism for the specification of graphs, but also a rewrite-based computation model. We discuss graph rewriting strategies and a language to express them in order to perform strategic interaction net rewriting.

  12. A Novel Extreme Learning Machine Classification Model for e-Nose Application Based on the Multiple Kernel Approach.

    Science.gov (United States)

    Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong

    2017-06-19

A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Unlike existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of the base kernels are regarded as external parameters of the single-hidden layer feedforward neural networks (SLFNs). The combination coefficients of the base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results have demonstrated that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification.
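
An illustrative sketch of the core construction the abstract describes, a composite kernel built as a weighted combination of common base kernels; the coefficients are fixed by hand here rather than optimized by QPSO, and the toy data are assumptions:

```python
import numpy as np

def gaussian_kernel(X, Y, gamma=0.1):
    sq = (X ** 2).sum(1)[:, None] + (Y ** 2).sum(1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * sq)

def polynomial_kernel(X, Y, degree=2, c=1.0):
    return (X @ Y.T + c) ** degree

def sigmoid_kernel(X, Y, a=0.01, c=0.0):
    return np.tanh(a * X @ Y.T + c)

def composite_kernel(X, Y, coeffs=(0.6, 0.3, 0.1)):
    """Weighted sum of base kernel matrices, mirroring the composite kernel idea above."""
    bases = (gaussian_kernel(X, Y), polynomial_kernel(X, Y), sigmoid_kernel(X, Y))
    return sum(c * K for c, K in zip(coeffs, bases))

X = np.random.randn(40, 8)     # e.g. 40 e-nose samples with 8 sensor features
K = composite_kernel(X, X)     # Gram matrix to feed a kernel machine such as KELM or SVM
```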

  13. Improving prediction of heterodimeric protein complexes using combination with pairwise kernel.

    Science.gov (United States)

    Ruan, Peiying; Hayashida, Morihiro; Akutsu, Tatsuya; Vert, Jean-Philippe

    2018-02-19

Since many proteins become functional only after they interact with their partner proteins and form protein complexes, it is essential to identify the sets of proteins that form complexes. Therefore, several computational methods have been proposed to predict complexes from the topology and structure of experimental protein-protein interaction (PPI) networks. These methods work well to predict complexes involving at least three proteins, but generally fail at identifying complexes involving only two different proteins, called heterodimeric complexes or heterodimers. There is, however, an urgent need for efficient methods to predict heterodimers, since the majority of known protein complexes are precisely heterodimers. In this paper, we use three promising kernel functions: the Min kernel and two pairwise kernels, the Metric Learning Pairwise Kernel (MLPK) and the Tensor Product Pairwise Kernel (TPPK). We also consider normalized forms of the Min kernel. Then, we combine the Min kernel, or its normalized form, with one of the pairwise kernels. We applied kernels based on PPI, domain, phylogenetic profile, and subcellular localization properties to the prediction of heterodimers. Then, we evaluate our method by employing C-Support Vector Classification (C-SVC), carrying out 10-fold cross-validation, and calculating the average F-measures. The results suggest that the combination of the normalized Min kernel and MLPK leads to the best F-measure and improves on the performance of our previous work, which had been the best existing method so far. We propose new methods to predict heterodimers, using a machine learning-based approach. We train a support vector machine (SVM) to discriminate interacting vs non-interacting protein pairs, based on information extracted from PPI, domain, phylogenetic profiles and subcellular localization. We evaluate in detail new kernel functions to encode these data, and report prediction performance that outperforms the state-of-the-art.
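
For concreteness, a sketch of the tensor product pairwise kernel construction on top of an arbitrary protein-level kernel; the symmetrized form below is the standard TPPK definition stated as background, and the Min kernel and toy vectors are illustrative:

```python
import numpy as np

def tppk(k, a, b, c, d):
    """Tensor Product Pairwise Kernel between protein pairs (a, b) and (c, d):
    K((a, b), (c, d)) = k(a, c) * k(b, d) + k(a, d) * k(b, c),
    symmetric in the order of the proteins within each pair."""
    return k(a, c) * k(b, d) + k(a, d) * k(b, c)

def min_kernel(x, y):
    """Min kernel on nonnegative feature vectors (e.g. phylogenetic profiles)."""
    return np.minimum(x, y).sum()

# Toy protein feature vectors.
p1, p2, p3, p4 = (np.random.rand(20) for _ in range(4))
score = tppk(min_kernel, p1, p2, p3, p4)
```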

  14. On The Roman Domination Stable Graphs

    Directory of Open Access Journals (Sweden)

    Hajian Majid

    2017-11-01

Full Text Available A Roman dominating function (or just RDF) on a graph G = (V, E) is a function f : V → {0, 1, 2} satisfying the condition that every vertex u for which f(u) = 0 is adjacent to at least one vertex v for which f(v) = 2. The weight of an RDF f is the value f(V(G)) = Σ_{u∈V(G)} f(u). The Roman domination number of a graph G, denoted by γ_R(G), is the minimum weight of a Roman dominating function on G. A graph G is Roman domination stable if the Roman domination number of G remains unchanged under removal of any vertex. In this paper we present upper bounds for the Roman domination number in the class of Roman domination stable graphs, improving bounds posed in [V. Samodivkin, Roman domination in graphs: the class RUV R, Discrete Math. Algorithms Appl. 8 (2016) 1650049].
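
A small sketch that checks the Roman dominating function condition and computes the Roman domination number of a toy graph by exhaustive search (exponential, for illustration only):

```python
from itertools import product

def is_rdf(adj, f):
    """f maps each vertex to 0, 1 or 2; every vertex with f(v) = 0 needs a neighbour with value 2."""
    return all(f[v] != 0 or any(f[u] == 2 for u in adj[v]) for v in adj)

def roman_domination_number(adj):
    vertices = list(adj)
    best = 2 * len(vertices)                 # f ≡ 2 is always a Roman dominating function
    for values in product((0, 1, 2), repeat=len(vertices)):
        f = dict(zip(vertices, values))
        if is_rdf(adj, f):
            best = min(best, sum(values))
    return best

# 5-cycle: its Roman domination number should come out as 4.
C5 = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
print(roman_domination_number(C5))
```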

  15. Differentiated Learning Environment--A Classroom for Quadratic Equation, Function and Graphs

    Science.gov (United States)

    Dinç, Emre

    2017-01-01

This paper will cover the design of a learning environment as a classroom regarding Quadratic Equations, Functions and Graphs. The goal of the learning environment offered in the paper is to design a classroom where students will enjoy the process, use the skills they already have during the learning process, and control and plan their learning…

  16. Embedded real-time operating system micro kernel design

    Science.gov (United States)

    Cheng, Xiao-hui; Li, Ming-qiang; Wang, Xin-zheng

    2005-12-01

Embedded systems usually require a real-time character. Based on an 8051 microcontroller, an embedded real-time operating system micro kernel is proposed consisting of six parts: critical section processing, task scheduling, interrupt handling, semaphore and message mailbox communication, clock management and memory management. The CPU and other resources are distributed among tasks rationally according to their importance and urgency. The design proposed here provides the position, definition, function and principle of the micro kernel. The kernel runs on the platform of an ATMEL AT89C51 microcontroller. Simulation results prove that the designed micro kernel is stable and reliable and has quick response while operating in an application system.

  17. On Higgs-exchange DIS, physical evolution kernels and fourth-order splitting functions at large x

    International Nuclear Information System (INIS)

    Soar, G.; Vogt, A.; Vermaseren, J.A.M.

    2009-12-01

We present the coefficient functions for deep-inelastic scattering (DIS) via the exchange of a scalar φ directly coupling only to gluons, such as the Higgs boson in the limit of a very heavy top quark and n_f effectively massless light flavours, to the third order in perturbative QCD. The two-loop results are employed to construct the next-to-next-to-leading order physical evolution kernels for the system (F_2, F_φ) of flavour-singlet structure functions. The practical relevance of these kernels as an alternative to MS factorization is bedevilled by artificial double logarithms at small values of the scaling variable x, where the large top-mass limit ceases to be appropriate. However, they show an only single-logarithmic enhancement at large x. Conjecturing that this feature persists to the next order also in the present singlet case, the three-loop coefficient functions facilitate exact predictions (backed up by their particular colour structure) of the double-logarithmic contributions to the fourth-order singlet splitting functions, i.e., of the terms (1-x)^a ln^k(1-x) with k = 4, 5, 6 and k = 3, 4, 5, respectively, for the off-diagonal and diagonal quantities, to all powers a in (1-x). (orig.)

  18. Graph Transforming Java Data

    NARCIS (Netherlands)

    de Mol, M.J.; Rensink, Arend; Hunt, James J.

    This paper introduces an approach for adding graph transformation-based functionality to existing JAVA programs. The approach relies on a set of annotations to identify the intended graph structure, as well as on user methods to manipulate that structure, within the user’s own JAVA class

  19. Graph of growth data - The Rice Growth Monitoring for The Phenotypic Functional Analysis | LSDB Archive [Life Science Database Archive metadata

    Lifescience Database Archive (English)

Full Text Available The Rice Growth Monitoring for The Phenotypic Functional Analysis: Graph of growth data. Data name: Graph of growth data. DOI: 10.18908/lsdba.nbdc00945-003. Description of data contents: the graph of chronological changes in root, coleoptile, the first leaf, and the second leaf. Data file name: growth_data_graph.zip. File URL: ftp://ftp.biosciencedbc.jp/archive/agritogo-rice-phenome/LATEST/data/growth...

  20. One loop partition function of six dimensional conformal gravity using heat kernel on AdS

    Energy Technology Data Exchange (ETDEWEB)

    Lovreković, Iva [Institute for Theoretical Physics, Technische Universität Wien,Wiedner Hauptstrasse 8-10/136, A-1040 Vienna (Austria)

    2016-10-13

We compute the heat kernel for the Laplacians of symmetric transverse traceless fields of arbitrary spin on an AdS background in an even number of dimensions using the group theoretic approach introduced in http://dx.doi.org/10.1007/JHEP11(2011)010, and apply it to the partition function of six-dimensional conformal gravity. The obtained partition function consists of the Einstein gravity, conformal ghost, and two modes that contain mass.

  1. Per-Sample Multiple Kernel Approach for Visual Concept Learning

    Directory of Open Access Journals (Sweden)

    Ling-Yu Duan

    2010-01-01

Full Text Available Learning visual concepts from images is an important yet challenging problem in computer vision and multimedia research areas. Multiple kernel learning (MKL) methods have shown great advantages in visual concept learning. As a visual concept often exhibits great appearance variance, a canonical MKL approach may not generate satisfactory results when a uniform kernel combination is applied over the input space. In this paper, we propose a per-sample multiple kernel learning (PS-MKL) approach to take into account intraclass diversity for improving discrimination. PS-MKL determines sample-wise kernel weights according to kernel functions and training samples. Kernel weights as well as kernel-based classifiers are jointly learned. For efficient learning, PS-MKL employs a sample selection strategy. Extensive experiments are carried out over three benchmarking datasets of different characteristics including Caltech101, WikipediaMM, and Pascal VOC'07. PS-MKL has achieved encouraging performance, comparable to the state of the art, and has outperformed a canonical MKL.

  2. Per-Sample Multiple Kernel Approach for Visual Concept Learning

    Directory of Open Access Journals (Sweden)

    Tian Yonghong

    2010-01-01

Full Text Available Learning visual concepts from images is an important yet challenging problem in computer vision and multimedia research areas. Multiple kernel learning (MKL) methods have shown great advantages in visual concept learning. As a visual concept often exhibits great appearance variance, a canonical MKL approach may not generate satisfactory results when a uniform kernel combination is applied over the input space. In this paper, we propose a per-sample multiple kernel learning (PS-MKL) approach to take into account intraclass diversity for improving discrimination. PS-MKL determines sample-wise kernel weights according to kernel functions and training samples. Kernel weights as well as kernel-based classifiers are jointly learned. For efficient learning, PS-MKL employs a sample selection strategy. Extensive experiments are carried out over three benchmarking datasets of different characteristics including Caltech101, WikipediaMM, and Pascal VOC'07. PS-MKL has achieved encouraging performance, comparable to the state of the art, and has outperformed a canonical MKL.

  3. Control Transfer in Operating System Kernels

    Science.gov (United States)

    1994-05-13

microkernel system that runs less code in the kernel address space. To realize the performance benefit of allocating stacks in unmapped kseg0 memory, the... review how I modified the Mach 3.0 kernel to use continuations. Because of Mach's message-passing microkernel structure, interprocess communication was... critical control transfer paths, deeply-nested call chains are undesirable in any case because of the function call overhead.

  4. On convergence of kernel learning estimators

    NARCIS (Netherlands)

    Norkin, V.I.; Keyzer, M.A.

    2009-01-01

    The paper studies convex stochastic optimization problems in a reproducing kernel Hilbert space (RKHS). The objective (risk) functional depends on functions from this RKHS and takes the form of a mathematical expectation (integral) of a nonnegative integrand (loss function) over a probability

  5. Hardness and softness reactivity kernels within the spin-polarized density-functional theory

    International Nuclear Information System (INIS)

    Chamorro, Eduardo; De Proft, Frank; Geerlings, Paul

    2005-01-01

Generalized hardness and softness reactivity kernels are defined within a spin-polarized density-functional theory (SP-DFT) conceptual framework. These quantities constitute the basis for the global, local (i.e., r-position dependent), and nonlocal (i.e., r- and r′-position dependent) indices devoted to the treatment of both charge-transfer and spin-polarization processes in such a reactivity framework. The exact relationships between these descriptors within the SP-DFT framework are derived and the implications for chemical reactivity in this context are outlined.

  6. MULTITASKER, Multitasking Kernel for C and FORTRAN Under UNIX

    International Nuclear Information System (INIS)

    Brooks, E.D. III

    1988-01-01

    1 - Description of program or function: MULTITASKER implements a multitasking kernel for the C and FORTRAN programming languages that runs under UNIX. The kernel provides a multitasking environment which serves two purposes. The first is to provide an efficient portable environment for the development, debugging, and execution of production multiprocessor programs. The second is to provide a means of evaluating the performance of a multitasking program on model multiprocessor hardware. The performance evaluation features require no changes in the application program source and are implemented as a set of compile- and run-time options in the kernel. 2 - Method of solution: The FORTRAN interface to the kernel is identical in function to the CRI multitasking package provided for the Cray XMP. This provides a migration path to high speed (but small N) multiprocessors once the application has been coded and debugged. With use of the UNIX m4 macro preprocessor, source compatibility can be achieved between the UNIX code development system and the target Cray multiprocessor. The kernel also provides a means of evaluating a program's performance on model multiprocessors. Execution traces may be obtained which allow the user to determine kernel overhead, memory conflicts between various tasks, and the average concurrency being exploited. The kernel may also be made to switch tasks every cpu instruction with a random execution ordering. This allows the user to look for unprotected critical regions in the program. These features, implemented as a set of compile- and run-time options, cause extra execution overhead which is not present in the standard production version of the kernel

  7. Variable kernel density estimation in high-dimensional feature spaces

    CSIR Research Space (South Africa)

    Van der Walt, Christiaan M

    2017-02-01

Full Text Available Estimating the joint probability density function of a dataset is a central task in many machine learning applications. In this work we address the fundamental problem of kernel bandwidth estimation for variable kernel density estimation in high-dimensional feature spaces.
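
A brief sketch of one common variable-bandwidth (balloon) kernel density estimator, where each query point's bandwidth is its distance to the k-th nearest sample; this is a generic illustration, not the estimator developed in the work above:

```python
import numpy as np
from scipy.spatial import cKDTree

def balloon_kde(samples, queries, k=10):
    """Gaussian KDE whose bandwidth at each query point is the distance to its k-th nearest sample."""
    n, d = samples.shape
    tree = cKDTree(samples)
    h = tree.query(queries, k=k)[0][:, -1]               # per-query bandwidth
    diffs = queries[:, None, :] - samples[None, :, :]
    sq = (diffs ** 2).sum(-1)
    norm = (2.0 * np.pi) ** (d / 2.0) * h ** d
    return np.exp(-0.5 * sq / h[:, None] ** 2).sum(1) / (n * norm)

samples = np.random.randn(500, 3)
queries = np.random.randn(20, 3)
density = balloon_kde(samples, queries)
```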

  8. Dimensional feature weighting utilizing multiple kernel learning for single-channel talker location discrimination using the acoustic transfer function.

    Science.gov (United States)

    Takashima, Ryoichi; Takiguchi, Tetsuya; Ariki, Yasuo

    2013-02-01

    This paper presents a method for discriminating the location of the sound source (talker) using only a single microphone. In a previous work, the single-channel approach for discriminating the location of the sound source was discussed, where the acoustic transfer function from a user's position is estimated by using a hidden Markov model of clean speech in the cepstral domain. In this paper, each cepstral dimension of the acoustic transfer function is newly weighted, in order to obtain the cepstral dimensions having information that is useful for classifying the user's position. Then, this paper proposes a feature-weighting method for the cepstral parameter using multiple kernel learning, defining the base kernels for each cepstral dimension of the acoustic transfer function. The user's position is trained and classified by support vector machine. The effectiveness of this method has been confirmed by sound source (talker) localization experiments performed in different room environments.

  9. On revealing graph cycles via boundary measurements

    International Nuclear Information System (INIS)

    Belishev, M I; Wada, N

    2009-01-01

This paper deals with boundary value inverse problems on a metric graph, the structure of the graph being assumed unknown. The question under consideration is how to detect from the dynamical and/or spectral inverse data whether the graph contains cycles (is not a tree). For any graph Ω, the dynamical as well as spectral boundary inverse data determine the so-called wave diameter d_w : H^{-1}(Ω) → R, defined on functionals supported in the graph. The known fact is that if Ω is a tree then d_w ≥ 0 holds and, in this case, the inverse data determine Ω up to isometry. A graph Ω is said to be coordinate if the functions {dist_Ω(·, γ)}_{γ∈∂Ω} constitute a coordinate system on Ω. For such graphs, we propose a procedure which reveals the presence/absence of cycles. The hypothesis is that Ω contains cycles if and only if d_w takes negative values. We do not justify this hypothesis in the general case but reduce it to a certain special class of graphs (suns).

  10. Hierarchical organisation of causal graphs

    International Nuclear Information System (INIS)

    Dziopa, P.

    1993-01-01

This paper deals with the design of a supervision system using a hierarchy of models formed by graphs, in which the variables are the nodes and the causal relations between the variables are the arcs. To obtain a representation of the variables' evolutions which contains only the relevant features of their real evolutions, the causal relations are completed with qualitative transfer functions (QTFs) which roughly reproduce the behaviour of classical transfer functions. Major improvements have been made in the building of the hierarchical organization. First, the basic variables of the uppermost level and the causal relations between them are chosen. The next graph is built by adding intermediary variables to the upper graph. When the undermost graph has been built, the transfer function parameters corresponding to its causal relations are identified. The second task consists in the upwelling of the information from the undermost graph to the uppermost one. A fusion procedure for the causal relations has been designed to compute the QTFs relevant for each level. This procedure aims to reduce the number of parameters needed to represent an evolution at a high level of abstraction. These techniques have been applied to the hierarchical modelling of a nuclear process. (authors). 8 refs., 12 figs

  11. Semi-supervised learning for ordinal Kernel Discriminant Analysis.

    Science.gov (United States)

    Pérez-Ortiz, M; Gutiérrez, P A; Carbonero-Ruz, M; Hervás-Martínez, C

    2016-12-01

    Ordinal classification considers those classification problems where the labels of the variable to predict follow a given order. Naturally, labelled data is scarce or difficult to obtain in this type of problems because, in many cases, ordinal labels are given by a user or expert (e.g. in recommendation systems). Firstly, this paper develops a new strategy for ordinal classification where both labelled and unlabelled data are used in the model construction step (a scheme which is referred to as semi-supervised learning). More specifically, the ordinal version of kernel discriminant learning is extended for this setting considering the neighbourhood information of unlabelled data, which is proposed to be computed in the feature space induced by the kernel function. Secondly, a new method for semi-supervised kernel learning is devised in the context of ordinal classification, which is combined with our developed classification strategy to optimise the kernel parameters. The experiments conducted compare 6 different approaches for semi-supervised learning in the context of ordinal classification in a battery of 30 datasets, showing (1) the good synergy of the ordinal version of discriminant analysis and the use of unlabelled data and (2) the advantage of computing distances in the feature space induced by the kernel function. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Combined Kernel-Based BDT-SMO Classification of Hyperspectral Fused Images

    Directory of Open Access Journals (Sweden)

    Fenghua Huang

    2014-01-01

Full Text Available To solve the poor generalization and flexibility problems that single kernel SVM classifiers have while classifying combined spectral and spatial features, this paper proposed a solution to improve the classification accuracy and efficiency of hyperspectral fused images: (1) different radial basis kernel functions (RBFs) are employed for spectral and textural features, and a new combined radial basis kernel function (CRBF) is proposed by combining them in a weighted manner; (2) the binary decision tree-based multiclass SMO (BDT-SMO) is used in the classification of hyperspectral fused images; (3) experiments are carried out, where the single radial basis function (SRBF)-based BDT-SMO classifier and the CRBF-based BDT-SMO classifier are used, respectively, to classify the land usages of hyperspectral fused images, and genetic algorithms (GA) are used to optimize the kernel parameters of the classifiers. The results show that, compared with SRBF, CRBF-based BDT-SMO classifiers display greater classification accuracy and efficiency.

  13. Optimal kernel shape and bandwidth for atomistic support of continuum stress

    International Nuclear Information System (INIS)

    Ulz, Manfred H; Moran, Sean J

    2013-01-01

    The treatment of atomistic scale interactions via molecular dynamics simulations has recently found favour for multiscale modelling within engineering. The estimation of stress at a continuum point on the atomistic scale requires a pre-defined kernel function. This kernel function derives the stress at a continuum point by averaging the contribution from atoms within a region surrounding the continuum point. This averaging volume, and therefore the associated stress at a continuum point, is highly dependent on the bandwidth and shape of the kernel. In this paper we propose an effective and entirely data-driven strategy for simultaneously computing the optimal shape and bandwidth for the kernel. We thoroughly evaluate our proposed approach on copper using three classical elasticity problems. Our evaluation yields three key findings: firstly, our technique can provide a physically meaningful estimation of kernel bandwidth; secondly, we show that a uniform kernel is preferred, thereby justifying the default selection of this kernel shape in future work; and thirdly, we can reliably estimate both of these attributes in a data-driven manner, obtaining values that lead to an accurate estimation of the stress at a continuum point. (paper)

  14. Progressive image denoising through hybrid graph Laplacian regularization: a unified framework.

    Science.gov (United States)

    Liu, Xianming; Zhai, Deming; Zhao, Debin; Zhai, Guangtao; Gao, Wen

    2014-04-01

Recovering images from corrupted observations is necessary for many real-world applications. In this paper, we propose a unified framework to perform progressive image recovery based on hybrid graph Laplacian regularized regression. We first construct a multiscale representation of the target image by a Laplacian pyramid, then progressively recover the degraded image in the scale space from coarse to fine so that the sharp edges and texture can be eventually recovered. On one hand, within each scale, a graph Laplacian regularization model represented by an implicit kernel is learned, which simultaneously minimizes the least squares error on the measured samples and preserves the geometrical structure of the image data space. In this procedure, the intrinsic manifold structure is explicitly considered using both measured and unmeasured samples, and the nonlocal self-similarity property is utilized as a fruitful resource for abstracting a priori knowledge of the images. On the other hand, between two successive scales, the proposed model is extended to a projected high-dimensional feature space through explicit kernel mapping to describe the interscale correlation, in which the local structure regularity is learned and propagated from coarser to finer scales. In this way, the proposed algorithm gradually recovers more and more image details and edges, which could not be recovered at the previous scale. We test our algorithm on one typical image recovery task: impulse noise removal. Experimental results on benchmark test images demonstrate that the proposed method achieves better performance than state-of-the-art algorithms.
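
A compact sketch of the kind of graph Laplacian regularized regression the framework above builds on: fit values at measured samples while penalizing roughness over a Gaussian similarity graph; the toy data and closed-form solve are illustrative, not the multiscale pipeline itself:

```python
import numpy as np

def graph_laplacian_regression(X, y_measured, measured_mask, sigma=1.0, lam=0.1):
    """Solve min_f ||S f - y||^2 + lam * f^T L f, where L is the graph Laplacian of a
    Gaussian similarity graph over all samples and S selects the measured ones."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-sq / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    L = np.diag(W.sum(1)) - W                      # combinatorial graph Laplacian
    S = np.diag(measured_mask.astype(float))       # selection of measured samples
    y = np.where(measured_mask, y_measured, 0.0)
    return np.linalg.solve(S + lam * L, y)         # normal equations: (S + lam L) f = S y

X = np.random.rand(200, 2)                         # e.g. pixel coordinates / local features
y_true = np.sin(4 * X[:, 0]) + X[:, 1]
mask = np.random.rand(200) < 0.4                   # 40% of samples are "measured"
f = graph_laplacian_regression(X, y_true, mask)
```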

  15. Ideal gas scattering kernel for energy dependent cross-sections

    International Nuclear Information System (INIS)

    Rothenstein, W.; Dagan, R.

    1998-01-01

    A third, and final, paper on the calculation of the joint kernel for neutron scattering by an ideal gas in thermal agitation is presented, when the scattering cross-section is energy dependent. The kernel is a function of the neutron energy after scattering, and of the cosine of the scattering angle, as in the case of the ideal gas kernel for a constant bound atom scattering cross-section. The final expression is suitable for numerical calculations

  16. Approximate kernel competitive learning.

    Science.gov (United States)

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large-scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be calculated and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large-scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL), which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling would work for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large-scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL can perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision compared with related approximate clustering approaches. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. Linear and kernel methods for multivariate change detection

    DEFF Research Database (Denmark)

    Canty, Morton J.; Nielsen, Allan Aasbjerg

    2012-01-01

    ), as well as maximum autocorrelation factor (MAF) and minimum noise fraction (MNF) analyses of IR-MAD images, both linear and kernel-based (nonlinear), may further enhance change signals relative to no-change background. IDL (Interactive Data Language) implementations of IR-MAD, automatic radiometric...... normalization, and kernel PCA/MAF/MNF transformations are presented that function as transparent and fully integrated extensions of the ENVI remote sensing image analysis environment. The train/test approach to kernel PCA is evaluated against a Hebbian learning procedure. Matlab code is also available...... that allows fast data exploration and experimentation with smaller datasets. New, multiresolution versions of IR-MAD that accelerate convergence and that further reduce no-change background noise are introduced. Computationally expensive matrix diagonalization and kernel image projections are programmed...

  18. Two-stage sparse coding of region covariance via Log-Euclidean kernels to detect saliency.

    Science.gov (United States)

    Zhang, Ying-Ying; Yang, Cai; Zhang, Ping

    2017-05-01

In this paper, we present a novel bottom-up saliency detection algorithm from the perspective of covariance matrices on a Riemannian manifold. Each superpixel is described by a region covariance matrix on a Riemannian manifold. We carry out a two-stage sparse coding scheme via Log-Euclidean kernels to extract salient objects efficiently. In the first stage, given a background dictionary on image borders, sparse coding of each region covariance via Log-Euclidean kernels is performed. The reconstruction error on the background dictionary is regarded as the initial saliency of each superpixel. In the second stage, an improvement of the initial result is achieved by calculating reconstruction errors of the superpixels on a foreground dictionary, which is extracted from the first-stage saliency map. The sparse coding in the second stage is similar to the first stage, but is able to effectively highlight the salient objects uniformly from the background. Finally, three post-processing methods (a highlight-inhibition function, context-based saliency weighting, and graph cut) are adopted to further refine the saliency map. Experiments on four public benchmark datasets show that the proposed algorithm outperforms the state-of-the-art methods in terms of precision, recall and mean absolute error, and demonstrate the robustness and efficiency of the proposed method. Copyright © 2017 Elsevier Ltd. All rights reserved.
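
A small sketch of a Log-Euclidean Gaussian kernel between region covariance descriptors, the kind of kernel the two-stage sparse coding above relies on; the feature extraction and parameters are placeholders:

```python
import numpy as np
from scipy.linalg import logm

def log_euclidean_kernel(covs_a, covs_b, sigma=1.0):
    """k(X, Y) = exp(-||logm(X) - logm(Y)||_F^2 / (2 sigma^2)) for SPD covariance matrices."""
    logs_a = [logm(C).real for C in covs_a]
    logs_b = [logm(C).real for C in covs_b]
    K = np.zeros((len(logs_a), len(logs_b)))
    for i, La in enumerate(logs_a):
        for j, Lb in enumerate(logs_b):
            K[i, j] = np.exp(-np.linalg.norm(La - Lb, "fro") ** 2 / (2.0 * sigma ** 2))
    return K

def region_covariance(features):
    """Covariance descriptor of an (n_pixels x n_features) matrix, regularized to stay SPD."""
    return np.cov(features, rowvar=False) + 1e-6 * np.eye(features.shape[1])

regions = [np.random.randn(100, 5) for _ in range(8)]        # placeholder superpixel features
covs = [region_covariance(F) for F in regions]
K = log_euclidean_kernel(covs, covs)
```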

  19. General graviton exchange graph for four-point functions in the AdS/CFT correspondence

    International Nuclear Information System (INIS)

    Leonhardt, Thorsten; Ruehl, Werner

    2003-01-01

    In this paper, we explicitly compute the graviton exchange graph for scalar fields with arbitrary conformal dimension Δ in arbitrary spacetime dimension d. This results in an analytical function in Δ as well as in d

  20. Software for Graph Analysis and Visualization

    Directory of Open Access Journals (Sweden)

    M. I. Kolomeychenko

    2014-01-01

Full Text Available This paper describes software for graph storage, analysis and visualization. The article presents a comparative analysis of existing software for the analysis and visualization of graphs, describes the overall architecture of the application and the basic principles of the construction and operation of its main modules. Furthermore, a description of the developed graph storage, oriented to the storage and processing of large-scale graphs, is presented. The developed community detection algorithm and the implemented graph auto-layout algorithms are the main functionality of the product. The main advantage of the developed software is the high-speed processing of large networks (up to millions of nodes and links). Moreover, the proposed graph storage architecture is unique and has no analogues. The developed approaches and algorithms are optimized for operating with big graphs and have high productivity.

  1. Putting Priors in Mixture Density Mercer Kernels

    Science.gov (United States)

    Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd

    2004-01-01

This paper presents a new methodology for automatic knowledge-driven data mining based on the theory of Mercer kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn the kernel function directly from data, rather than using predefined kernels. These data-adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing for physical information to be encoded in the model. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS). The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code. The results show that the Mixture Density Mercer Kernel described here outperforms tree-based classification in distinguishing high-redshift galaxies from low-redshift galaxies by approximately 16% on test data, bagged trees by approximately 7%, and bagged trees built on a much larger sample of data by approximately 2%.

  2. Using Graph and Vertex Entropy to Compare Empirical Graphs with Theoretical Graph Models

    Directory of Open Access Journals (Sweden)

    Tomasz Kajdanowicz

    2016-09-01

    Full Text Available Over the years, several theoretical graph generation models have been proposed. Among the most prominent are: the Erdős–Rényi random graph model, Watts–Strogatz small world model, Albert–Barabási preferential attachment model, Price citation model, and many more. Often, researchers working with real-world data are interested in understanding the generative phenomena underlying their empirical graphs. They want to know which of the theoretical graph generation models would most probably generate a particular empirical graph. In other words, they expect some similarity assessment between the empirical graph and graphs artificially created from theoretical graph generation models. Usually, in order to assess the similarity of two graphs, centrality measure distributions are compared. For a theoretical graph model this means comparing the empirical graph to a single realization of a theoretical graph model, where the realization is generated from the given model using an arbitrary set of parameters. The similarity between centrality measure distributions can be measured using standard statistical tests, e.g., the Kolmogorov–Smirnov test of distances between cumulative distributions. However, this approach is both error-prone and leads to incorrect conclusions, as we show in our experiments. Therefore, we propose a new method for graph comparison and type classification by comparing the entropies of centrality measure distributions (degree centrality, betweenness centrality, closeness centrality). We demonstrate that our approach can help assign the empirical graph to the most similar theoretical model using a simple unsupervised learning method.
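
    As a toy illustration of the entropy-based comparison described above, one can compute the Shannon entropy of a binned centrality distribution for the empirical graph and for single realizations of theoretical models. The sketch below uses degree centrality only and networkx's built-in generators; it is a simplified stand-in for the paper's graph and vertex entropy measures, with arbitrary parameters.

```python
import numpy as np
import networkx as nx

def degree_entropy(G, bins=20):
    """Shannon entropy of the (binned) degree-centrality distribution of a graph."""
    deg = np.array(list(nx.degree_centrality(G).values()))
    p, _ = np.histogram(deg, bins=bins)
    p = p / p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# compare an "empirical" graph against single realizations of two theoretical models
empirical = nx.karate_club_graph()
n, m = empirical.number_of_nodes(), empirical.number_of_edges()
er = nx.gnm_random_graph(n, m, seed=1)                    # Erdős–Rényi with matched size
ba = nx.barabasi_albert_graph(n, max(1, m // n), seed=1)  # preferential attachment

for name, G in [("empirical", empirical), ("Erdős–Rényi", er), ("Barabási–Albert", ba)]:
    print(name, round(degree_entropy(G), 3))
```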

  3. Multivariable Christoffel-Darboux Kernels and Characteristic Polynomials of Random Hermitian Matrices

    Directory of Open Access Journals (Sweden)

    Hjalmar Rosengren

    2006-12-01

    Full Text Available We study multivariable Christoffel-Darboux kernels, which may be viewed as reproducing kernels for antisymmetric orthogonal polynomials, and also as correlation functions for products of characteristic polynomials of random Hermitian matrices. Using their interpretation as reproducing kernels, we obtain simple proofs of Pfaffian and determinant formulas, as well as Schur polynomial expansions, for such kernels. In subsequent work, these results are applied in combinatorics (enumeration of marked shifted tableaux and number theory (representation of integers as sums of squares.

  4. Exact Heat Kernel on a Hypersphere and Its Applications in Kernel SVM

    Directory of Open Access Journals (Sweden)

    Chenchao Zhao

    2018-01-01

    Full Text Available Many contemporary statistical learning methods assume a Euclidean feature space. This paper presents a method for defining similarity based on hyperspherical geometry and shows that it often improves the performance of support vector machine compared to other competing similarity measures. Specifically, the idea of using heat diffusion on a hypersphere to measure similarity has been previously proposed and tested by Lafferty and Lebanon [1], demonstrating promising results based on a heuristic heat kernel obtained from the zeroth order parametrix expansion; however, how well this heuristic kernel agrees with the exact hyperspherical heat kernel remains unknown. This paper presents a higher order parametrix expansion of the heat kernel on a unit hypersphere and discusses several problems associated with this expansion method. We then compare the heuristic kernel with an exact form of the heat kernel expressed in terms of a uniformly and absolutely convergent series in high-dimensional angular momentum eigenmodes. Being a natural measure of similarity between sample points dwelling on a hypersphere, the exact kernel often shows superior performance in kernel SVM classifications applied to text mining, tumor somatic mutation imputation, and stock market analysis.
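
    For concreteness, here is a sketch of the Gegenbauer-series form of the hyperspherical heat kernel used as an SVM similarity. The overall normalization constant is dropped (it does not affect the SVM), the series is truncated, and the data and labels are synthetic; this is an illustration of the series expansion discussed above, not the authors' implementation.

```python
import numpy as np
from scipy.special import eval_gegenbauer
from sklearn.svm import SVC

def hypersphere_heat_kernel(X, Y, t=0.1, n_terms=30):
    """Truncated Gegenbauer-series heat kernel on the unit sphere S^{d-1}, d >= 3.

    K_t(x, y) ~ sum_l exp(-l(l+d-2) t) * (2l+d-2)/(d-2) * C_l^{(d-2)/2}(<x, y>),
    up to an overall normalization constant (irrelevant for SVM similarity).
    A sketch of the series form, not the authors' exact code.
    """
    d = X.shape[1]
    alpha = (d - 2) / 2.0
    cos = np.clip(X @ Y.T, -1.0, 1.0)          # inner products of unit vectors
    K = np.zeros_like(cos)
    for l in range(n_terms):
        coeff = np.exp(-l * (l + d - 2) * t) * (2 * l + d - 2) / (d - 2)
        K += coeff * eval_gegenbauer(l, alpha, cos)
    return K

# toy usage: L2-normalized features live on the unit sphere
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5)); X /= np.linalg.norm(X, axis=1, keepdims=True)
y = (X[:, 0] > 0).astype(int)
clf = SVC(kernel="precomputed").fit(hypersphere_heat_kernel(X, X), y)
print(clf.score(hypersphere_heat_kernel(X, X), y))
```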

  5. PERCEVAL v4.0: a new PC code for gamma radiation studies based upon most recent development about point kernel

    Energy Technology Data Exchange (ETDEWEB)

    Bindel, Laurent; Clouet, Laurent; Castanier, Eric; Bonnet, Jerome; Fleury, Guillaume; Vermuse, Manuel; Gamess, Andre; Lejeune, Eric [Societe Generale pour les techniques Nouvelles, Saint Quentin en Yvelines (France)

    2000-03-01

    The present paper presents the capabilities of a new code named PERCEVAL v4.0 and based on the new point kernel described in an attached issue. Two linked codes named SPECTRE_G and GRAPH_3D are part of the code package in order to establish the energetic source term and visualize the tri-dimensional scene respectively. (author)

  6. RTOS kernel in portable electrocardiograph

    Science.gov (United States)

    Centeno, C. A.; Voos, J. A.; Riva, G. G.; Zerbini, C.; Gonzalez, E. A.

    2011-12-01

    This paper presents the use of a Real Time Operating System (RTOS) on a portable electrocardiograph based on a microcontroller platform. All medical device digital functions are performed by the microcontroller. The electrocardiograph CPU is based on the 18F4550 microcontroller, in which an uCOS-II RTOS can be embedded. The decision associated with the kernel use is based on its benefits, the license for educational use and its intrinsic time control and peripherals management. The feasibility of its use on the electrocardiograph is evaluated based on the minimum memory requirements due to the kernel structure. The kernel's own tools were used for time estimation and evaluation of resources used by each process. After this feasibility analysis, the migration from cyclic code to a structure based on separate processes or tasks able to synchronize events is used; resulting in an electrocardiograph running on one Central Processing Unit (CPU) based on RTOS.

  7. RTOS kernel in portable electrocardiograph

    International Nuclear Information System (INIS)

    Centeno, C A; Voos, J A; Riva, G G; Zerbini, C; Gonzalez, E A

    2011-01-01

    This paper presents the use of a Real Time Operating System (RTOS) on a portable electrocardiograph based on a microcontroller platform. All medical device digital functions are performed by the microcontroller. The electrocardiograph CPU is based on the 18F4550 microcontroller, in which an uCOS-II RTOS can be embedded. The decision associated with the kernel use is based on its benefits, the license for educational use and its intrinsic time control and peripherals management. The feasibility of its use on the electrocardiograph is evaluated based on the minimum memory requirements due to the kernel structure. The kernel's own tools were used for time estimation and evaluation of resources used by each process. After this feasibility analysis, the migration from cyclic code to a structure based on separate processes or tasks able to synchronize events is used; resulting in an electrocardiograph running on one Central Processing Unit (CPU) based on RTOS.

  8. Parameter optimization in the regularized kernel minimum noise fraction transformation

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Vestergaard, Jacob Schack

    2012-01-01

    Based on the original, linear minimum noise fraction (MNF) transformation and kernel principal component analysis, a kernel version of the MNF transformation was recently introduced. Inspired by this, we here give a simple method for finding optimal parameters in a regularized version of kernel MNF...... analysis. We consider the model signal-to-noise ratio (SNR) as a function of the kernel parameters and the regularization parameter. In 2-4 steps of increasingly refined grid searches we find the parameters that maximize the model SNR. An example based on data from the DLR 3K camera system is given....

  9. Reproducing Kernels and Coherent States on Julia Sets

    Energy Technology Data Exchange (ETDEWEB)

    Thirulogasanthar, K., E-mail: santhar@cs.concordia.ca; Krzyzak, A. [Concordia University, Department of Computer Science and Software Engineering (Canada)], E-mail: krzyzak@cs.concordia.ca; Honnouvo, G. [Concordia University, Department of Mathematics and Statistics (Canada)], E-mail: g_honnouvo@yahoo.fr

    2007-11-15

    We construct classes of coherent states on domains arising from dynamical systems. An orthonormal family of vectors associated to the generating transformation of a Julia set is found as a family of square integrable vectors, and, thereby, reproducing kernels and reproducing kernel Hilbert spaces are associated to Julia sets. We also present analogous results on domains arising from iterated function systems.

  10. Reproducing Kernels and Coherent States on Julia Sets

    International Nuclear Information System (INIS)

    Thirulogasanthar, K.; Krzyzak, A.; Honnouvo, G.

    2007-01-01

    We construct classes of coherent states on domains arising from dynamical systems. An orthonormal family of vectors associated to the generating transformation of a Julia set is found as a family of square integrable vectors, and, thereby, reproducing kernels and reproducing kernel Hilbert spaces are associated to Julia sets. We also present analogous results on domains arising from iterated function systems.

  11. On the logarithmic-singularity correction in the kernel function method of subsonic lifting-surface theory

    Science.gov (United States)

    Lan, C. E.; Lamar, J. E.

    1977-01-01

    A logarithmic-singularity correction factor is derived for use in kernel function methods associated with Multhopp's subsonic lifting-surface theory. Because of the form of the factor, a relation was formulated between the numbers of chordwise and spanwise control points needed for good accuracy. This formulation is developed and discussed. Numerical results are given to show the improvement of the computation with the new correction factor.

  12. Classical dynamics on graphs

    International Nuclear Information System (INIS)

    Barra, F.; Gaspard, P.

    2001-01-01

    We consider the classical evolution of a particle on a graph by using a time-continuous Frobenius-Perron operator that generalizes previous propositions. In this way, the relaxation rates as well as the chaotic properties can be defined for the time-continuous classical dynamics on graphs. These properties are given as the zeros of some periodic-orbit zeta functions. We consider in detail the case of infinite periodic graphs where the particle undergoes a diffusion process. The infinite spatial extension is taken into account by Fourier transforms that decompose the observables and probability densities into sectors corresponding to different values of the wave number. The hydrodynamic modes of diffusion are studied by an eigenvalue problem of a Frobenius-Perron operator corresponding to a given sector. The diffusion coefficient is obtained from the hydrodynamic modes of diffusion and has the Green-Kubo form. Moreover, we study finite but large open graphs that converge to the infinite periodic graph when their size goes to infinity. The lifetime of the particle on the open graph is shown to correspond to the lifetime of a system that undergoes a diffusion process before it escapes.

  13. Functional classification of protein structures by local structure matching in graph representation.

    Science.gov (United States)

    Mills, Caitlyn L; Garg, Rohan; Lee, Joslynn S; Tian, Liang; Suciu, Alexandru; Cooperman, Gene; Beuning, Penny J; Ondrechen, Mary Jo

    2018-03-31

    As a result of high-throughput protein structure initiatives, over 14,400 protein structures have been solved by structural genomics (SG) centers and participating research groups. While the totality of SG data represents a tremendous contribution to genomics and structural biology, reliable functional information for these proteins is generally lacking. Better functional predictions for SG proteins will add substantial value to the structural information already obtained. Our method described herein, Graph Representation of Active Sites for Prediction of Function (GRASP-Func), predicts quickly and accurately the biochemical function of proteins by representing residues at the predicted local active site as graphs rather than in Cartesian coordinates. We compare the GRASP-Func method to our previously reported method, structurally aligned local sites of activity (SALSA), using the ribulose phosphate binding barrel (RPBB), 6-hairpin glycosidase (6-HG), and Concanavalin A-like Lectins/Glucanase (CAL/G) superfamilies as test cases. In each of the superfamilies, SALSA and the much faster method GRASP-Func yield similar correct classification of previously characterized proteins, providing a validated benchmark for the new method. In addition, we analyzed SG proteins using our SALSA and GRASP-Func methods to predict function. Forty-one SG proteins in the RPBB superfamily, nine SG proteins in the 6-HG superfamily, and one SG protein in the CAL/G superfamily were successfully classified into one of the functional families in their respective superfamily by both methods. This improved, faster, validated computational method can yield more reliable predictions of function that can be used for a wide variety of applications by the community. © 2018 The Authors Protein Science published by Wiley Periodicals, Inc. on behalf of The Protein Society.

  14. Multiple graph regularized protein domain ranking.

    Science.gov (United States)

    Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin

    2012-11-19

    Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.
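
    A stripped-down version of the alternating scheme described above can be sketched as follows. The weight update here is a simple softmax heuristic rather than the closed-form update derived in the paper, the graphs are random toys, and all parameter names are illustrative assumptions.

```python
import numpy as np

def multi_graph_rank(laplacians, y, alpha=1.0, beta=1.0, n_iter=20):
    """Simplified multi-graph regularized ranking (in the spirit of MultiG-Rank).

    Alternates between
      1. solving (sum_t mu_t L_t + alpha I) f = alpha y for the ranking scores f, and
      2. re-weighting each graph by a softmax of its smoothness penalty f^T L_t f.
    The weight update is a heuristic, not necessarily the paper's exact update.
    """
    n = len(y)
    mu = np.full(len(laplacians), 1.0 / len(laplacians))   # start with uniform graph weights
    for _ in range(n_iter):
        L = sum(m * Lt for m, Lt in zip(mu, laplacians))
        f = np.linalg.solve(L + alpha * np.eye(n), alpha * np.asarray(y, dtype=float))
        penalties = np.array([f @ Lt @ f for Lt in laplacians])
        mu = np.exp(-penalties / beta)
        mu /= mu.sum()
    return f, mu

# toy usage: two random graphs over 6 items, query relevance on the first item
rng = np.random.default_rng(0)
def rand_laplacian(n):
    W = rng.random((n, n)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)
    return np.diag(W.sum(1)) - W

scores, weights = multi_graph_rank([rand_laplacian(6), rand_laplacian(6)], y=[1, 0, 0, 0, 0, 0])
print(np.round(scores, 3), np.round(weights, 3))
```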

  15. Effective and efficient Grassfinch kernel for SVM classification and its application to recognition based on image set

    International Nuclear Information System (INIS)

    Du, Genyuan; Tian, Shengli; Qiu, Yingyu; Xu, Chunyan

    2016-01-01

    This paper presents an effective and efficient kernel approach to recognize image sets, where each set is represented as a point on an extended Grassmannian manifold. Several recent studies focus on the applicability of discriminant analysis on the Grassmannian manifold but fail to capture the inherent nonlinear structure of the data itself. Therefore, we propose an extension of the Grassmannian manifold to address this issue. Instead of using a linear data embedding with PCA, we develop a non-linear data embedding of such manifolds using kernel PCA. This paper makes three main contributions: 1) it introduces a non-linear data embedding of the extended Grassmannian manifold, 2) it derives a distance metric on the Grassmannian manifold, and 3) it develops an effective and efficient Grassmannian kernel for SVM classification. The extended Grassmannian manifold naturally arises in applications to recognition based on image sets, such as face and object recognition. Experiments on several standard databases show better classification accuracy. Furthermore, experimental results indicate that our proposed approach significantly reduces time complexity in comparison to graph embedding discriminant analysis.
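
    As an illustration of SVM classification with a Grassmannian kernel, the sketch below uses the standard projection kernel ||Y1ᵀY2||_F² on subspace bases obtained by SVD of each image set. The paper's kernel on the extended Grassmannian manifold may differ in its exact form, and the data, subspace dimension and labels here are synthetic assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def subspace_basis(image_set, dim=3):
    """Orthonormal basis of the span of an image set (columns = vectorized images)."""
    U, _, _ = np.linalg.svd(image_set, full_matrices=False)
    return U[:, :dim]

def projection_kernel(bases_a, bases_b):
    """Grassmannian projection kernel k(Y1, Y2) = ||Y1.T @ Y2||_F^2.

    A standard Grassmann kernel used for illustration; the paper's kernel on the
    extended Grassmannian manifold may be defined differently.
    """
    return np.array([[np.linalg.norm(Y1.T @ Y2, "fro") ** 2 for Y2 in bases_b]
                     for Y1 in bases_a])

# toy usage: each "image set" is a D x N matrix; labels distinguish two classes
rng = np.random.default_rng(0)
sets = [rng.normal(size=(20, 10)) + (i % 2) * rng.normal(size=(20, 1)) for i in range(8)]
bases = [subspace_basis(S) for S in sets]
labels = [i % 2 for i in range(8)]
clf = SVC(kernel="precomputed").fit(projection_kernel(bases, bases), labels)
print(clf.predict(projection_kernel(bases, bases)))
```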

  16. A relationship between Gel'fand-Levitan and Marchenko kernels

    International Nuclear Information System (INIS)

    Kirst, T.; Von Geramb, H.V.; Amos, K.A.

    1989-01-01

    An integral equation which relates the output kernels of the Gel'fand-Levitan and Marchenko inverse scattering equations is specified. Structural details of this integral equation are studied when the S-matrix is a rational function, and the output kernels are separable in terms of Bessel, Hankel and Jost solutions. 4 refs

  17. Subsampling Realised Kernels

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Hansen, Peter Reinhard; Lunde, Asger

    2011-01-01

    In a recent paper we have introduced the class of realised kernel estimators of the increments of quadratic variation in the presence of noise. We showed that this estimator is consistent and derived its limit distribution under various assumptions on the kernel weights. In this paper we extend our...... that subsampling is impotent, in the sense that subsampling has no effect on the asymptotic distribution. Perhaps surprisingly, for the efficient smooth kernels, such as the Parzen kernel, we show that subsampling is harmful as it increases the asymptotic variance. We also study the performance of subsampled...

  18. Common spatial pattern combined with kernel linear discriminate and generalized radial basis function for motor imagery-based brain computer interface applications

    Science.gov (United States)

    Hekmatmanesh, Amin; Jamaloo, Fatemeh; Wu, Huapeng; Handroos, Heikki; Kilpeläinen, Asko

    2018-04-01

    Brain Computer Interface (BCI) can be a challenge for developing robotic, prosthetic and human-controlled systems. This work focuses on the implementation of a common spatial pattern (CSP) based algorithm to detect event-related desynchronization patterns. Following well-known previous work in this area, features are extracted with the filter bank common spatial pattern (FBCSP) method and then weighted by a sensitive learning vector quantization (SLVQ) algorithm. In the current work, applying the radial basis function (RBF) as the mapping kernel of kernel linear discriminant analysis (KLDA) to the weighted features transfers the data into a higher dimension, where the RBF kernel yields better discriminated data scattering. Afterwards, a support vector machine (SVM) with a generalized radial basis function (GRBF) kernel is employed to improve the efficiency and robustness of the classification. On average, 89.60% accuracy and 74.19% robustness are achieved. The BCI Competition III, data set IVa, is used to evaluate the algorithm for detecting right hand and foot imagery movement patterns. Results show that the combination of KLDA with the SVM-GRBF classifier yields improvements of 8.9% in accuracy and 14.19% in robustness. For all subjects, it is concluded that mapping the CSP features into a higher dimension by RBF and utilizing GRBF as the SVM kernel improve the accuracy and reliability of the proposed method.
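
    The GRBF-kernel SVM step can be sketched with scikit-learn's support for callable kernels. The particular parametrization exp(-||x-y||^β / σ) below is one common generalized RBF form and may not match the paper's exact definition; the CSP/FBCSP feature extraction and SLVQ weighting are not reproduced, and the features and labels are synthetic placeholders.

```python
import numpy as np
from sklearn.svm import SVC
from scipy.spatial.distance import cdist

def generalized_rbf(beta=1.0, sigma=1.0):
    """Generalized RBF kernel k(x, y) = exp(-||x - y||^beta / sigma).

    beta = 2 recovers the ordinary Gaussian RBF; this parametrization is one
    common GRBF form and may differ in detail from the paper's kernel.
    """
    def kernel(X, Y):
        return np.exp(-cdist(X, Y) ** beta / sigma)
    return kernel

# toy usage on pre-extracted features (a stand-in for weighted FBCSP features)
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 6))
y = (X[:, :2].sum(axis=1) > 0).astype(int)     # stand-in for right-hand vs foot imagery
clf = SVC(kernel=generalized_rbf(beta=1.5, sigma=2.0)).fit(X, y)
print(clf.score(X, y))
```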

  19. Mechatronic modeling and simulation using bond graphs

    CERN Document Server

    Das, Shuvra

    2009-01-01

    Introduction to Mechatronics and System Modeling; What Is Mechatronics?; What Is a System and Why Model Systems?; Mathematical Modeling Techniques Used in Practice; Software; Bond Graphs: What Are They?; Engineering Systems; Ports; Generalized Variables; Bond Graphs; Basic Components in Systems; A Brief Note about Bond Graph Power Directions; Summary of Bond Direction Rules; Drawing Bond Graphs for Simple Systems: Electrical and Mechanical; Simplification Rules for Junction Structure; Drawing Bond Graphs for Electrical Systems; Drawing Bond Graphs for Mechanical Systems; Causality; Drawing Bond Graphs for Hydraulic and Electronic Components and Systems; Some Basic Properties and Concepts for Fluids; Bond Graph Model of Hydraulic Systems; Electronic Systems; Deriving System Equations from Bond Graphs; System Variables; Deriving System Equations; Tackling Differential Causality; Algebraic Loops; Solution of Model Equations and Their Interpretation; Zeroth Order Systems; First Order Systems; Second Order System; Transfer Functions and Frequency Responses; Numerical Solution ...

  20. Kernel abortion in maize. II. Distribution of 14C among kernel carbohydrates

    International Nuclear Information System (INIS)

    Hanft, J.M.; Jones, R.J.

    1986-01-01

    This study was designed to compare the uptake and distribution of 14C among fructose, glucose, sucrose, and starch in the cob, pedicel, and endosperm tissues of maize (Zea mays L.) kernels induced to abort by high temperature with those that develop normally. Kernels cultured in vitro at 30° and 35°C were transferred to [14C]sucrose media 10 days after pollination. Kernels cultured at 35°C aborted prior to the onset of linear dry matter accumulation. Significant uptake into the cob, pedicel, and endosperm of radioactivity associated with the soluble and starch fractions of the tissues was detected after 24 hours in culture on labeled media. After 8 days in culture on [14C]sucrose media, 48 and 40% of the radioactivity associated with the cob carbohydrates was found in the reducing sugars at 30 and 35°C, respectively. Of the total carbohydrates, a higher percentage of label was associated with sucrose and a lower percentage with fructose and glucose in pedicel tissue of kernels cultured at 35°C compared to kernels cultured at 30°C. These results indicate that sucrose was not cleaved to fructose and glucose as rapidly during the unloading process in the pedicel of kernels induced to abort by high temperature. Kernels cultured at 35°C had a much lower proportion of label associated with endosperm starch (29%) than did kernels cultured at 30°C (89%). Kernels cultured at 35°C had a correspondingly higher proportion of 14C in endosperm fructose, glucose, and sucrose.

  1. Graph topologies on closed multifunctions

    Directory of Open Access Journals (Sweden)

    Giuseppe Di Maio

    2003-10-01

    Full Text Available In this paper we study function space topologies on closed multifunctions, i.e. closed relations on X x Y, using various hypertopologies. The hypertopologies are, in essence, graph topologies, i.e. topologies on functions considered as graphs which are subsets of X x Y. We also study several topologies, including one that is derived from the Attouch-Wets filter on the range. We state embedding theorems which enable us to generalize and prove some recent results in the literature with the use of known results in the hyperspace of the range space and in the function space topologies of ordinary functions.

  2. Structured Kernel Dictionary Learning with Correlation Constraint for Object Recognition.

    Science.gov (United States)

    Wang, Zhengjue; Wang, Yinghua; Liu, Hongwei; Zhang, Hao

    2017-06-21

    In this paper, we propose a new discriminative non-linear dictionary learning approach, called correlation constrained structured kernel KSVD, for object recognition. The objective function for dictionary learning contains a reconstructive term and a discriminative term. In the reconstructive term, signals are implicitly non-linearly mapped into a space in which a structured kernel dictionary, each sub-dictionary of which lies in the span of the mapped signals from the corresponding class, is established. In the discriminative term, by analyzing the classification mechanism, the correlation constraint is proposed in kernel form, constraining the correlations between different discriminative codes and restricting the coefficient vectors to be transformed into a feature space where the features are highly correlated within a class and nearly independent between classes. The objective function is optimized by the proposed structured kernel KSVD. During the classification stage, the specific form of the discriminative feature need not be known, while its inner product, with the kernel matrix embedded, is available and is suitable for a linear SVM classifier. Experimental results demonstrate that the proposed approach outperforms many state-of-the-art dictionary learning approaches for face, scene and synthetic aperture radar (SAR) vehicle target recognition.

  3. Abnormalities of functional brain networks in pathological gambling: a graph-theoretical approach

    Science.gov (United States)

    Tschernegg, Melanie; Crone, Julia S.; Eigenberger, Tina; Schwartenbeck, Philipp; Fauth-Bühler, Mira; Lemènager, Tagrid; Mann, Karl; Thon, Natasha; Wurst, Friedrich M.; Kronbichler, Martin

    2013-01-01

    Functional neuroimaging studies of pathological gambling (PG) demonstrate alterations in frontal and subcortical regions of the mesolimbic reward system. However, most investigations were performed using tasks involving reward processing or executive functions. Little is known about brain network abnormalities during task-free resting state in PG. In the present study, graph-theoretical methods were used to investigate network properties of resting state functional magnetic resonance imaging data in PG. We compared 19 patients with PG to 19 healthy controls (HCs) using the Graph Analysis Toolbox (GAT). None of the examined global metrics differed between groups. At the nodal level, pathological gamblers showed a reduced clustering coefficient in the left paracingulate cortex and the left juxtapositional lobe (supplementary motor area, SMA), reduced local efficiency in the left SMA, as well as an increased node betweenness for the left and right paracingulate cortex and the left SMA. At an uncorrected threshold level, the node betweenness in the left inferior frontal gyrus was decreased and increased in the caudate. Additionally, increased functional connectivity between fronto-striatal regions and within frontal regions has been found for the gambling patients. These findings suggest that regions associated with the reward system demonstrate reduced segregation but enhanced integration while regions associated with executive functions demonstrate reduced integration. The present study makes evident that PG is also associated with abnormalities in the topological network structure of the brain during rest. Since alterations in PG cannot be explained by direct effects of abused substances on the brain, these findings will be of relevance for understanding functional connectivity in other addictive disorders. PMID:24098282
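
    The nodal metrics reported above (clustering coefficient, local efficiency, betweenness) can be computed from a thresholded connectivity matrix with networkx. The sketch below is illustrative only: the threshold and the synthetic matrix are arbitrary, and GAT's normalization against random networks and its statistical testing are omitted.

```python
import numpy as np
import networkx as nx

def nodal_graph_metrics(connectivity, threshold=0.3):
    """Clustering coefficient, local efficiency and betweenness per node,
    from a thresholded, binarized functional-connectivity matrix.
    (Illustrative only; GAT applies additional normalization against random networks.)
    """
    A = (np.abs(connectivity) > threshold).astype(int)
    np.fill_diagonal(A, 0)
    G = nx.from_numpy_array(A)
    return {
        "clustering": nx.clustering(G),
        "betweenness": nx.betweenness_centrality(G),
        # local efficiency of a node = global efficiency of its neighborhood subgraph
        "local_efficiency": {n: nx.global_efficiency(G.subgraph(G[n])) for n in G},
    }

# toy usage: symmetric correlation-like matrix over 10 regions of interest
rng = np.random.default_rng(0)
C = rng.uniform(-1, 1, size=(10, 10)); C = (C + C.T) / 2
metrics = nodal_graph_metrics(C)
print({k: round(v, 3) for k, v in metrics["betweenness"].items()})
```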

  4. Abnormalities of Functional Brain Networks in Pathological Gambling: A Graph-Theoretical Approach

    Directory of Open Access Journals (Sweden)

    Melanie eTschernegg

    2013-09-01

    Full Text Available Functional neuroimaging studies of pathological gambling demonstrate alterations in frontal and subcortical regions of the mesolimbic reward system. However, most investigations were performed using tasks involving reward processing or executive functions. Little is known about brain network abnormalities during task-free resting state in pathological gambling. In the present study, graph-theoretical methods were used to investigate network properties of resting state functional MRI data in pathological gambling. We compared 19 patients with pathological gambling to 19 healthy controls using the Graph Analysis Toolbox (GAT). None of the examined global metrics differed between groups. At the nodal level, pathological gamblers showed a reduced clustering coefficient in the left paracingulate cortex and the left juxtapositional lobe (SMA), reduced local efficiency in the left SMA, as well as an increased node betweenness for the left and right paracingulate cortex and the left SMA. At an uncorrected threshold level, the node betweenness in the left inferior frontal gyrus was decreased and increased in the caudate. Additionally, increased functional connectivity between fronto-striatal regions and within frontal regions has been found for the gambling patients. These findings suggest that regions associated with the reward system demonstrate reduced segregation but enhanced integration while regions associated with executive functions demonstrate reduced integration. The present study makes evident that pathological gambling is also associated with abnormalities in the topological network structure of the brain during rest. Since alterations in pathological gambling cannot be explained by direct effects of abused substances on the brain, these findings will be of relevance for understanding functional connectivity in other addictive disorders.

  5. Multiple graph regularized protein domain ranking

    KAUST Repository

    Wang, Jim Jing-Yan

    2012-11-19

    Background: Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. Results: To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. Conclusion: The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications. 2012 Wang et al; licensee BioMed Central Ltd.

  6. Multiple graph regularized protein domain ranking

    KAUST Repository

    Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin

    2012-01-01

    Background: Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. Results: To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. Conclusion: The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications. 2012 Wang et al; licensee BioMed Central Ltd.

  7. Multiple graph regularized protein domain ranking

    Directory of Open Access Journals (Sweden)

    Wang Jim

    2012-11-01

    Full Text Available Abstract Background: Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. Results: To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. Conclusion: The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.

  8. Prediction of heterodimeric protein complexes from weighted protein-protein interaction networks using novel features and kernel functions.

    Directory of Open Access Journals (Sweden)

    Peiying Ruan

    Full Text Available Since many proteins express their functional activity by interacting with other proteins and forming protein complexes, it is very useful to identify sets of proteins that form complexes. For that purpose, many prediction methods for protein complexes from protein-protein interactions have been developed, such as MCL, MCODE, RNSC, PCP, RRW, and NWE. These methods have dealt only with complexes of size greater than three, because they are often based on some density measure of subgraphs. However, heterodimeric protein complexes, which consist of two distinct proteins, account for a large fraction of known complexes according to several comprehensive databases. In this paper, we propose several feature space mappings from protein-protein interaction data, in which each interaction is weighted based on its reliability. Furthermore, we make use of prior knowledge on protein domains to develop a domain composition kernel and its combination kernel with our proposed features. We perform ten-fold cross-validation computational experiments. The results suggest that our proposed kernel considerably outperforms the naive Bayes-based method, which is the best existing method for predicting heterodimeric protein complexes.

  9. Optimized Kernel Entropy Components.

    Science.gov (United States)

    Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau

    2017-06-01

    This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to a sorting of the importance of kernel eigenvectors by entropy instead of variance, as in the kernel principal components analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by means of compacting the information in very few features (often in just one or two). The proposed method produces features which have higher expressive power. In particular, it is based on the independent component analysis framework, and introduces an extra rotation to the eigen decomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both the methods is the selection of the kernel parameter, since it critically affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. The results of both the methods are illustrated in different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, the most successful rule for estimating the kernel parameter is based on maximum likelihood, and OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
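
    To make the entropy-based component ranking concrete, here is a sketch of plain KECA (the baseline that OKECA extends): kernel eigenvectors are ranked by their contribution λ_i(1ᵀe_i)² to the Rényi entropy estimate instead of by variance. The extra ICA-style rotation and the kernel-parameter selection rules studied in the brief are not implemented, and the parameters below are arbitrary.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def keca_features(X, n_components=2, gamma=0.5):
    """Kernel entropy component analysis (the baseline that OKECA extends).

    Eigenvectors of the Gaussian kernel matrix are ranked by their contribution
    lambda_i * (sum of e_i)^2 to the Renyi entropy estimate, rather than by
    variance as in kernel PCA. The extra rotation of OKECA is omitted.
    """
    K = rbf_kernel(X, gamma=gamma)
    lam, E = np.linalg.eigh(K)                     # eigenvalues in ascending order
    entropy_contrib = lam * (E.sum(axis=0) ** 2)   # per-component entropy terms
    order = np.argsort(entropy_contrib)[::-1][:n_components]
    return E[:, order] * np.sqrt(np.clip(lam[order], 0, None))

X = np.random.default_rng(0).normal(size=(100, 4))
Z = keca_features(X, n_components=2)
print(Z.shape)
```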

  10. A graph rewriting programming language for graph drawing

    OpenAIRE

    Rodgers, Peter

    1998-01-01

    This paper describes Grrr, a prototype visual graph drawing tool. Previously there were no visual languages for programming graph drawing algorithms despite the inherently visual nature of the process. The languages which gave a diagrammatic view of graphs were not computationally complete and so could not be used to implement complex graph drawing algorithms. Hence current graph drawing tools are all text based. Recent developments in graph rewriting systems have produced computationally com...

  11. Fast Gaussian kernel learning for classification tasks based on specially structured global optimization.

    Science.gov (United States)

    Zhong, Shangping; Chen, Tianshun; He, Fengying; Niu, Yuzhen

    2014-09-01

    For a practical pattern classification task solved by kernel methods, the computing time is mainly spent on kernel learning (or training). However, the current kernel learning approaches are based on local optimization techniques, and hard to have good time performances, especially for large datasets. Thus the existing algorithms cannot be easily extended to large-scale tasks. In this paper, we present a fast Gaussian kernel learning method by solving a specially structured global optimization (SSGO) problem. We optimize the Gaussian kernel function by using the formulated kernel target alignment criterion, which is a difference of increasing (d.i.) functions. Through using a power-transformation based convexification method, the objective criterion can be represented as a difference of convex (d.c.) functions with a fixed power-transformation parameter. And the objective programming problem can then be converted to a SSGO problem: globally minimizing a concave function over a convex set. The SSGO problem is classical and has good solvability. Thus, to find the global optimal solution efficiently, we can adopt the improved Hoffman's outer approximation method, which need not repeat the searching procedure with different starting points to locate the best local minimum. Also, the proposed method can be proven to converge to the global solution for any classification task. We evaluate the proposed method on twenty benchmark datasets, and compare it with four other Gaussian kernel learning methods. Experimental results show that the proposed method stably achieves both good time-efficiency performance and good classification performance. Copyright © 2014 Elsevier Ltd. All rights reserved.
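
    The kernel-target alignment criterion mentioned above has a simple closed form, sketched below together with a coarse one-dimensional search over the Gaussian width. This stands in for, and is much weaker than, the paper's global d.c.-programming solver; the data are synthetic and the search grid is arbitrary.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def kernel_target_alignment(K, y):
    """Kernel-target alignment A(K, yy^T) = <K, yy^T>_F / (||K||_F * ||yy^T||_F).

    This is the classical alignment criterion; the paper optimizes a related
    objective globally via a d.c. decomposition, which is not shown here.
    """
    y = np.asarray(y, dtype=float)
    Y = np.outer(y, y)
    return np.sum(K * Y) / (np.linalg.norm(K) * np.linalg.norm(Y))

# coarse 1-D search over the Gaussian width as a simple stand-in for the global solver
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 5))
y = np.sign(X[:, 0] + 0.3 * rng.normal(size=120))
widths = np.logspace(-2, 2, 20)
scores = [kernel_target_alignment(rbf_kernel(X, gamma=1.0 / (2 * w ** 2)), y) for w in widths]
print("best width:", widths[int(np.argmax(scores))])
```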

  12. Differences in graph theory functional connectivity in left and right temporal lobe epilepsy.

    Science.gov (United States)

    Chiang, Sharon; Stern, John M; Engel, Jerome; Levin, Harvey S; Haneef, Zulfi

    2014-12-01

    To investigate lateralized differences in limbic system functional connectivity between left and right temporal lobe epilepsy (TLE) using graph theory. Interictal resting state fMRI was performed in 14 left TLE patients, 11 right TLE patients, and 12 controls. Graph theory analysis of 10 bilateral limbic regions of interest was conducted. Changes in edgewise functional connectivity, network topology, and regional topology were quantified, and then left and right TLE were compared. Limbic edgewise functional connectivity was predominantly reduced in both left and right TLE. More regional connections were reduced in right TLE, most prominently involving reduced interhemispheric connectivity between the bilateral insula and bilateral hippocampi. A smaller number of limbic connections were increased in TLE, more so in left than in right TLE. Topologically, the most pronounced change was a reduction in average network betweenness centrality and concurrent increase in left hippocampal betweenness centrality in right TLE. In contrast, left TLE exhibited a weak trend toward increased right hippocampal betweenness centrality, with no change in average network betweenness centrality. Limbic functional connectivity is predominantly reduced in both left and right TLE, with more pronounced reductions in right TLE. In contrast, left TLE exhibits both edgewise and topological changes that suggest a tendency toward reorganization. Network changes in TLE and lateralized differences thereof may have important diagnostic and prognostic implications. Published by Elsevier B.V.

  13. Reprint of "Two-stage sparse coding of region covariance via Log-Euclidean kernels to detect saliency".

    Science.gov (United States)

    Zhang, Ying-Ying; Yang, Cai; Zhang, Ping

    2017-08-01

    In this paper, we present a novel bottom-up saliency detection algorithm from the perspective of covariance matrices on a Riemannian manifold. Each superpixel is described by a region covariance matrix on Riemannian manifolds. We carry out a two-stage sparse coding scheme via Log-Euclidean kernels to extract salient objects efficiently. In the first stage, given a background dictionary on the image borders, sparse coding of each region covariance via Log-Euclidean kernels is performed. The reconstruction error on the background dictionary is regarded as the initial saliency of each superpixel. In the second stage, an improvement of the initial result is achieved by calculating reconstruction errors of the superpixels on a foreground dictionary, which is extracted from the first-stage saliency map. The sparse coding in the second stage is similar to the first stage, but is able to effectively highlight the salient objects uniformly from the background. Finally, three post-processing methods (highlight-inhibition function, context-based saliency weighting, and the graph cut) are adopted to further refine the saliency map. Experiments on four public benchmark datasets show that the proposed algorithm outperforms the state-of-the-art methods in terms of precision, recall and mean absolute error, and demonstrate the robustness and efficiency of the proposed method. Copyright © 2017 Elsevier Ltd. All rights reserved.

  14. Cyclic graphs and Apery's theorem

    International Nuclear Information System (INIS)

    Sorokin, V N

    2002-01-01

    This is a survey of results about the behaviour of Hermite-Pade approximants for graphs of Markov functions, and a survey of interpolation problems leading to Apery's result about the irrationality of the value ζ(3) of the Riemann zeta function. The first example is given of a cyclic graph for which the Hermite-Pade problem leads to Apery's theorem. Explicit formulae for solutions are obtained, namely, Rodrigues' formulae and integral representations. The asymptotic behaviour of the approximants is studied, and recurrence formulae are found

  15. Analog forecasting with dynamics-adapted kernels

    Science.gov (United States)

    Zhao, Zhizhen; Giannakis, Dimitrios

    2016-09-01

    Analog forecasting is a nonparametric technique introduced by Lorenz in 1969 which predicts the evolution of states of a dynamical system (or observables defined on the states) by following the evolution of the sample in a historical record of observations which most closely resembles the current initial data. Here, we introduce a suite of forecasting methods which improve traditional analog forecasting by combining ideas from kernel methods developed in harmonic analysis and machine learning and state-space reconstruction for dynamical systems. A key ingredient of our approach is to replace single-analog forecasting with weighted ensembles of analogs constructed using local similarity kernels. The kernels used here employ a number of dynamics-dependent features designed to improve forecast skill, including Takens’ delay-coordinate maps (to recover information in the initial data lost through partial observations) and a directional dependence on the dynamical vector field generating the data. Mathematically, our approach is closely related to kernel methods for out-of-sample extension of functions, and we discuss alternative strategies based on the Nyström method and the multiscale Laplacian pyramids technique. We illustrate these techniques in applications to forecasting in a low-order deterministic model for atmospheric dynamics with chaotic metastability, and interannual-scale forecasting in the North Pacific sector of a comprehensive climate model. We find that forecasts based on kernel-weighted ensembles have significantly higher skill than the conventional approach following a single analog.
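
    The core idea of replacing a single analog with a kernel-weighted ensemble can be sketched in a few lines. The snippet below uses a plain Gaussian similarity kernel on the raw state and omits the delay-coordinate embedding and vector-field dependence that give the paper's kernels their skill; the function name, parameters and toy data are illustrative assumptions.

```python
import numpy as np

def analog_forecast(history, lead, query, bandwidth=1.0, n_analogs=10):
    """Kernel-weighted ensemble analog forecast.

    Finds the historical states closest to `query`, weights them with a Gaussian
    similarity kernel, and returns the weighted average of their states `lead`
    steps later. Delay-coordinate maps and directional dependence are omitted.
    """
    past, future = history[:-lead], history[lead:]
    d2 = np.sum((past - query) ** 2, axis=1)
    idx = np.argsort(d2)[:n_analogs]                   # nearest analogs in the record
    w = np.exp(-d2[idx] / bandwidth ** 2)
    w /= w.sum()
    return w @ future[idx]

# toy usage: forecast a noisy 2-D oscillation 5 steps ahead
t = np.linspace(0, 60, 2000)
history = np.c_[np.sin(t), np.cos(t)] + 0.01 * np.random.default_rng(0).normal(size=(2000, 2))
print(analog_forecast(history, lead=5, query=np.array([0.0, 1.0])))
```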

  16. Supporting Generative Thinking about Number Lines, the Cartesian Plane, and Graphs of Linear Functions

    Science.gov (United States)

    Earnest, Darrell Steven

    2012-01-01

    This dissertation explores fifth and eighth grade students' interpretations of three kinds of mathematical representations: number lines, the Cartesian plane, and graphs of linear functions. Two studies were conducted. In Study 1, I administered the paper-and-pencil Linear Representations Assessment (LRA) to examine students'…

  17. Spatial context driven manifold learning for hyperspectral image classification

    CSIR Research Space (South Africa)

    Zhang, Y

    2014-06-01

    Full Text Available spatially induced disjoint classes whose neighborhood relations are difficult to capture using traditional graph based embedding techniques. Robust parameter estimation is a challenge in traditional kernel functions that compute neighborhood graphs e...

  18. Functional neural network analysis in frontotemporal dementia and Alzheimer's disease using EEG and graph theory

    Directory of Open Access Journals (Sweden)

    van der Flier Wiesje M

    2009-08-01

    Full Text Available Abstract Background: Although a large body of knowledge about both brain structure and function has been gathered over the last decades, we still have a poor understanding of their exact relationship. Graph theory provides a method to study the relation between network structure and function, and its application to neuroscientific data is an emerging research field. We investigated topological changes in large-scale functional brain networks in patients with Alzheimer's disease (AD) and frontotemporal lobar degeneration (FTLD) by means of graph theoretical analysis of resting-state EEG recordings. EEGs of 20 patients with mild to moderate AD, 15 FTLD patients, and 23 non-demented individuals were recorded in an eyes-closed resting-state. The synchronization likelihood (SL), a measure of functional connectivity, was calculated for each sensor pair in 0.5–4 Hz, 4–8 Hz, 8–10 Hz, 10–13 Hz, 13–30 Hz and 30–45 Hz frequency bands. The resulting connectivity matrices were converted to unweighted graphs, whose structure was characterized with several measures: mean clustering coefficient (local connectivity), characteristic path length (global connectivity) and degree correlation (network 'assortativity'). All results were normalized for network size and compared with random control networks. Results: In AD, the clustering coefficient decreased in the lower alpha and beta bands (p < …). Conclusion: With decreasing local and global connectivity parameters, the large-scale functional brain network organization in AD deviates from the optimal 'small-world' network structure towards a more 'random' type. This is associated with less efficient information exchange between brain areas, supporting the disconnection hypothesis of AD. Surprisingly, FTLD patients show changes in the opposite direction, towards a (perhaps excessively) more 'ordered' network structure, possibly reflecting a different underlying pathophysiological process.

  19. Multiple Kernel Sparse Representation based Orthogonal Discriminative Projection and Its Cost-Sensitive Extension.

    Science.gov (United States)

    Zhang, Guoqing; Sun, Huaijiang; Xia, Guiyu; Sun, Quansen

    2016-07-07

    Sparse representation based classification (SRC) has been developed and shown great potential for real-world application. Based on SRC, Yang et al. [10] devised a SRC steered discriminative projection (SRC-DP) method. However, as a linear algorithm, SRC-DP cannot handle the data with highly nonlinear distribution. Kernel sparse representation-based classifier (KSRC) is a non-linear extension of SRC and can remedy the drawback of SRC. KSRC requires the use of a predetermined kernel function and selection of the kernel function and its parameters is difficult. Recently, multiple kernel learning for SRC (MKL-SRC) [22] has been proposed to learn a kernel from a set of base kernels. However, MKL-SRC only considers the within-class reconstruction residual while ignoring the between-class relationship, when learning the kernel weights. In this paper, we propose a novel multiple kernel sparse representation-based classifier (MKSRC), and then we use it as a criterion to design a multiple kernel sparse representation based orthogonal discriminative projection method (MK-SR-ODP). The proposed algorithm aims at learning a projection matrix and a corresponding kernel from the given base kernels such that in the low dimension subspace the between-class reconstruction residual is maximized and the within-class reconstruction residual is minimized. Furthermore, to achieve a minimum overall loss by performing recognition in the learned low-dimensional subspace, we introduce cost information into the dimensionality reduction method. The solutions for the proposed method can be efficiently found based on trace ratio optimization method [33]. Extensive experimental results demonstrate the superiority of the proposed algorithm when compared with the state-of-the-art methods.

  20. A novel adaptive kernel method with kernel centers determined by a support vector regression approach

    NARCIS (Netherlands)

    Sun, L.G.; De Visser, C.C.; Chu, Q.P.; Mulder, J.A.

    2012-01-01

    The optimality of the kernel number and kernel centers plays a significant role in determining the approximation power of nearly all kernel methods. However, the process of choosing optimal kernels is always formulated as a global optimization task, which is hard to accomplish. Recently, an

  1. Bivariate discrete beta Kernel graduation of mortality data.

    Science.gov (United States)

    Mazza, Angelo; Punzo, Antonio

    2015-07-01

    Various parametric/nonparametric techniques have been proposed in the literature to graduate mortality data as a function of age. Nonparametric approaches, such as kernel smoothing regression, are often preferred because they do not assume any particular mortality law. Among the existing kernel smoothing approaches, the recently proposed (univariate) discrete beta kernel smoother has been shown to provide some benefits. Bivariate graduation, over age and calendar years or durations, is common practice in demography and actuarial sciences. In this paper, we generalize the discrete beta kernel smoother to the bivariate case, and we introduce an adaptive bandwidth variant that may provide additional benefits when data on exposures to the risk of death are available; furthermore, we outline a cross-validation procedure for bandwidth selection. Using simulation studies, we compare the bivariate approach proposed here with its corresponding univariate formulation and with two popular nonparametric bivariate graduation techniques, based on Epanechnikov kernels and on P-splines. To make simulations realistic, a bivariate dataset, based on probabilities of dying recorded for the US males, is used. Simulations have confirmed the gain in performance of the new bivariate approach with respect to both the univariate and the bivariate competitors.
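
    A continuous beta-kernel smoother in the style of Chen's estimator conveys the idea of the univariate approach: each target age receives weights from a Beta density whose shape concentrates around that age. The paper's discrete beta kernel and its bivariate, adaptive-bandwidth extension are not reproduced here, and the mortality curve below is simulated.

```python
import numpy as np
from scipy.stats import beta

def beta_kernel_graduation(ages, raw_rates, bandwidth=0.05):
    """Beta-kernel smoothing of crude mortality rates over age.

    Ages are rescaled to [0, 1] and each target age m gets weights from a Beta
    density with parameters (m/b + 1, (1-m)/b + 1), following Chen-style beta
    kernels; the paper's *discrete* beta kernel is a discretized relative of this.
    """
    x = (ages - ages.min()) / (ages.max() - ages.min())
    smoothed = np.empty_like(raw_rates, dtype=float)
    for i, m in enumerate(x):
        w = beta.pdf(x, m / bandwidth + 1, (1 - m) / bandwidth + 1)
        smoothed[i] = np.sum(w * raw_rates) / np.sum(w)
    return smoothed

# toy usage: noisy Gompertz-like crude rates between ages 60 and 100
ages = np.arange(60, 101)
raw = 0.005 * np.exp(0.09 * (ages - 60)) * np.random.default_rng(0).lognormal(0, 0.15, ages.size)
print(np.round(beta_kernel_graduation(ages, raw)[:5], 5))
```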

  2. Point kernels and superposition methods for scatter dose calculations in brachytherapy

    International Nuclear Information System (INIS)

    Carlsson, A.K.

    2000-01-01

    Point kernels have been generated and applied for calculation of scatter dose distributions around monoenergetic point sources for photon energies ranging from 28 to 662 keV. Three different approaches for dose calculations have been compared: a single-kernel superposition method, a single-kernel superposition method where the point kernels are approximated as isotropic and a novel 'successive-scattering' superposition method for improved modelling of the dose from multiply scattered photons. An extended version of the EGS4 Monte Carlo code was used for generating the kernels and for benchmarking the absorbed dose distributions calculated with the superposition methods. It is shown that dose calculation by superposition at and below 100 keV can be simplified by using isotropic point kernels. Compared to the assumption of full in-scattering made by algorithms currently in clinical use, the single-kernel superposition method improves dose calculations in a half-phantom consisting of air and water. Further improvements are obtained using the successive-scattering superposition method, which reduces the overestimates of dose close to the phantom surface usually associated with kernel superposition methods at brachytherapy photon energies. It is also shown that scatter dose point kernels can be parametrized to biexponential functions, making them suitable for use with an effective implementation of the collapsed cone superposition algorithm. (author)
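
    The single-kernel superposition step amounts to summing a radial point kernel over all source positions. The sketch below uses a biexponential kernel of the kind mentioned above with made-up coefficients; it is only a geometric illustration of the superposition idea, not a dosimetric calculation, and every name and number in it is a placeholder.

```python
import numpy as np

def biexponential_kernel(r, a1=0.5, mu1=0.15, a2=0.05, mu2=0.03):
    """Illustrative biexponential point kernel k(r) = a1*exp(-mu1*r) + a2*exp(-mu2*r).

    The coefficients are arbitrary placeholders; the paper fits such parameters
    to Monte Carlo generated scatter kernels for each photon energy.
    """
    return a1 * np.exp(-mu1 * r) + a2 * np.exp(-mu2 * r)

def superpose_dose(grid_points, source_points, source_weights):
    """Single-kernel superposition: sum the point kernel over all source points."""
    dose = np.zeros(len(grid_points))
    for p, w in zip(source_points, source_weights):
        r = np.linalg.norm(grid_points - p, axis=1)
        dose += w * biexponential_kernel(np.maximum(r, 1e-3))   # avoid the r = 0 singularity
    return dose

# toy usage: a 0.5-cm-spaced line of calculation points around two seed positions
grid = np.array([[x, 0.0, 0.0] for x in np.arange(-5, 5.5, 0.5)])
seeds = np.array([[-1.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
print(np.round(superpose_dose(grid, seeds, [1.0, 1.0]), 3))
```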

  3. Geographically Weighted Regression Model with Kernel Bisquare and Tricube Weighted Function on Poverty Percentage Data in Central Java Province

    Science.gov (United States)

    Nugroho, N. F. T. A.; Slamet, I.

    2018-05-01

    Poverty is a socio-economic condition of a person or group of people who cannot fulfil their basic needs to maintain and develop a dignified life. This problem has still not been solved completely in Central Java Province. Currently, the poverty rate in Central Java is 13.32%, which is higher than the national poverty rate of 11.13%. In this research, data on the percentage of poor people in Central Java Province have been analyzed through geographically weighted regression (GWR). The aim of this research is therefore to model poverty percentage data in Central Java Province using GWR with bisquare and tricube kernel weighting functions. As a result, we obtained GWR models with bisquare and tricube kernel weighting functions for the poverty percentage data in Central Java Province. From the GWR model, there are three categories of regions, which are influenced by different significant factors.
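
    The bisquare and tricube weighting functions, and the weighted least-squares fit GWR performs at each location, can be sketched as follows. The coordinates and response below are simulated stand-ins for district data, and bandwidth selection (for example by cross-validation) is omitted.

```python
import numpy as np

def bisquare_weights(d, bandwidth):
    """Bisquare kernel: w = (1 - (d/h)^2)^2 for d < h, else 0."""
    u = d / bandwidth
    return np.where(u < 1, (1 - u ** 2) ** 2, 0.0)

def tricube_weights(d, bandwidth):
    """Tricube kernel: w = (1 - (d/h)^3)^3 for d < h, else 0."""
    u = d / bandwidth
    return np.where(u < 1, (1 - u ** 3) ** 3, 0.0)

def gwr_local_fit(coords, X, y, target, bandwidth, kernel=bisquare_weights):
    """Weighted least-squares fit at one regression location (the core GWR step)."""
    d = np.linalg.norm(coords - target, axis=1)
    W = np.diag(kernel(d, bandwidth))
    Xc = np.column_stack([np.ones(len(y)), X])            # add intercept
    return np.linalg.solve(Xc.T @ W @ Xc, Xc.T @ W @ y)   # local coefficients

# toy usage: poverty-like response over 30 hypothetical district centroids
rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, size=(30, 2))
X = rng.normal(size=(30, 2))
y = 10 + X @ np.array([1.5, -2.0]) + 0.05 * coords[:, 0] + rng.normal(size=30)
print(np.round(gwr_local_fit(coords, X, y, coords[0], bandwidth=40.0), 3))
```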

  4. Graph Aggregation

    NARCIS (Netherlands)

    Endriss, U.; Grandi, U.

    Graph aggregation is the process of computing a single output graph that constitutes a good compromise between several input graphs, each provided by a different source. One needs to perform graph aggregation in a wide variety of situations, e.g., when applying a voting rule (graphs as preference

  5. Protein Subcellular Localization with Gaussian Kernel Discriminant Analysis and Its Kernel Parameter Selection.

    Science.gov (United States)

    Wang, Shunfang; Nie, Bing; Yue, Kun; Fei, Yu; Li, Wenjia; Xu, Dongshu

    2017-12-15

    Kernel discriminant analysis (KDA) is a dimension reduction and classification algorithm based on the nonlinear kernel trick, which can be used to treat high-dimensional and complex biological data before classification processes such as protein subcellular localization. Kernel parameters have a great impact on the performance of the KDA model. Specifically, for KDA with the popular Gaussian kernel, selecting the scale parameter is still a challenging problem. This paper therefore introduces the KDA method and proposes a new method for Gaussian kernel parameter selection, based on the observation that the differences between the reconstruction errors of edge normal samples and those of interior normal samples should be maximized for suitable kernel parameters. Experiments with various standard data sets for protein subcellular localization show that the overall accuracy of protein classification prediction with KDA is much higher than without KDA. Meanwhile, the kernel parameter of KDA has a great impact on efficiency, and the proposed method produces an optimum parameter, which makes the new algorithm not only perform as effectively as the traditional ones, but also reduce the computational time and thus improve efficiency.

  6. Visualization of nonlinear kernel models in neuroimaging by sensitivity maps

    DEFF Research Database (Denmark)

    Rasmussen, Peter Mondrup; Hansen, Lars Kai; Madsen, Kristoffer Hougaard

    There is significant current interest in decoding mental states from neuroimages. In this context kernel methods, e.g., support vector machines (SVM), are frequently adopted to learn statistical relations between patterns of brain activation and experimental conditions. In this paper we focus on visualization of such nonlinear kernel models. Specifically, we investigate the sensitivity map as a technique for generating global summary maps of kernel classification methods. We illustrate the performance of the sensitivity map on functional magnetic resonance imaging (fMRI) data based on visual stimuli. ...

  7. Spatial Modeling Of Infant Mortality Rate In South Central Timor Regency Using GWLR Method With Adaptive Bisquare Kernel And Gaussian Kernel

    Directory of Open Access Journals (Sweden)

    Teguh Prawono Sabat

    2017-08-01

    Full Text Available Geographically Weighted Logistic Regression (GWLR) is a regression model that takes the spatial factor into account and can be used to analyze the infant mortality rate (IMR). There were 100 cases of infant mortality in 2015, or 12 per 1000 live births, in South Central Timor Regency. The aim of this study was to determine the best GWLR modeling, with a fixed weighting function and an adaptive Gaussian kernel, for the cases of infant mortality in South Central Timor Regency in 2015. The response variable (Y) in this study was the cases of infant mortality, while the predictor variables were the percentage of first neonatal visits (KN1) (X1), the percentage of three neonatal visits (complete KN) (X2), the percentage of pregnant women receiving Fe tablets (X3), and the percentage of poor pre-prosperous families (X4). This was a non-reactive study, that is, a measurement in which the individuals surveyed did not realize that they were part of a study, with the analysis units being the 32 sub-districts of South Central Timor Regency. Data analysis used open source programs, namely Excel, R, Quantum GIS and GWR4. The best GWLR spatial model used the adaptive Gaussian kernel weighting function; the global GWLR model parameters with the adaptive Gaussian kernel weighting function were g(x) = 0.941086 − 0.892506X4, the local GWLR models with the adaptive bisquare kernel weighting function in 13 districts were g(x) = 0 − 0X4, and the factor that affected the cases of infant mortality in 13 sub-districts of South Central Timor Regency in 2015 was the percentage of poor pre-prosperous families.

  8. Stochastic subset selection for learning with kernel machines.

    Science.gov (United States)

    Rhinelander, Jason; Liu, Xiaoping P

    2012-06-01

    Kernel machines have gained much popularity in applications of machine learning. Support vector machines (SVMs) are a subset of kernel machines and generalize well for classification, regression, and anomaly detection tasks. The training procedure for traditional SVMs involves solving a quadratic programming (QP) problem. The QP problem scales superlinearly in computational effort with the number of training samples and is often used for the offline batch processing of data. Kernel machines operate by retaining a subset of observed data during training. The data vectors contained within this subset are referred to as support vectors (SVs). The work presented in this paper introduces a subset selection method for the use of kernel machines in online, changing environments. Our algorithm works by using a stochastic indexing technique to select the subset of SVs used in computing the kernel expansion. The work described here is novel because it separates the selection of kernel basis functions from the training algorithm used. The subset selection algorithm presented here can be used in conjunction with any online training technique. It is important for online kernel machines to be computationally efficient due to the real-time requirements of online environments. Our algorithm is an important contribution because it scales linearly with the number of training samples and is compatible with current training techniques. Our algorithm outperforms standard techniques in terms of computational efficiency and provides increased recognition accuracy in our experiments. We provide results from experiments using both simulated and real-world data sets to verify our algorithm.

  9. 7 CFR 981.7 - Edible kernel.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Edible kernel. 981.7 Section 981.7 Agriculture... Regulating Handling Definitions § 981.7 Edible kernel. Edible kernel means a kernel, piece, or particle of almond kernel that is not inedible. [41 FR 26852, June 30, 1976] ...

  10. Insights from Classifying Visual Concepts with Multiple Kernel Learning

    Science.gov (United States)

    Binder, Alexander; Nakajima, Shinichi; Kloft, Marius; Müller, Christina; Samek, Wojciech; Brefeld, Ulf; Müller, Klaus-Robert; Kawanabe, Motoaki

    2012-01-01

    Combining information from various image features has become a standard technique in concept recognition tasks. However, the optimal way of fusing the resulting kernel functions is usually unknown in practical applications. Multiple kernel learning (MKL) techniques allow one to determine an optimal linear combination of such similarity matrices. Classical approaches to MKL promote sparse mixtures. Unfortunately, 1-norm regularized MKL variants are often observed to be outperformed by an unweighted sum kernel. The main contributions of this paper are the following: we apply a recently developed non-sparse MKL variant to state-of-the-art concept recognition tasks from the application domain of computer vision. We provide insights on the benefits and limits of non-sparse MKL and compare it against its direct competitors, the sum-kernel SVM and sparse MKL. We report empirical results for the PASCAL VOC 2009 Classification and ImageCLEF2010 Photo Annotation challenge data sets. Data sets (kernel matrices) as well as further information are available at http://doc.ml.tu-berlin.de/image_mkl/ (Accessed 2012 Jun 25). PMID:22936970
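
    For concreteness, the sketch below builds a few base kernel matrices from two hypothetical feature representations and combines them with fixed weights; uniform weights give the unweighted sum kernel the paper uses as a baseline, while a learned (sparse or non-sparse) MKL solution would replace the weight vector.

      # Combining base kernel matrices as in multiple kernel learning (MKL).
      # Uniform weights reproduce the unweighted-sum kernel baseline; a learned
      # MKL solution would supply the weights instead.
      import numpy as np

      rng = np.random.default_rng(0)
      X1 = rng.normal(size=(30, 5))          # two illustrative feature sets
      X2 = rng.normal(size=(30, 8))

      def rbf(X, gamma):
          sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
          return np.exp(-gamma * sq)

      kernels = [rbf(X1, 0.1), rbf(X2, 0.05), X1 @ X1.T]   # base kernels
      w = np.ones(len(kernels)) / len(kernels)             # uniform weights
      K = sum(wi * Ki for wi, Ki in zip(w, kernels))       # combined kernel
      print(K.shape)

    The combined matrix K can then be handed to any kernel classifier that accepts precomputed kernel matrices, for example an SVM with a precomputed-kernel option.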

  11. The Kernel Estimation in Biosystems Engineering

    Directory of Open Access Journals (Sweden)

    Esperanza Ayuga Téllez

    2008-04-01

    Full Text Available In many fields of biosystems engineering, it is common to find works in which the statistical information analysed violates the basic hypotheses required by conventional forecasting methods. For those situations, it is necessary to find alternative methods that allow the statistical analysis despite those infringements. Non-parametric function estimation includes methods that fit a target function locally, using data from a small neighbourhood of the point. Weak assumptions, such as continuity and differentiability of the target function, are used rather than an "a priori" assumption about the global shape of the target function (e.g., linear or quadratic). In this paper a few basic decision rules are stated for the application of non-parametric estimation methods. These statistical rules are the first step towards building a user-method interface for the consistent application of kernel estimation by non-expert users. To reach this aim, univariate and multivariate estimation methods and density functions were analysed, as well as regression estimators. In some cases, the models to be applied in different situations, based on simulations, were defined. Different biosystems engineering applications of kernel estimation are also analysed in this review.
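
    A minimal example of the simplest estimator discussed here, a univariate Gaussian kernel density estimate with a rule-of-thumb bandwidth, is sketched below on synthetic data; the decision rules proposed in the paper for choosing between estimators are not encoded.

      # Univariate Gaussian kernel density estimate with Silverman's
      # rule-of-thumb bandwidth; the data are synthetic.
      import numpy as np

      rng = np.random.default_rng(1)
      x = rng.normal(loc=10.0, scale=2.0, size=200)        # observed sample

      h = 1.06 * x.std(ddof=1) * x.size ** (-1 / 5)        # rule-of-thumb bandwidth
      grid = np.linspace(x.min() - 3 * h, x.max() + 3 * h, 100)

      # f_hat(t) = (1 / (n h)) * sum_i phi((t - x_i) / h)
      u = (grid[:, None] - x[None, :]) / h
      f_hat = np.exp(-0.5 * u ** 2).sum(axis=1) / (x.size * h * np.sqrt(2 * np.pi))
      print(round(f_hat.max(), 4))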

  12. Kernel-Correlated Levy Field Driven Forward Rate and Application to Derivative Pricing

    Energy Technology Data Exchange (ETDEWEB)

    Bo Lijun [Xidian University, Department of Mathematics (China); Wang Yongjin [Nankai University, School of Business (China); Yang Xuewei, E-mail: xwyangnk@yahoo.com.cn [Nanjing University, School of Management and Engineering (China)

    2013-08-01

    We propose a term structure of forward rates driven by a kernel-correlated Levy random field under the HJM framework. The kernel-correlated Levy random field is composed of a kernel-correlated Gaussian random field and a centered Poisson random measure. We shall give a criterion to preclude arbitrage under the risk-neutral pricing measure. As applications, an interest rate derivative with general payoff functional is priced under this pricing measure.

  13. Determination of the Iodine Value of Hydrogenated Palm Kernel Oil (HPKO) and Refined Bleached Deodorized Palm Kernel Oil (RBDPKO)

    OpenAIRE

    Sitompul, Monica Angelina

    2015-01-01

    Determination of the iodine value by titration has been carried out on several samples of Hydrogenated Palm Kernel Oil (HPKO) and Refined Bleached Deodorized Palm Kernel Oil (RBDPKO). The analysis gave iodine values for Hydrogenated Palm Kernel Oil (A) = 0.16 g I2/100 g, Hydrogenated Palm Kernel Oil (B) = 0.20 g I2/100 g, Hydrogenated Palm Kernel Oil (C) = 0.24 g I2/100 g, and for Refined Bleached Deodorized Palm Kernel Oil (A) = 17.51 g I2/100 g, Refined Bleached Deodorized Palm Kernel ...

  14. 7 CFR 981.8 - Inedible kernel.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.8 Section 981.8 Agriculture... Regulating Handling Definitions § 981.8 Inedible kernel. Inedible kernel means a kernel, piece, or particle of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or...

  15. Differential evolution algorithm-based kernel parameter selection for Fukunaga-Koontz Transform subspaces construction

    Science.gov (United States)

    Binol, Hamidullah; Bal, Abdullah; Cukur, Huseyin

    2015-10-01

    The performance of kernel-based techniques depends on the selection of kernel parameters; suitable parameter selection is therefore an important problem for many kernel-based techniques. This article presents a novel technique to learn the kernel parameters in a kernel Fukunaga-Koontz Transform based (KFKT) classifier. The proposed approach determines appropriate values of the kernel parameters by optimizing an objective function constructed from the discrimination ability of KFKT. For this purpose we utilize the differential evolution algorithm (DEA). The new technique overcomes some disadvantages of the traditional cross-validation method, such as high time consumption, and it can be applied to any type of data. Experiments on target detection applications with hyperspectral images verify the effectiveness of the proposed method.
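
    As a rough sketch of this kind of search, the code below tunes the width of a Gaussian kernel with SciPy's differential evolution; the objective used here is cross-validated SVM accuracy, which is only a stand-in for the KFKT discrimination criterion optimized in the paper, and the data are synthetic.

      # Differential evolution over a Gaussian kernel width, with cross-validated
      # SVM accuracy as a stand-in objective for the KFKT discrimination criterion.
      import numpy as np
      from scipy.optimize import differential_evolution
      from sklearn.datasets import make_classification
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      X, y = make_classification(n_samples=200, n_features=10, random_state=0)

      def neg_cv_accuracy(params):
          gamma = 10.0 ** params[0]                    # search gamma on a log scale
          return -cross_val_score(SVC(kernel="rbf", gamma=gamma), X, y, cv=5).mean()

      result = differential_evolution(neg_cv_accuracy, bounds=[(-4, 2)],
                                      seed=0, maxiter=20, tol=1e-3)
      print("best gamma:", 10.0 ** result.x[0], "cv accuracy:", -result.fun)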

  16. Proxy Graph: Visual Quality Metrics of Big Graph Sampling.

    Science.gov (United States)

    Nguyen, Quan Hoang; Hong, Seok-Hee; Eades, Peter; Meidiana, Amyra

    2017-06-01

    Data sampling has been extensively studied for large scale graph mining. Many analyses and tasks become more efficient when performed on graph samples of much smaller size. The use of proxy objects is common in software engineering for analysis and interaction with heavy objects or systems. In this paper, we coin the term 'proxy graph' and empirically investigate how well a proxy graph visualization can represent a big graph. Our investigation focuses on proxy graphs obtained by sampling; this is one of the most common proxy approaches. Despite the plethora of data sampling studies, this is the first evaluation of sampling in the context of graph visualization. For an objective evaluation, we propose a new family of quality metrics for visual quality of proxy graphs. Our experiments cover popular sampling techniques. Our experimental results lead to guidelines for using sampling-based proxy graphs in visualization.

  17. 7 CFR 981.408 - Inedible kernel.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.408 Section 981.408 Agriculture... Administrative Rules and Regulations § 981.408 Inedible kernel. Pursuant to § 981.8, the definition of inedible kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as...

  18. Dynamic MLD analysis with flow graphs

    International Nuclear Information System (INIS)

    Jenab, K.; Sarfaraz, A.; Dhillon, B.S.; Seyed Hosseini, S.M.

    2012-01-01

    A Master Logic Diagram (MLD) depicts the interrelationships among independent functions and dependent support functions. Using an MLD, the manner in which all functions and sub-functions interact to achieve the overall system objective can be investigated. This paper reports a probabilistic model to analyze an MLD by translating the interrelationships into a graph model. The proposed model uses the flow-graph concept and the Moment Generating Function (MGF) to analyze the dependency matrix representing the MLD with embedded self-healing functions/sub-functions. The functions/sub-functions are characterized by failure detection and recovery mechanisms. The newly developed model provides the probability of system failure, and the system mean and standard deviation of the time to failure, in the MLD. An illustrative example demonstrates the application of the model.

  19. Particle transport in breathing quantum graph

    International Nuclear Information System (INIS)

    Matrasulov, D.U.; Yusupov, J.R.; Sabirov, K.K.; Sobirov, Z.A.

    2012-01-01

    Full text: Particle transport in nanoscale networks and discrete structures is of fundamental and practical importance. Usually such systems are modeled by so-called quantum graphs, systems that have attracted much attention in physics and mathematics during the past two decades [1-5]. Over this period quantum graphs have found numerous applications in modeling different discrete structures and networks in nanoscale and mesoscopic physics (e.g., see reviews [1-3]). Despite the considerable progress made in the study of particle dynamics, most of the problems deal with the unperturbed case, and the case of time-dependent perturbation has not yet been explored. In this work we treat particle dynamics for a quantum star graph with time-dependent bonds. In particular, we consider harmonically breathing quantum star graphs and the cases of monotonically contracting and expanding graphs. The latter can be solved exactly analytically. The edge boundaries are considered to be time-dependent, while the branching point is assumed to be fixed. The quantum dynamics of a particle in such graphs is studied by solving the Schrodinger equation with time-dependent boundary conditions given on a star graph. The time-dependence of the average kinetic energy is analyzed. The space-time evolution of a Gaussian wave packet is treated for the harmonically breathing star graph. It is found that for certain frequencies the energy is a periodic function of time, while for others it can be a non-monotonically growing function of time. Such a feature can be caused by possible synchronization of the particle's motion with the motions of the moving edges of the graph bonds. (authors) References: [1] Tsampikos Kottos and Uzy Smilansky, Ann. Phys., 76, 274 (1999). [2] Sven Gnutzmann and Uzy Smilansky, Adv. Phys. 55, 527 (2006). [3] S. Gnutzmann, J.P. Keating, F. Piotet, Ann. Phys., 325, 2595 (2010). [4] P. Exner, P. Seba, P. Stovicek, J. Phys. A: Math. Gen. 21, 4009 (1988). [5] J. Boman, P. Kurasov, Adv. Appl. Math., 35, 58 (2005)

  20. Semantic graphs and associative memories

    Science.gov (United States)

    Pomi, Andrés; Mizraji, Eduardo

    2004-12-01

    Graphs have been increasingly utilized in the characterization of complex networks from diverse origins, including different kinds of semantic networks. Human memories are associative and are known to support complex semantic nets; these nets are represented by graphs. However, it is not known how the brain can sustain these semantic graphs. The vision of cognitive brain activities, shown by modern functional imaging techniques, assigns renewed value to classical distributed associative memory models. Here we show that these neural network models, also known as correlation matrix memories, naturally support a graph representation of the stored semantic structure. We demonstrate that the adjacency matrix of this graph of associations is just the memory coded with the standard basis of the concept vector space, and that the spectrum of the graph is a code invariant of the memory. As long as the assumptions of the model remain valid this result provides a practical method to predict and modify the evolution of the cognitive dynamics. Also, it could provide us with a way to comprehend how individual brains that map the external reality, almost surely with different particular vector representations, are nevertheless able to communicate and share a common knowledge of the world. We finish presenting adaptive association graphs, an extension of the model that makes use of the tensor product, which provides a solution to the known problem of branching in semantic nets.
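
    The central claim, that with standard-basis (one-hot) concept vectors the correlation matrix memory coincides with the adjacency matrix of the association graph, can be checked in a few lines; the tiny association list below is of course made up.

      # With one-hot concept vectors, an outer-product (correlation matrix)
      # memory storing the associations equals the adjacency matrix of the
      # semantic graph; its spectrum is then a code invariant of the memory.
      import numpy as np

      n_concepts = 5
      basis = np.eye(n_concepts)                 # standard-basis concept vectors
      edges = [(0, 1), (1, 2), (2, 0), (3, 4)]   # directed associations i -> j

      M = np.zeros((n_concepts, n_concepts))
      for i, j in edges:
          M += np.outer(basis[j], basis[i])      # memory maps concept i to j

      A = np.zeros_like(M)                       # adjacency matrix, A[j, i] = 1
      for i, j in edges:
          A[j, i] = 1.0

      print(np.allclose(M, A))                   # True: memory == adjacency
      print(np.linalg.eigvals(M).round(3))       # spectrum of the graph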

  1. Eigenfunction statistics on quantum graphs

    International Nuclear Information System (INIS)

    Gnutzmann, S.; Keating, J.P.; Piotet, F.

    2010-01-01

    We investigate the spatial statistics of the energy eigenfunctions on large quantum graphs. It has previously been conjectured that these should be described by a Gaussian Random Wave Model, by analogy with quantum chaotic systems, for which such a model was proposed by Berry in 1977. The autocorrelation functions we calculate for an individual quantum graph exhibit a universal component, which completely determines a Gaussian Random Wave Model, and a system-dependent deviation. This deviation depends on the graph only through its underlying classical dynamics. Classical criteria for quantum universality to be met asymptotically in the large graph limit (i.e. for the non-universal deviation to vanish) are then extracted. We use an exact field theoretic expression in terms of a variant of a supersymmetric σ model. A saddle-point analysis of this expression leads to the estimates. In particular, intensity correlations are used to discuss the possible equidistribution of the energy eigenfunctions in the large graph limit. When equidistribution is asymptotically realized, our theory predicts a rate of convergence that is a significant refinement of previous estimates. The universal and system-dependent components of intensity correlation functions are recovered by means of an exact trace formula which we analyse in the diagonal approximation, drawing in this way a parallel between the field theory and semiclassics. Our results provide the first instance where an asymptotic Gaussian Random Wave Model has been established microscopically for eigenfunctions in a system with no disorder.

  2. The localization-delocalization matrix and the electron-density-weighted connectivity matrix of a finite graphene nanoribbon reconstructed from kernel fragments.

    Science.gov (United States)

    Timm, Matthew J; Matta, Chérif F; Massa, Lou; Huang, Lulu

    2014-11-26

    Bader's quantum theory of atoms in molecules (QTAIM) and chemical graph theory, merged in the localization-delocalization matrices (LDMs) and the electron-density-weighted connectivity matrices (EDWCM), are shown to benefit in computational speed from the kernel energy method (KEM). The LDM and EDWCM quantum chemical graph matrices of a 66-atom C46H20 hydrogen-terminated armchair graphene nanoribbon, in 14 (2×7) rings of C2v symmetry, are accurately reconstructed from kernel fragments. (This includes the full sets of electron densities at 84 bond critical points and 19 ring critical points, and the full sets of 66 localization and 4290 delocalization indices (LIs and DIs).) The average absolute deviations between KEM and directly calculated atomic electron populations, obtained from the sum of the LIs and half of the DIs of an atom, are 0.0012 ± 0.0018 e(-) (∼0.02 ± 0.03%) for carbon atoms and 0.0007 ± 0.0003 e(-) (∼0.01 ± 0.01%) for hydrogen atoms. The integration errors in the total electron population (296 electrons) are +0.0003 e(-) for the direct calculation (+0.0001%) and +0.0022 e(-) for KEM (+0.0007%). The accuracy of the KEM matrix elements is, thus, probably of the order of magnitude of the combined precision of the electronic structure calculation and the atomic integrations. KEM appears capable of delivering not only the total energies with chemical accuracy (which is well documented) but also local and nonlocal properties accurately, including the DIs between the fragments (crossing fragmentation lines). Matrices of the intact ribbon, the kernels, the KEM-reconstructed ribbon, and errors are available as Supporting Information .

  3. Exact analytical solution of the convolution integral equation for a general profile fitting function and Gaussian detector kernel

    International Nuclear Information System (INIS)

    Garcia-Vicente, F.; Rodriguez, C.

    2000-01-01

    One of the most important aspects in the metrology of radiation fields is the problem of the measurement of dose profiles in regions where the dose gradient is large. In such zones, the 'detector size effect' may produce experimental measurements that do not correspond to reality. Mathematically it can be proved, under some general assumptions of spatial linearity, that the disturbance induced in the measurement by the effect of the finite size of the detector is equal to the convolution of the real profile with a representative kernel of the detector. In this work the exact relation between the measured profile and the real profile is shown, through the analytical resolution of the integral equation for a general type of profile fitting function using Gaussian convolution kernels. (author)
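
    A quick numerical illustration of the relation the paper starts from (measured profile = real profile convolved with the detector kernel) is given below for a Gaussian kernel; the box-like profile, kernel width and penumbra definition are illustrative, and the paper's analytical solution is not reproduced.

      # Detector-size effect: the measured profile equals the true dose profile
      # convolved with a Gaussian kernel representing the finite detector.
      import numpy as np

      x = np.linspace(-30.0, 30.0, 601)            # position in mm
      dx = x[1] - x[0]
      true_profile = 0.5 * (np.tanh((x + 10) / 1.0) - np.tanh((x - 10) / 1.0))

      sigma = 2.5                                  # detector kernel width in mm
      kernel = np.exp(-0.5 * (x / sigma) ** 2)
      kernel /= kernel.sum() * dx                  # unit-area kernel

      measured = np.convolve(true_profile, kernel, mode="same") * dx

      def penumbra(p):
          """Width of the rising edge between 20% and 80% of the maximum."""
          lo = x[np.argmax(p > 0.2 * p.max())]
          hi = x[np.argmax(p > 0.8 * p.max())]
          return abs(hi - lo)

      print(penumbra(true_profile), penumbra(measured))   # broadened penumbra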

  4. A Graph Library Extension of SVG

    DEFF Research Database (Denmark)

    Nørmark, Kurt

    2007-01-01

    ... be aggregated as a single node, and an entire graph can be embedded in a single node. In addition, a number of different graph animations are described. The starting point of the SVG extension is a library that provides an exact mirror of SVG 1.1 in the functional programming language Scheme. Each element ...

  5. Kernel learning algorithms for face recognition

    CERN Document Server

    Li, Jun-Bao; Pan, Jeng-Shyang

    2013-01-01

    Kernel Learning Algorithms for Face Recognition covers the framework of kernel-based face recognition. This book discusses advanced kernel learning algorithms and their application to face recognition. It also focuses on the theoretical derivation, the system framework and experiments involving kernel-based face recognition. Included within are algorithms for kernel-based face recognition, and also the feasibility of the kernel-based face recognition method. This book provides researchers in the pattern recognition and machine learning area with advanced face recognition methods and its new ...

  6. The relationship between structural and functional connectivity: graph theoretical analysis of an EEG neural mass model

    NARCIS (Netherlands)

    Ponten, S.C.; Daffertshofer, A.; Hillebrand, A.; Stam, C.J.

    2010-01-01

    We investigated the relationship between structural network properties and both synchronization strength and functional characteristics in a combined neural mass and graph theoretical model of the electroencephalogram (EEG). Thirty-two neural mass models (NMMs), each representing the lumped activity ...

  7. Chromatic graph theory

    CERN Document Server

    Chartrand, Gary; Rosen, Kenneth H

    2008-01-01

    Beginning with the origin of the four color problem in 1852, the field of graph colorings has developed into one of the most popular areas of graph theory. Introducing graph theory with a coloring theme, Chromatic Graph Theory explores connections between major topics in graph theory and graph colorings as well as emerging topics. This self-contained book first presents various fundamentals of graph theory that lie outside of graph colorings, including basic terminology and results, trees and connectivity, Eulerian and Hamiltonian graphs, matchings and factorizations, and graph embeddings. The remainder of the text deals exclusively with graph colorings. It covers vertex colorings and bounds for the chromatic number, vertex colorings of graphs embedded on surfaces, and a variety of restricted vertex colorings. The authors also describe edge colorings, monochromatic and rainbow edge colorings, complete vertex colorings, several distinguishing vertex and edge colorings, and many distance-related vertex coloring...

  8. Pareto-path multitask multiple kernel learning.

    Science.gov (United States)

    Li, Cong; Georgiopoulos, Michael; Anagnostopoulos, Georgios C

    2015-01-01

    A traditional and intuitively appealing Multitask Multiple Kernel Learning (MT-MKL) method is to optimize the sum (thus, the average) of objective functions with (partially) shared kernel function, which allows information sharing among the tasks. We point out that the obtained solution corresponds to a single point on the Pareto Front (PF) of a multiobjective optimization problem, which considers the concurrent optimization of all task objectives involved in the Multitask Learning (MTL) problem. Motivated by this last observation and arguing that the former approach is heuristic, we propose a novel support vector machine MT-MKL framework that considers an implicitly defined set of conic combinations of task objectives. We show that solving our framework produces solutions along a path on the aforementioned PF and that it subsumes the optimization of the average of objective functions as a special case. Using the algorithms we derived, we demonstrate through a series of experimental results that the framework is capable of achieving a better classification performance, when compared with other similar MTL approaches.

  9. Partial Deconvolution with Inaccurate Blur Kernel.

    Science.gov (United States)

    Ren, Dongwei; Zuo, Wangmeng; Zhang, David; Xu, Jun; Zhang, Lei

    2017-10-17

    Most non-blind deconvolution methods are developed under the error-free kernel assumption, and are not robust to inaccurate blur kernels. Unfortunately, despite the great progress in blind deconvolution, estimation error remains inevitable during blur kernel estimation. Consequently, severe artifacts such as ringing effects and distortions are likely to be introduced in the non-blind deconvolution stage. In this paper, we tackle this issue by suggesting: (i) a partial map in the Fourier domain for modeling kernel estimation error, and (ii) a partial deconvolution model for robust deblurring with inaccurate blur kernels. The partial map is constructed by detecting the reliable Fourier entries of the estimated blur kernel. And partial deconvolution is applied to wavelet-based and learning-based models to suppress the adverse effect of kernel estimation error. Furthermore, an E-M algorithm is developed for estimating the partial map and recovering the latent sharp image alternately. Experimental results show that our partial deconvolution model is effective in relieving artifacts caused by inaccurate blur kernels, and can achieve favorable deblurring quality on synthetic and real blurry images.

  10. Discrete geometric analysis of message passing algorithm on graphs

    Science.gov (United States)

    Watanabe, Yusuke

    2010-04-01

    We often encounter probability distributions given as unnormalized products of non-negative functions. The factorization structures are represented by hypergraphs called factor graphs. Such distributions appear in various fields, including statistics, artificial intelligence, statistical physics, error correcting codes, etc. Given such a distribution, computation of marginal distributions and of the normalization constant is often required. However, these computations are in general intractable because of their computational cost. One successful approximation method is the Loopy Belief Propagation (LBP) algorithm. The focus of this thesis is an analysis of the LBP algorithm. If the factor graph is a tree, i.e. has no cycle, the algorithm gives the exact quantities. If the factor graph has cycles, however, the LBP algorithm does not give exact results and can exhibit oscillatory and non-convergent behaviors. The thematic question of this thesis is: how are the behaviors of the LBP algorithm affected by the discrete geometry of the factor graph? The primary contribution of this thesis is the discovery of a formula that establishes the relation between the LBP, the Bethe free energy and the graph zeta function. This formula provides new techniques for the analysis of the LBP algorithm, connecting properties of the graph with properties of the LBP and the Bethe free energy. We demonstrate applications of the techniques to several problems including the (non-)convexity of the Bethe free energy and the uniqueness and stability of the LBP fixed point. We also discuss the loop series initiated by Chertkov and Chernyak. The loop series is a subgraph expansion of the normalization constant, or partition function, and reflects the graph geometry. We investigate the theoretical nature of this series. Moreover, we show a partial connection between the loop series and the graph zeta function.

  11. Graph theoretical analysis of resting magnetoencephalographic functional connectivity networks

    Directory of Open Access Journals (Sweden)

    Lindsay eRutter

    2013-07-01

    Full Text Available Complex networks have been observed to exhibit small-world properties, believed to represent an optimal organization of local specialization and global integration of information processing at reduced wiring cost. Here, we applied magnitude squared coherence to resting magnetoencephalographic time series in reconstructed source space, acquired from controls and patients with schizophrenia, and generated frequency-dependent adjacency matrices modeling functional connectivity between virtual channels. After configuring undirected binary and weighted graphs, we found that all human networks demonstrated highly localized clustering and short characteristic path lengths. The most conservatively thresholded networks showed efficient wiring, with the topographical distance between connected vertices amounting to one-third of that observed in surrogate randomized topologies. Nodal degrees of the human networks conformed to a heavy-tailed exponentially truncated power law, compatible with the existence of hubs, which included theta and alpha bilateral cerebellar tonsil, beta and gamma bilateral posterior cingulate, and bilateral thalamus across all frequencies. We conclude that all networks showed small-worldness, minimal physical connection distance, and the skewed degree distributions characteristic of physically embedded networks, and that these calculations derived from graph theoretical mathematics did not quantifiably distinguish between subject populations, independent of bandwidth. However, post-hoc measurements of edge computations at the scale of the individual vertex revealed trends of reduced gamma connectivity across the posterior medial parietal cortex in patients, an observation consistent with our prior resting activation study that found a significant reduction of synthetic aperture magnetometry gamma power across similar regions. The basis of these small differences remains unclear.
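
    The two small-world ingredients computed here, the clustering coefficient and the characteristic path length, are easy to reproduce on a binarized graph; the sketch below uses a toy Watts-Strogatz graph and a random surrogate in place of the MEG coherence networks.

      # Clustering coefficient and characteristic path length of binarized graphs,
      # the two quantities behind the small-world characterization. A toy
      # Watts-Strogatz graph stands in for the MEG coherence networks.
      import networkx as nx

      G = nx.watts_strogatz_graph(n=90, k=6, p=0.1, seed=0)        # "empirical" graph
      R = nx.gnm_random_graph(n=90, m=G.number_of_edges(), seed=0) # random surrogate

      for name, g in [("small-world", G), ("random surrogate", R)]:
          if not nx.is_connected(g):
              g = g.subgraph(max(nx.connected_components(g), key=len)).copy()
          C = nx.average_clustering(g)
          L = nx.average_shortest_path_length(g)
          print(f"{name}: clustering={C:.3f}, path length={L:.3f}")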

  12. A Note on the PageRank of Undirected Graphs

    OpenAIRE

    Grolmusz, Vince

    2012-01-01

    The PageRank is a widely used scoring function of networks in general and of the World Wide Web graph in particular. The PageRank is defined for directed graphs, but in some special cases applications for undirected graphs occur. In the literature it is widely noted that the PageRank for undirected graphs is proportional to the degrees of the vertices of the graph. We prove that statement for a particular personalization vector in the definition of the PageRank, and we also show that in general ...
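
    The statement is easy to probe numerically: the sketch below compares the standard (uniformly personalized, damped) PageRank of a small undirected graph with the degree-proportional vector deg(v)/2m. Consistent with the note, the two agree only approximately for this default personalization, so a small but nonzero deviation is expected.

      # PageRank of an undirected graph versus the degree-proportional vector
      # deg(v) / (2m); with the default uniform personalization the two are
      # close but not identical, as the note discusses.
      import numpy as np
      import networkx as nx

      G = nx.karate_club_graph()                       # small undirected graph
      pr = nx.pagerank(G, alpha=0.85)

      degrees = np.array([d for _, d in G.degree()])
      deg_vec = degrees / degrees.sum()                # deg(v) / (2m)
      pr_vec = np.array([pr[v] for v in G.nodes()])

      print("max |pagerank - degree/2m| =", np.abs(pr_vec - deg_vec).max())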

  13. Method for calculating anisotropic neutron transport using scattering kernel without polynomial expansion

    International Nuclear Information System (INIS)

    Takahashi, Akito; Yamamoto, Junji; Ebisuya, Mituo; Sumita, Kenji

    1979-01-01

    A new method for calculating the anisotropic neutron transport is proposed for the angular spectral analysis of D-T fusion reactor neutronics. The method is based on the transport equation with a new type of anisotropic scattering kernel formulated by a single function I_i(μ', μ) instead of a polynomial expansion, for instance in Legendre polynomials. In the calculation of angular flux spectra using scattering kernels with the Legendre polynomial expansion, oscillations with negative flux are often observed; in principle this oscillation disappears with the new method. In this work, we discuss the anisotropic scattering kernels of elastic scattering and of the inelastic scatterings which excite discrete energy levels. The other scatterings were included in isotropic scattering kernels. An approximation method, using the first-collision source written with the I_i(μ', μ) function, was introduced to attenuate the 'oscillations' when one is obliged to use scattering kernels with the Legendre polynomial expansion. Calculated results with this approximation showed a remarkable improvement in the analysis of the angular flux spectra in a slab system of lithium metal with a D-T neutron source. (author)

  14. 7 CFR 981.9 - Kernel weight.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Kernel weight. 981.9 Section 981.9 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Regulating Handling Definitions § 981.9 Kernel weight. Kernel weight means the weight of kernels, including...

  15. Finite frequency traveltime sensitivity kernels for acoustic anisotropic media: Angle dependent bananas

    KAUST Repository

    Djebbi, Ramzi

    2013-08-19

    Anisotropy is an inherent character of the Earth's subsurface. It should be considered for modeling and inversion. The acoustic VTI wave equation approximates the wave behavior in anisotropic media, and especially its kinematic characteristics. To analyze which parts of the model affect the traveltime for anisotropic traveltime inversion methods, especially for wave equation tomography (WET), we derive the sensitivity kernels for anisotropic media using the VTI acoustic wave equation. A Born scattering approximation is first derived using the Fourier domain acoustic wave equation as a function of perturbations in three anisotropy parameters. Using the instantaneous traveltime, which unwraps the phase, we compute the kernels. These kernels resemble those for isotropic media, with the η kernel directionally dependent. They also have a maximum sensitivity along the geometrical ray, which is more realistic compared to the cross-correlation based kernels. Focusing on diving waves, which are used more often, especially recently in waveform inversion, we show sensitivity kernels in anisotropic media for this case.

  16. Finite frequency traveltime sensitivity kernels for acoustic anisotropic media: Angle dependent bananas

    KAUST Repository

    Djebbi, Ramzi; Alkhalifah, Tariq Ali

    2013-01-01

    Anisotropy is an inherent character of the Earth's subsurface. It should be considered for modeling and inversion. The acoustic VTI wave equation approximates the wave behavior in anisotropic media, and especially its kinematic characteristics. To analyze which parts of the model affect the traveltime for anisotropic traveltime inversion methods, especially for wave equation tomography (WET), we derive the sensitivity kernels for anisotropic media using the VTI acoustic wave equation. A Born scattering approximation is first derived using the Fourier domain acoustic wave equation as a function of perturbations in three anisotropy parameters. Using the instantaneous traveltime, which unwraps the phase, we compute the kernels. These kernels resemble those for isotropic media, with the η kernel directionally dependent. They also have a maximum sensitivity along the geometrical ray, which is more realistic compared to the cross-correlation based kernels. Focusing on diving waves, which are used more often, especially recently in waveform inversion, we show sensitivity kernels in anisotropic media for this case.

  17. New Fukui, dual and hyper-dual kernels as bond reactivity descriptors.

    Science.gov (United States)

    Franco-Pérez, Marco; Polanco-Ramírez, Carlos-A; Ayers, Paul W; Gázquez, José L; Vela, Alberto

    2017-06-21

    We define three new linear response indices with promising applications for bond reactivity using the mathematical framework of τ-CRT (finite temperature chemical reactivity theory). The τ-Fukui kernel is defined as the ratio between the fluctuations of the average electron density at two different points in space and the fluctuations in the average electron number, and is designed to integrate to the finite-temperature definition of the electronic Fukui function. When this kernel is condensed, it can be interpreted as a site-reactivity descriptor of the boundary region between two atoms. The τ-dual kernel corresponds to the first-order response of the Fukui kernel and is designed to integrate to the finite-temperature definition of the dual descriptor; it indicates the ambiphilic reactivity of a specific bond and enriches the traditional dual descriptor by allowing one to distinguish between the electron-accepting and electron-donating processes. Finally, the τ-hyper-dual kernel is defined as the second-order derivative of the Fukui kernel and is proposed as a measure of the strength of ambiphilic bonding interactions. Although these quantities have not been proposed before, our results for the τ-Fukui kernel and for the τ-dual kernel can be derived in the zero-temperature formulation of chemical reactivity theory with, among other things, the widely used parabolic interpolation model.

  18. Identification of Voxels Confounded by Venous Signals Using Resting-State fMRI Functional Connectivity Graph Clustering

    Directory of Open Access Journals (Sweden)

    Klaudius eKalcher

    2015-12-01

    Full Text Available Identifying venous voxels in fMRI datasets is important to increase the specificity of fMRI analyses to the microvasculature in the vicinity of the neural processes triggering the BOLD response. This is, however, difficult to achieve, in particular in typical studies where magnitude images of BOLD EPI are the only data available. In this study, voxelwise functional connectivity graphs were computed on minimally preprocessed low-TR (333 ms) multiband resting-state fMRI data, using both high positive and negative correlations to define edges between nodes (voxels). A high correlation threshold for binarization ensures that most edges in the resulting sparse graph reflect the high coherence of signals in medium to large veins. Graph clustering based on the optimization of modularity was then employed to identify clusters of coherent voxels in this graph, and all clusters of 50 or more voxels were then interpreted as corresponding to medium to large veins. Indeed, a comparison with SWI reveals that 75.6 ± 5.9% of voxels within these large clusters overlap with veins visible in the SWI image or lie outside the brain parenchyma. Some of the remaining differences between the two modalities can be explained by imperfect alignment or geometric distortions between the two images. Overall, the graph clustering based method for identifying venous voxels has high specificity, as well as the additional advantages of being computed in the same voxel grid as the fMRI dataset itself and not needing any additional data beyond what is usually acquired (and exported) in standard fMRI experiments.
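
    A compact sketch of the same pipeline on synthetic time series is given below: voxelwise correlations are thresholded at a high absolute value to form a sparse binary graph, which is then clustered by modularity, and only large clusters are kept. The 0.7 threshold, the 50-voxel cutoff and the simulated "vein-like" shared signal are placeholders for the study's actual settings.

      # Threshold voxelwise correlations into a sparse binary graph and cluster it
      # by modularity; large coherent clusters play the role of venous voxels.
      # The time series, threshold and cluster-size cutoff are all illustrative.
      import numpy as np
      import networkx as nx
      from networkx.algorithms.community import greedy_modularity_communities

      rng = np.random.default_rng(0)
      n_voxels, n_volumes = 300, 500
      ts = rng.normal(size=(n_voxels, n_volumes))
      shared = rng.normal(size=n_volumes)            # a coherent "vein-like" signal
      ts[:60] += 2.0 * shared                        # first 60 voxels share it

      corr = np.corrcoef(ts)
      np.fill_diagonal(corr, 0.0)
      adj = (np.abs(corr) > 0.7).astype(int)         # keep only high-|r| edges

      G = nx.from_numpy_array(adj)
      G.remove_nodes_from([v for v, d in G.degree() if d == 0])
      clusters = [c for c in greedy_modularity_communities(G) if len(c) >= 50]
      print("large coherent clusters:", [len(c) for c in clusters])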

  19. Quantized kernel least mean square algorithm.

    Science.gov (United States)

    Chen, Badong; Zhao, Songlin; Zhu, Pingping; Príncipe, José C

    2012-01-01

    In this paper, we propose a quantization approach, as an alternative of sparsification, to curb the growth of the radial basis function structure in kernel adaptive filtering. The basic idea behind this method is to quantize and hence compress the input (or feature) space. Different from sparsification, the new approach uses the "redundant" data to update the coefficient of the closest center. In particular, a quantized kernel least mean square (QKLMS) algorithm is developed, which is based on a simple online vector quantization method. The analytical study of the mean square convergence has been carried out. The energy conservation relation for QKLMS is established, and on this basis we arrive at a sufficient condition for mean square convergence, and a lower and upper bound on the theoretical value of the steady-state excess mean square error. Static function estimation and short-term chaotic time-series prediction examples are presented to demonstrate the excellent performance.
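
    A minimal sketch of the QKLMS update with a Gaussian kernel is given below: if a new input falls within the quantization radius of an existing center, only that center's coefficient is updated; otherwise a new center is added. The step size, kernel width, quantization size and toy regression task are illustrative choices, not the paper's settings.

      # Quantized kernel LMS (QKLMS) with a Gaussian kernel: inputs close to an
      # existing center update that center's coefficient instead of growing the
      # radial basis function network.
      import numpy as np

      def gauss(x, centers, sigma):
          return np.exp(-np.sum((x - centers) ** 2, axis=-1) / (2 * sigma ** 2))

      def qklms(X, d, eta=0.5, eps_q=0.3, sigma=0.5):
          centers, coeffs = [X[0]], [eta * d[0]]
          for x, y in zip(X[1:], d[1:]):
              C = np.asarray(centers)
              err = y - np.dot(np.asarray(coeffs), gauss(x, C, sigma))
              j = int(np.argmin(np.linalg.norm(C - x, axis=1)))
              if np.linalg.norm(C[j] - x) <= eps_q:
                  coeffs[j] += eta * err               # quantize: merge the update
              else:
                  centers.append(x)                    # grow the network
                  coeffs.append(eta * err)
          return centers, coeffs

      rng = np.random.default_rng(0)
      X = rng.uniform(-2, 2, size=(500, 1))
      d = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=500)
      centers, coeffs = qklms(X, d)
      print("centers retained:", len(centers), "of", len(X))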

  20. Veto-Consensus Multiple Kernel Learning

    NARCIS (Netherlands)

    Zhou, Y.; Hu, N.; Spanos, C.J.

    2016-01-01

    We propose Veto-Consensus Multiple Kernel Learning (VCMKL), a novel way of combining multiple kernels such that one class of samples is described by the logical intersection (consensus) of base kernelized decision rules, whereas the other classes by the union (veto) of their complements. The

  1. Protein fold recognition using geometric kernel data fusion.

    Science.gov (United States)

    Zakeri, Pooya; Jeuris, Ben; Vandebril, Raf; Moreau, Yves

    2014-07-01

    Various approaches based on features extracted from protein sequences, often combined with machine learning methods, have been used in the prediction of protein folds. Finding an efficient technique for integrating these different protein features has received increasing attention. In particular, kernel methods are an interesting class of techniques for integrating heterogeneous data. Various methods have been proposed to fuse multiple kernels. Most techniques for multiple kernel learning focus on learning a convex linear combination of base kernels. In addition to the limitation of linear combinations, working with such approaches could cause a loss of potentially useful information. We design several techniques to combine kernel matrices by taking more involved, geometry-inspired means of these matrices instead of convex linear combinations. We consider various sequence-based protein features, including information extracted directly from position-specific scoring matrices and local sequence alignment. We evaluate our methods for classification on the SCOP PDB-40D benchmark dataset for protein fold recognition. The best overall accuracy on the protein fold recognition test set obtained by our methods is ∼86.7%. This is an improvement over the results of the best existing approach. Moreover, our computational model has been developed by incorporating the functional domain composition of proteins through a hybridization model. It is observed that by using our proposed hybridization model, the protein fold recognition accuracy is further improved to 89.30%. Furthermore, we investigate the performance of our approach on the protein remote homology detection problem by fusing multiple string kernels. The MATLAB code for our proposed geometric kernel fusion frameworks is publicly available at http://people.cs.kuleuven.be/∼raf.vandebril/homepage/software/geomean.php?menu=5/. © The Author 2014. Published by Oxford University Press.
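
    One simple geometry-inspired alternative to a convex linear combination is the log-Euclidean mean of symmetric positive definite kernel matrices, sketched below on two made-up kernels; this only illustrates the idea, and is not the specific matrix means or protein kernels used in the paper.

      # Log-Euclidean mean of two symmetric positive definite kernel matrices,
      # a simple geometry-inspired alternative to the arithmetic (linear) mean.
      import numpy as np
      from scipy.linalg import expm, logm

      rng = np.random.default_rng(0)
      X1 = rng.normal(size=(25, 6))               # two illustrative feature sets
      X2 = rng.normal(size=(25, 4))

      def rbf(X, gamma):
          sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
          return np.exp(-gamma * sq)

      K1 = rbf(X1, 0.2) + 1e-6 * np.eye(25)       # jitter keeps the matrices SPD
      K2 = rbf(X2, 0.2) + 1e-6 * np.eye(25)

      K_linear = 0.5 * (K1 + K2)                         # arithmetic mean
      K_logeuc = expm(0.5 * (logm(K1) + logm(K2))).real  # log-Euclidean mean

      print(np.linalg.eigvalsh(K_logeuc).min() > 0)      # still positive definite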

  2. 7 CFR 51.2295 - Half kernel.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Half kernel. 51.2295 Section 51.2295 Agriculture... Standards for Shelled English Walnuts (Juglans Regia) Definitions § 51.2295 Half kernel. Half kernel means the separated half of a kernel with not more than one-eighth broken off. ...

  3. Linked-cluster formulation of electron-hole interaction kernel in real-space representation without using unoccupied states.

    Science.gov (United States)

    Bayne, Michael G; Scher, Jeremy A; Ellis, Benjamin H; Chakraborty, Arindam

    2018-05-21

    Electron-hole or quasiparticle representation plays a central role in describing electronic excitations in many-electron systems. For charge-neutral excitation, the electron-hole interaction kernel is the quantity of interest for calculating important excitation properties such as optical gap, optical spectra, electron-hole recombination and electron-hole binding energies. The electron-hole interaction kernel can be formally derived from the density-density correlation function using both Green's function and TDDFT formalism. The accurate determination of the electron-hole interaction kernel remains a significant challenge for precise calculations of optical properties in the GW+BSE formalism. From the TDDFT perspective, the electron-hole interaction kernel has been viewed as a path to systematic development of frequency-dependent exchange-correlation functionals. Traditional approaches, such as MBPT formalism, use unoccupied states (which are defined with respect to Fermi vacuum) to construct the electron-hole interaction kernel. However, the inclusion of unoccupied states has long been recognized as the leading computational bottleneck that limits the application of this approach for larger finite systems. In this work, an alternative derivation that avoids using unoccupied states to construct the electron-hole interaction kernel is presented. The central idea of this approach is to use explicitly correlated geminal functions for treating electron-electron correlation for both ground and excited state wave functions. Using this ansatz, it is derived using both diagrammatic and algebraic techniques that the electron-hole interaction kernel can be expressed only in terms of linked closed-loop diagrams. It is proved that the cancellation of unlinked diagrams is a consequence of linked-cluster theorem in real-space representation. The electron-hole interaction kernel derived in this work was used to calculate excitation energies in many-electron systems and results

  4. A synthesis of empirical plant dispersal kernels

    Czech Academy of Sciences Publication Activity Database

    Bullock, J. M.; González, L. M.; Tamme, R.; Götzenberger, Lars; White, S. M.; Pärtel, M.; Hooftman, D. A. P.

    2017-01-01

    Roč. 105, č. 1 (2017), s. 6-19 ISSN 0022-0477 Institutional support: RVO:67985939 Keywords : dispersal kernel * dispersal mode * probability density function Subject RIV: EH - Ecology, Behaviour OBOR OECD: Ecology Impact factor: 5.813, year: 2016

  5. On Improving Convergence Rates for Nonnegative Kernel Density Estimators

    OpenAIRE

    Terrell, George R.; Scott, David W.

    1980-01-01

    To improve the rate of decrease of the integrated mean square error for nonparametric kernel density estimators beyond O(n^{-4/5}), we must relax the constraint that the density estimate be a bona fide density function, that is, be nonnegative and integrate to one. All current methods for kernel (and orthogonal series) estimators relax the nonnegativity constraint. In this paper we show how to achieve a similar improvement by relaxing the integral constraint only. This is important in appl...

  6. An Approximate Approach to Automatic Kernel Selection.

    Science.gov (United States)

    Ding, Lizhong; Liao, Shizhong

    2016-02-02

    Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.

  7. Iterative software kernels

    Energy Technology Data Exchange (ETDEWEB)

    Duff, I.

    1994-12-31

    This workshop focuses on kernels for iterative software packages. Specifically, the three speakers discuss various aspects of sparse BLAS kernels. Their topics are: 'Current status of user level sparse BLAS'; 'Current status of the sparse BLAS toolkit'; and 'Adding matrix-matrix and matrix-matrix-matrix multiply to the sparse BLAS toolkit'.

  8. Viscozyme L pretreatment on palm kernels improved the aroma of palm kernel oil after kernel roasting.

    Science.gov (United States)

    Zhang, Wencan; Leong, Siew Mun; Zhao, Feifei; Zhao, Fangju; Yang, Tiankui; Liu, Shaoquan

    2018-05-01

    With an interest to enhance the aroma of palm kernel oil (PKO), Viscozyme L, an enzyme complex containing a wide range of carbohydrases, was applied to alter the carbohydrates in palm kernels (PK) to modulate the formation of volatiles upon kernel roasting. After Viscozyme treatment, the content of simple sugars and free amino acids in PK increased by 4.4-fold and 4.5-fold, respectively. After kernel roasting and oil extraction, significantly more 2,5-dimethylfuran, 2-[(methylthio)methyl]-furan, 1-(2-furanyl)-ethanone, 1-(2-furyl)-2-propanone, 5-methyl-2-furancarboxaldehyde and 2-acetyl-5-methylfuran but less 2-furanmethanol and 2-furanmethanol acetate were found in treated PKO; the correlation between their formation and simple sugar profile was estimated by using partial least square regression (PLS1). Obvious differences in pyrroles and Strecker aldehydes were also found between the control and treated PKOs. Principal component analysis (PCA) clearly discriminated the treated PKOs from that of control PKOs on the basis of all volatile compounds. Such changes in volatiles translated into distinct sensory attributes, whereby treated PKO was more caramelic and burnt after aqueous extraction and more nutty, roasty, caramelic and smoky after solvent extraction. Copyright © 2018 Elsevier Ltd. All rights reserved.

  9. Fuzzy-based multi-kernel spherical support vector machine for ...

    Indian Academy of Sciences (India)

    A K Sampath

    2017-08-08

    ... design a new multi-kernel function based on the fuzzy triangular membership function ...

  10. Graph sampling

    OpenAIRE

    Zhang, L.-C.; Patone, M.

    2017-01-01

    We synthesise the existing theory of graph sampling. We propose a formal definition of sampling in finite graphs, and provide a classification of potential graph parameters. We develop a general approach of Horvitz–Thompson estimation to T-stage snowball sampling, and present various reformulations of some common network sampling methods in the literature in terms of the outlined graph sampling theory.

  11. An Ensemble Approach to Building Mercer Kernels with Prior Information

    Science.gov (United States)

    Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd

    2005-01-01

    This paper presents a new methodology for automatic knowledge-driven data mining based on the theory of Mercer kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite, dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn kernel functions directly from data, rather than using pre-defined kernels. These data-adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing for physical information to be encoded in the model. Specifically, we demonstrate the use of the algorithm in situations with extremely small samples of data. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS) and demonstrate the method's superior performance against standard methods. The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code.
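
    As a much-simplified stand-in for a density-derived Mercer kernel, the sketch below builds K(x, y) as the inner product of component posterior vectors from a fitted Gaussian mixture; such a kernel is symmetric positive semidefinite by construction, though it is not the authors' exact Mixture Density Mercer Kernel formulation.

      # A data-driven Mercer kernel from a fitted Gaussian mixture: K(x, y) is the
      # inner product of posterior membership vectors, hence symmetric positive
      # semidefinite by construction. A simplified stand-in only.
      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(0)
      X = np.vstack([rng.normal(-2.0, 1.0, size=(60, 2)),
                     rng.normal(+2.0, 1.0, size=(60, 2))])

      gmm = GaussianMixture(n_components=4, random_state=0).fit(X)
      P = gmm.predict_proba(X)           # (n_samples, n_components) posteriors
      K = P @ P.T                        # kernel matrix, PSD by construction

      print(K.shape, np.linalg.eigvalsh(K).min() >= -1e-10)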

  12. Kernel-Correlated Lévy Field Driven Forward Rate and Application to Derivative Pricing

    International Nuclear Information System (INIS)

    Bo Lijun; Wang Yongjin; Yang Xuewei

    2013-01-01

    We propose a term structure of forward rates driven by a kernel-correlated Lévy random field under the HJM framework. The kernel-correlated Lévy random field is composed of a kernel-correlated Gaussian random field and a centered Poisson random measure. We shall give a criterion to preclude arbitrage under the risk-neutral pricing measure. As applications, an interest rate derivative with general payoff functional is priced under this pricing measure

  13. Characterizing graphs of maximum matching width at most 2

    DEFF Research Database (Denmark)

    Jeong, Jisu; Ok, Seongmin; Suh, Geewon

    2017-01-01

    The maximum matching width is a width parameter that is defined on a branch-decomposition over the vertex set of a graph. The size of a maximum matching in the bipartite graph is used as a cut-function. In this paper, we characterize the graphs of maximum matching width at most 2 using the minor o...

  14. Pathway-Based Kernel Boosting for the Analysis of Genome-Wide Association Studies

    Science.gov (United States)

    Manitz, Juliane; Burger, Patricia; Amos, Christopher I.; Chang-Claude, Jenny; Wichmann, Heinz-Erich; Kneib, Thomas; Bickeböller, Heike

    2017-01-01

    The analysis of genome-wide association studies (GWAS) benefits from the investigation of biologically meaningful gene sets, such as gene-interaction networks (pathways). We propose an extension to a successful kernel-based pathway analysis approach by integrating kernel functions into a powerful algorithmic framework for variable selection, to enable investigation of multiple pathways simultaneously. We employ genetic similarity kernels from the logistic kernel machine test (LKMT) as base-learners in a boosting algorithm. A model to explain case-control status is created iteratively by selecting pathways that improve its prediction ability. We evaluated our method in simulation studies adopting 50 pathways for different sample sizes and genetic effect strengths. Additionally, we included an exemplary application of kernel boosting to a rheumatoid arthritis and a lung cancer dataset. Simulations indicate that kernel boosting outperforms the LKMT in certain genetic scenarios. Applications to GWAS data on rheumatoid arthritis and lung cancer resulted in sparse models which were based on pathways interpretable in a clinical sense. Kernel boosting is highly flexible in terms of considered variables and overcomes the problem of multiple testing. Additionally, it enables the prediction of clinical outcomes. Thus, kernel boosting constitutes a new, powerful tool in the analysis of GWAS data and towards the understanding of biological processes involved in disease susceptibility. PMID:28785300

  15. Pathway-Based Kernel Boosting for the Analysis of Genome-Wide Association Studies.

    Science.gov (United States)

    Friedrichs, Stefanie; Manitz, Juliane; Burger, Patricia; Amos, Christopher I; Risch, Angela; Chang-Claude, Jenny; Wichmann, Heinz-Erich; Kneib, Thomas; Bickeböller, Heike; Hofner, Benjamin

    2017-01-01

    The analysis of genome-wide association studies (GWAS) benefits from the investigation of biologically meaningful gene sets, such as gene-interaction networks (pathways). We propose an extension to a successful kernel-based pathway analysis approach by integrating kernel functions into a powerful algorithmic framework for variable selection, to enable investigation of multiple pathways simultaneously. We employ genetic similarity kernels from the logistic kernel machine test (LKMT) as base-learners in a boosting algorithm. A model to explain case-control status is created iteratively by selecting pathways that improve its prediction ability. We evaluated our method in simulation studies adopting 50 pathways for different sample sizes and genetic effect strengths. Additionally, we included an exemplary application of kernel boosting to a rheumatoid arthritis and a lung cancer dataset. Simulations indicate that kernel boosting outperforms the LKMT in certain genetic scenarios. Applications to GWAS data on rheumatoid arthritis and lung cancer resulted in sparse models which were based on pathways interpretable in a clinical sense. Kernel boosting is highly flexible in terms of considered variables and overcomes the problem of multiple testing. Additionally, it enables the prediction of clinical outcomes. Thus, kernel boosting constitutes a new, powerful tool in the analysis of GWAS data and towards the understanding of biological processes involved in disease susceptibility.
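
    A rough componentwise boosting sketch in the spirit of the approach described above, under simplifying assumptions: squared loss and plain kernel ridge base-learners stand in for the logistic kernel machine test, and the pathway kernels are assumed to be given as precomputed matrices:

        # Simplified componentwise boosting over precomputed kernels: at each
        # step, fit a kernel ridge base-learner on every candidate kernel, keep
        # the one that best explains the current residual, and add a shrunken
        # copy of its fit to the model.
        import numpy as np

        def kernel_boost(kernels, y, n_steps=50, nu=0.1, ridge=1e-3):
            n = len(y)
            f = np.zeros(n)
            selected = []
            for _ in range(n_steps):
                r = y - f              # residual (negative gradient of squared loss)
                best = None
                for j, K in enumerate(kernels):
                    alpha = np.linalg.solve(K + ridge * np.eye(n), r)
                    fit = K @ alpha
                    err = np.sum((r - fit) ** 2)
                    if best is None or err < best[0]:
                        best = (err, j, fit)
                f += nu * best[2]
                selected.append(best[1])   # index of the chosen (pathway) kernel
            return f, selected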

  16. Algorithms for Planar Graphs and Graphs in Metric Spaces

    DEFF Research Database (Denmark)

    Wulff-Nilsen, Christian

    structural properties that can be exploited. For instance, a road network or a wire layout on a microchip is typically (near-)planar and distances in the network are often defined w.r.t. the Euclidean or the rectilinear metric. Specialized algorithms that take advantage of such properties are often orders...... of magnitude faster than the corresponding algorithms for general graphs. The first and main part of this thesis focuses on the development of efficient planar graph algorithms. The most important contributions include a faster single-source shortest path algorithm, a distance oracle with subquadratic...... for geometric graphs and graphs embedded in metric spaces. Roughly speaking, the stretch factor is a real value expressing how well a (geo-)metric graph approximates the underlying complete graph w.r.t. distances. We give improved algorithms for computing the stretch factor of a given graph and for augmenting...

  17. 7 CFR 51.1441 - Half-kernel.

    Science.gov (United States)

    2010-01-01

    7 CFR 51.1441 (United States Standards for Grades of Shelled Pecans, Definitions), § 51.1441 Half-kernel: Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume missing...

  18. Multiple graph regularized nonnegative matrix factorization

    KAUST Repository

    Wang, Jim Jing-Yan

    2013-10-01

    Non-negative matrix factorization (NMF) has been widely used as a data representation method based on components. To overcome the disadvantage of NMF in failing to consider the manifold structure of a data set, graph regularized NMF (GrNMF) has been proposed by Cai et al. by constructing an affinity graph and searching for a matrix factorization that respects graph structure. Selecting a graph model and its corresponding parameters is critical for this strategy. This process is usually carried out by cross-validation or discrete grid search, which are time consuming and prone to overfitting. In this paper, we propose a GrNMF, called MultiGrNMF, in which the intrinsic manifold is approximated by a linear combination of several graphs with different models and parameters inspired by ensemble manifold regularization. Factorization matrices and linear combination coefficients of graphs are determined simultaneously within a unified objective function. They are alternately optimized in an iterative algorithm, thus resulting in a novel data representation algorithm. Extensive experiments on a protein subcellular localization task and an Alzheimer's disease diagnosis task demonstrate the effectiveness of the proposed algorithm. © 2013 Elsevier Ltd. All rights reserved.
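
    A simplified sketch of graph-regularized NMF with a fixed convex combination of several affinity graphs (the record's method learns the combination coefficients jointly with the factorization; that step is omitted here and the weights alphas are assumed given):

        # Sketch: graph-regularized NMF with a fixed combination of affinity
        # graphs Ws weighted by alphas.  X is a nonnegative (features x samples)
        # matrix; the multiplicative updates follow the usual GrNMF recipe.
        import numpy as np

        def multi_graph_nmf(X, Ws, alphas, k=10, lam=1.0, n_iter=200, eps=1e-9):
            W = sum(a * Wi for a, Wi in zip(alphas, Ws))   # combined affinity graph
            D = np.diag(W.sum(axis=1))
            m, n = X.shape
            rng = np.random.RandomState(0)
            U, V = rng.rand(m, k), rng.rand(n, k)
            for _ in range(n_iter):
                U *= (X @ V) / (U @ V.T @ V + eps)
                V *= (X.T @ U + lam * W @ V) / (V @ U.T @ U + lam * D @ V + eps)
            return U, V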

  19. Moderate deviations principles for the kernel estimator of ...

    African Journals Online (AJOL)

    Abstract. The aim of this paper is to provide pointwise and uniform moderate deviations principles for the kernel estimator of a nonrandom regression function. Moreover, we give an application of these moderate deviations principles to the construction of confidence regions for the regression function. Résumé (French abstract). The objective of ...

  20. Graph Theoretical Analysis of Functional Brain Networks: Test-Retest Evaluation on Short- and Long-Term Resting-State Functional MRI Data

    Science.gov (United States)

    Wang, Jin-Hui; Zuo, Xi-Nian; Gohel, Suril; Milham, Michael P.; Biswal, Bharat B.; He, Yong

    2011-01-01

    Graph-based computational network analysis has proven a powerful tool to quantitatively characterize functional architectures of the brain. However, the test-retest (TRT) reliability of graph metrics of functional networks has not been systematically examined. Here, we investigated TRT reliability of topological metrics of functional brain networks derived from resting-state functional magnetic resonance imaging data. Specifically, we evaluated both short-term and long-term (5 months apart) TRT reliability for 12 global and 6 local nodal network metrics. We found that reliability of global network metrics was overall low, threshold-sensitive and dependent on several factors of scanning time interval (TI, long-term>short-term), network membership (NM, networks excluding negative correlations>networks including negative correlations) and network type (NT, binarized networks>weighted networks). The dependence was modulated by another factor of node definition (ND) strategy. The local nodal reliability exhibited large variability across nodal metrics and a spatially heterogeneous distribution. Nodal degree was the most reliable metric and varied the least across the factors above. Hub regions in association and limbic/paralimbic cortices showed moderate TRT reliability. Importantly, nodal reliability was robust to the above-mentioned four factors. Simulation analysis revealed that global network metrics were extremely sensitive (but to varying degrees) to noise in functional connectivity, and weighted networks generated numerically more reliable results compared with binarized networks. For nodal network metrics, they showed high resistance to noise in functional connectivity and no NT-related differences were found in the resistance. These findings provide important implications on how to choose reliable analytical schemes and network metrics of interest. PMID:21818285

  1. Graph-cut based discrete-valued image reconstruction.

    Science.gov (United States)

    Tuysuzoglu, Ahmet; Karl, W Clem; Stojanovic, Ivana; Castañòn, David; Ünlü, M Selim

    2015-05-01

    Efficient graph-cut methods have been used with great success for labeling and denoising problems occurring in computer vision. Unfortunately, the presence of linear image mappings has prevented the use of these techniques in most discrete-amplitude image reconstruction problems. In this paper, we develop a graph-cut based framework for the direct solution of discrete amplitude linear image reconstruction problems cast as regularized energy function minimizations. We first analyze the structure of discrete linear inverse problem cost functions to show that the obstacle to the application of graph-cut methods to their solution is the variable mixing caused by the presence of the linear sensing operator. We then propose to use a surrogate energy functional that overcomes the challenges imposed by the sensing operator yet can be utilized efficiently in existing graph-cut frameworks. We use this surrogate energy functional to devise a monotonic iterative algorithm for the solution of discrete valued inverse problems. We first provide experiments using local convolutional operators and show the robustness of the proposed technique to noise and stability to changes in regularization parameter. Then we focus on nonlocal, tomographic examples where we consider limited-angle data problems. We compare our technique with state-of-the-art discrete and continuous image reconstruction techniques. Experiments show that the proposed method outperforms state-of-the-art techniques in challenging scenarios involving discrete valued unknowns.

  2. A framework for multiple kernel support vector regression and its applications to siRNA efficacy prediction.

    Science.gov (United States)

    Qiu, Shibin; Lane, Terran

    2009-01-01

    The cell defense mechanism of RNA interference has applications in gene function analysis and promising potentials in human disease therapy. To effectively silence a target gene, it is desirable to select appropriate initiator siRNA molecules having satisfactory silencing capabilities. Computational prediction for silencing efficacy of siRNAs can assist this screening process before using them in biological experiments. String kernel functions, which operate directly on the string objects representing siRNAs and target mRNAs, have been applied to support vector regression for the prediction and improved accuracy over numerical kernels in multidimensional vector spaces constructed from descriptors of siRNA design rules. To fully utilize information provided by string and numerical data, we propose to unify the two in a kernel feature space by devising a multiple kernel regression framework where a linear combination of the kernels is used. We formulate the multiple kernel learning into a quadratically constrained quadratic programming (QCQP) problem, which, although it yields a globally optimal solution, is computationally demanding and requires a commercial solver package. We further propose three heuristics based on the principle of kernel-target alignment and predictive accuracy. Empirical results demonstrate that multiple kernel regression can improve accuracy, decrease model complexity by reducing the number of support vectors, and speed up computational performance dramatically. In addition, multiple kernel regression evaluates the importance of constituent kernels, which for the siRNA efficacy prediction problem, compares the relative significance of the design rules. Finally, we give insights into the multiple kernel regression mechanism and point out possible extensions.
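
    An illustrative sketch of the kernel-target alignment idea mentioned above (one plausible heuristic, not necessarily the exact one used in the paper): each candidate kernel is weighted by its alignment with the label outer product and the kernels are then combined linearly:

        # Illustrative heuristic: weight each candidate kernel by its alignment
        # with the target outer product y y^T, then combine the kernels linearly.
        import numpy as np

        def alignment(K, y):
            Y = np.outer(y, y)
            return np.sum(K * Y) / (np.linalg.norm(K) * np.linalg.norm(Y))

        def combine_kernels(kernels, y):
            w = np.array([max(alignment(K, y), 0.0) for K in kernels])
            w /= w.sum()
            return sum(wi * K for wi, K in zip(w, kernels)), w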

  3. System dynamics and control with bond graph modeling

    CERN Document Server

    Kypuros, Javier

    2013-01-01

    Part I, Dynamic System Modeling. Introduction to System Dynamics: Introduction; System Decomposition and Model Complexity; Mathematical Modeling of Dynamic Systems; Analysis and Design of Dynamic Systems; Control of Dynamic Systems; Diagrams of Dynamic Systems; A Graph-Centered Approach to Modeling; Summary; Practice; Exercises. Basic Bond Graph Elements: Introduction; Power and Energy Variables; Basic 1-Port Elements; Basic 2-Port Elements; Junction Elements; Simple Bond Graph Examples; Summary; Practice; Exercises. Bond Graph Synthesis and Equation Derivation: Introduction; General Guidelines; Mechanical Translation; Mechanical Rotation; Electrical Circuits; Hydraulic Circuits; Mixed Systems; State Equation Derivation; State-Space Representations; Algebraic Loops and Derivative Causality; Summary; Practice; Exercises. Impedance Bond Graphs: Introduction; Laplace Transform of the State-Space Equation; Basic 1-Port Impedances; Impedance Bond Graph Synthesis; Junctions, Transformers, and Gyrators; Effort and Flow Dividers; Sign Changes; Transfer Function Derivation; Alternative Derivation of Transf...

  4. Laplacian eigenvectors of graphs Perron-Frobenius and Faber-Krahn type theorems

    CERN Document Server

    Biyikoğu, Türker; Stadler, Peter F

    2007-01-01

    Eigenvectors of graph Laplacians have not, to date, been the subject of expository articles and thus they may seem a surprising topic for a book. The authors propose two motivations for this new LNM volume: (1) There are fascinating subtle differences between the properties of solutions of Schrödinger equations on manifolds on the one hand, and their discrete analogs on graphs. (2) "Geometric" properties of (cost) functions defined on the vertex sets of graphs are of practical interest for heuristic optimization algorithms. The observation that the cost functions of quite a few of the well-studied combinatorial optimization problems are eigenvectors of associated graph Laplacians has prompted the investigation of such eigenvectors. The volume investigates the structure of eigenvectors and looks at the number of their sign graphs ("nodal domains"), Perron components, graphs with extremal properties with respect to eigenvectors. The Rayleigh quotient and rearrangement of graphs form the main methodology.

  5. Multiscale asymmetric orthogonal wavelet kernel for linear programming support vector learning and nonlinear dynamic systems identification.

    Science.gov (United States)

    Lu, Zhao; Sun, Jing; Butts, Kenneth

    2014-05-01

    Support vector regression for approximating nonlinear dynamic systems is more delicate than the approximation of indicator functions in support vector classification, particularly for systems that involve multitudes of time scales in their sampled data. The kernel used for support vector learning determines the class of functions from which a support vector machine can draw its solution, and the choice of kernel significantly influences the performance of a support vector machine. In this paper, to bridge the gap between wavelet multiresolution analysis and kernel learning, the closed-form orthogonal wavelet is exploited to construct new multiscale asymmetric orthogonal wavelet kernels for linear programming support vector learning. The closed-form multiscale orthogonal wavelet kernel provides a systematic framework to implement multiscale kernel learning via dyadic dilations and also enables us to represent complex nonlinear dynamics effectively. To demonstrate the superiority of the proposed multiscale wavelet kernel in identifying complex nonlinear dynamic systems, two case studies are presented that aim at building parallel models on benchmark datasets. The development of parallel models that address the long-term/mid-term prediction issue is more intricate and challenging than the identification of series-parallel models where only one-step ahead prediction is required. Simulation results illustrate the effectiveness of the proposed multiscale kernel learning.

  6. Local Observed-Score Kernel Equating

    Science.gov (United States)

    Wiberg, Marie; van der Linden, Wim J.; von Davier, Alina A.

    2014-01-01

    Three local observed-score kernel equating methods that integrate methods from the local equating and kernel equating frameworks are proposed. The new methods were compared with their earlier counterparts with respect to such measures as bias--as defined by Lord's criterion of equity--and percent relative error. The local kernel item response…

  7. Credit scoring analysis using kernel discriminant

    Science.gov (United States)

    Widiharih, T.; Mukid, M. A.; Mustafid

    2018-05-01

    A credit scoring model is an important tool for reducing the risk of wrong decisions when granting credit facilities to applicants. This paper investigates the performance of a kernel discriminant model in assessing customer credit risk. Kernel discriminant analysis is a non-parametric method, which means that it does not require any assumptions about the probability distribution of the input. The main ingredient is a kernel that allows an efficient computation of the Fisher discriminant. We use several kernels such as normal, Epanechnikov, biweight, and triweight. The models' accuracies were compared with each other using data from a financial institution in Indonesia. The results show that kernel discriminant analysis can be an alternative method for determining who is eligible for a credit loan. For the data we use, a normal kernel is the relevant choice for credit scoring with a kernel discriminant model. Sensitivity and specificity reach 0.5556 and 0.5488, respectively.
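
    A minimal sketch of a kernel discriminant rule of the kind described above, assuming a product Epanechnikov kernel, a user-supplied bandwidth h and two classes (the paper's exact estimator and bandwidth choice may differ):

        # Sketch of a two-class kernel discriminant rule: estimate each class
        # density with a product Epanechnikov kernel and pick the class with the
        # larger prior-weighted density.
        import numpy as np

        def epanechnikov(u):
            return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)

        def class_density(x, X_class, h):
            u = (x - X_class) / h                  # (n_points, n_features)
            return np.mean(np.prod(epanechnikov(u) / h, axis=1))

        def kernel_discriminant(x, X0, X1, h=0.5, prior0=0.5):
            s0 = prior0 * class_density(x, X0, h)
            s1 = (1.0 - prior0) * class_density(x, X1, h)
            return int(s1 > s0)                    # returns the predicted class index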

  8. Some remarks on definability of process graphs

    NARCIS (Netherlands)

    Grabmayer, C.A.; Klop, J.W.; Luttik, B.; Baier, C.; Hermanns, H.

    2006-01-01

    We propose the notions of "density" and "connectivity" of infinite process graphs and investigate them in the context of the wellknown process algebras BPA and BPP. For a process graph G, the density function in a state s maps a natural number n to the number of states of G with distance less or

  9. Study of the convergence behavior of the complex kernel least mean square algorithm.

    Science.gov (United States)

    Paul, Thomas K; Ogunfunmi, Tokunbo

    2013-09-01

    The complex kernel least mean square (CKLMS) algorithm was recently derived and allows for online kernel adaptive learning for complex data. Kernel adaptive methods can be used in finding solutions for neural network and machine learning applications. The derivation of CKLMS involved the development of a modified Wirtinger calculus for Hilbert spaces to obtain the cost function gradient. We analyze the convergence of the CKLMS with different kernel forms for complex data. The expressions obtained enable us to generate theory-predicted mean-square error curves considering the circularity of the complex input signals and their effect on nonlinear learning. Simulations are used for verifying the analysis results.
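
    For orientation, a minimal real-valued kernel LMS skeleton (the complex-valued CKLMS and its Wirtinger-calculus derivation analyzed in the paper are not reproduced; the step size, kernel width and lack of sparsification are illustrative assumptions):

        # Minimal real-valued kernel LMS: the predictor grows one kernel unit
        # per sample, and the new coefficient is the step size times the
        # instantaneous prediction error.
        import numpy as np

        def gaussian_kernel(a, b, sigma=1.0):
            return np.exp(-np.sum((a - b) ** 2) / (2.0 * sigma ** 2))

        def klms(X, d, eta=0.2, sigma=1.0):
            centers, coeffs, errors = [], [], []
            for x, target in zip(X, d):
                y = sum(c * gaussian_kernel(x, xc, sigma)
                        for c, xc in zip(coeffs, centers))
                e = target - y
                centers.append(x)
                coeffs.append(eta * e)
                errors.append(e)
            return centers, coeffs, errors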

  10. Degree Associated Edge Reconstruction Number of Graphs with Regular Pruned Graph

    Directory of Open Access Journals (Sweden)

    P. Anusha Devi

    2015-10-01

    Full Text Available An ecard of a graph $G$ is a subgraph formed by deleting an edge. A da-ecard specifies the degree of the deleted edge along with the ecard. The degree associated edge reconstruction number of a graph $G$, $dern(G)$, is the minimum number of da-ecards that uniquely determines $G$. The adversary degree associated edge reconstruction number of a graph $G$, $adern(G)$, is the minimum number $k$ such that every collection of $k$ da-ecards of $G$ uniquely determines $G$. The maximal subgraph without end vertices of a graph $G$ which is not a tree is the pruned graph of $G$. It is shown that $dern$ of complete multipartite graphs and some connected graphs with regular pruned graph is $1$ or $2$. We also determine $dern$ and $adern$ of the corona product of standard graphs.

  11. Nonparametric evaluation of dynamic disease risk: a spatio-temporal kernel approach.

    Directory of Open Access Journals (Sweden)

    Zhijie Zhang

    Full Text Available Quantifying the distributions of disease risk in space and time jointly is a key element for understanding spatio-temporal phenomena while also having the potential to enhance our understanding of epidemiologic trajectories. However, most studies to date have neglected the time dimension and focus instead on the "average" spatial pattern of disease risk, thereby masking time trajectories of disease risk. In this study we propose a new idea titled "spatio-temporal kernel density estimation (stKDE)" that employs hybrid kernel (i.e., weight) functions to evaluate the spatio-temporal disease risks. This approach not only can make full use of sample data but also "borrows" information in a particular manner from neighboring points both in space and time via appropriate choice of kernel functions. Monte Carlo simulations show that the proposed method performs substantially better than the traditional (i.e., frequency-based) kernel density estimation (trKDE) which has been used in applied settings, while two illustrative examples demonstrate that the proposed approach can yield superior results compared to the popular trKDE approach. In addition, there exist various possibilities for improving and extending this method.
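
    A simple sketch of a separable spatio-temporal kernel density estimate, assuming Gaussian kernels in space and time (the record's hybrid weight functions are more general than this product form):

        # Sketch of a separable spatio-temporal KDE: a 2-D Gaussian kernel in
        # space times a 1-D Gaussian kernel in time, averaged over the events.
        import numpy as np

        def st_kde(x, t, events_xy, events_t, hs=1.0, ht=1.0):
            d2 = np.sum((events_xy - x) ** 2, axis=1)
            ks = np.exp(-d2 / (2.0 * hs ** 2)) / (2.0 * np.pi * hs ** 2)
            kt = np.exp(-(events_t - t) ** 2 / (2.0 * ht ** 2)) / (np.sqrt(2.0 * np.pi) * ht)
            return np.mean(ks * kt)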

  12. Introduction to graph theory

    CERN Document Server

    Trudeau, Richard J

    1994-01-01

    Preface. 1. Pure Mathematics: Introduction; Euclidean Geometry as Pure Mathematics; Games; Why Study Pure Mathematics?; What's Coming; Suggested Reading. 2. Graphs: Introduction; Sets; Paradox; Graphs; Graph diagrams; Cautions; Common Graphs; Discovery; Complements and Subgraphs; Isomorphism; Recognizing Isomorphic Graphs; Semantics; The Number of Graphs Having a Given nu; Exercises; Suggested Reading. 3. Planar Graphs: Introduction; UG, K5, and the Jordan Curve Theorem; Are there More Nonplanar Graphs?; Expansions; Kuratowski's Theorem; Determining Whether a Graph is Planar or

  13. Supervised Kernel Optimized Locality Preserving Projection with Its Application to Face Recognition and Palm Biometrics

    Directory of Open Access Journals (Sweden)

    Chuang Lin

    2015-01-01

    Full Text Available Kernel Locality Preserving Projection (KLPP) algorithm can effectively preserve the neighborhood structure of the database using the kernel trick. We have known that supervised KLPP (SKLPP) can preserve within-class geometric structures by using label information. However, the conventional SKLPP algorithm endures the kernel selection which has significant impact on the performances of SKLPP. In order to overcome this limitation, a method named supervised kernel optimized LPP (SKOLPP) is proposed in this paper, which can maximize the class separability in kernel learning. The proposed method maps the data from the original space to a higher dimensional kernel space using a data-dependent kernel. The adaptive parameters of the data-dependent kernel are automatically calculated through optimizing an objective function. Consequently, the nonlinear features extracted by SKOLPP have larger discriminative ability compared with SKLPP and are more adaptive to the input data. Experimental results on ORL, Yale, AR, and Palmprint databases showed the effectiveness of the proposed method.

  14. Graph Theory. 2. Vertex Descriptors and Graph Coloring

    Directory of Open Access Journals (Sweden)

    Lorentz JÄNTSCHI

    2002-12-01

    Full Text Available This original work presents the construction of a set of ten sequence matrices and their applications for ordering vertices in graphs. For every sequence matrix three ordering criteria are applied: lexicographic ordering, based on strings of numbers, corresponding to every vertex, extracted as rows from sequence matrices; ordering by the sum of path lengths from a given vertex; and ordering by the sum of paths, starting from a given vertex. We also examine a graph that has different orderings for the above criteria. We then proceed to demonstrate that each criterion induces its own partition of the graph's vertices. We propose the following theoretical result: both LAVS and LVDS criteria generate identical partitioning of vertices in any graph. Finally, a coloring of graph vertices according to the introduced ordering criteria is proposed.

  15. On an edge partition and root graphs of some classes of line graphs

    Directory of Open Access Journals (Sweden)

    K Pravas

    2017-04-01

    Full Text Available The Gallai and the anti-Gallai graphs of a graph $G$ are complementary pairs of spanning subgraphs of the line graph of $G$. In this paper we find some structural relations between these graph classes by finding a partition of the edge set of the line graph of a graph $G$ into the edge sets of the Gallai and anti-Gallai graphs of $G$. Based on this, an optimal algorithm to find the root graph of a line graph is obtained. Moreover, root graphs of diameter-maximal, distance-hereditary, Ptolemaic and chordal graphs are also discussed.

  16. Graphs and matrices

    CERN Document Server

    Bapat, Ravindra B

    2014-01-01

    This new edition illustrates the power of linear algebra in the study of graphs. The emphasis on matrix techniques is greater than in other texts on algebraic graph theory. Important matrices associated with graphs (for example, incidence, adjacency and Laplacian matrices) are treated in detail. Presenting a useful overview of selected topics in algebraic graph theory, early chapters of the text focus on regular graphs, algebraic connectivity, the distance matrix of a tree, and its generalized version for arbitrary graphs, known as the resistance matrix. Coverage of later topics include Laplacian eigenvalues of threshold graphs, the positive definite completion problem and matrix games based on a graph. Such an extensive coverage of the subject area provides a welcome prompt for further exploration. The inclusion of exercises enables practical learning throughout the book. In the new edition, a new chapter is added on the line graph of a tree, while some results in Chapter 6 on Perron-Frobenius theory are reo...

  17. A new analysis of the Fornberg-Whitham equation pertaining to a fractional derivative with Mittag-Leffler-type kernel

    Science.gov (United States)

    Kumar, Devendra; Singh, Jagdev; Baleanu, Dumitru

    2018-02-01

    The mathematical model of breaking of non-linear dispersive water waves with memory effect is very important in mathematical physics. In the present article, we examine a novel fractional extension of the non-linear Fornberg-Whitham equation occurring in wave breaking. We consider the most recent theory of differentiation involving the non-singular kernel based on the extended Mittag-Leffler-type function to modify the Fornberg-Whitham equation. We examine the existence of the solution of the non-linear Fornberg-Whitham equation of fractional order. Further, we show the uniqueness of the solution. We obtain the numerical solution of the new arbitrary order model of the non-linear Fornberg-Whitham equation with the aid of the Laplace decomposition technique. The numerical outcomes are displayed in the form of graphs and tables. The results indicate that the Laplace decomposition algorithm is a very user-friendly and reliable scheme for handling such type of non-linear problems of fractional order.

  18. Lung vessel segmentation in CT images using graph-cuts

    Science.gov (United States)

    Zhai, Zhiwei; Staring, Marius; Stoel, Berend C.

    2016-03-01

    Accurate lung vessel segmentation is an important operation for lung CT analysis. Filters that are based on analyzing the eigenvalues of the Hessian matrix are popular for pulmonary vessel enhancement. However, due to their low response at vessel bifurcations and vessel boundaries, extracting lung vessels by thresholding the vesselness is not sufficiently accurate. Some methods turn to graph-cuts for more accurate segmentation, as it incorporates neighbourhood information. In this work, we propose a new graph-cuts cost function combining appearance and shape, where CT intensity represents appearance and vesselness from a Hessian-based filter represents shape. Due to the amount of voxels in high resolution CT scans, the memory requirement and time consumption for building a graph structure is very high. In order to make the graph representation computationally tractable, those voxels that are considered clearly background are removed from the graph nodes, using a threshold on the vesselness map. The graph structure is then established based on the remaining voxel nodes, source/sink nodes and the neighbourhood relationship of the remaining voxels. Vessels are segmented by minimizing the energy cost function with the graph-cuts optimization framework. We optimized the parameters used in the graph-cuts cost function and evaluated the proposed method with two manually labeled sub-volumes. For independent evaluation, we used 20 CT scans of the VESSEL12 challenge. The evaluation results of the sub-volume data show that the proposed method produced a more accurate vessel segmentation compared to the previous methods, with F1 scores of 0.76 and 0.69. In the VESSEL12 data-set, our method obtained a competitive performance with an area under the ROC curve of 0.975, especially among the binary submissions.

  19. A Weighted Spatial-Spectral Kernel RX Algorithm and Efficient Implementation on GPUs

    Directory of Open Access Journals (Sweden)

    Chunhui Zhao

    2017-02-01

    Full Text Available The kernel RX (KRX) detector proposed by Kwon and Nasrabadi exploits a kernel function to obtain a better detection performance. However, it still has two limits that can be improved. On the one hand, reasonable integration of spatial-spectral information can be used to further improve its detection accuracy. On the other hand, parallel computing can be used to reduce the processing time in available KRX detectors. Accordingly, this paper presents a novel weighted spatial-spectral kernel RX (WSSKRX) detector and its parallel implementation on graphics processing units (GPUs). The WSSKRX utilizes the spatial neighborhood resources to reconstruct the testing pixels by introducing a spectral factor and a spatial window, thereby effectively reducing the interference of background noise. Then, the kernel function is redesigned as a mapping trick in a KRX detector to implement the anomaly detection. In addition, a powerful architecture based on the GPU technique is designed to accelerate WSSKRX. To substantiate the performance of the proposed algorithm, experiments are conducted on both synthetic and real data.

  20. TOWARDS FINDING A NEW KERNELIZED FUZZY C-MEANS CLUSTERING ALGORITHM

    Directory of Open Access Journals (Sweden)

    Samarjit Das

    2014-04-01

    Full Text Available Kernelized Fuzzy C-Means clustering technique is an attempt to improve the performance of the conventional Fuzzy C-Means clustering technique. Recently this technique, where a kernel-induced distance function is used as a similarity measure instead of a Euclidean distance which is used in the conventional Fuzzy C-Means clustering technique, has earned popularity among the research community. Like the conventional Fuzzy C-Means clustering technique this technique also suffers from inconsistency in its performance due to the fact that here also the initial centroids are obtained based on the randomly initialized membership values of the objects. Our present work proposes a new method where we have applied the Subtractive clustering technique of Chiu as a preprocessor to the Kernelized Fuzzy C-Means clustering technique. With this new method we have tried not only to remove the inconsistency of the Kernelized Fuzzy C-Means clustering technique but also to deal with the situations where the number of clusters is not predetermined. We have also provided a comparison of our method with the Subtractive clustering technique of Chiu and the Kernelized Fuzzy C-Means clustering technique using two validity measures, namely Partition Coefficient and Clustering Entropy.
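
    A small sketch of the kernel-induced distance at the heart of kernelized fuzzy c-means, assuming a Gaussian kernel (so K(x,x) = 1) and the standard fuzzifier-based membership update; the subtractive-clustering initialization proposed in the record is not shown:

        # Kernel-induced distance used by kernelized fuzzy c-means with a
        # Gaussian kernel (K(x,x) = 1), plus the standard membership update.
        import numpy as np

        def gaussian_kernel(x, v, sigma=1.0):
            return np.exp(-np.sum((x - v) ** 2) / (2.0 * sigma ** 2))

        def kernel_distance_sq(x, v, sigma=1.0):
            return 2.0 * (1.0 - gaussian_kernel(x, v, sigma))   # ||phi(x)-phi(v)||^2

        def memberships(x, prototypes, m=2.0, sigma=1.0):
            d = np.array([kernel_distance_sq(x, v, sigma) for v in prototypes]) + 1e-12
            inv = d ** (-1.0 / (m - 1.0))
            return inv / inv.sum()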

  1. Graphs cospectral with a friendship graph or its complement

    Directory of Open Access Journals (Sweden)

    Alireza Abdollahi

    2013-12-01

    Full Text Available Let $n$ be any positive integer and let $F_n$ be the friendship (or Dutch windmill) graph with $2n+1$ vertices and $3n$ edges. Here we study graphs with the same adjacency spectrum as $F_n$. Two graphs are called cospectral if the multisets of eigenvalues of their adjacency matrices are the same. Let $G$ be a graph cospectral with $F_n$. Here we prove that if $G$ has no cycle of length $4$ or $5$, then $G\cong F_n$. Moreover, if $G$ is connected and planar then $G\cong F_n$. All but one of the connected components of $G$ are isomorphic to $K_2$. The complement $\overline{F_n}$ of the friendship graph is determined by its adjacency eigenvalues, that is, if $\overline{F_n}$ is cospectral with a graph $H$, then $H\cong \overline{F_n}$.
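
    A quick numerical illustration (not from the paper) of building the friendship graph $F_n$ and testing two graphs for cospectrality by comparing sorted adjacency eigenvalues; networkx and numpy are assumed available:

        # Build the friendship graph F_n and test cospectrality by comparing
        # sorted adjacency eigenvalues.
        import numpy as np
        import networkx as nx

        def friendship_graph(n):
            G = nx.Graph()
            G.add_node(0)                                   # shared centre vertex
            for i in range(n):
                a, b = 2 * i + 1, 2 * i + 2
                G.add_edges_from([(0, a), (0, b), (a, b)])  # one triangle per i
            return G

        def cospectral(G, H):
            spec = lambda g: np.sort(np.linalg.eigvalsh(nx.to_numpy_array(g)))
            return len(G) == len(H) and np.allclose(spec(G), spec(H))

        F2 = friendship_graph(2)
        print(cospectral(F2, F2))   # True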

  2. Hierarchical graphs for rule-based modeling of biochemical systems

    Directory of Open Access Journals (Sweden)

    Hu Bin

    2011-02-01

    Full Text Available Abstract Background In rule-based modeling, graphs are used to represent molecules: a colored vertex represents a component of a molecule, a vertex attribute represents the internal state of a component, and an edge represents a bond between components. Components of a molecule share the same color. Furthermore, graph-rewriting rules are used to represent molecular interactions. A rule that specifies addition (removal) of an edge represents a class of association (dissociation) reactions, and a rule that specifies a change of a vertex attribute represents a class of reactions that affect the internal state of a molecular component. A set of rules comprises an executable model that can be used to determine, through various means, the system-level dynamics of molecular interactions in a biochemical system. Results For purposes of model annotation, we propose the use of hierarchical graphs to represent structural relationships among components and subcomponents of molecules. We illustrate how hierarchical graphs can be used to naturally document the structural organization of the functional components and subcomponents of two proteins: the protein tyrosine kinase Lck and the T cell receptor (TCR) complex. We also show that computational methods developed for regular graphs can be applied to hierarchical graphs. In particular, we describe a generalization of Nauty, a graph isomorphism and canonical labeling algorithm. The generalized version of the Nauty procedure, which we call HNauty, can be used to assign canonical labels to hierarchical graphs or more generally to graphs with multiple edge types. The difference between the Nauty and HNauty procedures is minor, but for completeness, we provide an explanation of the entire HNauty algorithm. Conclusions Hierarchical graphs provide more intuitive formal representations of proteins and other structured molecules with multiple functional components than do the regular graphs of current languages for

  3. Graph embedding with rich information through heterogeneous graph

    KAUST Repository

    Sun, Guolei

    2017-11-12

    Graph embedding, aiming to learn low-dimensional representations for nodes in graphs, has attracted increasing attention due to its critical applications including node classification, link prediction and clustering in social network analysis. Most existing algorithms for graph embedding only rely on the topology information and fail to use the copious information in nodes as well as edges. As a result, their performance for many tasks may not be satisfactory. In this thesis, we proposed a novel and general framework for graph embedding with rich text information (GERI) through constructing a heterogeneous network, in which we integrate node and edge content information with graph topology. Specifically, we designed a novel biased random walk to explore the constructed heterogeneous network with the notion of flexible neighborhood. Our sampling strategy can compromise between BFS and DFS local search on heterogeneous graph. To further improve our algorithm, we proposed semi-supervised GERI (SGERI), which learns graph embedding in a discriminative manner through heterogeneous network with label information. The efficacy of our method is demonstrated by extensive comparison experiments with 9 baselines over multi-label and multi-class classification on various datasets including Citeseer, Cora, DBLP and Wiki. It shows that GERI improves the Micro-F1 and Macro-F1 of node classification up to 10%, and SGERI improves GERI by 5% in Wiki.

  4. Topics in graph theory graphs and their Cartesian product

    CERN Document Server

    Imrich, Wilfried; Rall, Douglas F

    2008-01-01

    From specialists in the field, you will learn about interesting connections and recent developments in the field of graph theory by looking in particular at Cartesian products-arguably the most important of the four standard graph products. Many new results in this area appear for the first time in print in this book. Written in an accessible way, this book can be used for personal study in advanced applications of graph theory or for an advanced graph theory course.

  5. Study of Chromatic parameters of Line, Total, Middle graphs and Graph operators of Bipartite graph

    Science.gov (United States)

    Nagarathinam, R.; Parvathi, N.

    2018-04-01

    Chromatic parameters have been explored on the basis of the graph coloring process, in which adjacent nodes receive different colors. The Grundy and b-colorings, however, use the maximum number of colors under certain restrictions. In this paper, the chromatic, b-chromatic and Grundy numbers of some graph operators of the bipartite graph have been investigated.

  6. Multiple Kernel Learning with Data Augmentation

    Science.gov (United States)

    2016-11-22

    JMLR: Workshop and Conference Proceedings 63:49–64, 2016 (ACML 2016). Multiple Kernel Learning with Data Augmentation. Khanh Nguyen (nkhanh@deakin.edu.au) ... University, Australia. Editors: Robert J. Durrant and Kee-Eung Kim. Abstract: The motivations of the multiple kernel learning (MKL) approach are to increase kernel expressiveness capacity and to avoid the expensive grid search over a wide spectrum of kernels. A large amount of work has been proposed to ...

  7. OS X and iOS Kernel Programming

    CERN Document Server

    Halvorsen, Ole Henry

    2011-01-01

    OS X and iOS Kernel Programming combines essential operating system and kernel architecture knowledge with a highly practical approach that will help you write effective kernel-level code. You'll learn fundamental concepts such as memory management and thread synchronization, as well as the I/O Kit framework. You'll also learn how to write your own kernel-level extensions, such as device drivers for USB and Thunderbolt devices, including networking, storage and audio drivers. OS X and iOS Kernel Programming provides an incisive and complete introduction to the XNU kernel, which runs iPhones, i

  8. Coloring geographical threshold graphs

    Energy Technology Data Exchange (ETDEWEB)

    Bradonjic, Milan [Los Alamos National Laboratory; Percus, Allon [Los Alamos National Laboratory; Muller, Tobias [EINDHOVEN UNIV. OF TECH

    2008-01-01

    We propose a coloring algorithm for sparse random graphs generated by the geographical threshold graph (GTG) model, a generalization of random geometric graphs (RGG). In a GTG, nodes are distributed in a Euclidean space, and edges are assigned according to a threshold function involving the distance between nodes as well as randomly chosen node weights. The motivation for analyzing this model is that many real networks (e.g., wireless networks, the Internet, etc.) need to be studied by using a 'richer' stochastic model (which in this case includes both a distance between nodes and weights on the nodes). Here, we analyze the GTG coloring algorithm together with the graph's clique number, showing formally that in spite of the differences in structure between GTG and RGG, the asymptotic behavior of the chromatic number is identical: $\chi \sim \frac{\ln n}{\ln\ln n}(1 + o(1))$. Finally, we consider the leading corrections to this expression, again using the coloring algorithm and clique number to provide bounds on the chromatic number. We show that the gap between the lower and upper bound is within $C \ln n/(\ln\ln n)^2$, and specify the constant $C$.
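
    A hedged sketch of one common instantiation of the GTG model described above, with uniform positions, exponential weights and the interaction rule (w_u + w_v)/r^2 compared against a threshold theta, followed by a greedy coloring; the exact threshold function in the paper may differ:

        # One possible GTG instantiation: uniform positions, exponential weights,
        # and an edge when (w_u + w_v) / r(u, v)^2 >= theta; then a greedy coloring.
        import numpy as np
        import networkx as nx

        def geographical_threshold_graph(n, theta, seed=0):
            rng = np.random.RandomState(seed)
            pos = rng.rand(n, 2)
            w = rng.exponential(1.0, size=n)
            G = nx.Graph()
            G.add_nodes_from(range(n))
            for u in range(n):
                for v in range(u + 1, n):
                    r2 = np.sum((pos[u] - pos[v]) ** 2) + 1e-12
                    if (w[u] + w[v]) / r2 >= theta:
                        G.add_edge(u, v)
            return G

        G = geographical_threshold_graph(200, theta=60.0)
        coloring = nx.coloring.greedy_color(G, strategy="largest_first")
        print("colors used:", len(set(coloring.values())))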

  9. Further results on super graceful labeling of graphs

    Directory of Open Access Journals (Sweden)

    Gee-Choon Lau

    2016-08-01

    Full Text Available Let $G=(V(G),E(G))$ be a simple, finite and undirected graph of order $p$ and size $q$. A bijection $f:V(G)\cup E(G)\to\{k,k+1,k+2,\ldots,k+p+q-1\}$ such that $f(uv)=|f(u)-f(v)|$ for every edge $uv\in E(G)$ is said to be a $k$-super graceful labeling of $G$. We say $G$ is $k$-super graceful if it admits a $k$-super graceful labeling. For $k=1$, the function $f$ is called a super graceful labeling and a graph is super graceful if it admits a super graceful labeling. In this paper, we study the super gracefulness of the complete graph, the disjoint union of certain star graphs, the complete tripartite graphs $K(1,1,n)$, and certain families of trees. We also present four methods of constructing new super graceful graphs. In particular, all trees of order at most 7 are super graceful. We conjecture that all trees are super graceful.
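
    A small checker for the k-super graceful condition defined above (the labeling used in the example is hypothetical, chosen only to exercise the definition):

        # Checker for a k-super graceful labeling: f must be a bijection from
        # V(G) union E(G) onto {k, ..., k+p+q-1} with f(uv) = |f(u) - f(v)|.
        def is_k_super_graceful(vertices, edges, f, k=1):
            labels = [f[v] for v in vertices] + [f[e] for e in edges]
            p, q = len(vertices), len(edges)
            if sorted(labels) != list(range(k, k + p + q)):
                return False                     # not a bijection onto the target set
            return all(f[(u, v)] == abs(f[u] - f[v]) for (u, v) in edges)

        # Hypothetical example: the path u-v-w with a labeling for k = 1.
        V = ["u", "v", "w"]
        E = [("u", "v"), ("v", "w")]
        f = {"u": 5, "v": 3, "w": 4, ("u", "v"): 2, ("v", "w"): 1}
        print(is_k_super_graceful(V, E, f, k=1))   # True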

  10. Scattering kernels and cross sections working group

    International Nuclear Information System (INIS)

    Russell, G.; MacFarlane, B.; Brun, T.

    1998-01-01

    Topics addressed by this working group are: (1) immediate needs of the cold-moderator community and how to fill them; (2) synthetic scattering kernels; (3) very simple synthetic scattering functions; (4) measurements of interest; and (5) general issues. Brief summaries are given for each of these topics

  11. Handbook of graph grammars and computing by graph transformation

    CERN Document Server

    Engels, G; Kreowski, H J; Rozenberg, G

    1999-01-01

    Graph grammars originated in the late 60s, motivated by considerations about pattern recognition and compiler construction. Since then, the list of areas which have interacted with the development of graph grammars has grown quite impressively. Besides the aforementioned areas, it includes software specification and development, VLSI layout schemes, database design, modeling of concurrent systems, massively parallel computer architectures, logic programming, computer animation, developmental biology, music composition, visual languages, and many others.The area of graph grammars and graph tran

  12. Paramecium: An Extensible Object-Based Kernel

    NARCIS (Netherlands)

    van Doorn, L.; Homburg, P.; Tanenbaum, A.S.

    1995-01-01

    In this paper we describe the design of an extensible kernel, called Paramecium. This kernel uses an object-based software architecture which together with instance naming, late binding and explicit overrides enables easy reconfiguration. Determining which components reside in the kernel protection

  13. Theory of reproducing kernels and applications

    CERN Document Server

    Saitoh, Saburou

    2016-01-01

    This book provides a large extension of the general theory of reproducing kernels published by N. Aronszajn in 1950, with many concrete applications. In Chapter 1, many concrete reproducing kernels are first introduced with detailed information. Chapter 2 presents a general and global theory of reproducing kernels with basic applications in a self-contained way. Many fundamental operations among reproducing kernel Hilbert spaces are dealt with. Chapter 2 is the heart of this book. Chapter 3 is devoted to the Tikhonov regularization using the theory of reproducing kernels with applications to numerical and practical solutions of bounded linear operator equations. In Chapter 4, the numerical real inversion formulas of the Laplace transform are presented by applying the Tikhonov regularization, where the reproducing kernels play a key role in the results. Chapter 5 deals with ordinary differential equations; Chapter 6 includes many concrete results for various fundamental partial differential equations. In Chapt...

  14. Learning a peptide-protein binding affinity predictor with kernel ridge regression

    Science.gov (United States)

    2013-01-01

    Background The cellular function of a vast majority of proteins is performed through physical interactions with other biomolecules, which, most of the time, are other proteins. Peptides represent templates of choice for mimicking a secondary structure in order to modulate protein-protein interaction. They are thus an interesting class of therapeutics since they also display strong activity, high selectivity, low toxicity and few drug-drug interactions. Furthermore, predicting peptides that would bind to a specific MHC allele would be of tremendous benefit to improve vaccine based therapy and possibly generate antibodies with greater affinity. Modern computational methods have the potential to accelerate and lower the cost of drug and vaccine discovery by selecting potential compounds for testing in silico prior to biological validation. Results We propose a specialized string kernel for small bio-molecules, peptides and pseudo-sequences of binding interfaces. The kernel incorporates physico-chemical properties of amino acids and elegantly generalizes eight kernels, comprised of the Oligo, the Weighted Degree, the Blended Spectrum, and the Radial Basis Function. We provide a low complexity dynamic programming algorithm for the exact computation of the kernel and a linear time algorithm for its approximation. Combined with kernel ridge regression and SupCK, a novel binding pocket kernel, the proposed kernel yields biologically relevant and good prediction accuracy on the PepX database. For the first time, a machine learning predictor is capable of predicting the binding affinity of any peptide to any protein with reasonable accuracy. The method was also applied to both single-target and pan-specific Major Histocompatibility Complex class II benchmark datasets and three Quantitative Structure Affinity Model benchmark datasets. Conclusion On all benchmarks, our method significantly (p-value ≤ 0.057) outperforms the current state-of-the-art methods at predicting

  15. Bit Error-Rate Minimizing Detector for Amplify-and-Forward Relaying Systems Using Generalized Gaussian Kernel

    KAUST Repository

    Ahmed, Qasim Zeeshan

    2013-01-01

    In this letter, a new detector is proposed for an amplify-and-forward (AF) relaying system when communicating with the assistance of relays. The major goal of this detector is to improve the bit error rate (BER) performance of the receiver. The probability density function is estimated with the help of the kernel density technique. A generalized Gaussian kernel is proposed. This new kernel provides more flexibility and encompasses Gaussian and uniform kernels as special cases. The optimal window width of the kernel is calculated. Simulation results show that a gain of more than 1 dB can be achieved in terms of BER performance as compared to the minimum mean square error (MMSE) receiver when communicating over Rayleigh fading channels.
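
    A sketch of a generalized Gaussian kernel of the kind referred to above; the parameterisation (scale alpha, shape beta, with beta = 2 giving a Gaussian-shaped kernel and large beta approaching a uniform kernel) is an assumption for illustration:

        # Generalized Gaussian kernel: beta = 2 gives a Gaussian-shaped kernel,
        # large beta approaches a uniform kernel on [-alpha, alpha].
        import math
        import numpy as np

        def generalized_gaussian_kernel(u, alpha=1.0, beta=2.0):
            norm = beta / (2.0 * alpha * math.gamma(1.0 / beta))
            return norm * np.exp(-(np.abs(u) / alpha) ** beta)

        u = np.linspace(-3.0, 3.0, 7)
        print(generalized_gaussian_kernel(u, alpha=1.0, beta=2.0))
        print(generalized_gaussian_kernel(u, alpha=1.0, beta=8.0))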

  16. 7 CFR 981.401 - Adjusted kernel weight.

    Science.gov (United States)

    2010-01-01

    7 CFR 981.401 (Administrative Rules and Regulations), § 981.401 Adjusted kernel weight. (a) Definition. Adjusted kernel weight ... kernels in excess of five percent; less shells, if applicable; less processing loss of one percent for...

  17. Performance Evaluation at the Hardware Architecture Level and the Operating System Kernel Design Level.

    Science.gov (United States)

    1977-12-01

    program utilizing kernel semaphores for synchronization. The Hydra kernel instructions were sampled at random using the hardware monitor. The changes in ... that each operating system kernel has its own set of primitive functions, and comparisons across different operating systems are not possible ... kernel design level is complicated by the fact that each operating system kernel has its own set of primitive functions and comparisons across

  18. GAT: a graph-theoretical analysis toolbox for analyzing between-group differences in large-scale structural and functional brain networks.

    Science.gov (United States)

    Hosseini, S M Hadi; Hoeft, Fumiko; Kesler, Shelli R

    2012-01-01

    In recent years, graph theoretical analyses of neuroimaging data have increased our understanding of the organization of large-scale structural and functional brain networks. However, tools for pipeline application of graph theory for analyzing topology of brain networks are still lacking. In this report, we describe the development of a graph-analysis toolbox (GAT) that facilitates analysis and comparison of structural and functional brain networks. GAT provides a graphical user interface (GUI) that facilitates construction and analysis of brain networks, comparison of regional and global topological properties between networks, analysis of network hubs and modules, and analysis of resilience of the networks to random failure and targeted attacks. Area under a curve (AUC) and functional data analyses (FDA), in conjunction with permutation testing, are employed for testing the differences in network topologies; analyses that are less sensitive to the thresholding process. We demonstrated the capabilities of GAT by investigating the differences in the organization of regional gray-matter correlation networks in survivors of acute lymphoblastic leukemia (ALL) and healthy matched Controls (CON). The results revealed an alteration in small-world characteristics of the brain networks in the ALL survivors; an observation that confirms our hypothesis suggesting widespread neurobiological injury in ALL survivors. Along with demonstration of the capabilities of the GAT, this is the first report of altered large-scale structural brain networks in ALL survivors.

  19. GAT: a graph-theoretical analysis toolbox for analyzing between-group differences in large-scale structural and functional brain networks.

    Directory of Open Access Journals (Sweden)

    S M Hadi Hosseini

    Full Text Available In recent years, graph theoretical analyses of neuroimaging data have increased our understanding of the organization of large-scale structural and functional brain networks. However, tools for pipeline application of graph theory for analyzing topology of brain networks are still lacking. In this report, we describe the development of a graph-analysis toolbox (GAT) that facilitates analysis and comparison of structural and functional brain networks. GAT provides a graphical user interface (GUI) that facilitates construction and analysis of brain networks, comparison of regional and global topological properties between networks, analysis of network hubs and modules, and analysis of resilience of the networks to random failure and targeted attacks. Area under a curve (AUC) and functional data analyses (FDA), in conjunction with permutation testing, are employed for testing the differences in network topologies; analyses that are less sensitive to the thresholding process. We demonstrated the capabilities of GAT by investigating the differences in the organization of regional gray-matter correlation networks in survivors of acute lymphoblastic leukemia (ALL) and healthy matched Controls (CON). The results revealed an alteration in small-world characteristics of the brain networks in the ALL survivors; an observation that confirms our hypothesis suggesting widespread neurobiological injury in ALL survivors. Along with demonstration of the capabilities of the GAT, this is the first report of altered large-scale structural brain networks in ALL survivors.

  20. Bayesian Frequency Domain Identification of LTI Systems with OBFs kernels

    NARCIS (Netherlands)

    Darwish, M.A.H.; Lataire, J.P.G.; Tóth, R.

    2017-01-01

    Regularised Frequency Response Function (FRF) estimation based on Gaussian process regression formulated directly in the frequency domain has been introduced recently. The underlying approach largely depends on the utilised kernel function, which encodes the relevant prior knowledge on the system

  1. Testing Infrastructure for Operating System Kernel Development

    DEFF Research Database (Denmark)

    Walter, Maxwell; Karlsson, Sven

    2014-01-01

    Testing is an important part of system development, and to test effectively we require knowledge of the internal state of the system under test. Testing an operating system kernel is a challenge as it is the operating system that typically provides access to this internal state information. Multi-core kernels pose an even greater challenge due to concurrency and their shared kernel state. In this paper, we present a testing framework that addresses these challenges by running the operating system in a virtual machine, and using virtual machine introspection to both communicate with the kernel and obtain information about the system. We have also developed an in-kernel testing API that we can use to develop a suite of unit tests in the kernel. We are using our framework for the development of our own multi-core research kernel.

  2. The circumference of the square of a connected graph

    DEFF Research Database (Denmark)

    Brandt, S.; Muttel, J.; Rautenbach, D.

    2014-01-01

    The celebrated result of Fleischner states that the square of every 2-connected graph is Hamiltonian. We investigate what happens if the graph is just connected. For every n ≥ 3, we determine the smallest length c(n) of a longest cycle in the square of a connected graph of order n and show that c(n) is a logarithmic function in n. Furthermore, for every c ≥ 3, we characterize the connected graphs of largest order whose square contains no cycle of length at least c.

  3. A Precise Drunk Driving Detection Using Weighted Kernel Based on Electrocardiogram.

    Science.gov (United States)

    Wu, Chung Kit; Tsang, Kim Fung; Chi, Hao Ran; Hung, Faan Hei

    2016-05-09

    Globally, 1.2 million people die and 50 million people are injured annually due to traffic accidents. These traffic accidents cost $500 billion. Drunk drivers are found in 40% of the traffic crashes. Existing drunk driving detection (DDD) systems do not provide accurate detection and pre-warning concurrently. Electrocardiogram (ECG) is a proven biosignal that accurately and simultaneously reflects human's biological status. In this letter, a classifier for DDD based on ECG is investigated in an attempt to reduce traffic accidents caused by drunk drivers. At this point, it appears that there is no known research or literature found on ECG classifier for DDD. To identify drunk syndromes, the ECG signals from drunk drivers are studied and analyzed. As such, a precise ECG-based DDD (ECG-DDD) using a weighted kernel is developed. From the measurements, 10 key features of ECG signals were identified. To incorporate the important features, the feature vectors are weighted in the customization of kernel functions. Four commonly adopted kernel functions are studied. Results reveal that weighted feature vectors improve the accuracy by 11% compared to the computation using the prime kernel. Evaluation shows that ECG-DDD improved the accuracy by 8% to 18% compared to prevailing methods.

  4. A Precise Drunk Driving Detection Using Weighted Kernel Based on Electrocardiogram

    Directory of Open Access Journals (Sweden)

    Chung Kit Wu

    2016-05-01

    Full Text Available Globally, 1.2 million people die and 50 million people are injured annually due to traffic accidents. These traffic accidents cost $500 billion. Drunk drivers are found in 40% of the traffic crashes. Existing drunk driving detection (DDD) systems do not provide accurate detection and pre-warning concurrently. Electrocardiogram (ECG) is a proven biosignal that accurately and simultaneously reflects human’s biological status. In this letter, a classifier for DDD based on ECG is investigated in an attempt to reduce traffic accidents caused by drunk drivers. At this point, it appears that there is no known research or literature found on ECG classifier for DDD. To identify drunk syndromes, the ECG signals from drunk drivers are studied and analyzed. As such, a precise ECG-based DDD (ECG-DDD) using a weighted kernel is developed. From the measurements, 10 key features of ECG signals were identified. To incorporate the important features, the feature vectors are weighted in the customization of kernel functions. Four commonly adopted kernel functions are studied. Results reveal that weighted feature vectors improve the accuracy by 11% compared to the computation using the prime kernel. Evaluation shows that ECG-DDD improved the accuracy by 8% to 18% compared to prevailing methods.
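
    A minimal sketch of the weighted-kernel idea described in the two records above: feature weights are folded into an ordinary RBF kernel by rescaling the feature difference before the kernel evaluation (the 10 ECG features and their actual weights are not reproduced here):

        # Hypothetical weighted RBF kernel: w rescales each feature's contribution
        # to the distance before the usual Gaussian kernel evaluation.
        import numpy as np

        def weighted_rbf_kernel(x, y, w, gamma=1.0):
            d2 = np.sum((w * (x - y)) ** 2)
            return np.exp(-gamma * d2)

        x, y = np.array([0.8, 0.1, 0.5]), np.array([0.6, 0.4, 0.5])
        w = np.array([2.0, 0.5, 1.0])              # assumed feature weights
        print(weighted_rbf_kernel(x, y, w))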

  5. GRAMI: Generalized Frequent Subgraph Mining in Large Graphs

    KAUST Repository

    El Saeedy, Mohammed El Sayed

    2011-07-24

    Mining frequent subgraphs is an important operation on graphs. Most existing work assumes a database of many small graphs, but modern applications, such as social networks, citation graphs or protein-protein interaction in bioinformatics, are modeled as a single large graph. Interesting interactions in such applications may be transitive (e.g., friend of a friend). Existing methods, however, search for frequent isomorphic (i.e., exact match) subgraphs and cannot discover many useful patterns. In this paper we propose GRAMI, a framework that generalizes frequent subgraph mining in a large single graph. GRAMI discovers frequent patterns. A pattern is a graph where edges are generalized to distance-constrained paths. Depending on the definition of the distance function, many instantiations of the framework are possible. Both directed and undirected graphs, as well as multiple labels per vertex, are supported. We developed an efficient implementation of the framework that models the frequency resolution phase as a constraint satisfaction problem, in order to avoid the costly enumeration of all instances of each pattern in the graph. We also implemented CGRAMI, a version that supports structural and semantic constraints; and AGRAMI, an approximate version that supports very large graphs. Our experiments on real data demonstrate that our framework is up to 3 orders of magnitude faster and discovers more interesting patterns than existing approaches.

  6. 7 CFR 51.1403 - Kernel color classification.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Kernel color classification. 51.1403 Section 51.1403... STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Kernel Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the color...

  7. Migration of ThO2 kernels under the influence of a temperature gradient

    International Nuclear Information System (INIS)

    Smith, C.L.

    1976-11-01

    BISO coated ThO2 fertile fuel kernels will migrate up the thermal gradients imposed across coated particles during HTGR operation. Thorium dioxide kernel migration has been studied as a function of temperature (1300 to 1700 °C) and ThO2 kernel burnup (0.9 to 5.8 percent FIMA) in out-of-pile, postirradiation thermal gradient heating experiments. The studies were conducted to obtain descriptions of migration rates that will be used in core design studies to evaluate the impact of ThO2 migration on fertile fuel performance in an operating HTGR and to define characteristics needed by any comprehensive model describing ThO2 kernel migration. The kinetics data generated in these postirradiation studies are consistent with in-pile data collected by investigators at Oak Ridge National Laboratory, which supports use of the more precise postirradiation heating results in HTGR core design studies. Observations of intergranular carbon deposits on the cool side of migrating kernels support the assumption that the kinetics of kernel migration are controlled by solid state diffusion within irradiated ThO2 kernels. The migration is characterized by a period of no migration (incubation period) followed by migration at the equilibrium rate for ThO2. The incubation period decreases with increasing temperature and kernel burnup. The improved understanding of the kinetics of ThO2 kernel migration provided by this work will contribute to an optimization of HTGR core design and an increased confidence in fuel performance predictions.

  8. An Unusual Exponential Graph

    Science.gov (United States)

    Syed, M. Qasim; Lovatt, Ian

    2014-01-01

    This paper is an addition to the series of papers on the exponential function begun by Albert Bartlett. In particular, we ask how the graph of the exponential function y = e^(-t/τ) would appear if y were plotted versus ln t rather than the normal practice of plotting ln y versus t. In answering this question, we find a new way to…

  9. An asymptotic expression for the eigenvalues of the normalization kernel of the resonating group method

    International Nuclear Information System (INIS)

    Lomnitz-Adler, J.; Brink, D.M.

    1976-01-01

    A generating function for the eigenvalues of the RGM Normalization Kernel is expressed in terms of the diagonal matrix elements of the GCM Overlap Kernel. An asymptotic expression for the eigenvalues is obtained by using the Method of Steepest Descent. (Auth.)

  10. Completion of the Kernel of the Differentiation Operator

    Directory of Open Access Journals (Sweden)

    Anatoly N. Morozov

    2017-01-01

    Full Text Available When investigating piecewise polynomial approximations in the spaces \(L_p\), \(0 < p < 1\), the author considered the spreading of the k-th derivative (of the operator) from the Sobolev spaces \(W_1^k\) onto spaces that are, in a sense, their successors with a low index less than one. In this article, we continue the study of the properties acquired by the differentiation operator \(\Lambda\) with spreading beyond the space \(W_1^1\): \(\Lambda : W_1^1 \mapsto L_1\), \(\Lambda f = f'\). The study is conducted by introducing the family of spaces \(Y_p^1\), \(0 < p\) …

    … function \(f_n\) defined on \([x_{n-1}; x_n]\), \(a = x_0 < x_1 < \cdots\) … kernel. During the spreading of the differentiation operator from the space \(C^1\) onto the space \(W_p^1\) the kernel does not change. In the article, it is constructively shown that jump functions and singular functions \(f\) belong to all spaces \(Y_p^1\) and \(\Lambda f = 0\). Consequently, the space of functions of bounded variation \(H_1^1\) is contained in each \(Y_p^1\), and the differentiation operator on \(H_1^1\) satisfies the relation \(\Lambda f = f'\). Also, we come to the conclusion that every function from the added part of the kernel can be logically named singular.

  11. Growing functional modules from a seed protein via integration of protein interaction and gene expression data

    Directory of Open Access Journals (Sweden)

    Dimitrakopoulou Konstantina

    2007-10-01

    Full Text Available Abstract Background Nowadays modern biology aims at unravelling the strands of complex biological structures such as the protein-protein interaction (PPI) networks. A key concept in the organization of PPI networks is the existence of dense subnetworks (functional modules) in them. In recent approaches clustering algorithms were applied to these networks and the resulting subnetworks were evaluated by estimating the coverage of well-established protein complexes they contained. However, most of these algorithms elaborate on an unweighted graph structure which in turn fails to elevate those interactions that would contribute to the construction of biologically more valid and coherent functional modules. Results In the current study, we present a method that corroborates the integration of protein interaction and microarray data via the discovery of biologically valid functional modules. Initially the gene expression information is overlaid as weights onto the PPI network and the enriched PPI graph allows us to exploit its topological aspects, while simultaneously highlighting enhanced functional association in specific pairs of proteins. Then we present an algorithm that unveils the functional modules of the weighted graph by expanding a kernel protein set, which originates from a given 'seed' protein used as a starting point. Conclusion The integrated data and the concept of our approach provide reliable functional modules. We give proofs based on yeast data that our method manages to give accurate results in terms both of structural coherency, as well as functional consistency.
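
    The record describes growing a module by expanding a kernel protein set from a seed on an expression-weighted PPI graph. The sketch below is only a minimal greedy illustration of that idea, not the authors' algorithm; the function name, edge-weight threshold, and stopping rule are assumptions.

    ```python
    # Illustrative sketch only: greedy expansion of a module from a seed protein on a
    # weighted PPI graph. The cited method's expansion and stopping criteria are more
    # elaborate; names and thresholds here are assumptions.
    import networkx as nx

    def grow_module(graph, seed, min_weight=0.5, max_size=20):
        """Grow a kernel set from `seed` by repeatedly adding the most strongly
        co-expressed neighbor (highest edge weight) of the current module."""
        module = {seed}
        while len(module) < max_size:
            candidates = {}
            for u in module:
                for v, data in graph[u].items():
                    if v not in module:
                        candidates[v] = max(candidates.get(v, 0.0), data.get("weight", 0.0))
            if not candidates:
                break
            best, w = max(candidates.items(), key=lambda kv: kv[1])
            if w < min_weight:
                break
            module.add(best)
        return module

    G = nx.Graph()
    G.add_weighted_edges_from([("A", "B", 0.9), ("B", "C", 0.7), ("C", "D", 0.2)])
    print(grow_module(G, "A"))  # {'A', 'B', 'C'} with the default threshold
    ```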

  12. A Novel Fractional-Order PID Controller for Integrated Pressurized Water Reactor Based on Wavelet Kernel Neural Network Algorithm

    Directory of Open Access Journals (Sweden)

    Yu-xin Zhao

    2014-01-01

    Full Text Available This paper presents a novel wavelet kernel neural network (WKNN) with a wavelet kernel function. It is applicable in online learning with adaptive parameters and is applied to parameter tuning of a fractional-order PID (FOPID) controller, which can handle the time delay problem of complex control systems. Combining the wavelet function and the kernel function, the wavelet kernel function is adopted and its availability for neural networks is validated. Compared to the conventional wavelet neural network, the most innovative character of the WKNN is its rapid convergence and high precision in the parameter updating process. Furthermore, the integrated pressurized water reactor (IPWR) system is established by RELAP5, and a novel control strategy combining WKNN and fuzzy logic rules is proposed for shortening controlling time and utilizing experiential knowledge sufficiently. Finally, experiment results verify that the proposed control strategy and controller are practicable and reliable in an actual complicated system.
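
    The abstract does not give the wavelet kernel form used by WKNN. As a hedged illustration of what a translation-invariant wavelet kernel typically looks like, the sketch below uses the well-known Morlet-type product form; the dilation parameter a and the helper name are assumptions.

    ```python
    # Minimal sketch of a translation-invariant wavelet kernel of the Morlet type,
    # often written K(x, y) = prod_i h((x_i - y_i)/a) with
    # h(u) = cos(1.75 u) * exp(-u^2 / 2). The abstract does not specify which wavelet
    # kernel WKNN uses, so this only illustrates the general construction.
    import numpy as np

    def wavelet_kernel(x, y, a=1.0):
        u = (np.asarray(x) - np.asarray(y)) / a
        return float(np.prod(np.cos(1.75 * u) * np.exp(-(u ** 2) / 2.0)))

    print(wavelet_kernel([0.1, 0.2], [0.0, 0.25], a=0.5))
    ```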

  13. Centrosymmetric Graphs And A Lower Bound For Graph Energy Of Fullerenes

    Directory of Open Access Journals (Sweden)

    Katona Gyula Y.

    2014-11-01

    Full Text Available The energy of a molecular graph G is defined as the sum of the absolute values of the eigenvalues of the adjacency matrix of G. In this paper, an infinite class of fullerene graphs with 10n vertices, n ≥ 2, is considered. By proving the centrosymmetry of the adjacency matrices of these fullerene graphs, a lower bound for their energy is given. Our method is general and can be extended to other classes of fullerene graphs.
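
    The graph-energy definition quoted above translates directly into a few lines of code; the sketch below computes it for a toy graph and is not specific to fullerene graphs.

    ```python
    # Graph energy as defined above: the sum of the absolute values of the eigenvalues
    # of the adjacency matrix. Shown here on the cycle C_5 as a toy example.
    import numpy as np
    import networkx as nx

    def graph_energy(G):
        A = nx.to_numpy_array(G)
        eigenvalues = np.linalg.eigvalsh(A)   # the adjacency matrix is symmetric
        return np.sum(np.abs(eigenvalues))

    print(graph_energy(nx.cycle_graph(5)))    # ≈ 6.47 for C_5
    ```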

  14. Fabrication of Uranium Oxycarbide Kernels for HTR Fuel

    International Nuclear Information System (INIS)

    Barnes, Charles; Richardson, Clay; Nagley, Scott; Hunn, John; Shaber, Eric

    2010-01-01

    Babcock & Wilcox (B&W) has been producing high quality uranium oxycarbide (UCO) kernels for Advanced Gas Reactor (AGR) fuel tests at the Idaho National Laboratory. In 2005, 350-µm, 19.7% 235U-enriched UCO kernels were produced for the AGR-1 test fuel. Following coating of these kernels and forming the coated particles into compacts, this fuel was irradiated in the Advanced Test Reactor (ATR) from December 2006 until November 2009. B&W produced 425-µm, 14% enriched UCO kernels in 2008, and these kernels were used to produce fuel for the AGR-2 experiment that was inserted in ATR in 2010. B&W also produced 500-µm, 9.6% enriched UO2 kernels for the AGR-2 experiments. Kernels of the same size and enrichment as AGR-1 were also produced for the AGR-3/4 experiment. In addition to fabricating enriched UCO and UO2 kernels, B&W has produced more than 100 kg of natural uranium UCO kernels which are being used in coating development tests. Successive lots of kernels have demonstrated consistent high quality and also allowed for fabrication process improvements. Improvements in kernel forming were made subsequent to AGR-1 kernel production. Following fabrication of AGR-2 kernels, incremental increases in sintering furnace charge size have been demonstrated. Recently, small scale sintering tests using a small development furnace equipped with a residual gas analyzer (RGA) have increased understanding of how kernel sintering parameters affect sintered kernel properties. The steps taken to increase throughput and process knowledge have reduced kernel production costs. Studies have been performed of additional modifications toward the goal of increasing capacity of the current fabrication line to use for production of first core fuel for the Next Generation Nuclear Plant (NGNP) and providing a basis for the design of a full scale fuel fabrication facility.

  15. Graphs and Homomorphisms

    CERN Document Server

    Hell, Pavol

    2004-01-01

    This is a book about graph homomorphisms. Graph theory is now an established discipline but the study of graph homomorphisms has only recently begun to gain wide acceptance and interest. The subject gives a useful perspective in areas such as graph reconstruction, products, fractional and circular colourings, and has applications in complexity theory, artificial intelligence, telecommunication, and, most recently, statistical physics.Based on the authors' lecture notes for graduate courses, this book can be used as a textbook for a second course in graph theory at 4th year or master's level an

  16. GOGrapher: A Python library for GO graph representation and analysis.

    Science.gov (United States)

    Muller, Brian; Richards, Adam J; Jin, Bo; Lu, Xinghua

    2009-07-07

    The Gene Ontology is the most commonly used controlled vocabulary for annotating proteins. The concepts in the ontology are organized as a directed acyclic graph, in which a node corresponds to a biological concept and a directed edge denotes the parent-child semantic relationship between a pair of terms. A large number of protein annotations further create links between proteins and their functional annotations, reflecting the contemporary knowledge about proteins and their functional relationships. This leads to a complex graph consisting of interleaved biological concepts and their associated proteins. What is needed is a simple, open source library that provides tools to not only create and view the Gene Ontology graph, but to analyze and manipulate it as well. Here we describe the development and use of GOGrapher, a Python library that can be used for the creation, analysis, manipulation, and visualization of Gene Ontology related graphs. An object-oriented approach was adopted to organize the hierarchy of the graph types and associated classes. An Application Programming Interface is provided through which different types of graphs can be programmatically created, manipulated, and visualized. GOGrapher has been successfully utilized in multiple research projects, e.g., a graph-based multi-label text classifier for protein annotation. The GOGrapher project provides a reusable programming library designed for the manipulation and analysis of Gene Ontology graphs. The library is freely available for the scientific community to use and improve.

  17. GOGrapher: A Python library for GO graph representation and analysis

    Directory of Open Access Journals (Sweden)

    Lu Xinghua

    2009-07-01

    Full Text Available Abstract Background The Gene Ontology is the most commonly used controlled vocabulary for annotating proteins. The concepts in the ontology are organized as a directed acyclic graph, in which a node corresponds to a biological concept and a directed edge denotes the parent-child semantic relationship between a pair of terms. A large number of protein annotations further create links between proteins and their functional annotations, reflecting the contemporary knowledge about proteins and their functional relationships. This leads to a complex graph consisting of interleaved biological concepts and their associated proteins. What is needed is a simple, open source library that provides tools to not only create and view the Gene Ontology graph, but to analyze and manipulate it as well. Here we describe the development and use of GOGrapher, a Python library that can be used for the creation, analysis, manipulation, and visualization of Gene Ontology related graphs. Findings An object-oriented approach was adopted to organize the hierarchy of the graph types and associated classes. An Application Programming Interface is provided through which different types of graphs can be programmatically created, manipulated, and visualized. GOGrapher has been successfully utilized in multiple research projects, e.g., a graph-based multi-label text classifier for protein annotation. Conclusion The GOGrapher project provides a reusable programming library designed for the manipulation and analysis of Gene Ontology graphs. The library is freely available for the scientific community to use and improve.

  18. Complexity Analysis of Precedence Terminating Infinite Graph Rewrite Systems

    Directory of Open Access Journals (Sweden)

    Naohi Eguchi

    2015-05-01

    Full Text Available The general form of safe recursion (or ramified recurrence) can be expressed by an infinite graph rewrite system including unfolding graph rewrite rules introduced by Dal Lago, Martini and Zorzi, in which the size of every normal form by innermost rewriting is polynomially bounded. Every unfolding graph rewrite rule is precedence terminating in the sense of Middeldorp, Ohsaki and Zantema. Although precedence terminating infinite rewrite systems cover all the primitive recursive functions, in this paper we consider graph rewrite systems precedence terminating with argument separation, which form a subclass of precedence terminating graph rewrite systems. We show that for any precedence terminating infinite graph rewrite system G with a specific argument separation, both the runtime complexity of G and the size of every normal form in G can be polynomially bounded. As a corollary, we obtain an alternative proof of the original result by Dal Lago et al.

  19. Anisotropic hydrodynamics with a scalar collisional kernel

    Science.gov (United States)

    Almaalol, Dekrayat; Strickland, Michael

    2018-04-01

    Prior studies of nonequilibrium dynamics using anisotropic hydrodynamics have used the relativistic Anderson-Witting scattering kernel or some variant thereof. In this paper, we make the first study of the impact of using a more realistic scattering kernel. For this purpose, we consider a conformal system undergoing transversally homogeneous and boost-invariant Bjorken expansion and take the collisional kernel to be given by the leading order 2 ↔ 2 scattering kernel in scalar λφ⁴ theory. We consider both classical and quantum statistics to assess the impact of Bose enhancement on the dynamics. We also determine the anisotropic nonequilibrium attractor of a system subject to this collisional kernel. We find that, when the near-equilibrium relaxation-times in the Anderson-Witting and scalar collisional kernels are matched, the scalar kernel results in a higher degree of momentum-space anisotropy during the system's evolution, given the same initial conditions. Additionally, we find that taking into account Bose enhancement further increases the dynamically generated momentum-space anisotropy.

  20. Object classification and detection with context kernel descriptors

    DEFF Research Database (Denmark)

    Pan, Hong; Olsen, Søren Ingvor; Zhu, Yaping

    2014-01-01

    Context information is important in object representation. By embedding context cues of image attributes into kernel descriptors, we propose a set of novel kernel descriptors called Context Kernel Descriptors (CKD) for object classification and detection. The motivation of CKD is to use spatial consistency of image attributes or features defined within a neighboring region to improve the robustness of descriptor matching in kernel space. For feature selection, Kernel Entropy Component Analysis (KECA) is exploited to learn a subset of discriminative CKD. Different from Kernel Principal Component...

  1. TISK 1.0: An easy-to-use Python implementation of the time-invariant string kernel model of spoken word recognition.

    Science.gov (United States)

    You, Heejo; Magnuson, James S

    2018-04-30

    This article describes a new Python distribution of TISK, the time-invariant string kernel model of spoken word recognition (Hannagan et al. in Frontiers in Psychology, 4, 563, 2013). TISK is an interactive-activation model similar to the TRACE model (McClelland & Elman in Cognitive Psychology, 18, 1-86, 1986), but TISK replaces most of TRACE's reduplicated, time-specific nodes with theoretically motivated time-invariant, open-diphone nodes. We discuss the utility of computational models as theory development tools, the relative merits of TISK as compared to other models, and the ways in which researchers might use this implementation to guide their own research and theory development. We describe a TISK model that includes features that facilitate in-line graphing of simulation results, integration with standard Python data formats, and graph and data export. The distribution can be downloaded from https://github.com/maglab-uconn/TISK1.0 .

  2. INTRA graphical package - INTRA-Graph 1.0

    International Nuclear Information System (INIS)

    Hofman, D.; Edlund, O.

    2001-04-01

    INTRA-Graph 1.0 has been developed at Studsvik Eco and Safety AB within the framework of the European Fusion Technology Programme for application in safety analysis using the INTRA code. INTRA-Graph 1.0 is a graphical package producing 2-dimensional plots of results generated by the INTRA code. INTRA-Graph 1.0 has been developed by extending the Grace package source code, distributed under the terms of the GNU General Public License. The changes in the Grace source files are limited so as to allow easy updates of INTRA-Graph when a new version of Grace is released. The INTRA-related functionality has been implemented in new source files. The present report describes and gives a complete listing of these files. The changes in the Grace source files are also described, and the listing of the changed parts of the files is presented. The report gives detailed explanations and examples of the files required for installation and configuration of INTRA-Graph on different types of Unix workstations.

  3. Combination of Biorthogonal Wavelet Hybrid Kernel OCSVM with Feature Weighted Approach Based on EVA and GRA in Financial Distress Prediction

    Directory of Open Access Journals (Sweden)

    Chao Huang

    2014-01-01

    Full Text Available Financial distress prediction plays an important role in the survival of companies. In this paper, a novel biorthogonal wavelet hybrid kernel function is constructed by combining a linear kernel function with a biorthogonal wavelet kernel function. Besides, a new feature weighted approach is presented based on economic value added (EVA) and grey relational analysis (GRA). Considering the imbalance between financially distressed companies and normal ones, the feature weighted one-class support vector machine based on the biorthogonal wavelet hybrid kernel (BWH-FWOCSVM) is further put forward for financial distress prediction. The empirical study with real data from the listed companies on the Growth Enterprise Market (GEM) in China shows that the proposed approach has good performance.
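
    As an illustration of the hybrid-kernel construction mentioned above, the sketch below forms a convex combination of a linear kernel and a generic wavelet kernel; the biorthogonal wavelet used in the paper and the mixing weight lam are assumptions, not taken from the record.

    ```python
    # Sketch of a hybrid kernel built as a convex combination of a linear kernel and a
    # generic wavelet kernel; convex combinations of positive semi-definite kernels
    # remain valid kernels. The paper's biorthogonal wavelet kernel is not reproduced.
    import numpy as np

    def linear_kernel(x, y):
        return float(np.dot(x, y))

    def wavelet_kernel(x, y, a=1.0):
        u = (np.asarray(x) - np.asarray(y)) / a
        return float(np.prod(np.cos(1.75 * u) * np.exp(-(u ** 2) / 2.0)))

    def hybrid_kernel(x, y, lam=0.5, a=1.0):
        return lam * linear_kernel(x, y) + (1.0 - lam) * wavelet_kernel(x, y, a)

    print(hybrid_kernel([1.0, 0.5], [0.8, 0.6], lam=0.3))
    ```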

  4. On the classification of plane graphs representing structurally stable rational Newton flows

    NARCIS (Netherlands)

    Jongen, H.Th.; Jonker, P.; Twilt, F.

    1991-01-01

    We study certain plane graphs, called Newton graphs, representing a special class of dynamical systems which are closely related to Newton's iteration method for finding zeros of (rational) functions defined on the complex plane. These Newton graphs are defined in terms of nonvanishing angles

  5. Dose point kernels for beta-emitting radioisotopes

    International Nuclear Information System (INIS)

    Prestwich, W.V.; Chan, L.B.; Kwok, C.S.; Wilson, B.

    1986-01-01

    Knowledge of the dose point kernel corresponding to a specific radionuclide is required to calculate the spatial dose distribution produced in a homogeneous medium by a distributed source. Dose point kernels for commonly used radionuclides have been calculated previously using as a basis monoenergetic dose point kernels derived by numerical integration of a model transport equation. The treatment neglects fluctuations in energy deposition, an effect which has later been incorporated into dose point kernels calculated using Monte Carlo methods. This work describes new calculations of dose point kernels using the Monte Carlo results as a basis. An analytic representation of the monoenergetic dose point kernels has been developed. This provides a convenient method both for calculating the dose point kernel associated with a given beta spectrum and for incorporating the effect of internal conversion. An algebraic expression for allowed beta spectra has been obtained through an extension of the Bethe-Bacher approximation, and tested against the exact expression. Simplified expressions for first-forbidden shape factors have also been developed. A comparison of the calculated dose point kernel for 32P with experimental data indicates good agreement, with a significant improvement over the earlier results in this respect. An analytic representation of the dose point kernel associated with the spectrum of a single beta group has been formulated. 9 references, 16 figures, 3 tables

  6. On some interconnections between combinatorial optimization and extremal graph theory

    Directory of Open Access Journals (Sweden)

    Cvetković Dragoš M.

    2004-01-01

    Full Text Available The uniting feature of combinatorial optimization and extremal graph theory is that in both areas one should find extrema of a function defined in most cases on a finite set. While in combinatorial optimization the point is in developing efficient algorithms and heuristics for solving specified types of problems, the extremal graph theory deals with finding bounds for various graph invariants under some constraints and with constructing extremal graphs. We analyze by examples some interconnections and interactions of the two theories and propose some conclusions.

  7. Decomposing Oriented Graphs into Six Locally Irregular Oriented Graphs

    DEFF Research Database (Denmark)

    Bensmail, Julien; Renault, Gabriel

    2016-01-01

    An undirected graph G is locally irregular if every two of its adjacent vertices have distinct degrees. We say that G is decomposable into k locally irregular graphs if there exists a partition E1∪E2∪⋯∪Ek of the edge set E(G) such that each Ei induces a locally irregular graph. It was recently co...

  8. Formal Analysis of Functional Behaviour for Model Transformations Based on Triple Graph Grammars - Extended Version

    OpenAIRE

    Hermann, Frank; Ehrig, Hartmut; Orejas, Fernando; Ulrike, Golas

    2010-01-01

    Triple Graph Grammars (TGGs) are a well-established concept for the specification of model transformations. In previous work we have formalized and analyzed already crucial properties of model transformations like termination, correctness and completeness, but functional behaviour - especially local confluence - is missing up to now. In order to close this gap we generate forward translation rules, which extend standard forward rules by translation attributes keeping track of the elements whi...

  9. Non-heuristic reduction of the graph in graph-cut optimization

    International Nuclear Information System (INIS)

    Malgouyres, François; Lermé, Nicolas

    2012-01-01

    During the last ten years, graph cuts have had a growing impact in shape optimization. In particular, they are commonly used in applications of shape optimization such as image processing, computer vision and computer graphics. Their success is due to their ability to efficiently solve (apparently) difficult shape optimization problems which typically involve the perimeter of the shape. Nevertheless, solving problems with a large number of variables remains computationally expensive and requires a high memory usage, since the underlying graphs sometimes involve billions of nodes and even more edges. Several strategies have been proposed in the literature to improve graph cuts in this regard. In this paper, we give a formal statement which expresses that a simple and local test performed on every node before its construction makes it possible to avoid the construction of useless nodes for the graphs typically encountered in image processing and vision. A useless node is such that the value of the maximum flow in the graph does not change when the node is removed from the graph. Such a test therefore makes it possible to limit the construction of the graph to a band of useful nodes surrounding the final cut.

  10. Graph embedding with rich information through heterogeneous graph

    KAUST Repository

    Sun, Guolei

    2017-01-01

    Graph embedding, aiming to learn low-dimensional representations for nodes in graphs, has attracted increasing attention due to its critical application including node classification, link prediction and clustering in social network analysis. Most

  11. Metabolic network prediction through pairwise rational kernels.

    Science.gov (United States)

    Roche-Lima, Abiel; Domaratzki, Michael; Fristensky, Brian

    2014-09-26

    Metabolic networks are represented by the set of metabolic pathways. Metabolic pathways are a series of biochemical reactions, in which the product (output) from one reaction serves as the substrate (input) to another reaction. Many pathways remain incompletely characterized. One of the major challenges of computational biology is to obtain better models of metabolic pathways. Existing models are dependent on the annotation of the genes. This propagates error accumulation when the pathways are predicted by incorrectly annotated genes. Pairwise classification methods are supervised learning methods used to classify new pairs of entities. Some of these classification methods, e.g., Pairwise Support Vector Machines (SVMs), use pairwise kernels. Pairwise kernels describe similarity measures between two pairs of entities. Using pairwise kernels to handle sequence data requires long processing times and large storage. Rational kernels are kernels based on weighted finite-state transducers that represent similarity measures between sequences or automata. They have been effectively used in problems that handle large amounts of sequence information such as protein essentiality, natural language processing and machine translation. We create a new family of pairwise kernels using weighted finite-state transducers (called Pairwise Rational Kernels (PRK)) to predict metabolic pathways from a variety of biological data. PRKs take advantage of the simpler representations and faster algorithms of transducers. Because raw sequence data can be used, the predictor model avoids the errors introduced by incorrect gene annotations. We then developed several experiments with PRKs and Pairwise SVM to validate our methods using the metabolic network of Saccharomyces cerevisiae. As a result, when PRKs are used, our method executes faster in comparison with other pairwise kernels. Also, when we use PRKs combined with other simple kernels that include evolutionary information, the accuracy

  12. Predicting complex traits using a diffusion kernel on genetic markers with an application to dairy cattle and wheat data

    Science.gov (United States)

    2013-01-01

    Background Arguably, genotypes and phenotypes may be linked in functional forms that are not well addressed by the linear additive models that are standard in quantitative genetics. Therefore, developing statistical learning models for predicting phenotypic values from all available molecular information that are capable of capturing complex genetic network architectures is of great importance. Bayesian kernel ridge regression is a non-parametric prediction model proposed for this purpose. Its essence is to create a spatial distance-based relationship matrix called a kernel. Although the set of all single nucleotide polymorphism genotype configurations on which a model is built is finite, past research has mainly used a Gaussian kernel. Results We sought to investigate the performance of a diffusion kernel, which was specifically developed to model discrete marker inputs, using Holstein cattle and wheat data. This kernel can be viewed as a discretization of the Gaussian kernel. The predictive ability of the diffusion kernel was similar to that of non-spatial distance-based additive genomic relationship kernels in the Holstein data, but outperformed the latter in the wheat data. However, the difference in performance between the diffusion and Gaussian kernels was negligible. Conclusions It is concluded that the ability of a diffusion kernel to capture the total genetic variance is not better than that of a Gaussian kernel, at least for these data. Although the diffusion kernel as a choice of basis function may have potential for use in whole-genome prediction, our results imply that embedding genetic markers into a non-Euclidean metric space has very small impact on prediction. Our results suggest that use of the black box Gaussian kernel is justified, given its connection to the diffusion kernel and its similar predictive performance. PMID:23763755
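
    The abstract does not reproduce the kernel formula. One standard way to write a diffusion kernel over discrete states is as the matrix exponential of a scaled, negated graph Laplacian; the sketch below shows that generic form only, with an assumed toy state graph and rate parameter beta, and is not the exact kernel used in the study.

    ```python
    # Generic graph diffusion kernel: the matrix exponential of a negated, scaled
    # graph Laplacian over discrete input states. Illustration only; the kernel
    # applied to the marker genotypes in the study above is not reproduced here.
    import numpy as np
    from scipy.linalg import expm

    def diffusion_kernel(adjacency, beta=0.5):
        degrees = np.diag(adjacency.sum(axis=1))
        laplacian = degrees - adjacency
        return expm(-beta * laplacian)

    # Toy example: a path graph over three genotype states {0, 1, 2} at one locus.
    A = np.array([[0.0, 1.0, 0.0],
                  [1.0, 0.0, 1.0],
                  [0.0, 1.0, 0.0]])
    K = diffusion_kernel(A)
    ```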

  13. Mapping quantitative trait loci for a unique 'super soft' kernel trait in soft white wheat

    Science.gov (United States)

    Wheat (Triticum sp.) kernel texture is an important factor affecting milling, flour functionality, and end-use quality. Kernel texture is normally characterized as either hard or soft, the two major classes of texture. However, further variation is typically encountered in each class. Soft wheat var...

  14. Graphing trillions of triangles.

    Science.gov (United States)

    Burkhardt, Paul

    2017-07-01

    The increasing size of Big Data is often heralded but how data are transformed and represented is also profoundly important to knowledge discovery, and this is exemplified in Big Graph analytics. Much attention has been placed on the scale of the input graph but the product of a graph algorithm can be many times larger than the input. This is true for many graph problems, such as listing all triangles in a graph. Enabling scalable graph exploration for Big Graphs requires new approaches to algorithms, architectures, and visual analytics. A brief tutorial is given to aid the argument for thoughtful representation of data in the context of graph analysis. Then a new algebraic method to reduce the arithmetic operations in counting and listing triangles in graphs is introduced. Additionally, a scalable triangle listing algorithm in the MapReduce model will be presented followed by a description of the experiments with that algorithm that led to the current largest and fastest triangle listing benchmarks to date. Finally, a method for identifying triangles in new visual graph exploration technologies is proposed.
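
    As a small aside on the counting side of the problem described above (not the article's MapReduce listing algorithm), the number of triangles in an undirected simple graph can be read off the adjacency matrix as trace(A^3)/6, since every triangle is traversed by six closed walks of length three.

    ```python
    # Compact triangle count: trace(A^3) / 6 for an undirected simple graph.
    # Shown on the complete graph K_4, which contains 4 triangles.
    import numpy as np
    import networkx as nx

    def count_triangles(G):
        A = nx.to_numpy_array(G)
        return int(round(np.trace(A @ A @ A) / 6))

    print(count_triangles(nx.complete_graph(4)))  # 4
    ```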

  15. Adaptive Graph Convolutional Neural Networks

    OpenAIRE

    Li, Ruoyu; Wang, Sheng; Zhu, Feiyun; Huang, Junzhou

    2018-01-01

    Graph Convolutional Neural Networks (Graph CNNs) are generalizations of classical CNNs to handle graph data such as molecular data, point clouds and social networks. Current filters in graph CNNs are built for fixed and shared graph structure. However, for most real data, the graph structure varies in both size and connectivity. The paper proposes a generalized and flexible graph CNN taking data of arbitrary graph structure as input. In that way a task-driven adaptive graph is learned for eac...

  16. The Visualization and Analysis of POI Features under Network Space Supported by Kernel Density Estimation

    Directory of Open Access Journals (Sweden)

    YU Wenhao

    2015-01-01

    Full Text Available The distribution pattern and the distribution density of urban facility POIs are of great significance in the fields of infrastructure planning and urban spatial analysis. The kernel density estimation, which has usually been utilized for expressing these spatial characteristics, is superior to other density estimation methods (such as quadrat analysis and the Voronoi-based method), in that the kernel density estimation considers the regional impact based on the first law of geography. However, the traditional kernel density estimation is mainly based on Euclidean space, ignoring the fact that the service function and interrelation of urban facilities is carried out over network path distance rather than conventional Euclidean distance. Hence, this research proposes a computational model of network kernel density estimation, and an extended model for the case of added constraints. This work also discusses the impacts of the distance attenuation threshold and the height extreme on the representation of kernel density. A large-scale experiment on actual data, analyzing the distribution patterns of different POIs (random type, sparse type, regional-intensive type, linear-intensive type), discusses the spatial distribution characteristics, influence factors, and service functions of POI infrastructure in the city.
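
    A minimal sketch of the network-constrained density idea above follows: the kernel is evaluated on shortest-path distances along the network instead of Euclidean distances. The Gaussian kernel, the bandwidth, and the 3-bandwidth attenuation cutoff are assumptions, not the paper's model.

    ```python
    # Sketch of a kernel density estimate evaluated over network (shortest-path)
    # distances rather than Euclidean distances. Names, the Gaussian kernel choice,
    # and the bandwidth are illustrative assumptions.
    import math
    import networkx as nx

    def network_kde(graph, poi_nodes, eval_node, bandwidth=2.0, weight="length"):
        """Density at eval_node from POIs located on graph nodes, using path distance."""
        dist = nx.single_source_dijkstra_path_length(graph, eval_node, weight=weight)
        density = 0.0
        for p in poi_nodes:
            d = dist.get(p)
            if d is not None and d <= 3.0 * bandwidth:      # attenuation threshold
                density += math.exp(-0.5 * (d / bandwidth) ** 2)
        return density / (len(poi_nodes) * bandwidth * math.sqrt(2.0 * math.pi))

    G = nx.grid_2d_graph(5, 5)                    # toy road network
    nx.set_edge_attributes(G, 1.0, "length")
    print(network_kde(G, poi_nodes=[(0, 0), (1, 2)], eval_node=(2, 2)))
    ```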

  17. The Linux kernel as flexible product-line architecture

    NARCIS (Netherlands)

    M. de Jonge (Merijn)

    2002-01-01

    The Linux kernel source tree is huge (> 125 MB) and inflexible (because it is difficult to add new kernel components). We propose to make this architecture more flexible by assembling kernel source trees dynamically from individual kernel components. Users then can select what

  18. Analysis of unbounded operators and random motion

    International Nuclear Information System (INIS)

    Jorgensen, Palle E. T.

    2009-01-01

    We study infinite weighted graphs with a view to 'limits at infinity' or boundaries at infinity. Examples of such weighted graphs arise in infinite (in practice, that means 'very' large) networks of resistors or in statistical mechanics models for classical or quantum systems. However, more generally, our analysis includes reproducing kernel Hilbert spaces and associated operators on them. If X is some infinite set of vertices or nodes, in applications the essential ingredient going into the definition is a reproducing kernel Hilbert space; it measures the differences of functions on X evaluated on pairs of points in X. Moreover, the Hilbert norm-squared in H(X) will represent a suitable measure of energy. Associated unbounded operators will define a notion of dissipation; it can be a graph Laplacian or a more abstract unbounded Hermitian operator defined from the reproducing kernel Hilbert space under study. We prove that there are two closed subspaces in the reproducing kernel Hilbert space H(X) that measure quantitative notions of limits at infinity in X: one generalizes finite-energy harmonic functions in H(X) and the other a deficiency index of a natural operator in H(X) associated directly with the diffusion. We establish these results in the abstract, and we offer examples and applications. Our results are related to, but different from, potential theoretic notions of 'boundaries' in more standard random walk models. Comparisons are made.
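
    For readers unfamiliar with the energy form alluded to above, one common way to write the norm-squared of a function on a weighted graph, together with the associated graph Laplacian, is sketched below; the precise reproducing kernel Hilbert space used in the paper may differ.

    ```latex
    % One common energy (norm-squared) of a function f on the vertex set X of a
    % weighted graph with conductances c_{xy}, and the corresponding graph Laplacian.
    % This is a generic sketch, not necessarily the exact space used in the record.
    \[
      \mathcal{E}(f) \;=\; \|f\|_{\mathcal{H}(X)}^{2}
      \;=\; \tfrac{1}{2} \sum_{x \sim y} c_{xy}\,\bigl| f(x) - f(y) \bigr|^{2},
      \qquad
      (\Delta f)(x) \;=\; \sum_{y \sim x} c_{xy}\,\bigl( f(x) - f(y) \bigr).
    \]
    ```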

  19. High Dimensional Spectral Graph Theory and Non-backtracking Random Walks on Graphs

    Science.gov (United States)

    Kempton, Mark

    This thesis has two primary areas of focus. First we study connection graphs, which are weighted graphs in which each edge is associated with a d-dimensional rotation matrix for some fixed dimension d, in addition to a scalar weight. Second, we study non-backtracking random walks on graphs, which are random walks with the additional constraint that they cannot return to the immediately previous state at any given step. Our work in connection graphs is centered on the notion of consistency, that is, the product of rotations moving from one vertex to another is independent of the path taken, and a generalization called epsilon-consistency. We present higher dimensional versions of the combinatorial Laplacian matrix and normalized Laplacian matrix from spectral graph theory, and give results characterizing the consistency of a connection graph in terms of the spectra of these matrices. We generalize several tools from classical spectral graph theory, such as PageRank and effective resistance, to apply to connection graphs. We use these tools to give algorithms for sparsification, clustering, and noise reduction on connection graphs. In non-backtracking random walks, we address the question raised by Alon et al. concerning how the mixing rate of a non-backtracking random walk to its stationary distribution compares to the mixing rate for an ordinary random walk. Alon et al. address this question for regular graphs. We take a different approach, and use a generalization of Ihara's Theorem to give a new proof of Alon's result for regular graphs, and to extend the result to biregular graphs. Finally, we give a non-backtracking version of Polya's Random Walk Theorem for 2-dimensional grids.

  20. X-Graphs: Language and Algorithms for Heterogeneous Graph Streams

    Science.gov (United States)

    2017-09-01

    Data Analytics, Graph Analytics, High-Performance Computing. … are widely used by academia and industry. … form the core of the DeepDive Knowledge Construction System. The goal of the X-Graphs project was to develop computational techniques … memory multicore machine. Ringo is based on Snap.py and SNAP, and uses Python. Ringo now allows the integration of the Delite DSL Framework Graph

  1. On the sizes of expander graphs and minimum distances of graph codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Justesen, Jørn

    2014-01-01

    We give lower bounds for the minimum distances of graph codes based on expander graphs. The bounds depend only on the second eigenvalue of the graph and the parameters of the component codes. We also give an upper bound on the size of a degree regular graph with given second eigenvalue....

  2. New approximation of a scale space kernel on SE(3) and applications in neuroimaging

    NARCIS (Netherlands)

    Portegies, J.M.; Sanguinetti, G.R.; Meesters, S.P.L.; Duits, R.

    2015-01-01

    We provide a new, analytic kernel for scale space filtering of dMRI data. The kernel is an approximation for the Green's function of a hypo-elliptic diffusion on the 3D rigid body motion group SE(3), for fiber enhancement in dMRI. The enhancements are described by linear scale space PDEs in the

  3. Similarity Measure of Graphs

    Directory of Open Access Journals (Sweden)

    Amine Labriji

    2017-07-01

    Full Text Available The topic of identifying the similarity of graphs is considered a highly recommended research field in the semantic Web, artificial intelligence, shape recognition and information research. One of the fundamental problems of graph databases is finding graphs similar to a query graph. Existing approaches dealing with this problem are usually based on the nodes and arcs of the two graphs, regardless of parental semantic links. For instance, a common connection is not identified as being part of the similarity of two graphs in cases like two graphs without common concepts, the measure of similarity based on the union of two graphs, or the one based on the notion of the maximum common subgraph (SCM), or the edit distance of graphs. This leads to an inadequate situation in the context of information research. To overcome this problem, we suggest a new measure of similarity between graphs, based on the similarity measure of Wu and Palmer. We have shown that this new measure satisfies the properties of a similarity measure and we have applied it to examples. The results show that our measure provides a gain in run time compared to existing approaches. In addition, we compared the relevance of the similarity values obtained; it appears that this new graph measure is advantageous and offers a contribution to solving the problem mentioned above.

  4. A kernel-based multi-feature image representation for histopathology image classification

    International Nuclear Information System (INIS)

    Moreno, J.; Caicedo, J.; Gonzalez, F.

    2010-01-01

    This paper presents a novel strategy for building a high-dimensional feature space to represent histopathology image contents. Histogram features, related to colors, textures and edges, are combined together in a unique image representation space using kernel functions. This feature space is further enhanced by the application of latent semantic analysis, to model hidden relationships among visual patterns. All that information is included in the new image representation space. Then, support vector machine classifiers are used to assign semantic labels to images. Processing and classification algorithms operate on top of kernel functions, so that the structure of the feature space is completely controlled using similarity measures and a dual representation. The proposed approach has shown a successful performance in a classification task using a dataset with 1,502 real histopathology images in 18 different classes. The results show that our approach for histological image classification obtains an improved average performance of 20.6% when compared to a conventional classification approach based on SVM directly applied to the original kernel.
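
    To make the kernel-combination step concrete, the sketch below sums per-feature RBF kernels over color and texture histograms and trains an SVM on the resulting precomputed kernel. The feature names, kernel parameters, and random toy data are assumptions; the paper's exact kernels and the latent semantic analysis step are not reproduced.

    ```python
    # Illustrative sketch of combining several histogram features into one kernel by
    # summing per-feature RBF kernels (a sum of valid kernels is again a valid kernel),
    # then training an SVM on the precomputed kernel matrix.
    import numpy as np
    from sklearn.svm import SVC

    def rbf_kernel(H1, H2, gamma=1.0):
        d = np.sum(H1**2, 1)[:, None] + np.sum(H2**2, 1)[None, :] - 2 * H1 @ H2.T
        return np.exp(-gamma * d)

    def multi_feature_kernel(features_a, features_b, gamma=1.0):
        return sum(rbf_kernel(features_a[k], features_b[k], gamma) for k in features_a)

    rng = np.random.default_rng(0)
    train = {"color": rng.random((20, 32)), "texture": rng.random((20, 16))}
    labels = rng.integers(0, 2, 20)
    K_train = multi_feature_kernel(train, train)
    clf = SVC(kernel="precomputed").fit(K_train, labels)
    ```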

  5. A KERNEL-BASED MULTI-FEATURE IMAGE REPRESENTATION FOR HISTOPATHOLOGY IMAGE CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    J Carlos Moreno

    2010-09-01

    Full Text Available This paper presents a novel strategy for building a high-dimensional feature space to represent histopathology image contents. Histogram features, related to colors, textures and edges, are combined together in a unique image representation space using kernel functions. This feature space is further enhanced by the application of Latent Semantic Analysis, to model hidden relationships among visual patterns. All that information is included in the new image representation space. Then, Support Vector Machine classifiers are used to assign semantic labels to images. Processing and classification algorithms operate on top of kernel functions, so that the structure of the feature space is completely controlled using similarity measures and a dual representation. The proposed approach has shown a successful performance in a classification task using a dataset with 1,502 real histopathology images in 18 different classes. The results show that our approach for histological image classification obtains an improved average performance of 20.6% when compared to a conventional classification approach based on SVM directly applied to the original kernel.

  6. The graph representation approach to topological field theory in 2 + 1 dimensions

    International Nuclear Information System (INIS)

    Martin, S.P.

    1991-02-01

    An alternative definition of topological quantum field theory in 2+1 dimensions is discussed. The fundamental objects in this approach are not gauge fields as in the usual approach, but non-local observables associated with graphs. The classical theory of graphs is defined by postulating a simple diagrammatic rule for computing the Poisson bracket of any two graphs. The theory is quantized by exhibiting a quantum deformation of the classical Poisson bracket algebra, which is realized as a commutator algebra on a Hilbert space of states. The wavefunctions in this ''graph representation'' approach are functionals on an appropriate set of graphs. This is in contrast to the usual ''connection representation'' approach in which the theory is defined in terms of a gauge field and the wavefunctions are functionals on the space of flat spatial connections modulo gauge transformations

  7. Spectra of Graphs

    NARCIS (Netherlands)

    Brouwer, A.E.; Haemers, W.H.

    2012-01-01

    This book gives an elementary treatment of the basic material about graph spectra, both for ordinary, and Laplace and Seidel spectra. The text progresses systematically, by covering standard topics before presenting some new material on trees, strongly regular graphs, two-graphs, association

  8. GRIM : Leveraging GPUs for Kernel integrity monitoring

    NARCIS (Netherlands)

    Koromilas, Lazaros; Vasiliadis, Giorgos; Athanasopoulos, Ilias; Ioannidis, Sotiris

    2016-01-01

    Kernel rootkits can exploit an operating system and enable future accessibility and control, despite all recent advances in software protection. A promising defense mechanism against rootkits is Kernel Integrity Monitor (KIM) systems, which inspect the kernel text and data to discover any malicious

  9. Epileptic Seizure Detection with Log-Euclidean Gaussian Kernel-Based Sparse Representation.

    Science.gov (United States)

    Yuan, Shasha; Zhou, Weidong; Wu, Qi; Zhang, Yanli

    2016-05-01

    Epileptic seizure detection plays an important role in the diagnosis of epilepsy and reducing the massive workload of reviewing electroencephalography (EEG) recordings. In this work, a novel algorithm is developed to detect seizures employing log-Euclidean Gaussian kernel-based sparse representation (SR) in long-term EEG recordings. Unlike the traditional SR for vector data in Euclidean space, the log-Euclidean Gaussian kernel-based SR framework is proposed for seizure detection in the space of the symmetric positive definite (SPD) matrices, which form a Riemannian manifold. Since the Riemannian manifold is nonlinear, the log-Euclidean Gaussian kernel function is applied to embed it into a reproducing kernel Hilbert space (RKHS) for performing SR. The EEG signals of all channels are divided into epochs and the SPD matrices representing EEG epochs are generated by covariance descriptors. Then, the testing samples are sparsely coded over the dictionary composed by training samples utilizing log-Euclidean Gaussian kernel-based SR. The classification of testing samples is achieved by computing the minimal reconstructed residuals. The proposed method is evaluated on the Freiburg EEG dataset of 21 patients and shows its notable performance on both epoch-based and event-based assessments. Moreover, this method handles multiple channels of EEG recordings synchronously which is more speedy and efficient than traditional seizure detection methods.
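
    The log-Euclidean Gaussian kernel named above is usually written as k(X, Y) = exp(-||log X - log Y||_F^2 / (2 sigma^2)) for SPD matrices X and Y. The sketch below evaluates this form on toy covariance matrices; the covariance descriptors of real EEG epochs and the sparse-representation stage are not shown, and the bandwidth sigma is an assumption.

    ```python
    # Minimal sketch of the log-Euclidean Gaussian kernel between symmetric positive
    # definite (SPD) matrices. The sparse-representation step of the method above is
    # not shown; sigma and the toy data are assumptions.
    import numpy as np
    from scipy.linalg import logm

    def log_euclidean_gaussian_kernel(X, Y, sigma=1.0):
        diff = logm(X) - logm(Y)                      # matrix logarithms of SPD inputs
        return float(np.exp(-np.linalg.norm(diff, "fro") ** 2 / (2.0 * sigma ** 2)))

    # Toy SPD matrices (covariances of random 4-channel, 8-sample epochs).
    rng = np.random.default_rng(1)
    C1 = np.cov(rng.random((4, 8))) + 1e-6 * np.eye(4)
    C2 = np.cov(rng.random((4, 8))) + 1e-6 * np.eye(4)
    print(log_euclidean_gaussian_kernel(C1, C2))
    ```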

  10. 7 CFR 51.2296 - Three-fourths half kernel.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Three-fourths half kernel. 51.2296 Section 51.2296 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards...-fourths half kernel. Three-fourths half kernel means a portion of a half of a kernel which has more than...

  11. STRUCTURAL ANNOTATION OF EM IMAGES BY GRAPH CUT

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Hang; Auer, Manfred; Parvin, Bahram

    2009-05-08

    Biological images have the potential to reveal complex signatures that may not be amenable to morphological modeling in terms of shape, location, texture, and color. An effective analytical method is to characterize the composition of a specimen based on user-defined patterns of texture and contrast formation. However, such a simple requirement demands an improved model for stability and robustness. Here, an interactive computational model is introduced for learning patterns of interest by example. The learned patterns bound an active contour model in which the traditional gradient descent optimization is replaced by the more efficient optimization of the graph cut methods. First, the energy function is defined according to the curve evolution. Next, a graph is constructed with weighted edges on the energy function and is optimized with the graph cut algorithm. As a result, the method combines the advantages of the level set method and graph cut algorithm, i.e., "topological" invariance and computational efficiency. The technique is extended to the multi-phase segmentation problem; the method is validated on synthetic images and then applied to specimens imaged by transmission electron microscopy (TEM).

  12. Belief propagation and loop series on planar graphs

    International Nuclear Information System (INIS)

    Chertkov, Michael; Teodorescu, Razvan; Chernyak, Vladimir Y

    2008-01-01

    We discuss a generic model of Bayesian inference with binary variables defined on edges of a planar graph. The Loop Calculus approach of Chertkov and Chernyak (2006 Phys. Rev. E 73 065102(R) [cond-mat/0601487]; 2006 J. Stat. Mech. P06009 [cond-mat/0603189]) is used to evaluate the resulting series expansion for the partition function. We show that, for planar graphs, truncating the series at single-connected loops reduces, via a map reminiscent of the Fisher transformation (Fisher 1961 Phys. Rev. 124 1664), to evaluating the partition function of the dimer-matching model on an auxiliary planar graph. Thus, the truncated series can be easily re-summed, using the Pfaffian formula of Kasteleyn (1961 Physics 27 1209). This allows us to identify a big class of computationally tractable planar models reducible to a dimer model via the Belief Propagation (gauge) transformation. The Pfaffian representation can also be extended to the full Loop Series, in which case the expansion becomes a sum of Pfaffian contributions, each associated with dimer matchings on an extension to a subgraph of the original graph. Algorithmic consequences of the Pfaffian representation, as well as relations to quantum and non-planar models, are discussed

  13. Pattern graph rewrite systems

    Directory of Open Access Journals (Sweden)

    Aleks Kissinger

    2014-03-01

    Full Text Available String diagrams are a powerful tool for reasoning about physical processes, logic circuits, tensor networks, and many other compositional structures. Dixon, Duncan and Kissinger introduced string graphs, which are combinatoric representations of string diagrams, amenable to automated reasoning about diagrammatic theories via graph rewrite systems. In this extended abstract, we show how the power of such rewrite systems can be greatly extended by introducing pattern graphs, which provide a means of expressing infinite families of rewrite rules where certain marked subgraphs, called !-boxes ("bang boxes"), on both sides of a rule can be copied any number of times or removed. After reviewing the string graph formalism, we show how string graphs can be extended to pattern graphs and how pattern graphs and pattern rewrite rules can be instantiated to concrete string graphs and rewrite rules. We then provide examples demonstrating the expressive power of pattern graphs and how they can be applied to study interacting algebraic structures that are central to categorical quantum mechanics.

  14. Adaptive Kernel in Meshsize Boosting Algorithm in KDE ...

    African Journals Online (AJOL)

    This paper proposes the use of adaptive kernel in a meshsize boosting algorithm in kernel density estimation. The algorithm is a bias reduction scheme like other existing schemes but uses adaptive kernel instead of the regular fixed kernels. An empirical study for this scheme is conducted and the findings are comparatively ...

  15. Minimum Information Loss Based Multi-kernel Learning for Flagellar Protein Recognition in Trypanosoma Brucei

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-12-01

    Trypanosoma brucei (T. Brucei) is an important pathogenic agent of African trypanosomiasis. The flagellum is an essential and multifunctional organelle of T. Brucei, thus it is very important to recognize the flagellar proteins from T. Brucei proteins for the purposes of both biological research and drug design. In this paper, we investigate computationally recognizing flagellar proteins in T. Brucei by pattern recognition methods. It is argued that an optimal decision function can be obtained as the difference of the probability functions of flagellar and non-flagellar proteins for the purpose of flagellar protein recognition. We propose to learn a multi-kernel classification function to approximate this optimal decision function, by minimizing the information loss of such an approximation as measured by the Kullback-Leibler (KL) divergence. An iterative multi-kernel classifier learning algorithm is developed to minimize the KL divergence for the problem of T. Brucei flagellar protein recognition; experiments show its advantage over other T. Brucei flagellar protein recognition and multi-kernel learning methods. © 2014 IEEE.

  16. The signed permutation group on Feynman graphs

    Energy Technology Data Exchange (ETDEWEB)

    Purkart, Julian, E-mail: purkart@physik.hu-berlin.de [Institute of Physics, Humboldt University, D-12489 Berlin (Germany)

    2016-08-15

    The Feynman rules assign to every graph an integral which can be written as a function of a scaling parameter L. Assuming L for the process under consideration is very small, so that contributions to the renormalization group are small, we can expand the integral and only consider the lowest orders in the scaling. The aim of this article is to determine specific combinations of graphs in a scalar quantum field theory that lead to a remarkable simplification of the first non-trivial term in the perturbation series. It will be seen that the result is independent of the renormalization scheme and the scattering angles. To achieve that goal we will utilize the parametric representation of scalar Feynman integrals as well as the Hopf algebraic structure of the Feynman graphs under consideration. Moreover, we will present a formula which reduces the effort of determining the first-order term in the perturbation series for the specific combination of graphs to a minimum.

  17. MultiAspect Graphs: Algebraic Representation and Algorithms

    Directory of Open Access Journals (Sweden)

    Klaus Wehmuth

    2016-12-01

    Full Text Available We present the algebraic representation and basic algorithms for MultiAspect Graphs (MAGs). A MAG is a structure capable of representing multilayer and time-varying networks, as well as higher-order networks, while also having the property of being isomorphic to a directed graph. In particular, we show that, as a consequence of the properties associated with the MAG structure, a MAG can be represented in matrix form. Moreover, we also show that any possible MAG function (algorithm) can be obtained from this matrix-based representation. This is an important theoretical result since it paves the way for adapting well-known graph algorithms for application in MAGs. We present a set of basic MAG algorithms, constructed from well-known graph algorithms, such as degree computing, Breadth First Search (BFS), and Depth First Search (DFS). These algorithms adapted to the MAG context can be used as primitives for building other more sophisticated MAG algorithms. Therefore, such examples can be seen as guidelines on how to properly derive MAG algorithms from basic algorithms on directed graphs. We also make available Python implementations of all the algorithms presented in this paper.
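
    The abstract notes that MAG algorithms can be derived from matrix-based algorithms on the underlying directed graph. The sketch below shows the kind of primitive involved, a breadth-first search driven entirely by an adjacency matrix; the 4-node example matrix is an illustrative assumption, not taken from the paper or its Python implementations.

```python
# Minimal sketch: BFS levels computed from an adjacency-matrix representation
# of a directed graph, the sort of primitive the MAG algorithms build on.
import numpy as np
from collections import deque

def bfs_levels(adj: np.ndarray, source: int):
    """Hop distance of every vertex from `source`; adj[i, j] != 0 means edge i -> j."""
    n = adj.shape[0]
    level = [-1] * n          # -1 marks vertices not reached
    level[source] = 0
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in np.nonzero(adj[u])[0]:
            if level[v] == -1:
                level[v] = level[u] + 1
                queue.append(v)
    return level

A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 0, 0, 0]])
print(bfs_levels(A, 0))   # [0, 1, 2, 3]
```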

  18. On middle cube graphs

    Directory of Open Access Journals (Sweden)

    C. Dalfo

    2015-10-01

    Full Text Available We study a family of graphs related to the $n$-cube. The middle cube graph of parameter $k$ is the subgraph of $Q_{2k-1}$ induced by the set of vertices whose binary representation has either $k-1$ or $k$ ones. The middle cube graphs can be obtained from the well-known odd graphs by doubling their vertex set. Here we study some of the properties of the middle cube graphs in the light of the theory of distance-regular graphs. In particular, we completely determine their spectra (eigenvalues and their multiplicities), and associated eigenvectors.

  19. A kernel version of multivariate alteration detection

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Vestergaard, Jacob Schack

    2013-01-01

    Based on the established methods kernel canonical correlation analysis and multivariate alteration detection we introduce a kernel version of multivariate alteration detection. A case study with SPOT HRV data shows that the kMAD variates focus on extreme change observations.

  20. A Unified and Comprehensible View of Parametric and Kernel Methods for Genomic Prediction with Application to Rice.

    Science.gov (United States)

    Jacquin, Laval; Cao, Tuong-Vi; Ahmadi, Nourollah

    2016-01-01

    One objective of this study was to provide readers with a clear and unified understanding of parametric statistical and kernel methods, used for genomic prediction, and to compare some of these in the context of rice breeding for quantitative traits. Furthermore, another objective was to provide a simple and user-friendly R package, named KRMM, which allows users to perform RKHS regression with several kernels. After introducing the concept of regularized empirical risk minimization, the connections between well-known parametric and kernel methods such as Ridge regression [i.e., genomic best linear unbiased predictor (GBLUP)] and reproducing kernel Hilbert space (RKHS) regression were reviewed. Ridge regression was then reformulated so as to show and emphasize the advantage of the kernel "trick" concept, exploited by kernel methods in the context of epistatic genetic architectures, over parametric frameworks used by conventional methods. Some parametric and kernel methods, namely least absolute shrinkage and selection operator (LASSO), GBLUP, support vector machine regression (SVR) and RKHS regression, were thereupon compared for their genomic predictive ability in the context of rice breeding using three real data sets. Among the compared methods, RKHS regression and SVR were often the most accurate methods for prediction, followed by GBLUP and LASSO. An R function which allows users to perform RR-BLUP of marker effects, GBLUP and RKHS regression, with a Gaussian, Laplacian, polynomial or ANOVA kernel, in a reasonable computation time has been developed. Moreover, a modified version of this function, which allows users to tune kernels for RKHS regression, has also been developed and parallelized for HPC Linux clusters. The corresponding KRMM package and all scripts have been made publicly available.
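
    To make the RKHS regression idea concrete, here is a minimal kernel ridge regression sketch with a Gaussian kernel, the kind of model described above; the function names, the synthetic marker matrix, and the hyper-parameter values are illustrative assumptions and do not reflect the KRMM package's actual interface.

```python
# Hedged sketch of kernel ridge (RKHS) regression with a Gaussian kernel.
import numpy as np

def gaussian_kernel(X1, X2, sigma=1.0):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def krr_fit(X, y, lam=1.0, sigma=1.0):
    K = gaussian_kernel(X, X, sigma)
    # Dual coefficients: alpha = (K + lam I)^{-1} y
    return np.linalg.solve(K + lam * np.eye(len(y)), y)

def krr_predict(X_train, alpha, X_new, sigma=1.0):
    return gaussian_kernel(X_new, X_train, sigma) @ alpha

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 200))                      # e.g. 50 lines x 200 markers (synthetic)
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.1, size=50)
alpha = krr_fit(X, y, lam=0.5, sigma=10.0)
print(krr_predict(X, alpha, X[:3], sigma=10.0))
```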

  1. Graph visualization (Invited talk)

    NARCIS (Netherlands)

    Wijk, van J.J.; Kreveld, van M.J.; Speckmann, B.

    2012-01-01

    Black and white node-link diagrams are the classic method to depict graphs, but these often fall short of giving insight into large graphs or when attributes of nodes and edges play an important role. Graph visualization aims at obtaining insight into such graphs using interactive graphical representations.

  2. Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.

    Science.gov (United States)

    Kwak, Nojun

    2016-05-20

    Recently, the nonlinear projection trick (NPT) was introduced enabling direct computation of coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so called kernel trick. However, NPT is inherently difficult to implement incrementally because an ever-increasing kernel matrix must be handled as additional training samples are introduced. In this paper, an incremental version of the NPT (INPT) is proposed based on the observation that the centerization step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can directly be used in any incremental methods to implement a kernel version of the incremental methods. The effectiveness of the INPT is shown by applying it to implement incremental versions of kernel methods such as kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis, which are utilized for problems of kernel matrix reconstruction, letter classification, and face image retrieval, respectively.
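
    For context, the sketch below shows the batch form of the nonlinear projection trick: explicit sample coordinates are recovered from an eigendecomposition of the kernel matrix so that any linear method can then be applied to them. The incremental bookkeeping contributed by the paper is not reproduced here, and the kernel choice and tolerance are assumptions.

```python
# Minimal sketch of the (batch) nonlinear projection trick: find coordinates Y
# with Y.T @ Y equal to the kernel matrix K, i.e. explicit coordinates in the
# span of the training samples in the RKHS.
import numpy as np

def rbf_kernel(X, gamma=0.5):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def npt_coordinates(K, tol=1e-10):
    w, U = np.linalg.eigh(K)              # K is symmetric positive semi-definite
    keep = w > tol
    return (U[:, keep] * np.sqrt(w[keep])).T   # Y = diag(sqrt(w)) U^T

X = np.random.default_rng(1).normal(size=(30, 4))
K = rbf_kernel(X)
Y = npt_coordinates(K)
print(np.abs(Y.T @ Y - K).max())          # ~0: the coordinates reproduce K
```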

  3. Adventures in graph theory

    CERN Document Server

    Joyner, W David

    2017-01-01

    This textbook acts as a pathway to higher mathematics by seeking and illuminating the connections between graph theory and diverse fields of mathematics, such as calculus on manifolds, group theory, algebraic curves, Fourier analysis, cryptography and other areas of combinatorics. An overview of graph theory definitions and polynomial invariants for graphs prepares the reader for the subsequent dive into the applications of graph theory. To pique the reader’s interest in areas of possible exploration, recent results in mathematics appear throughout the book, accompanied with examples of related graphs, how they arise, and what their valuable uses are. The consequences of graph theory covered by the authors are complicated and far-reaching, so topics are always exhibited in a user-friendly manner with copious graphs, exercises, and Sage code for the computation of equations. Samples of the book’s source code can be found at github.com/springer-math/adventures-in-graph-theory. The text is geared towards ad...

  4. Uranium kernel formation via internal gelation

    International Nuclear Information System (INIS)

    Hunt, R.D.; Collins, J.L.

    2004-01-01

    In the 1970s and 1980s, the U.S. Department of Energy (DOE) conducted numerous studies on the fabrication of nuclear fuel particles using the internal gelation process. These amorphous kernels were prone to flaking or breaking when gases tried to escape from the kernels during calcination and sintering. These earlier kernels would not meet today's proposed specifications for reactor fuel. In the interim, the internal gelation process has been used to create hydrous metal oxide microspheres for the treatment of nuclear waste. With the renewed interest in advanced nuclear fuel by the DOE, the lessons learned from the nuclear waste studies were recently applied to the fabrication of uranium kernels, which will become tri-isotropic (TRISO) fuel particles. These process improvements included equipment modifications, small changes to the feed formulations, and a new temperature profile for the calcination and sintering. The modifications to the laboratory-scale equipment and its operation as well as small changes to the feed composition increased the product yield from 60% to 80%-99%. The new kernels were substantially less glassy, and no evidence of flaking was found. Finally, key process parameters were identified, and their effects on the uranium microspheres and kernels are discussed. (orig.)

  5. Distance-regular graphs

    NARCIS (Netherlands)

    van Dam, Edwin R.; Koolen, Jack H.; Tanaka, Hajime

    2016-01-01

    This is a survey of distance-regular graphs. We present an introduction to distance-regular graphs for the reader who is unfamiliar with the subject, and then give an overview of some developments in the area of distance-regular graphs since the monograph 'BCN'[Brouwer, A.E., Cohen, A.M., Neumaier,

  6. Migration of the ThO2 kernels under the influence of a temperature gradient

    International Nuclear Information System (INIS)

    Smith, C.L.

    1977-01-01

    Biso-coated ThO2 fertile fuel kernels will migrate up the thermal gradients imposed across coated particles during high-temperature gas-cooled reactor (HTGR) operation. Thorium dioxide kernel migration has been studied as a function of temperature (1290 to 1705 °C) (1563 to 1978 K) and ThO2 kernel burnup (0.9 to 5.8 percent FIMA) in out-of-pile postirradiation thermal gradient heating experiments. The studies were conducted to obtain descriptions of migration rates that will be used in core design studies to evaluate the impact of ThO2 migration on fertile fuel performance in an operating HTGR and to define characteristics needed by any comprehensive model describing ThO2 kernel migration. The kinetics data generated in these postirradiation studies are consistent with in-pile data collected by investigators at Oak Ridge National Laboratory, which supports use of the more precise postirradiation heating results in HTGR core design studies. Observations of intergranular carbon deposits on the cool side of migrating kernels support the assumption that the kinetics of kernel migration are controlled by solid-state diffusion within irradiated ThO2 kernels. The migration is characterized by a period of no migration (incubation period), followed by migration at the equilibrium rate for ThO2. The incubation period decreases with increasing temperature and kernel burnup. The improved understanding of the kinetics of ThO2 kernel migration provided by this work will contribute to an optimization of HTGR core design and an increased confidence in fuel performance predictions.

  7. Counting the number of Feynman graphs in QCD

    Science.gov (United States)

    Kaneko, T.

    2018-05-01

    Information about the number of Feynman graphs for a given physical process in a given field theory is especially useful for confirming the result of a Feynman graph generator used in an automatic system of perturbative calculations. A method of counting the number of Feynman graphs weighted by symmetry factors was established based on zero-dimensional field theory, and was used in scalar theories and QED. In this article this method is generalized to more complicated models by direct calculation of generating functions on a computer algebra system. This method is applied to QCD with and without counter terms, where many higher orders are calculated automatically.

  8. Kernel learning at the first level of inference.

    Science.gov (United States)

    Cawley, Gavin C; Talbot, Nicola L C

    2014-05-01

    Kernel learning methods, whether Bayesian or frequentist, typically involve multiple levels of inference, with the coefficients of the kernel expansion being determined at the first level and the kernel and regularisation parameters carefully tuned at the second level, a process known as model selection. Model selection for kernel machines is commonly performed via optimisation of a suitable model selection criterion, often based on cross-validation or theoretical performance bounds. However, if there are a large number of kernel parameters, as for instance in the case of automatic relevance determination (ARD), there is a substantial risk of over-fitting the model selection criterion, resulting in poor generalisation performance. In this paper we investigate the possibility of learning the kernel, for the Least-Squares Support Vector Machine (LS-SVM) classifier, at the first level of inference, i.e. parameter optimisation. The kernel parameters and the coefficients of the kernel expansion are jointly optimised at the first level of inference, minimising a training criterion with an additional regularisation term acting on the kernel parameters. The key advantage of this approach is that the values of only two regularisation parameters need be determined in model selection, substantially alleviating the problem of over-fitting the model selection criterion. The benefits of this approach are demonstrated using a suite of synthetic and real-world binary classification benchmark problems, where kernel learning at the first level of inference is shown to be statistically superior to the conventional approach, improves on our previous work (Cawley and Talbot, 2007) and is competitive with Multiple Kernel Learning approaches, but with reduced computational expense. Copyright © 2014 Elsevier Ltd. All rights reserved.
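
    The inner problem that any such scheme solves repeatedly is the standard LS-SVM linear system in the dual variables and the bias. A minimal sketch of that solve with an RBF kernel is given below; the joint first-level optimisation of the kernel parameters, which is the paper's actual contribution, would wrap a search around this solve and is not shown. Parameter names and values are assumptions.

```python
# Hedged sketch: least-squares kernel classifier in LS-SVM form, solving
# [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y] for labels y in {-1, +1}.
import numpy as np

def rbf_kernel(X1, X2, eta=1.0):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-eta * d2)

def lssvm_train(X, y, gamma=10.0, eta=1.0):
    n = len(y)
    K = rbf_kernel(X, X, eta)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[1:], sol[0]                       # alpha, b

def lssvm_decision(X_train, alpha, b, X_new, eta=1.0):
    return rbf_kernel(X_new, X_train, eta) @ alpha + b

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (20, 2)), rng.normal(+1, 1, (20, 2))])
y = np.array([-1.0] * 20 + [1.0] * 20)
alpha, b = lssvm_train(X, y)
print(np.sign(lssvm_decision(X, alpha, b, X[:5])))
```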

  9. Global Polynomial Kernel Hazard Estimation

    DEFF Research Database (Denmark)

    Hiabu, Munir; Miranda, Maria Dolores Martínez; Nielsen, Jens Perch

    2015-01-01

    This paper introduces a new bias reducing method for kernel hazard estimation. The method is called global polynomial adjustment (GPA). It is a global correction which is applicable to any kernel hazard estimator. The estimator works well from a theoretical point of view as it asymptotically redu...

  10. On the strong metric dimension of generalized butterfly graph, starbarbell graph, and $C_m \odot P_n$ graph

    Science.gov (United States)

    Yunia Mayasari, Ratih; Atmojo Kusmayadi, Tri

    2018-04-01

    Let G be a connected graph with vertex set V(G) and edge set E(G). For every pair of vertices $u,v \in V(G)$, the interval $I[u,v]$ between u and v is the collection of all vertices that belong to some shortest u ‑ v path. A vertex $s \in V(G)$ strongly resolves two vertices u and v if u belongs to a shortest v ‑ s path or v belongs to a shortest u ‑ s path. A vertex set S of G is a strong resolving set of G if every two distinct vertices of G are strongly resolved by some vertex of S. The strong metric basis of G is a strong resolving set with minimal cardinality. The strong metric dimension sdim(G) of a graph G is defined as the cardinality of a strong metric basis. In this paper we determine the strong metric dimension of the generalized butterfly graph, the starbarbell graph, and the $C_m \odot P_n$ graph. We obtain that the strong metric dimension of the generalized butterfly graph is $sdim(BF_n) = 2n - 2$. The strong metric dimension of the starbarbell graph is $sdim(SB_{m_1,m_2,\ldots,m_n}) = \sum_{i=1}^{n}(m_i - 1) - 1$. The strong metric dimension of the $C_m \odot P_n$ graph is $sdim(C_m \odot P_n) = 2m - 1$ for $m > 3$ and $n = 2$, and $sdim(C_m \odot P_n) = 2m - 2$ for $m > 3$ and $n > 2$.

  11. Brain Graph Topology Changes Associated with Anti-Epileptic Drug Use

    Science.gov (United States)

    Levin, Harvey S.; Chiang, Sharon

    2015-01-01

    Neuroimaging studies of functional connectivity using graph theory have furthered our understanding of the network structure in temporal lobe epilepsy (TLE). Brain network effects of anti-epileptic drugs could influence such studies, but have not been systematically studied. Resting-state functional MRI was analyzed in 25 patients with TLE using graph theory analysis. Patients were divided into two groups based on anti-epileptic medication use: those taking carbamazepine/oxcarbazepine (CBZ/OXC) (n=9) and those not taking CBZ/OXC (n=16) as a part of their medication regimen. The following graph topology metrics were analyzed: global efficiency, betweenness centrality (BC), clustering coefficient, and small-world index. Multiple linear regression was used to examine the association of CBZ/OXC with graph topology. The two groups did not differ from each other based on epilepsy characteristics. Use of CBZ/OXC was associated with a lower BC. Longer epilepsy duration was also associated with a lower BC. These findings can inform graph theory-based studies in patients with TLE. The changes observed are discussed in relation to the anti-epileptic mechanism of action and adverse effects of CBZ/OXC. PMID:25492633
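
    For readers unfamiliar with these metrics, the sketch below computes them on a thresholded connectivity matrix with networkx. The random stand-in matrix, the 0.1 threshold, and the choice to omit the small-world index are illustrative assumptions, not the authors' preprocessing pipeline.

```python
# Illustrative sketch: graph topology metrics on a binarised connectivity matrix.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
corr = np.corrcoef(rng.normal(size=(90, 200)))    # stand-in for a 90-region connectivity matrix
np.fill_diagonal(corr, 0)
G = nx.from_numpy_array((corr > 0.1).astype(int)) # binarise at an arbitrary threshold

metrics = {
    "global_efficiency": nx.global_efficiency(G),
    "mean_betweenness": float(np.mean(list(nx.betweenness_centrality(G).values()))),
    "mean_clustering": nx.average_clustering(G),
    # the small-world index (e.g. nx.sigma) is expensive to compute and omitted here
}
print(metrics)
```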

  12. Subgraph detection using graph signals

    KAUST Repository

    Chepuri, Sundeep Prabhakar

    2017-03-06

    In this paper we develop statistical detection theory for graph signals. In particular, given two graphs, namely, a background graph that represents usual activity and an alternative graph that represents some unusual activity, we are interested in answering the following question: To which of the two graphs does the observed graph signal fit the best? To begin with, we assume both the graphs are known, and derive an optimal Neyman-Pearson detector. Next, we derive a suboptimal detector for the case when the alternative graph is not known. The developed theory is illustrated with numerical experiments.

  13. Subgraph detection using graph signals

    KAUST Repository

    Chepuri, Sundeep Prabhakar; Leus, Geert

    2017-01-01

    In this paper we develop statistical detection theory for graph signals. In particular, given two graphs, namely, a background graph that represents usual activity and an alternative graph that represents some unusual activity, we are interested in answering the following question: To which of the two graphs does the observed graph signal fit the best? To begin with, we assume both the graphs are known, and derive an optimal Neyman-Pearson detector. Next, we derive a suboptimal detector for the case when the alternative graph is not known. The developed theory is illustrated with numerical experiments.

  14. Optimization of the acceptance of prebiotic beverage made from cashew nut kernels and passion fruit juice.

    Science.gov (United States)

    Rebouças, Marina Cabral; Rodrigues, Maria do Carmo Passos; Afonso, Marcos Rodrigues Amorim

    2014-07-01

    The aim of this research was to develop a prebiotic beverage from a hydrosoluble extract of broken cashew nut kernels and passion fruit juice using response surface methodology in order to optimize acceptance of its sensory attributes. A 2² central composite rotatable design was used, which produced 9 formulations that were then evaluated using different concentrations of hydrosoluble cashew nut kernel extract, passion fruit juice, oligofructose, and 3% sugar. The use of response surface methodology to interpret the sensory data made it possible to obtain a formulation with satisfactory acceptance which met the criteria of bifidogenic action and use of hydrosoluble cashew nut kernels by using 14% oligofructose and 33% passion fruit juice. As a result of this study, it was possible to obtain a new functional prebiotic product, which combined the nutritional and functional properties of cashew nut kernels and oligofructose with the sensory properties of passion fruit juice in a beverage with satisfactory sensory acceptance. This new product emerges as a new alternative for the industrial processing of broken cashew nut kernels, which have very low market value, enabling this sector to increase its profits. © 2014 Institute of Food Technologists®

  15. Graph spectrum

    NARCIS (Netherlands)

    Brouwer, A.E.; Haemers, W.H.; Brouwer, A.E.; Haemers, W.H.

    2012-01-01

    This chapter presents some simple results on graph spectra. We assume the reader is familiar with elementary linear algebra and graph theory. Throughout, J will denote the all-1 matrix, and 1 is the all-1 vector.

  16. Quantum tomography, phase-space observables and generalized Markov kernels

    International Nuclear Information System (INIS)

    Pellonpää, Juha-Pekka

    2009-01-01

    We construct a generalized Markov kernel which transforms the observable associated with the homodyne tomography into a covariant phase-space observable with a regular kernel state. Illustrative examples are given in the cases of a 'Schroedinger cat' kernel state and the Cahill-Glauber s-parametrized distributions. Also we consider an example of a kernel state when the generalized Markov kernel cannot be constructed.

  17. A Temperature Compensation Method for Piezo-Resistive Pressure Sensor Utilizing Chaotic Ions Motion Algorithm Optimized Hybrid Kernel LSSVM

    Directory of Open Access Journals (Sweden)

    Ji Li

    2016-10-01

    Full Text Available A piezo-resistive pressure sensor is made of silicon, the nature of which is considerably influenced by ambient temperature. The effect of temperature should be eliminated during the working period in expectation of linear output. To deal with this issue, an approach consisting of a hybrid kernel Least Squares Support Vector Machine (LSSVM) optimized by a chaotic ions motion algorithm is presented. To achieve the learning and generalization needed for excellent performance, a hybrid kernel function, constructed from a local kernel, the Radial Basis Function (RBF) kernel, and a global kernel, the polynomial kernel, is incorporated into the Least Squares Support Vector Machine. The chaotic ions motion algorithm is introduced to find the best hyper-parameters of the Least Squares Support Vector Machine. Temperature data from a calibration experiment are used to validate the proposed method. With attention on algorithm robustness and engineering applications, the compensation results show that the proposed scheme outperforms other compared methods on several performance measures, such as maximum absolute relative error, minimum absolute relative error, and the mean and variance of the averaged value over fifty runs. Furthermore, the proposed temperature compensation approach lays a foundation for more extensive research.

  18. A Temperature Compensation Method for Piezo-Resistive Pressure Sensor Utilizing Chaotic Ions Motion Algorithm Optimized Hybrid Kernel LSSVM.

    Science.gov (United States)

    Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam Sm, Jahangir

    2016-10-14

    A piezo-resistive pressure sensor is made of silicon, the nature of which is considerably influenced by ambient temperature. The effect of temperature should be eliminated during the working period in expectation of linear output. To deal with this issue, an approach consisting of a hybrid kernel Least Squares Support Vector Machine (LSSVM) optimized by a chaotic ions motion algorithm is presented. To achieve the learning and generalization needed for excellent performance, a hybrid kernel function, constructed from a local kernel, the Radial Basis Function (RBF) kernel, and a global kernel, the polynomial kernel, is incorporated into the Least Squares Support Vector Machine. The chaotic ions motion algorithm is introduced to find the best hyper-parameters of the Least Squares Support Vector Machine. Temperature data from a calibration experiment are used to validate the proposed method. With attention on algorithm robustness and engineering applications, the compensation results show that the proposed scheme outperforms other compared methods on several performance measures, such as maximum absolute relative error, minimum absolute relative error, and the mean and variance of the averaged value over fifty runs. Furthermore, the proposed temperature compensation approach lays a foundation for more extensive research.
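
    The hybrid kernel itself is simple to state: a weighted combination of a local RBF kernel and a global polynomial kernel, which is again a valid positive semi-definite kernel. A minimal sketch follows; the weight w, the kernel hyper-parameters, and the toy input are assumptions, and neither the LSSVM solver nor the chaotic ions motion optimiser is shown.

```python
# Hedged sketch of the hybrid (RBF + polynomial) kernel Gram matrix.
import numpy as np

def rbf_kernel(X1, X2, sigma=1.0):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def poly_kernel(X1, X2, degree=2, c=1.0):
    return (X1 @ X2.T + c) ** degree

def hybrid_kernel(X1, X2, w=0.7, sigma=1.0, degree=2, c=1.0):
    # w weights the local (RBF) part, 1 - w the global (polynomial) part.
    return w * rbf_kernel(X1, X2, sigma) + (1 - w) * poly_kernel(X1, X2, degree, c)

X = np.random.default_rng(0).normal(size=(10, 2))   # e.g. (temperature, raw sensor output)
K = hybrid_kernel(X, X)
print(K.shape, np.allclose(K, K.T))                 # (10, 10) True
```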

  19. Pragmatic Graph Rewriting Modifications

    OpenAIRE

    Rodgers, Peter; Vidal, Natalia

    1999-01-01

    We present new pragmatic constructs for easing programming in visual graph rewriting programming languages. The first is a modification to the rewriting process for nodes of the host graph, where nodes specified as 'Once Only' in the LHS of a rewrite match at most once with a corresponding node in the host graph. This reduces the previously common use of tags to indicate the progress of matching in the graph. The second modification controls the application of LHS graphs, where those specified a...

  20. An Internal Data Non-hiding Type Real-time Kernel and its Application to the Mechatronics Controller

    Science.gov (United States)

    Yoshida, Toshio

    For the mechatronics equipment controller that controls robots and machine tools, high-speed motion control processing is essential. The software system of the controller, like that of other embedded systems, is composed of three software layers on the dedicated hardware: a real-time kernel layer, a middleware layer, and an application software layer. The application layer at the top is composed of a large number of tasks, and the application function of the system is realized by the cooperation between these tasks. In this paper we propose an internal data non-hiding type real-time kernel in which customizing the task control is possible only by changes in the program code on the task side, without any changes in the program code of the real-time kernel. It is necessary to reduce the overhead caused by the real-time kernel task control in order to speed up the motion control of the mechatronics equipment, and for this, customizing the task control function is needed. We developed the internal data non-hiding type real-time kernel ZRK to evaluate this method, and applied it to the control of a multi-system automatic lathe. The effect of the speed-up of the task cooperation processing was confirmed by combined task control processing in the task-side program code using the internal data non-hiding type real-time kernel ZRK.

  1. Application-Specific Graph Sampling for Frequent Subgraph Mining and Community Detection

    Energy Technology Data Exchange (ETDEWEB)

    Purohit, Sumit; Choudhury, Sutanay; Holder, Lawrence B.

    2017-12-11

    Graph mining is an important data analysis methodology, but struggles as the input graph size increases. The scalability and usability challenges posed by such large graphs make it imperative to sample the input graph and reduce its size. The critical challenge in sampling is to identify the appropriate algorithm to ensure the resulting analysis does not suffer heavily from the data reduction. Predicting the expected performance degradation for a given graph and sampling algorithm is also useful. In this paper, we present different sampling approaches for graph mining applications such as Frequent Subgraph Mining (FSM) and Community Detection (CD). We explore graph metrics such as PageRank, Triangles, and Diversity to sample a graph and conclude that for heterogeneous graphs Triangles and Diversity perform better than degree-based metrics. We also present two new sampling variations for targeted graph mining applications. We present empirical results to show that knowledge of the target application, along with input graph properties, can be used to select the best sampling algorithm. We also conclude that performance degradation is an abrupt, rather than gradual, phenomenon as the sample size decreases. We present empirical results to show that the performance degradation follows a logistic function.
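
    As a toy illustration of metric-driven sampling, the sketch below keeps the top-scoring nodes under either PageRank or triangle count and returns the induced subgraph. The 20% cut-off, the Barabási-Albert test graph, and the function name are assumptions; the paper's targeted sampling variations are more involved.

```python
# Illustrative sketch: sample a subgraph by keeping nodes with the highest
# PageRank or triangle-count scores.
import networkx as nx

def metric_sample(G, metric="pagerank", keep_fraction=0.2):
    if metric == "pagerank":
        scores = nx.pagerank(G)
    elif metric == "triangles":
        scores = nx.triangles(G)
    else:
        raise ValueError("unknown metric")
    k = max(1, int(keep_fraction * G.number_of_nodes()))
    top_nodes = sorted(scores, key=scores.get, reverse=True)[:k]
    return G.subgraph(top_nodes).copy()

G = nx.barabasi_albert_graph(1000, 3, seed=0)
S = metric_sample(G, metric="triangles", keep_fraction=0.2)
print(S.number_of_nodes(), S.number_of_edges())
```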

  2. Simplicial complexes of graphs

    CERN Document Server

    Jonsson, Jakob

    2008-01-01

    A graph complex is a finite family of graphs closed under deletion of edges. Graph complexes show up naturally in many different areas of mathematics, including commutative algebra, geometry, and knot theory. Identifying each graph with its edge set, one may view a graph complex as a simplicial complex and hence interpret it as a geometric object. This volume examines topological properties of graph complexes, focusing on homotopy type and homology. Many of the proofs are based on Robin Forman's discrete version of Morse theory. As a byproduct, this volume also provides a loosely defined toolbox for attacking problems in topological combinatorics via discrete Morse theory. In terms of simplicity and power, arguably the most efficient tool is Forman's divide and conquer approach via decision trees; it is successfully applied to a large number of graph and digraph complexes.

  3. Upper bound for the span of pencil graph

    Science.gov (United States)

    Parvathi, N.; Vimala Rani, A.

    2018-04-01

    An L(2,1)-coloring (also called radio coloring or λ-coloring) of a graph is a function f from the vertex set V(G) to the set of all nonnegative integers such that |f(x) ‑ f(y)| ≥ 2 if d(x,y) = 1 and |f(x) ‑ f(y)| ≥ 1 if d(x,y) = 2, where d(x,y) denotes the distance between x and y in G. The L(2,1)-coloring number or span number λ(G) of G is the smallest number k such that G has an L(2,1)-coloring with max{f(v) : v ∈ V(G)} = k. The minimum number of colors used in an L(2,1)-coloring is called the radio number rn(G) of G (a positive integer) [2]. Griggs and Yeh conjectured that λ(G) ≤ Δ² for any simple graph with maximum degree Δ > 2. In this article, we consider some special graphs, namely the n-sunlet graph and the pencil graph families, and derive upper bounds for λ(G) and rn(G).
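
    A small check of the two distance conditions can make the definition concrete; the helper below verifies whether a labelling is a valid L(2,1)-coloring and reports its span. The cycle-graph example and labelling are illustrative assumptions, not graphs studied in the article.

```python
# Sketch: verify the L(2,1) conditions |f(x)-f(y)| >= 2 at distance 1 and >= 1 at distance 2.
import networkx as nx

def is_L21_coloring(G, f):
    dist = dict(nx.all_pairs_shortest_path_length(G))
    for u in G:
        for v in G:
            if u >= v:
                continue
            d = dist[u].get(v)
            if d == 1 and abs(f[u] - f[v]) < 2:
                return False
            if d == 2 and abs(f[u] - f[v]) < 1:
                return False
    return True

G = nx.cycle_graph(5)                           # C5, for which lambda = 4
f = {0: 0, 1: 2, 2: 4, 3: 1, 4: 3}
print(is_L21_coloring(G, f), max(f.values()))   # True 4
```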

  4. The STAPL Parallel Graph Library

    KAUST Repository

    Harshvardhan,

    2013-01-01

    This paper describes the stapl Parallel Graph Library, a high-level framework that abstracts the user from data-distribution and parallelism details and allows them to concentrate on parallel graph algorithm development. It includes a customizable distributed graph container and a collection of commonly used parallel graph algorithms. The library introduces pGraph pViews that separate algorithm design from the container implementation. It supports three graph processing algorithmic paradigms, level-synchronous, asynchronous and coarse-grained, and provides common graph algorithms based on them. Experimental results demonstrate improved scalability in performance and data size over existing graph libraries on more than 16,000 cores and on internet-scale graphs containing over 16 billion vertices and 250 billion edges. © Springer-Verlag Berlin Heidelberg 2013.

  5. Single pass kernel k-means clustering method

    Indian Academy of Sciences (India)

    This paper proposes a simple and faster version of the kernel k-means clustering method. It has been considered as an important tool ... On the other hand, kernel-based clustering methods, like kernel k-means clustering, ... available at the UCI machine learning repository (Murphy 1994). ... All the data sets have only numeric valued features.
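
    For orientation, a plain (batch) kernel k-means sketch is given below: each point is assigned to the cluster whose feature-space centroid is nearest, with all distances computed from the kernel matrix alone. The single-pass speed-ups proposed in the paper are not reproduced, and the RBF kernel and toy data are assumptions.

```python
# Hedged sketch of batch kernel k-means using only the Gram matrix K.
import numpy as np

def rbf_kernel(X, gamma=1.0):
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_kmeans(K, k, n_iter=50, seed=0):
    n = K.shape[0]
    labels = np.random.default_rng(seed).integers(k, size=n)
    for _ in range(n_iter):
        dist = np.zeros((n, k))
        for c in range(k):
            idx = np.where(labels == c)[0]
            if len(idx) == 0:
                dist[:, c] = np.inf
                continue
            # ||phi(x_i) - m_c||^2 = K_ii - 2 mean_j K_ij + mean_{j,l} K_jl  (j, l in cluster c)
            dist[:, c] = np.diag(K) - 2 * K[:, idx].mean(axis=1) + K[np.ix_(idx, idx)].mean()
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels

X = np.vstack([np.random.default_rng(1).normal(m, 0.3, (30, 2)) for m in (0, 3)])
print(kernel_kmeans(rbf_kernel(X, gamma=0.5), k=2))
```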

  6. Topic Model for Graph Mining.

    Science.gov (United States)

    Xuan, Junyu; Lu, Jie; Zhang, Guangquan; Luo, Xiangfeng

    2015-12-01

    Graph mining has been a popular research area because of its numerous application scenarios. Many unstructured and structured data can be represented as graphs, such as documents, chemical molecular structures, and images. However, an issue with current research on graphs is that existing methods cannot adequately discover the topics hidden in graph-structured data, which could benefit both unsupervised and supervised learning on graphs. Although topic models have proved to be very successful in discovering latent topics, the standard topic models cannot be directly applied to graph-structured data due to the "bag-of-words" assumption. In this paper, an innovative graph topic model (GTM) is proposed to address this issue, which uses Bernoulli distributions to model the edges between nodes in a graph. It can, therefore, make the edges in a graph contribute to latent topic discovery and further improve the accuracy of the supervised and unsupervised learning of graphs. The experimental results on two different types of graph datasets show that the proposed GTM outperforms the latent Dirichlet allocation on classification by using the unveiled topics of these two models to represent graphs.

  7. KNBD: A Remote Kernel Block Server for Linux

    Science.gov (United States)

    Becker, Jeff

    1999-01-01

    I am developing a prototype of a Linux remote disk block server whose purpose is to serve as a lower level component of a parallel file system. Parallel file systems are an important component of high performance supercomputers and clusters. Although supercomputer vendors such as SGI and IBM have their own custom solutions, there has been a void and hence a demand for such a system on Beowulf-type PC Clusters. Recently, the Parallel Virtual File System (PVFS) project at Clemson University has begun to address this need (1). Although their system provides much of the functionality of (and indeed was inspired by) the equivalent file systems in the commercial supercomputer market, their system is all in user-space. Migrating their I/O services to the kernel could provide a performance boost, by obviating the need for expensive system calls. Thanks to Pavel Machek, the Linux kernel has provided the network block device (2) with kernels 2.1.101 and later. You can configure this block device to redirect reads and writes to a remote machine's disk. This can be used as a building block for constructing a striped file system across several nodes.

  8. Modern graph theory

    CERN Document Server

    Bollobás, Béla

    1998-01-01

    The time has now come when graph theory should be part of the education of every serious student of mathematics and computer science, both for its own sake and to enhance the appreciation of mathematics as a whole. This book is an in-depth account of graph theory, written with such a student in mind; it reflects the current state of the subject and emphasizes connections with other branches of pure mathematics. The volume grew out of the author's earlier book, Graph Theory -- An Introductory Course, but its length is well over twice that of its predecessor, allowing it to reveal many exciting new developments in the subject. Recognizing that graph theory is one of several courses competing for the attention of a student, the book contains extensive descriptive passages designed to convey the flavor of the subject and to arouse interest. In addition to a modern treatment of the classical areas of graph theory such as coloring, matching, extremal theory, and algebraic graph theory, the book presents a detailed ...

  9. Phylodynamic Inference with Kernel ABC and Its Application to HIV Epidemiology.

    Science.gov (United States)

    Poon, Art F Y

    2015-09-01

    The shapes of phylogenetic trees relating virus populations are determined by the adaptation of viruses within each host, and by the transmission of viruses among hosts. Phylodynamic inference attempts to reverse this flow of information, estimating parameters of these processes from the shape of a virus phylogeny reconstructed from a sample of genetic sequences from the epidemic. A key challenge to phylodynamic inference is quantifying the similarity between two trees in an efficient and comprehensive way. In this study, I demonstrate that a new distance measure, based on a subset tree kernel function from computational linguistics, confers a significant improvement over previous measures of tree shape for classifying trees generated under different epidemiological scenarios. Next, I incorporate this kernel-based distance measure into an approximate Bayesian computation (ABC) framework for phylodynamic inference. ABC bypasses the need for an analytical solution of model likelihood, as it only requires the ability to simulate data from the model. I validate this "kernel-ABC" method for phylodynamic inference by estimating parameters from data simulated under a simple epidemiological model. Results indicate that kernel-ABC attained greater accuracy for parameters associated with virus transmission than leading software on the same data sets. Finally, I apply the kernel-ABC framework to study a recent outbreak of a recombinant HIV subtype in China. Kernel-ABC provides a versatile framework for phylodynamic inference because it can fit a broader range of models than methods that rely on the computation of exact likelihoods. © The Author 2015. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.

  10. A graph edit dictionary for correcting errors in roof topology graphs reconstructed from point clouds

    Science.gov (United States)

    Xiong, B.; Oude Elberink, S.; Vosselman, G.

    2014-07-01

    In the task of 3D building model reconstruction from point clouds we face the problem of recovering a roof topology graph in the presence of noise, small roof faces and low point densities. Errors in roof topology graphs will seriously affect the final modelling results. The aim of this research is to automatically correct these errors. We define the graph correction as a graph-to-graph problem, similar to the spelling correction problem (also called the string-to-string problem). The graph correction is more complex than string correction, as the graphs are 2D while strings are only 1D. We design a strategy based on a dictionary of graph edit operations to automatically identify and correct the errors in the input graph. For each type of error the graph edit dictionary stores a representative erroneous subgraph as well as the corrected version. As an erroneous roof topology graph may contain several errors, a heuristic search is applied to find the optimum sequence of graph edits to correct the errors one by one. The graph edit dictionary can be expanded to include entries needed to cope with errors that were previously not encountered. Experiments show that the dictionary with only fifteen entries already properly corrects one quarter of erroneous graphs in about 4500 buildings, and even half of the erroneous graphs in one test area, achieving as high as a 95% acceptance rate of the reconstructed models.

  11. A novel conductivity mechanism of highly disordered carbon systems based on an investigation of graph zeta function

    Science.gov (United States)

    Matsutani, Shigeki; Sato, Iwao

    2017-09-01

    In the previous report (Matsutani and Suzuki, 2000 [21]), by proposing a mechanism under which electric conductivity is caused by activational hopping conduction with the Wigner surmise of the level statistics, the temperature dependence of the electronic conductivity of a highly disordered carbon system was evaluated, including an apparent metal-insulator transition. Since the system consists of small pieces of graphite, it was assumed that the level statistics appear because of quantum chaotic behavior in each graphite granule. In this article, we revise this assumption and show another origin of the Wigner surmise, which is more natural for the carbon system, based on a recent investigation of the graph zeta function in graph theory. Our method can be applied to the statistical treatment of the electronic properties of randomized molecular systems in general.

  12. Relationship between attenuation coefficients and dose-spread kernels

    International Nuclear Information System (INIS)

    Boyer, A.L.

    1988-01-01

    Dose-spread kernels can be used to calculate the dose distribution in a photon beam by convolving the kernel with the primary fluence distribution. The theoretical relationships between various types and components of dose-spread kernels relative to photon attenuation coefficients are explored. These relations can be valuable as checks on the conservation of energy by dose-spread kernels calculated by analytic or Monte Carlo methods
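
    The convolution relation mentioned above can be illustrated in a few lines: the dose is approximated as the primary fluence convolved with a dose-spread kernel, and a kernel normalised to unit integral conserves the deposited energy. The Gaussian kernel shape, grid, and field size below are purely illustrative assumptions, not a clinically meaningful kernel.

```python
# Sketch: dose = primary fluence convolved with a dose-spread kernel (FFT convolution).
import numpy as np
from scipy.signal import fftconvolve

fluence = np.zeros((200, 200))        # 1 mm grid; a 10 x 10 cm open "field"
fluence[50:150, 50:150] = 1.0

x = np.arange(-30, 31)
xx, yy = np.meshgrid(x, x)
kernel = np.exp(-(xx**2 + yy**2) / (2 * 8.0**2))   # illustrative isotropic kernel
kernel /= kernel.sum()                              # unit integral -> energy conservation

dose = fftconvolve(fluence, kernel, mode="same")
print(dose.max(), dose.sum() / fluence.sum())       # ~1 inside the field, total energy preserved
```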

  13. Mixture Density Mercer Kernels: A Method to Learn Kernels

    Data.gov (United States)

    National Aeronautics and Space Administration — This paper presents a method of generating Mercer Kernels from an ensemble of probabilistic mixture models, where each mixture model is generated from a Bayesian...

  14. Reproducibility of graph metrics in fMRI networks

    Directory of Open Access Journals (Sweden)

    Qawi K Telesford

    2010-12-01

    Full Text Available The reliability of graph metrics calculated in network analysis is essential to the interpretation of complex network organization. These graph metrics are used to deduce the small-world properties in networks. In this study, we investigated the test-retest reliability of graph metrics from functional magnetic resonance imaging (fMRI) data collected for two runs in 45 healthy older adults. Graph metrics were calculated on data for both runs and compared using intraclass correlation coefficient (ICC) statistics and Bland-Altman (BA) plots. ICC scores describe the level of absolute agreement between two measurements and provide a measure of reproducibility. For mean graph metrics, ICC scores were high for clustering coefficient (ICC=0.86), global efficiency (ICC=0.83), path length (ICC=0.79), and local efficiency (ICC=0.75); the ICC score for degree was found to be low (ICC=0.29). ICC scores were also used to generate reproducibility maps in brain space to test voxel-wise reproducibility for unsmoothed and smoothed data. Reproducibility was uniform across the brain for global efficiency and path length, but was only high in network hubs for clustering coefficient, local efficiency and degree. BA plots were used to test the measurement repeatability of all graph metrics. All graph metrics fell within the limits for repeatability. Together, these results suggest that, with the exception of degree, mean graph metrics are reproducible and suitable for clinical studies. Further exploration is warranted to better understand reproducibility across the brain on a voxel-wise basis.

  15. Introduction to quantum graphs

    CERN Document Server

    Berkolaiko, Gregory

    2012-01-01

    A "quantum graph" is a graph considered as a one-dimensional complex and equipped with a differential operator ("Hamiltonian"). Quantum graphs arise naturally as simplified models in mathematics, physics, chemistry, and engineering when one considers propagation of waves of various nature through a quasi-one-dimensional (e.g., "meso-" or "nano-scale") system that looks like a thin neighborhood of a graph. Works that currently would be classified as discussing quantum graphs have been appearing since at least the 1930s, and since then, quantum graphs techniques have been applied successfully in various areas of mathematical physics, mathematics in general and its applications. One can mention, for instance, dynamical systems theory, control theory, quantum chaos, Anderson localization, microelectronics, photonic crystals, physical chemistry, nano-sciences, superconductivity theory, etc. Quantum graphs present many non-trivial mathematical challenges, which makes them dear to a mathematician's heart. Work on qu...

  16. Interpolation of Missing Precipitation Data Using Kernel Estimations for Hydrologic Modeling

    Directory of Open Access Journals (Sweden)

    Hyojin Lee

    2015-01-01

    Full Text Available Precipitation is the main factor that drives hydrologic modeling; therefore, missing precipitation data can cause malfunctions in hydrologic modeling. Although interpolation of missing precipitation data is recognized as an important research topic, only a few methods follow a regression approach. In this study, daily precipitation data were interpolated using five different kernel functions, namely, Epanechnikov, Quartic, Triweight, Tricube, and Cosine, to estimate missing precipitation data. This study also presents an assessment that compares estimation of missing precipitation data through Kth nearest neighborhood (KNN) regression to the five different kernel estimations and their performance in simulating streamflow using the Soil Water Assessment Tool (SWAT) hydrologic model. The results show that the kernel approaches provide higher quality interpolation of precipitation data compared with the KNN regression approach, in terms of both statistical data assessment and hydrologic modeling performance.
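
    A minimal sketch of the kind of estimator involved is shown below: a Nadaraya-Watson style weighted average of neighbouring gauges, with Epanechnikov weights decaying with distance. The station coordinates, values, and bandwidth are illustrative assumptions, not data or settings from the study.

```python
# Hedged sketch: Epanechnikov-kernel estimate of a missing precipitation value.
import numpy as np

def epanechnikov(u):
    return np.where(np.abs(u) <= 1, 0.75 * (1 - u**2), 0.0)

def kernel_estimate(target_xy, station_xy, station_precip, bandwidth):
    d = np.linalg.norm(station_xy - target_xy, axis=1)
    w = epanechnikov(d / bandwidth)
    if w.sum() == 0:
        return np.nan                 # no station within the bandwidth
    return float((w * station_precip).sum() / w.sum())

stations = np.array([[0.0, 1.0], [2.0, 0.0], [1.5, 2.5], [4.0, 4.0]])   # km
precip   = np.array([12.0, 8.0, 15.0, 3.0])                             # mm/day
print(kernel_estimate(np.array([1.0, 1.0]), stations, precip, bandwidth=3.0))
```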

  17. Neuro-symbolic representation learning on biological knowledge graphs

    KAUST Repository

    Alshahrani, Mona

    2017-04-21

    Biological data and knowledge bases increasingly rely on Semantic Web technologies and the use of knowledge graphs for data integration, retrieval and federated queries. In the past years, feature learning methods that are applicable to graph-structured data are becoming available, but have not yet widely been applied and evaluated on structured biological knowledge. We develop a novel method for feature learning on biological knowledge graphs. Our method combines symbolic methods, in particular knowledge representation using symbolic logic and automated reasoning, with neural networks to generate embeddings of nodes that encode for related information within knowledge graphs. Through the use of symbolic logic, these embeddings contain both explicit and implicit information. We apply these embeddings to the prediction of edges in the knowledge graph representing problems of function prediction, finding candidate genes of diseases, protein-protein interactions, or drug target relations, and demonstrate performance that matches and sometimes outperforms traditional approaches based on manually crafted features. Our method can be applied to any biological knowledge graph, and will thereby open up the increasing amount of Semantic Web based knowledge bases in biology to use in machine learning and data analytics. Availability: https://github.com/bio-ontology-research-group/walking-rdf-and-owl. Contact: robert.hoehndorf@kaust.edu.sa. Supplementary data are available at Bioinformatics online.

  18. Graph Generator Survey

    Energy Technology Data Exchange (ETDEWEB)

    Lothian, Joshua [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Powers, Sarah S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Sullivan, Blair D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Baker, Matthew B. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Schrock, Jonathan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Poole, Stephen W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2013-10-01

    The benchmarking effort within the Extreme Scale Systems Center at Oak Ridge National Laboratory seeks to provide High Performance Computing benchmarks and test suites of interest to the DoD sponsor. The work described in this report is a part of the effort focusing on graph generation. A previously developed benchmark, SystemBurn, allowed the emulation of different application behavior profiles within a single framework. To complement this effort, similar capabilities are desired for graph-centric problems. This report examines existing synthetic graph generator implementations in preparation for further study on the properties of their generated synthetic graphs.

  19. Loose Graph Simulations

    DEFF Research Database (Denmark)

    Mansutti, Alessio; Miculan, Marino; Peressotti, Marco

    2017-01-01

    We introduce loose graph simulations (LGS), a new notion about labelled graphs which subsumes in an intuitive and natural way subgraph isomorphism (SGI), regular language pattern matching (RLPM) and graph simulation (GS). Being a unification of all these notions, LGS allows us to express directly also problems which are "mixed" instances of previous ones, and hence which would not fit easily in any of them. After the definition and some examples, we show that the problem of finding loose graph simulations is NP-complete, we provide formal translations of SGI, RLPM, and GS into LGSs, and we give...

  20. An OSKit-Based Implementation of Least Privilege Separation Kernel Memory Partitioning

    National Research Council Canada - National Science Library

    Carter, Donald W

    2007-01-01

    .... This work is to build a working prototype of selected TCX kernel functionality. The prototype is constructed based on OSKit, and restricts information flow between memory partitions and resource accesses...

  1. Kernel based subspace projection of near infrared hyperspectral images of maize kernels

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Arngren, Morten; Hansen, Per Waaben

    2009-01-01

    In this paper we present an exploratory analysis of hyperspectral 900-1700 nm images of maize kernels. The imaging device is a line scanning hyperspectral camera using a broadband NIR illumination. In order to explore the hyperspectral data we compare a series of subspace projection methods including principal component analysis and maximum autocorrelation factor analysis. The latter utilizes the fact that interesting phenomena in images exhibit spatial autocorrelation. However, linear projections often fail to grasp the underlying variability of the data. Therefore we propose to use kernel versions of these projections; the kernel maximum autocorrelation factor transform outperforms the linear methods as well as kernel principal components in producing interesting projections of the data.

  2. The Classification of Diabetes Mellitus Using Kernel k-means

    Science.gov (United States)

    Alamsyah, M.; Nafisah, Z.; Prayitno, E.; Afida, A. M.; Imah, E. M.

    2018-01-01

    Diabetes mellitus is a metabolic disorder characterized by chronic hyperglycemia. Automatic detection of diabetes mellitus is still challenging. This study detected diabetes mellitus using the kernel k-means algorithm. Kernel k-means is an algorithm developed from the k-means algorithm. Kernel k-means uses kernel learning, which is able to handle non-linearly separable data; this is where it differs from the common k-means. The performance of kernel k-means in detecting diabetes mellitus is also compared with the SOM algorithm. The experimental results show that kernel k-means has good performance and performs much better than SOM.

  3. Evaluating the Application of Tissue-Specific Dose Kernels Instead of Water Dose Kernels in Internal Dosimetry : A Monte Carlo Study

    NARCIS (Netherlands)

    Moghadam, Maryam Khazaee; Asl, Alireza Kamali; Geramifar, Parham; Zaidi, Habib

    2016-01-01

    Purpose: The aim of this work is to evaluate the application of tissue-specific dose kernels instead of water dose kernels to improve the accuracy of patient-specific dosimetry by taking tissue heterogeneities into consideration. Materials and Methods: Tissue-specific dose point kernels (DPKs) and

  4. Exploring the brains of Baduk (Go) experts: gray matter morphometry, resting-state functional connectivity, and graph theoretical analysis

    Directory of Open Access Journals (Sweden)

    Wi Hoon eJung

    2013-10-01

    Full Text Available One major characteristic of experts is intuitive judgment, which is an automatic process whereby patterns stored in memory through long-term training are recognized. Indeed, long-term training may influence brain structure and function. A recent study revealed that chess experts at rest showed differences in structure and functional connectivity (FC) in the head of the caudate, which is associated with rapid best next-move generation. However, less is known about the structure and function of the brains of Baduk experts compared with those of experts in other strategy games. Therefore, we performed voxel-based morphometry and FC analyses in Baduk experts to investigate structural brain differences and to clarify the influence of these differences on functional interactions. We also conducted graph theoretical analysis to explore the topological organization of whole-brain functional networks. Compared to novices, Baduk experts exhibited decreased and increased gray matter volume in the amygdala and nucleus accumbens, respectively. We also found increased FC between the amygdala and medial orbitofrontal cortex and decreased FC between the nucleus accumbens and medial prefrontal cortex. Further graph theoretical analysis revealed differences in measures of the integration of the network and in the regional nodal characteristics of various brain regions activated during Baduk. This study provides evidence for structural and functional differences as well as altered topological organization of the whole-brain functional networks in Baduk experts. Our findings also offer novel suggestions about the cognitive mechanisms behind Baduk expertise, which involves intuitive decision-making mediated by somatic marker circuitry and visuospatial processing.

  5. Difference between standard and quasi-conformal BFKL kernels

    International Nuclear Information System (INIS)

    Fadin, V.S.; Fiore, R.; Papa, A.

    2012-01-01

    As it was recently shown, the colour singlet BFKL kernel, taken in Möbius representation in the space of impact parameters, can be written in quasi-conformal shape, which is unbelievably simple compared with the conventional form of the BFKL kernel in momentum space. It was also proved that the total kernel is completely defined by its Möbius representation. In this paper we calculated the difference between standard and quasi-conformal BFKL kernels in momentum space and discovered that it is rather simple. Therefore we come to the conclusion that the simplicity of the quasi-conformal kernel is caused mainly by using the impact parameter space.

  6. Autoregressive Moving Average Graph Filtering

    OpenAIRE

    Isufi, Elvin; Loukas, Andreas; Simonetto, Andrea; Leus, Geert

    2016-01-01

    One of the cornerstones of the field of signal processing on graphs are graph filters, direct analogues of classical filters, but intended for signals defined on graphs. This work brings forth new insights on the distributed graph filtering problem. We design a family of autoregressive moving average (ARMA) recursions, which (i) are able to approximate any desired graph frequency response, and (ii) give exact solutions for tasks such as graph signal denoising and interpolation. The design phi...
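
    The first-order ARMA graph filter can be sketched in a few lines: a distributed recursion whose fixed point implements a rational function of the graph shift operator. The graph, shift operator, and coefficients below are assumptions chosen so that the recursion converges; the design procedure from the paper is not shown.

```python
# Illustrative sketch: ARMA_1 graph filter run as the recursion y <- psi*S@y + phi*x.
import numpy as np
import networkx as nx

G = nx.erdos_renyi_graph(50, 0.1, seed=0)
L = nx.normalized_laplacian_matrix(G).toarray()
S = np.eye(50) - L                       # shift operator with spectrum in [-1, 1]

psi, phi = 0.5, 1.0                      # |psi| * rho(S) < 1 guarantees convergence
x = np.random.default_rng(0).normal(size=50)    # input graph signal

y = np.zeros(50)
for _ in range(100):                     # each step only needs neighbour exchanges
    y = psi * (S @ y) + phi * x

closed_form = phi * np.linalg.solve(np.eye(50) - psi * S, x)
print(np.abs(y - closed_form).max())     # ~0: recursion reaches the ARMA_1 response
```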

  7. On cyclic orthogonal double covers of circulant graphs by special infinite graphs

    Directory of Open Access Journals (Sweden)

    R. El-Shanawany

    2017-12-01

    Full Text Available In this article, a technique to construct cyclic orthogonal double covers (CODCs) of regular circulant graphs by certain infinite graph classes such as complete bipartite and tripartite graphs and the disjoint union of butterfly and K1,2n−10 is introduced.

  8. A laser optical method for detecting corn kernel defects

    Energy Technology Data Exchange (ETDEWEB)

    Gunasekaran, S.; Paulsen, M. R.; Shove, G. C.

    1984-01-01

    An opto-electronic instrument was developed to examine individual corn kernels and detect various kernel defects according to reflectance differences. A low power helium-neon (He-Ne) laser (632.8 nm, red light) was used as the light source in the instrument. Reflectance from good and defective parts of corn kernel surfaces differed by approximately 40%. Broken, chipped, and starch-cracked kernels were detected with nearly 100% accuracy; while surface-split kernels were detected with about 80% accuracy. (author)

  9. Quantum walks on quotient graphs

    International Nuclear Information System (INIS)

    Krovi, Hari; Brun, Todd A.

    2007-01-01

    A discrete-time quantum walk on a graph Γ is the repeated application of a unitary evolution operator to a Hilbert space corresponding to the graph. If this unitary evolution operator has an associated group of symmetries, then for certain initial states the walk will be confined to a subspace of the original Hilbert space. Symmetries of the original graph, given by its automorphism group, can be inherited by the evolution operator. We show that a quantum walk confined to the subspace corresponding to this symmetry group can be seen as a different quantum walk on a smaller quotient graph. We give an explicit construction of the quotient graph for any subgroup H of the automorphism group and illustrate it with examples. The automorphisms of the quotient graph which are inherited from the original graph are the original automorphism group modulo the subgroup H used to construct it. The quotient graph is constructed by removing the symmetries of the subgroup H from the original graph. We then analyze the behavior of hitting times on quotient graphs. Hitting time is the average time it takes a walk to reach a given final vertex from a given initial vertex. It has been shown in earlier work [Phys. Rev. A 74, 042334 (2006)] that the hitting time for certain initial states of a quantum walks can be infinite, in contrast to classical random walks. We give a condition which determines whether the quotient graph has infinite hitting times given that they exist in the original graph. We apply this condition for the examples discussed and determine which quotient graphs have infinite hitting times. All known examples of quantum walks with hitting times which are short compared to classical random walks correspond to systems with quotient graphs much smaller than the original graph; we conjecture that the existence of a small quotient graph with finite hitting times is necessary for a walk to exhibit a quantum speedup

  10. Kernel maximum autocorrelation factor and minimum noise fraction transformations

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2010-01-01

    in hyperspectral HyMap scanner data covering a small agricultural area, and 3) maize kernel inspection. In the cases shown, the kernel MAF/MNF transformation performs better than its linear counterpart as well as linear and kernel PCA. The leading kernel MAF/MNF variates seem to possess the ability to adapt...

  11. spa: Semi-Supervised Semi-Parametric Graph-Based Estimation in R

    Directory of Open Access Journals (Sweden)

    Mark Culp

    2011-04-01

    Full Text Available In this paper, we present an R package that combines feature-based (X) data and graph-based (G) data for prediction of the response Y. In this particular case, Y is observed for a subset of the observations (labeled) and missing for the remainder (unlabeled). We examine an approach for fitting Y = Xβ + f(G) where β is a coefficient vector and f is a function over the vertices of the graph. The procedure is semi-supervised in nature (trained on the labeled and unlabeled sets), requiring iterative algorithms for fitting this estimate. The package provides several key functions for fitting and evaluating an estimator of this type. The package is illustrated on a text analysis data set, where the observations are text documents (papers), the response is the category of paper (either applied or theoretical statistics), the X information is the name of the journal in which the paper resides, and the graph is a co-citation network, with each vertex an observation and each edge the number of times that the two papers cite a common paper. An application involving classification of protein location using a protein interaction graph and an application involving classification on a manifold with part of the feature data converted to a graph are also presented.
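
    As a rough sketch of how such a joint fit can proceed (illustrative only, and not necessarily the algorithm used by the spa package), one can regress the labeled responses on X and then propagate the residuals over the graph to the unlabeled vertices. All names and the propagation rule below are assumptions.

```python
import numpy as np

def fit_x_plus_graph(X, W, y, labeled, alpha=0.8, n_prop=100):
    """Two-stage sketch for Y ~ X @ beta + f(G) (illustrative, not the spa algorithm):
    1) least-squares fit of beta on the labeled observations;
    2) propagation of the labeled residuals over the graph to obtain f on every vertex."""
    beta, *_ = np.linalg.lstsq(X[labeled], y[labeled], rcond=None)
    resid = np.zeros(X.shape[0])
    resid[labeled] = (y - X @ beta)[labeled]
    P = W / np.maximum(W.sum(axis=1, keepdims=True), 1e-12)   # row-normalized adjacency
    f = resid.copy()
    for _ in range(n_prop):
        f = alpha * (P @ f)
        f[labeled] = resid[labeled]        # clamp labeled residuals (harmonic-function style)
    return beta, X @ beta + f              # coefficients and fitted values on all vertices

# Toy usage: 6 observations, 2 features, a chain graph, first 4 responses observed.
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 2))
W = np.zeros((6, 6))
for i in range(5):
    W[i, i + 1] = W[i + 1, i] = 1.0
y = X @ np.array([1.0, -2.0]) + rng.normal(scale=0.1, size=6)
labeled = np.array([True, True, True, True, False, False])
beta, fitted = fit_x_plus_graph(X, W, y, labeled)
print("beta:", np.round(beta, 2))
print("predictions for the two unlabeled vertices:", np.round(fitted[~labeled], 2))
```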

  12. Fundamentals of algebraic graph transformation

    CERN Document Server

    Ehrig, Hartmut; Prange, Ulrike; Taentzer, Gabriele

    2006-01-01

    Graphs are widely used to represent structural information in the form of objects and connections between them. Graph transformation is the rule-based manipulation of graphs, an increasingly important concept in computer science and related fields. This is the first textbook treatment of the algebraic approach to graph transformation, based on algebraic structures and category theory. Part I is an introduction to the classical case of graph and typed graph transformation. In Part II basic and advanced results are first shown for an abstract form of replacement systems, so-called adhesive high-level replacement systems based on category theory, and are then instantiated to several forms of graph and Petri net transformation systems. Part III develops typed attributed graph transformation, a technique of key relevance in the modeling of visual languages and in model transformation. Part IV contains a practical case study on model transformation and a presentation of the AGG (attributed graph grammar) tool envir...

  13. Identification of Fusarium damaged wheat kernels using image analysis

    Directory of Open Access Journals (Sweden)

    Ondřej Jirsa

    2011-01-01

    Full Text Available Visual evaluation of kernels damaged by Fusarium spp. pathogens is labour intensive and, due to its subjective approach, can lead to inconsistencies. Digital imaging technology combined with appropriate statistical methods can provide a much faster and more accurate evaluation of the proportion of visually scabby kernels. The aim of the present study was to develop a discrimination model to identify wheat kernels infected by Fusarium spp. using digital image analysis and statistical methods. Winter wheat kernels from field experiments were evaluated visually as healthy or damaged. Deoxynivalenol (DON) content was determined in individual kernels using an ELISA method. Images of individual kernels were produced using a digital camera on a dark background. Colour and shape descriptors were obtained by image analysis from the area representing the kernel. Healthy and damaged kernels differed significantly in DON content and kernel weight. Various combinations of individual shape and colour descriptors were examined during the development of the model using linear discriminant analysis. In addition to the basic descriptors of the RGB colour model (red, green, blue), very good classification was also obtained using hue from the HSL colour model (hue, saturation, luminance). The accuracy of classification using the developed discrimination model based on RGBH descriptors was 85 %. The shape descriptors themselves were not specific enough to distinguish individual kernels.

  14. Pure endmember extraction using robust kernel archetypoid analysis for hyperspectral imagery

    Science.gov (United States)

    Sun, Weiwei; Yang, Gang; Wu, Ke; Li, Weiyue; Zhang, Dianfa

    2017-09-01

    A robust kernel archetypoid analysis (RKADA) method is proposed to extract pure endmembers from hyperspectral imagery (HSI). The RKADA assumes that each pixel is a sparse linear mixture of all endmembers and each endmember corresponds to a real pixel in the image scene. First, it improves the regular archetypal analysis with a new binary sparse constraint, and the adoption of the kernel function constructs the principal convex hull in an infinite Hilbert space and enlarges the divergences between pairwise pixels. Second, the RKADA transfers the pure endmember extraction problem into an optimization problem by minimizing residual errors with the Huber loss function. The Huber loss function reduces the effects from big noises and outliers in the convergence procedure of RKADA and enhances the robustness of the optimization function. Third, random kernel sinks for fast kernel matrix approximation and a two-stage algorithm for optimizing initial pure endmembers are utilized to improve the computational efficiency of RKADA in realistic implementations. The optimization equation of RKADA is solved by using the block coordinate descent scheme and the desired pure endmembers are finally obtained. Six state-of-the-art pure endmember extraction methods are employed to make comparisons with the RKADA on both synthetic and real Cuprite HSI datasets, including three geometrical algorithms, vertex component analysis (VCA), alternative volume maximization (AVMAX) and orthogonal subspace projection (OSP), and three matrix factorization algorithms, the preconditioning for successive projection algorithm (PreSPA), hierarchical clustering based on rank-two nonnegative matrix factorization (H2NMF) and self-dictionary multiple measurement vector (SDMMV). Experimental results show that the RKADA outperforms all the six methods in terms of spectral angle distance (SAD) and root-mean-square error (RMSE). Moreover, the RKADA has short computational times in offline

  15. Unified heat kernel regression for diffusion, kernel smoothing and wavelets on manifolds and its application to mandible growth modeling in CT images.

    Science.gov (United States)

    Chung, Moo K; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K

    2015-05-01

    We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel method is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, the method is applied to characterize the localized growth pattern of mandible surfaces obtained in CT images between ages 0 and 20 by regressing the length of displacement vectors with respect to a surface template. Copyright © 2015 Elsevier B.V. All rights reserved.
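
    A schematic version of the weighted eigenfunction expansion can be written in a few lines, using a graph Laplacian as a discrete stand-in for the Laplace-Beltrami operator of the surface (an assumption made here purely for self-containment): project the signal onto the eigenvectors, damp each coefficient by exp(−λt), and resynthesize.

```python
import numpy as np

def heat_kernel_smooth(L, signal, t=0.5, k=None):
    """Heat-kernel smoothing via eigen-expansion: f_t = sum_i exp(-lam_i * t) <f, psi_i> psi_i.
    L is a graph (or mesh) Laplacian standing in for the Laplace-Beltrami operator."""
    lam, psi = np.linalg.eigh(L)           # eigenvalues lam_i and eigenvectors psi_i (columns)
    if k is not None:                       # optionally truncate to the k smoothest eigenfunctions
        lam, psi = lam[:k], psi[:, :k]
    coeffs = psi.T @ signal                 # expansion coefficients <f, psi_i>
    return psi @ (np.exp(-lam * t) * coeffs)

# Toy example: a noisy signal on a ring graph with 20 vertices.
n = 20
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
L = np.diag(A.sum(1)) - A
rng = np.random.default_rng(1)
f = np.sin(2 * np.pi * np.arange(n) / n) + 0.3 * rng.normal(size=n)
print(np.round(heat_kernel_smooth(L, f, t=1.0), 3))
```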

  16. Digital signal processing with kernel methods

    CERN Document Server

    Rojo-Alvarez, José Luis; Muñoz-Marí, Jordi; Camps-Valls, Gustavo

    2018-01-01

    A realistic and comprehensive review of joint approaches to machine learning and signal processing algorithms, with application to communications, multimedia, and biomedical engineering systems Digital Signal Processing with Kernel Methods reviews the milestones in the mixing of classical digital signal processing models and advanced kernel machines statistical learning tools. It explains the fundamental concepts from both fields of machine learning and signal processing so that readers can quickly get up to speed in order to begin developing the concepts and application software in their own research. Digital Signal Processing with Kernel Methods provides a comprehensive overview of kernel methods in signal processing, without restriction to any application field. It also offers example applications and detailed benchmarking experiments with real and synthetic datasets throughout. Readers can find further worked examples with Matlab source code on a website developed by the authors. * Presents the necess...

  17. Higher-Order Hybrid Gaussian Kernel in Meshsize Boosting Algorithm

    African Journals Online (AJOL)

    In this paper, we shall use a higher-order hybrid Gaussian kernel in a meshsize boosting algorithm for kernel density estimation. Bias reduction is guaranteed in this scheme, as in other existing schemes, but it uses the higher-order hybrid Gaussian kernel instead of the regular fixed kernels. A numerical verification of this scheme ...

  18. Adaptive Kernel In The Bootstrap Boosting Algorithm In KDE ...

    African Journals Online (AJOL)

    This paper proposes the use of an adaptive kernel in a bootstrap boosting algorithm for kernel density estimation. The algorithm is a bias reduction scheme, like other existing schemes, but uses an adaptive kernel instead of the regular fixed kernels. An empirical study for this scheme is conducted and the findings are comparatively ...
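
    Both abstracts above are truncated, but the underlying notion of an adaptive (variable-bandwidth) kernel density estimator can be illustrated with Abramson's square-root rule, in which a pilot fixed-bandwidth estimate sets a local bandwidth at each data point. This is a generic sketch, not the papers' boosting schemes; the bandwidth h0 and the sample are placeholders.

```python
import numpy as np

def gaussian_kde(x_grid, data, h):
    """Fixed-bandwidth Gaussian KDE evaluated at the points in x_grid."""
    u = (x_grid[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

def adaptive_kde(x_grid, data, h0):
    """Adaptive-kernel KDE with Abramson's square-root law (generic sketch, not a boosting scheme)."""
    pilot = gaussian_kde(data, data, h0)              # pilot density at the data points
    g = np.exp(np.mean(np.log(pilot)))                # geometric mean of the pilot values
    h_local = h0 * np.sqrt(g / pilot)                 # local bandwidths, smaller where density is high
    u = (x_grid[:, None] - data[None, :]) / h_local[None, :]
    k = np.exp(-0.5 * u ** 2) / (h_local[None, :] * np.sqrt(2 * np.pi))
    return k.mean(axis=1)

# Usage on a bimodal sample.
rng = np.random.default_rng(2)
data = np.concatenate([rng.normal(-2, 0.5, 200), rng.normal(2, 1.0, 200)])
x = np.linspace(-5, 6, 11)
print(np.round(adaptive_kde(x, data, h0=0.4), 3))
```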

  19. Kernel and wavelet density estimators on manifolds and more general metric spaces

    DEFF Research Database (Denmark)

    Cleanthous, G.; Georgiadis, Athanasios; Kerkyacharian, G.

    We consider the problem of estimating the density of observations taking values in classical or nonclassical spaces such as manifolds and more general metric spaces. Our setting is quite general but also sufficiently rich in allowing the development of smooth functional calculus with well localized...... spectral kernels, Besov regularity spaces, and wavelet type systems. Kernel and both linear and nonlinear wavelet density estimators are introduced and studied. Convergence rates for these estimators are established, which are analogous to the existing results in the classical setting of real...

  20. Graph-based modelling in engineering

    CERN Document Server

    Rysiński, Jacek

    2017-01-01

    This book presents versatile, modern and creative applications of graph theory in mechanical engineering, robotics and computer networks. Topics related to mechanical engineering include e.g. machine and mechanism science, mechatronics, robotics, gearing and transmissions, design theory and production processes. The graphs treated are simple graphs, weighted and mixed graphs, bond graphs, Petri nets, logical trees etc. The authors represent several countries in Europe and America, and their contributions show how different, elegant, useful and fruitful the utilization of graphs in modelling of engineering systems can be.

  1. On cordial labeling of double duplication for some families of graph

    Science.gov (United States)

    Shobana, L.; Remigius Perpetua Mary, F.

    2018-04-01

    Let G (V, E) be a simple undirected graph, where V and E are its vertex set and edge set respectively. Consider a labeling where f is a function from the vertices of G to {0, 1}, and for each edge xy assign the label |f(x)-f(y)|. Then f is called a cordial labeling of G if the number of vertices labeled 0 and the number of vertices labeled 1 differ by at most 1, and the number of edges labeled 0 and the number of edges labeled 1 differ by at most 1. In this paper we prove the existence of cordial labelings for the double duplication of the path graph Pn: n≥2, the cycle graph Cn: n≥3 except for n≡2 (mod 4), the wheel graph Wn: n≥5 except for n≡3 (mod 4), the flower graph Fn: n≥5 and the bistar graph Bm,n: m,n≥2.
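
    Whether a given 0/1 vertex labeling is cordial can be checked directly from the definition; the helper below (illustrative, not taken from the paper) counts vertex and edge labels and verifies that each pair of counts differs by at most one.

```python
def is_cordial(vertices, edges, f):
    """Check whether the 0/1 vertex labeling f is a cordial labeling of the graph (V, E).
    Edge labels are |f(x) - f(y)|; both vertex and edge label counts may differ by at most 1."""
    v_ones = sum(f[v] for v in vertices)
    v_zeros = len(vertices) - v_ones
    e_ones = sum(abs(f[x] - f[y]) for x, y in edges)
    e_zeros = len(edges) - e_ones
    return abs(v_ones - v_zeros) <= 1 and abs(e_ones - e_zeros) <= 1

# Path P4 with labeling 0,0,1,1: vertex counts 2/2, edge labels 0,1,0 -> counts 2/1, so cordial.
V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 3)]
f = {0: 0, 1: 0, 2: 1, 3: 1}
print(is_cordial(V, E, f))   # True
```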

  2. Prediction of protein subcellular localization using support vector machine with the choice of proper kernel

    Directory of Open Access Journals (Sweden)

    Al Mehedi Hasan

    2017-07-01

    Full Text Available The prediction of subcellular locations of proteins can provide useful hints for revealing their functions as well as for understanding the mechanisms of some diseases and, finally, for developing novel drugs. As the number of newly discovered proteins has been growing exponentially, laboratory-based experiments to determine the location of an uncharacterized protein in a living cell have become both expensive and time-consuming. Consequently, to tackle these challenges, computational methods are being developed as an alternative to help biologists in selecting target proteins and designing related experiments. However, the success of protein subcellular localization prediction is still a complicated and challenging problem, particularly when query proteins may have multi-label characteristics, i.e. their simultaneous existence in more than one subcellular location, or if they move between two or more different subcellular locations as well. At this point, to get rid of this problem, several types of subcellular localization prediction methods with different levels of accuracy have been proposed. The support vector machine (SVM) has been employed to provide potential solutions for problems connected with the prediction of protein subcellular localization. However, the practicability of SVM is affected by difficulties in selecting its appropriate kernel as well as in selecting the parameters of that selected kernel. The literature survey has shown that most researchers apply the radial basis function (RBF) kernel to build a SVM based subcellular localization prediction system. Surprisingly, there are still many other kernel functions which have not yet been applied in the prediction of protein subcellular localization. However, the nature of this classification problem requires the application of different kernels for SVM to ensure an optimal result. From this viewpoint, this paper presents the work to apply different kernels for SVM in protein
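
    As a generic illustration of the kernel-selection step described above (not the paper's data or protocol), scikit-learn's SVC can be cross-validated with several standard kernels on a feature matrix derived from protein sequences; the random feature matrix below is only a placeholder for real sequence-derived features.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Placeholder feature matrix (e.g., amino-acid composition) and location labels -- assumptions only.
rng = np.random.default_rng(3)
X = rng.normal(size=(120, 20))
y = rng.integers(0, 3, size=120)

# Compare several standard SVM kernels by cross-validated accuracy.
for kernel in ["linear", "poly", "rbf", "sigmoid"]:
    clf = SVC(kernel=kernel, C=1.0, gamma="scale")
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{kernel:8s} mean CV accuracy: {scores.mean():.3f}")
```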

  3. Direct Kernel Perceptron (DKP): ultra-fast kernel ELM-based classification with non-iterative closed-form weight calculation.

    Science.gov (United States)

    Fernández-Delgado, Manuel; Cernadas, Eva; Barro, Senén; Ribeiro, Jorge; Neves, José

    2014-02-01

    The Direct Kernel Perceptron (DKP) (Fernández-Delgado et al., 2010) is a very simple and fast kernel-based classifier, related to the Support Vector Machine (SVM) and to the Extreme Learning Machine (ELM) (Huang, Wang, & Lan, 2011), whose α-coefficients are calculated directly, without any iterative training, using an analytical closed-form expression which involves only the training patterns. The DKP, which is inspired by the Direct Parallel Perceptron (Auer et al., 2008), uses a Gaussian kernel and a linear classifier (perceptron). The weight vector of this classifier in the feature space minimizes an error measure which combines the training error and the hyperplane margin, without any tunable regularization parameter. This weight vector can be translated, using a variable change, to the α-coefficients, and both are determined without iterative calculations. We calculate solutions using several error functions, achieving the best trade-off between accuracy and efficiency with the linear function. These solutions for the α-coefficients can be considered alternatives to the ELM with a new physical meaning in terms of error and margin: in fact, the linear and quadratic DKP are special cases of the two-class ELM when the regularization parameter C takes the values C=0 and C=∞. The linear DKP is extremely efficient and much faster (over a vast collection of 42 benchmark and real-life data sets) than 12 very popular and accurate classifiers including SVM, Multi-Layer Perceptron, Adaboost, Random Forest and Bagging of RPART decision trees, Linear Discriminant Analysis, K-Nearest Neighbors, ELM, Probabilistic Neural Networks, Radial Basis Function neural networks and Generalized ART. Besides, despite its simplicity and extreme efficiency, DKP achieves higher accuracies than 7 out of 12 classifiers, exhibiting small differences with respect to the best ones (SVM, ELM, Adaboost and Random Forest), which are much slower. Thus, the DKP provides an easy and fast way
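
    The exact DKP expression is given in the cited paper; a closely related classifier that also needs no iterative training is regularized kernel least squares, which the abstract notes the two-class ELM reduces to for particular values of C. The sketch below implements that related closed-form rule with a Gaussian kernel; the regularization constant, kernel width and data are illustrative assumptions, not the DKP formula itself.

```python
import numpy as np

def gaussian_gram(X, Z, sigma=1.0):
    """Gaussian (RBF) Gram matrix between the rows of X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def fit_closed_form(X, y, C=10.0, sigma=1.0):
    """Closed-form alpha-coefficients of a kernel least-squares / ELM-style classifier:
    alpha = (K + I/C)^{-1} y with labels y in {-1, +1}. Related in spirit to the DKP, not identical."""
    K = gaussian_gram(X, X, sigma)
    return np.linalg.solve(K + np.eye(len(y)) / C, y)

def predict(X_train, alpha, X_test, sigma=1.0):
    return np.sign(gaussian_gram(X_test, X_train, sigma) @ alpha)

# Two Gaussian blobs as a toy two-class problem.
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(-1, 0.5, (50, 2)), rng.normal(1, 0.5, (50, 2))])
y = np.r_[-np.ones(50), np.ones(50)]
alpha = fit_closed_form(X, y)
print("training accuracy:", (predict(X, alpha, X) == y).mean())
```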

  4. Algebraic Graph Theory Morphisms, Monoids and Matrices

    CERN Document Server

    Knauer, Ulrich

    2011-01-01

    This is a highly self-contained book about algebraic graph theory which is written with a view to keeping the lively and unconventional atmosphere of a spoken text to communicate the enthusiasm the author feels about this subject. The focus is on homomorphisms and endomorphisms, matrices and eigenvalues. Graph models are extremely useful for almost all applications and applicators as they play an important role as structuring tools. They allow us to model net structures - like roads, computers, telephones - instances of abstract data structures - like lists, stacks, trees - and functional or object orient

  5. Generalization Performance of Regularized Ranking With Multiscale Kernels.

    Science.gov (United States)

    Zhou, Yicong; Chen, Hong; Lan, Rushi; Pan, Zhibin

    2016-05-01

    The regularized kernel method for the ranking problem has attracted increasing attention in machine learning. The previous regularized ranking algorithms are usually based on reproducing kernel Hilbert spaces with a single kernel. In this paper, we go beyond this framework by investigating the generalization performance of the regularized ranking with multiscale kernels. A novel ranking algorithm with multiscale kernels is proposed and its representer theorem is proved. We establish the upper bound of the generalization error in terms of the complexity of hypothesis spaces. It shows that the multiscale ranking algorithm can achieve satisfactory learning rates under mild conditions. Experiments demonstrate the effectiveness of the proposed method for drug discovery and recommendation tasks.

  6. Multineuron spike train analysis with R-convolution linear combination kernel.

    Science.gov (United States)

    Tezuka, Taro

    2018-06-01

    A spike train kernel provides an effective way of decoding information represented by a spike train. Some spike train kernels have been extended to multineuron spike trains, which are simultaneously recorded spike trains obtained from multiple neurons. However, most of these multineuron extensions were carried out in a kernel-specific manner. In this paper, a general framework is proposed for extending any single-neuron spike train kernel to multineuron spike trains, based on the R-convolution kernel. Special subclasses of the proposed R-convolution linear combination kernel are explored. These subclasses have a smaller number of parameters and make optimization tractable when the size of data is limited. The proposed kernel was evaluated using Gaussian process regression for multineuron spike trains recorded from an animal brain. It was compared with the sum kernel and the population Spikernel, which are existing ways of decoding multineuron spike trains using kernels. The results showed that the proposed approach performs better than these kernels and also other commonly used neural decoding methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
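
    A hedged sketch of the general construction: a multineuron kernel formed as a weighted linear combination of a single-neuron spike-train kernel applied neuron by neuron. The Gaussian-overlap base kernel and the weights below are placeholders for whatever single-neuron kernel and weighting one actually uses.

```python
import numpy as np

def single_neuron_kernel(s1, s2, tau=0.05):
    """Toy single-neuron spike-train kernel: sum of Gaussian overlaps between spike times (stand-in)."""
    if len(s1) == 0 or len(s2) == 0:
        return 0.0
    d = np.subtract.outer(np.asarray(s1), np.asarray(s2))
    return float(np.exp(-d ** 2 / (2 * tau ** 2)).sum())

def linear_combination_kernel(trains_a, trains_b, weights):
    """Multineuron kernel as a weighted sum of per-neuron single-neuron kernels.
    trains_a / trains_b: lists of spike-time arrays, one per neuron; weights: one weight per neuron."""
    return sum(w * single_neuron_kernel(a, b)
               for w, a, b in zip(weights, trains_a, trains_b))

# Two multineuron recordings from 3 neurons (spike times in seconds, illustrative numbers).
rec1 = [[0.01, 0.12, 0.30], [0.05, 0.40], [0.22]]
rec2 = [[0.02, 0.11],       [0.06, 0.39], [0.50]]
print(linear_combination_kernel(rec1, rec2, weights=[1.0, 0.5, 0.5]))
```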

  7. About the Big Graphs Arising when Forming the Diagnostic Models in a Reconfigurable Computing Field of Functional Monitoring and Diagnostics System of the Spacecraft Onboard Control Complex

    Directory of Open Access Journals (Sweden)

    L. V. Savkin

    2015-01-01

    Full Text Available One of the problems in implementing multipurpose systems based on reconfigurable computing fields (RCF) is the optimal redistribution of logical-arithmetic resources as the scope of functional tasks grows. Irrespective of their complexity, all of these tasks are transformed into an oriented graph whose functional and topological structure is then imposed onto the RCF, which is, as a rule, based on a field-programmable gate array (FPGA). Because the hardware configurations and functions realized by the switched logical blocks (SLB) are limited, the above-mentioned problem becomes even more critical when a task must be realized, within a strictly allocated RCF fragment, that is more complex than the one solved during the previous computing step. In such cases one may speak of graphs that are big relative to the allocated RCF fragment. The article considers this problem through the development of diagnostic algorithms for the monitoring and diagnostics of a spacecraft onboard control complex implemented on an RCF. It gives examples of big graphs, relative to the allocated RCF fragment, that arise when forming the hardware levels of a diagnostic model, which, in this case, is any hardware-based diagnostic algorithm in the RCF. The article also reviews examples of big graphs arising when complicated diagnostic models are formed due to drastic differences in the formation of hardware levels on closely located RCF fragments, and it pays attention to big graphs emerging when multichannel diagnostic models are formed. Three main ways to solve the problem of big graphs relative to the allocated RCF fragment are given: splitting the graph into fragments, using pop-up windows that relocate and memorize intermediate values of functions of the higher hardware levels of the diagnostic model, and deep adaptive updating of the diagnostic model. It is shown that the last of the three ways is the most efficient

  8. Multilinear Graph Embedding: Representation and Regularization for Images.

    Science.gov (United States)

    Chen, Yi-Lei; Hsu, Chiou-Ting

    2014-02-01

    Given a set of images, finding a compact and discriminative representation is still a big challenge especially when multiple latent factors are hidden in the way of data generation. To represent multifactor images, although multilinear models are widely used to parameterize the data, most methods are based on high-order singular value decomposition (HOSVD), which preserves global statistics but interprets local variations inadequately. To this end, we propose a novel method, called multilinear graph embedding (MGE), as well as its kernelization MKGE to leverage the manifold learning techniques into multilinear models. Our method theoretically links the linear, nonlinear, and multilinear dimensionality reduction. We also show that the supervised MGE encodes informative image priors for image regularization, provided that an image is represented as a high-order tensor. From our experiments on face and gait recognition, the superior performance demonstrates that MGE better represents multifactor images than classic methods, including HOSVD and its variants. In addition, the significant improvement in image (or tensor) completion validates the potential of MGE for image regularization.

  9. BootGraph: probabilistic fiber tractography using bootstrap algorithms and graph theory.

    Science.gov (United States)

    Vorburger, Robert S; Reischauer, Carolin; Boesiger, Peter

    2013-02-01

    Bootstrap methods have recently been introduced to diffusion-weighted magnetic resonance imaging to estimate the measurement uncertainty of ensuing diffusion parameters directly from the acquired data without the necessity to assume a noise model. These methods have been previously combined with deterministic streamline tractography algorithms to allow for the assessment of connection probabilities in the human brain. Thereby, the local noise induced disturbance in the diffusion data is accumulated additively due to the incremental progression of streamline tractography algorithms. Graph based approaches have been proposed to overcome this drawback of streamline techniques. For this reason, the bootstrap method is in the present work incorporated into a graph setup to derive a new probabilistic fiber tractography method, called BootGraph. The acquired data set is thereby converted into a weighted, undirected graph by defining a vertex in each voxel and edges between adjacent vertices. By means of the cone of uncertainty, which is derived using the wild bootstrap, a weight is thereafter assigned to each edge. Two path finding algorithms are subsequently applied to derive connection probabilities. While the first algorithm is based on the shortest path approach, the second algorithm takes all existing paths between two vertices into consideration. Tracking results are compared to an established algorithm based on the bootstrap method in combination with streamline fiber tractography and to another graph based algorithm. The BootGraph shows a very good performance in crossing situations with respect to false negatives and permits incorporating additional constraints, such as a curvature threshold. By inheriting the advantages of the bootstrap method and graph theory, the BootGraph method provides a computationally efficient and flexible probabilistic tractography setup to compute connection probability maps and virtual fiber pathways without the drawbacks of

  10. An analysis of 1-D smoothed particle hydrodynamics kernels

    International Nuclear Information System (INIS)

    Fulk, D.A.; Quinn, D.W.

    1996-01-01

    In this paper, the smoothed particle hydrodynamics (SPH) kernel is analyzed, resulting in measures of merit for one-dimensional SPH. Various methods of obtaining an objective measure of the quality and accuracy of the SPH kernel are addressed. Since the kernel is the key element in the SPH methodology, this should be of primary concern to any user of SPH. The results of this work are two measures of merit, one for smooth data and one near shocks. The measure of merit for smooth data is shown to be quite accurate and a useful delineator of better and poorer kernels. The measure of merit for non-smooth data is not quite as accurate, but results indicate the kernel is much less important for these types of problems. In addition to the theory, 20 kernels are analyzed using the measure of merit demonstrating the general usefulness of the measure of merit and the individual kernels. In general, it was decided that bell-shaped kernels perform better than other shapes. 12 refs., 16 figs., 7 tabs
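
    One of the standard bell-shaped kernels examined in such analyses is the cubic B-spline; the sketch below implements its common 1-D form and numerically checks the unit-integral property that any admissible SPH kernel must satisfy. This is a basic sanity check, not the paper's measures of merit.

```python
import numpy as np

def cubic_spline_kernel_1d(x, h):
    """Standard 1-D cubic B-spline SPH kernel with smoothing length h (support |x| <= 2h)."""
    q = np.abs(x) / h
    sigma = 2.0 / (3.0 * h)                      # 1-D normalization constant
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

# Sanity check: the kernel should integrate to 1 over its support.
h = 0.3
x = np.linspace(-2 * h, 2 * h, 20001)
integral = cubic_spline_kernel_1d(x, h).sum() * (x[1] - x[0])
print("integral ~", integral)                    # ~1.0
```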

  11. QTL detection for wheat kernel size and quality and the responses of these traits to low nitrogen stress.

    Science.gov (United States)

    Cui, Fa; Fan, Xiaoli; Chen, Mei; Zhang, Na; Zhao, Chunhua; Zhang, Wei; Han, Jie; Ji, Jun; Zhao, Xueqiang; Yang, Lijuan; Zhao, Zongwu; Tong, Yiping; Wang, Tao; Li, Junming

    2016-03-01

    QTLs for kernel characteristics and tolerance to N stress were identified, and the functions of ten known genes with regard to these traits were specified. Kernel size and quality characteristics in wheat (Triticum aestivum L.) ultimately determine the end use of the grain and affect its commodity price, both of which are influenced by the application of nitrogen (N) fertilizer. This study characterized quantitative trait loci (QTLs) for kernel size and quality and examined the responses of these traits to low-N stress using a recombinant inbred line population derived from Kenong 9204 × Jing 411. Phenotypic analyses were conducted in five trials that each included low- and high-N treatments. We identified 109 putative additive QTLs for 11 kernel size and quality characteristics and 49 QTLs for tolerance to N stress, 27 and 14 of which were stable across the tested environments, respectively. These QTLs were distributed across all wheat chromosomes except for chromosomes 3A, 4D, 6D, and 7B. Eleven QTL clusters that simultaneously affected kernel size- and quality-related traits were identified. At nine locations, 25 of the 49 QTLs for N deficiency tolerance coincided with the QTLs for kernel characteristics, indicating their genetic independence. The feasibility of indirect selection of a superior genotype for kernel size and quality under high-N conditions in breeding programs designed for a lower input management system is discussed. In addition, we specified the functions of Glu-A1, Glu-B1, Glu-A3, Glu-B3, TaCwi-A1, TaSus2, TaGS2-D1, PPO-D1, Rht-B1, and Ha with regard to kernel characteristics and the sensitivities of these characteristics to N stress. This study provides useful information for the genetic improvement of wheat kernel size, quality, and resistance to N stress.

  12. Formal truncations of connected kernel equations

    International Nuclear Information System (INIS)

    Dixon, R.M.

    1977-01-01

    The Connected Kernel Equations (CKE) of Alt, Grassberger and Sandhas (AGS); Kouri, Levin and Tobocman (KLT); and Bencze, Redish and Sloan (BRS) are compared against reaction theory criteria after formal channel space and/or operator truncations have been introduced. The Channel Coupling Class concept is used to study the structure of these CKE's. The related wave function formalism of Sandhas, of L'Huillier, Redish and Tandy and of Kouri, Krueger and Levin are also presented. New N-body connected kernel equations which are generalizations of the Lovelace three-body equations are derived. A method for systematically constructing fewer body models from the N-body BRS and generalized Lovelace (GL) equations is developed. The formally truncated AGS, BRS, KLT and GL equations are analyzed by employing the criteria of reciprocity and two-cluster unitarity. Reciprocity considerations suggest that formal truncations of BRS, KLT and GL equations can lead to reciprocity-violating results. This study suggests that atomic problems should employ three-cluster connected truncations and that the two-cluster connected truncations should be a useful starting point for nuclear systems

  13. Neuro-symbolic representation learning on biological knowledge graphs.

    Science.gov (United States)

    Alshahrani, Mona; Khan, Mohammad Asif; Maddouri, Omar; Kinjo, Akira R; Queralt-Rosinach, Núria; Hoehndorf, Robert

    2017-09-01

    Biological data and knowledge bases increasingly rely on Semantic Web technologies and the use of knowledge graphs for data integration, retrieval and federated queries. In the past years, feature learning methods that are applicable to graph-structured data are becoming available, but have not yet widely been applied and evaluated on structured biological knowledge. Results: We develop a novel method for feature learning on biological knowledge graphs. Our method combines symbolic methods, in particular knowledge representation using symbolic logic and automated reasoning, with neural networks to generate embeddings of nodes that encode for related information within knowledge graphs. Through the use of symbolic logic, these embeddings contain both explicit and implicit information. We apply these embeddings to the prediction of edges in the knowledge graph representing problems of function prediction, finding candidate genes of diseases, protein-protein interactions, or drug target relations, and demonstrate performance that matches and sometimes outperforms traditional approaches based on manually crafted features. Our method can be applied to any biological knowledge graph, and will thereby open up the increasing amount of Semantic Web based knowledge bases in biology to use in machine learning and data analytics. https://github.com/bio-ontology-research-group/walking-rdf-and-owl. robert.hoehndorf@kaust.edu.sa. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.

  14. NLO corrections to the Kernel of the BKP-equations

    Energy Technology Data Exchange (ETDEWEB)

    Bartels, J. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Fadin, V.S. [Budker Institute of Nuclear Physics, Novosibirsk (Russian Federation); Novosibirskij Gosudarstvennyj Univ., Novosibirsk (Russian Federation); Lipatov, L.N. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Petersburg Nuclear Physics Institute, Gatchina, St. Petersburg (Russian Federation); Vacca, G.P. [INFN, Sezione di Bologna (Italy)

    2012-10-02

    We present results for the NLO kernel of the BKP equations for composite states of three reggeized gluons in the Odderon channel, both in QCD and in N=4 SYM. The NLO kernel consists of the NLO BFKL kernel in the color octet representation and the connected 3→3 kernel, computed in the tree approximation.

  15. Kernel based eigenvalue-decomposition methods for analysing ham

    DEFF Research Database (Denmark)

    Christiansen, Asger Nyman; Nielsen, Allan Aasbjerg; Møller, Flemming

    2010-01-01

    methods, such as PCA, MAF or MNF. We therefore investigated the applicability of kernel based versions of these transformation. This meant implementing the kernel based methods and developing new theory, since kernel based MAF and MNF is not described in the literature yet. The traditional methods only...... have two factors that are useful for segmentation and none of them can be used to segment the two types of meat. The kernel based methods have a lot of useful factors and they are able to capture the subtle differences in the images. This is illustrated in Figure 1. You can see a comparison of the most...... useful factor of PCA and kernel based PCA respectively in Figure 2. The factor of the kernel based PCA turned out to be able to segment the two types of meat and in general that factor is much more distinct, compared to the traditional factor. After the orthogonal transformation a simple thresholding...

  16. Generating loop graphs via Hopf algebra in quantum field theory

    International Nuclear Information System (INIS)

    Mestre, Angela; Oeckl, Robert

    2006-01-01

    We use the Hopf algebra structure of the time-ordered algebra of field operators to generate all connected weighted Feynman graphs in a recursive and efficient manner. The algebraic representation of the graphs is such that they can be evaluated directly as contributions to the connected n-point functions. The recursion proceeds by loop order and vertex number

  17. Reduced multiple empirical kernel learning machine.

    Science.gov (United States)

    Wang, Zhe; Lu, MingZhe; Gao, Daqi

    2015-02-01

    Multiple kernel learning (MKL) has been demonstrated to be flexible and effective in depicting heterogeneous data sources, since MKL can introduce multiple kernels rather than a single fixed kernel into applications. However, MKL incurs a high time and space complexity in contrast to single kernel learning, which is undesirable in real-world applications. Meanwhile, the kernel mappings of MKL generally take two forms, implicit kernel mapping and empirical kernel mapping (EKM), where the latter has attracted less attention. In this paper, we focus on MKL with the EKM and propose a reduced multiple empirical kernel learning machine, named RMEKLM for short. To the best of our knowledge, this is the first work to reduce both the time and space complexity of MKL with EKM. Different from existing MKL, the proposed RMEKLM adopts the Gauss elimination technique to extract a set of feature vectors, and it is validated that doing so does not lose much information of the original feature space. RMEKLM then uses the extracted feature vectors to span a reduced orthonormal subspace of the feature space, which is visualized in terms of its geometric structure. It can be demonstrated that the spanned subspace is isomorphic to the original feature space, which means that the dot product of two vectors in the original feature space is equal to that of the two corresponding vectors in the generated orthonormal subspace. More importantly, the proposed RMEKLM requires simpler computation and less storage space, especially at test time. Finally, the experimental results show that RMEKLM achieves an efficient and effective performance in terms of both complexity and classification. The contributions of this paper can be given as follows: (1) by mapping the input space into an orthonormal subspace, the geometry of the generated subspace is visualized; (2) this paper first reduces both the time and space complexity of the EKM-based MKL; (3

  18. Kernel principal component analysis for change detection

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Morton, J.C.

    2008-01-01

    region acquired at two different time points. If change over time does not dominate the scene, the projection of the original two bands onto the second eigenvector will show change over time. In this paper a kernel version of PCA is used to carry out the analysis. Unlike ordinary PCA, kernel PCA...... with a Gaussian kernel successfully finds the change observations in a case where nonlinearities are introduced artificially....
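
    As a generic illustration of the projection step on synthetic data (not the paper's imagery), one can stack the two acquisition dates as two bands, fit a Gaussian-kernel PCA, and look for observations that separate from the no-change bulk in the projected scores. The kernel parameter and the simulated data below are placeholders.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

# Synthetic bitemporal "pixels": column 0 = acquisition 1, column 1 = acquisition 2 (placeholder data).
rng = np.random.default_rng(5)
base = rng.normal(size=(500, 1))
no_change = np.hstack([base, base + rng.normal(scale=0.05, size=(500, 1))])   # time 2 ~ time 1
change = np.column_stack([rng.normal(size=30), rng.normal(loc=3.0, scale=0.5, size=30)])
X = np.vstack([no_change, change])

# Gaussian-kernel PCA; change pixels tend to separate from the no-change bulk in the projected scores.
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=0.5)
scores = kpca.fit_transform(X)
print("typical no-change scores:\n", np.round(scores[:3], 3))
print("typical change scores:\n", np.round(scores[-3:], 3))
```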

  19. Equipackable graphs

    DEFF Research Database (Denmark)

    Vestergaard, Preben Dahl; Hartnell, Bert L.

    2006-01-01

    There are many results dealing with the problem of decomposing a fixed graph into isomorphic subgraphs. There has also been work on characterizing graphs with the property that one can delete the edges of a number of edge disjoint copies of the subgraph and, regardless of how that is done, the gr...

  20. Khovanov homology of graph-links

    Energy Technology Data Exchange (ETDEWEB)

    Nikonov, Igor M [M. V. Lomonosov Moscow State University, Faculty of Mechanics and Mathematics, Moscow (Russian Federation)

    2012-08-31

    Graph-links arise as the intersection graphs of turning chord diagrams of links. Speaking informally, graph-links provide a combinatorial description of links up to mutations. Many link invariants can be reformulated in the language of graph-links. Khovanov homology, a well-known and useful knot invariant, is defined for graph-links in this paper (in the case of the ground field of characteristic two). Bibliography: 14 titles.

  1. Enhanced gluten properties in soft kernel durum wheat

    Science.gov (United States)

    Soft kernel durum wheat is a relatively recent development (Morris et al. 2011 Crop Sci. 51:114). The soft kernel trait exerts profound effects on kernel texture, flour milling including break flour yield, milling energy, and starch damage, and dough water absorption (DWA). With the caveat of reduce...

  2. 7 CFR 981.61 - Redetermination of kernel weight.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Redetermination of kernel weight. 981.61 Section 981... GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.61 Redetermination of kernel weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds...

  3. Graph Design via Convex Optimization: Online and Distributed Perspectives

    Science.gov (United States)

    Meng, De

    problems in sensor networks, multi-agent coordination. Distributed optimization aims to optimize a global objective function formed by summation of coupled local functions over a graph via only local communication and computation. We developed a weighted proximal ADMM for distributed optimization using graph structure. This fully distributed, single-loop algorithm allows simultaneous updates and can be viewed as a generalization of existing algorithms. More importantly, we achieve faster convergence by jointly designing graph weights and algorithm parameters. Finally, we propose a new problem on networks called Online Network Formation Problem: starting with a base graph and a set of candidate edges, at each round of the game, player one first chooses a candidate edge and reveals it to player two, then player two decides whether to accept it; player two can only accept limited number of edges and make online decisions with the goal to achieve the best properties of the synthesized network. The network properties considered include the number of spanning trees, algebraic connectivity and total effective resistance. These network formation games arise in a variety of cooperative multiagent systems. We propose a primal-dual algorithm framework for the general online network formation game, and analyze the algorithm performance by the competitive ratio and regret.
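
    The three network properties named above can all be read off the graph Laplacian spectrum; the helper below (illustrative, using networkx and numpy) evaluates them before and after adding a candidate edge, which is the kind of comparison the second player would make when deciding whether to accept it.

```python
import numpy as np
import networkx as nx

def laplacian_metrics(G):
    """Spanning-tree count, algebraic connectivity, and total effective resistance from the Laplacian spectrum."""
    L = nx.laplacian_matrix(G).toarray().astype(float)
    lam = np.sort(np.linalg.eigvalsh(L))
    n = G.number_of_nodes()
    spanning_trees = np.prod(lam[1:]) / n          # Matrix-Tree theorem (connected graph assumed)
    algebraic_connectivity = lam[1]                # second-smallest Laplacian eigenvalue
    total_eff_resistance = n * np.sum(1.0 / lam[1:])
    return spanning_trees, algebraic_connectivity, total_eff_resistance

# Evaluate a candidate edge against a base cycle graph.
G = nx.cycle_graph(6)
print("base      :", laplacian_metrics(G))
G.add_edge(0, 3)                                   # candidate edge offered by player one
print("with edge :", laplacian_metrics(G))
```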

  4. Generalized connectivity of graphs

    CERN Document Server

    Li, Xueliang

    2016-01-01

    Noteworthy results, proof techniques, open problems and conjectures in generalized (edge-) connectivity are discussed in this book. Both theoretical and practical analyses for generalized (edge-) connectivity of graphs are provided. Topics covered in this book include: generalized (edge-) connectivity of graph classes, algorithms, computational complexity, sharp bounds, Nordhaus-Gaddum-type results, maximum generalized local connectivity, extremal problems, random graphs, multigraphs, relations with the Steiner tree packing problem and generalizations of connectivity. This book enables graduate students to understand and master a segment of graph theory and combinatorial optimization. Researchers in graph theory, combinatorics, combinatorial optimization, probability, computer science, discrete algorithms, complexity analysis, network design, and the information transferring models will find this book useful in their studies.

  5. Analysis of Linux kernel as a complex network

    International Nuclear Information System (INIS)

    Gao, Yichao; Zheng, Zheng; Qin, Fangyun

    2014-01-01

    Operating system (OS) acts as an intermediary between software and hardware in computer-based systems. In this paper, we analyze the core of the typical Linux OS, Linux kernel, as a complex network to investigate its underlying design principles. It is found that the Linux Kernel Network (LKN) is a directed network and its out-degree follows an exponential distribution while the in-degree follows a power-law distribution. The correlation between topology and functions is also explored, by which we find that LKN is a highly modularized network with 12 key communities. Moreover, we investigate the robustness of LKN under random failures and intentional attacks. The result shows that the failure of the large in-degree nodes providing basic services will do more damage on the whole system. Our work may shed some light on the design of complex software systems
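
    The kind of analysis described, with kernel functions as nodes and call dependencies as directed edges, can be reproduced on any extracted call graph; the sketch below uses a handful of hypothetical caller/callee pairs in place of a real Linux call graph and reports the degree statistics with networkx.

```python
import networkx as nx
from collections import Counter

# Hypothetical caller -> callee pairs standing in for an extracted kernel call graph (names illustrative).
edges = [
    ("vfs_read", "rw_verify_area"), ("vfs_read", "security_file_permission"),
    ("vfs_write", "rw_verify_area"), ("vfs_write", "security_file_permission"),
    ("sys_open", "do_sys_open"), ("do_sys_open", "getname"),
    ("do_sys_open", "do_filp_open"), ("do_filp_open", "path_openat"),
]
G = nx.DiGraph(edges)

# The LKN study reports out-degrees decaying roughly exponentially and in-degrees following a power law;
# with a real call graph one would histogram the degrees as below.
out_hist = Counter(d for _, d in G.out_degree())
in_hist = Counter(d for _, d in G.in_degree())
print("nodes:", G.number_of_nodes(), "edges:", G.number_of_edges())
print("out-degree histogram:", dict(sorted(out_hist.items())))
print("in-degree histogram:", dict(sorted(in_hist.items())))
```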

  6. Adaptive metric kernel regression

    DEFF Research Database (Denmark)

    Goutte, Cyril; Larsen, Jan

    2000-01-01

    Kernel smoothing is a widely used non-parametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this contribution, we propose an algorithm that adapts the input metric used in multivariate regression by minimising a cross-validation estimate of the generalisation error. This allows the importance of different dimensions to be adjusted automatically. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms...

  7. On two energy-like invariants of line graphs and related graph operations

    Directory of Open Access Journals (Sweden)

    Xiaodan Chen

    2016-02-01

    Full Text Available Abstract For a simple graph G of order n, let $\mu_{1}\geq\mu_{2}\geq\cdots\geq\mu_{n}=0$ be its Laplacian eigenvalues, and let $q_{1}\geq q_{2}\geq\cdots\geq q_{n}\geq 0$ be its signless Laplacian eigenvalues. The Laplacian-energy-like invariant and incidence energy of G are defined as, respectively, $$\mathit{LEL}(G)=\sum_{i=1}^{n-1}\sqrt{\mu_{i}} \quad\mbox{and}\quad \mathit{IE}(G)=\sum_{i=1}^{n}\sqrt{q_{i}}.$$ In this paper, we present some new upper and lower bounds on LEL and IE of line graph, subdivision graph, para-line graph and total graph of a regular graph, some of which improve previously known results. The main tools we use here are the Cauchy-Schwarz inequality and the Ozeki inequality.
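
    Both invariants follow directly from the two spectra in the definitions above; a short numpy/networkx check (illustrative only) computes LEL and IE for a regular graph and for its line graph, the first of the derived graphs considered in the paper.

```python
import numpy as np
import networkx as nx

def lel_and_ie(G):
    """Laplacian-energy-like invariant LEL(G) = sum_i sqrt(mu_i) and incidence energy IE(G) = sum_i sqrt(q_i)."""
    A = nx.to_numpy_array(G)
    D = np.diag(A.sum(1))
    mu = np.linalg.eigvalsh(D - A)       # Laplacian eigenvalues
    q = np.linalg.eigvalsh(D + A)        # signless Laplacian eigenvalues
    return np.sqrt(np.clip(mu, 0, None)).sum(), np.sqrt(np.clip(q, 0, None)).sum()

# A 3-regular example: the Petersen graph and its line graph.
G = nx.petersen_graph()
print("G          LEL, IE:", lel_and_ie(G))
print("line graph LEL, IE:", lel_and_ie(nx.line_graph(G)))
```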

  8. Consistent Estimation of Pricing Kernels from Noisy Price Data

    OpenAIRE

    Vladislav Kargin

    2003-01-01

    If pricing kernels are assumed non-negative then the inverse problem of finding the pricing kernel is well-posed. The constrained least squares method provides a consistent estimate of the pricing kernel. When the data are limited, a new method is suggested: relaxed maximization of the relative entropy. This estimator is also consistent. Keywords: $\\epsilon$-entropy, non-parametric estimation, pricing kernel, inverse problems.

  9. Stable Kernel Representations as Nonlinear Left Coprime Factorizations

    NARCIS (Netherlands)

    Paice, A.D.B.; Schaft, A.J. van der

    1994-01-01

    A representation of nonlinear systems based on the idea of representing the input-output pairs of the system as elements of the kernel of a stable operator has been recently introduced. This has been denoted the kernel representation of the system. In this paper it is demonstrated that the kernel

  10. 7 CFR 981.60 - Determination of kernel weight.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Determination of kernel weight. 981.60 Section 981.60... Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which settlement...

  11. End-use quality of soft kernel durum wheat

    Science.gov (United States)

    Kernel texture is a major determinant of end-use quality of wheat. Durum wheat has very hard kernels. We developed soft kernel durum wheat via Ph1b-mediated homoeologous recombination. The Hardness locus was transferred from Chinese Spring to Svevo durum wheat via back-crossing. ‘Soft Svevo’ had SKC...

  12. Price Competition on Graphs

    OpenAIRE

    Adriaan R. Soetevent

    2010-01-01

    This paper extends Hotelling's model of price competition with quadratic transportation costs from a line to graphs. I propose an algorithm to calculate firm-level demand for any given graph, conditional on prices and firm locations. One feature of graph models of price competition is that spatial discontinuities in firm-level demand may occur. I show that the existence result of D'Aspremont et al. (1979) does not extend to simple star graphs. I conjecture that this non-existence result holds...

  13. Price Competition on Graphs

    OpenAIRE

    Pim Heijnen; Adriaan Soetevent

    2014-01-01

    This paper extends Hotelling's model of price competition with quadratic transportation costs from a line to graphs. We derive an algorithm to calculate firm-level demand for any given graph, conditional on prices and firm locations. These graph models of price competition may lead to spatial discontinuities in firm-level demand. We show that the existence result of D'Aspremont et al. (1979) does not extend to simple star graphs and conjecture that this non-existence result holds more general...

  14. Skew-adjacency matrices of graphs

    NARCIS (Netherlands)

    Cavers, M.; Cioaba, S.M.; Fallat, S.; Gregory, D.A.; Haemers, W.H.; Kirkland, S.J.; McDonald, J.J.; Tsatsomeros, M.

    2012-01-01

    The spectra of the skew-adjacency matrices of a graph are considered as a possible way to distinguish adjacency cospectral graphs. This leads to the following topics: graphs whose skew-adjacency matrices are all cospectral; relations between the matchings polynomial of a graph and the characteristic

  15. Graph theory

    CERN Document Server

    Diestel, Reinhard

    2017-01-01

    This standard textbook of modern graph theory, now in its fifth edition, combines the authority of a classic with the engaging freshness of style that is the hallmark of active mathematics. It covers the core material of the subject with concise yet reliably complete proofs, while offering glimpses of more advanced methods in each field by one or two deeper results, again with proofs given in full detail. The book can be used as a reliable text for an introductory course, as a graduate text, and for self-study. From the reviews: “This outstanding book cannot be substituted with any other book on the present textbook market. It has every chance of becoming the standard textbook for graph theory.”Acta Scientiarum Mathematiciarum “Deep, clear, wonderful. This is a serious book about the heart of graph theory. It has depth and integrity. ”Persi Diaconis & Ron Graham, SIAM Review “The book has received a very enthusiastic reception, which it amply deserves. A masterly elucidation of modern graph theo...

  16. The Conserved and Unique Genetic Architecture of Kernel Size and Weight in Maize and Rice.

    Science.gov (United States)

    Liu, Jie; Huang, Juan; Guo, Huan; Lan, Liu; Wang, Hongze; Xu, Yuancheng; Yang, Xiaohong; Li, Wenqiang; Tong, Hao; Xiao, Yingjie; Pan, Qingchun; Qiao, Feng; Raihan, Mohammad Sharif; Liu, Haijun; Zhang, Xuehai; Yang, Ning; Wang, Xiaqing; Deng, Min; Jin, Minliang; Zhao, Lijun; Luo, Xin; Zhou, Yang; Li, Xiang; Zhan, Wei; Liu, Nannan; Wang, Hong; Chen, Gengshen; Li, Qing; Yan, Jianbing

    2017-10-01

    Maize ( Zea mays ) is a major staple crop. Maize kernel size and weight are important contributors to its yield. Here, we measured kernel length, kernel width, kernel thickness, hundred kernel weight, and kernel test weight in 10 recombinant inbred line populations and dissected their genetic architecture using three statistical models. In total, 729 quantitative trait loci (QTLs) were identified, many of which were identified in all three models, including 22 major QTLs that each can explain more than 10% of phenotypic variation. To provide candidate genes for these QTLs, we identified 30 maize genes that are orthologs of 18 rice ( Oryza sativa ) genes reported to affect rice seed size or weight. Interestingly, 24 of these 30 genes are located in the identified QTLs or within 1 Mb of the significant single-nucleotide polymorphisms. We further confirmed the effects of five genes on maize kernel size/weight in an independent association mapping panel with 540 lines by candidate gene association analysis. Lastly, the function of ZmINCW1 , a homolog of rice GRAIN INCOMPLETE FILLING1 that affects seed size and weight, was characterized in detail. ZmINCW1 is close to QTL peaks for kernel size/weight (less than 1 Mb) and contains significant single-nucleotide polymorphisms affecting kernel size/weight in the association panel. Overexpression of this gene can rescue the reduced weight of the Arabidopsis ( Arabidopsis thaliana ) homozygous mutant line in the AtcwINV2 gene (Arabidopsis ortholog of ZmINCW1 ). These results indicate that the molecular mechanisms affecting seed development are conserved in maize, rice, and possibly Arabidopsis. © 2017 American Society of Plant Biologists. All Rights Reserved.

  17. Acyclicity in edge-colored graphs

    DEFF Research Database (Denmark)

    Gutin, Gregory; Jones, Mark; Sheng, Bin

    2017-01-01

    A walk W in edge-colored graphs is called properly colored (PC) if every pair of consecutive edges in W is of different color. We introduce and study five types of PC acyclicity in edge-colored graphs such that graphs of PC acyclicity of type i form a proper superset of graphs of acyclicity of type i+1, i=1,2,3,4. The first three types are equivalent to the absence of PC cycles, PC closed trails, and PC closed walks, respectively. While graphs of types 1, 2 and 3 can be recognized in polynomial time, the problem of recognizing graphs of type 4 is, somewhat surprisingly, NP-hard even for 2-edge-colored graphs (i.e., when only two colors are used). The same problem with respect to type 5 is polynomial-time solvable for all edge-colored graphs. Using the five types, we investigate the border between intractability and tractability for the problems of finding the maximum number of internally vertex

  18. Graph Sampling for Covariance Estimation

    KAUST Repository

    Chepuri, Sundeep Prabhakar

    2017-04-25

    In this paper the focus is on subsampling as well as reconstructing the second-order statistics of signals residing on nodes of arbitrary undirected graphs. Second-order stationary graph signals may be obtained by graph filtering zero-mean white noise and they admit a well-defined power spectrum whose shape is determined by the frequency response of the graph filter. Estimating the graph power spectrum forms an important component of stationary graph signal processing and related inference tasks such as Wiener prediction or inpainting on graphs. The central result of this paper is that by sampling a significantly smaller subset of vertices and using simple least squares, we can reconstruct the second-order statistics of the graph signal from the subsampled observations, and more importantly, without any spectral priors. To this end, both a nonparametric approach as well as parametric approaches including moving average and autoregressive models for the graph power spectrum are considered. The results specialize for undirected circulant graphs in that the graph nodes leading to the best compression rates are given by the so-called minimal sparse rulers. A near-optimal greedy algorithm is developed to design the subsampling scheme for the non-parametric and the moving average models, whereas a particular subsampling scheme that allows linear estimation for the autoregressive model is proposed. Numerical experiments on synthetic as well as real datasets related to climatology and processing handwritten digits are provided to demonstrate the developed theory.

  19. Price competition on graphs

    NARCIS (Netherlands)

    Soetevent, A.R.

    2010-01-01

    This paper extends Hotelling's model of price competition with quadratic transportation costs from a line to graphs. I propose an algorithm to calculate firm-level demand for any given graph, conditional on prices and firm locations. One feature of graph models of price competition is that spatial

  20. Graphing the order of the sexes: constructing, recalling, interpreting, and putting the self in gender difference graphs.

    Science.gov (United States)

    Hegarty, Peter; Lemieux, Anthony F; McQueen, Grant

    2010-03-01

    Graphs seem to connote facts more than words or tables do. Consequently, they seem unlikely places to spot implicit sexism at work. Yet, in 6 studies (N = 741), women and men constructed (Study 1) and recalled (Study 2) gender difference graphs with men's data first, and graphed powerful groups (Study 3) and individuals (Study 4) ahead of weaker ones. Participants who interpreted graph order as evidence of author "bias" inferred that the author graphed his or her own gender group first (Study 5). Women's, but not men's, preferences to graph men first were mitigated when participants graphed a difference between themselves and an opposite-sex friend prior to graphing gender differences (Study 6). Graph production and comprehension are affected by beliefs and suppositions about the groups represented in graphs to a greater degree than cognitive models of graph comprehension or realist models of scientific thinking have yet acknowledged.