WorldWideScience

Sample records for walk-weighted subsequence kernels

  1. Recovery and subsequent characterization of polyhydroxybutyrate from Rhodococcus equi cells grown on crude palm kernel oil

    Directory of Open Access Journals (Sweden)

    Nadia Altaee

    2016-07-01

    Full Text Available The gram-positive bacterium Rhodococcus equi was isolated from fertile soil, and mineral salt media (MM) and trace elements were used to provide the necessary elements for its growth and PHB production, in addition to crude palm kernel oil (CPKO, 1%) as the carbon source. Gas chromatography (GC) demonstrated that the recovered biopolymer was the homopolymer polyhydroxybutyrate (PHB). The strain of the present study yielded a dry biomass of 1.43 g/l with 38% PHB, as determined by GC. The recovered PHB was characterized by NMR to study its chemical structure. In addition, DSC and TGA were used to study the thermal properties of the recovered polymer: the melting temperature (Tm) was 173 °C, the glass transition temperature (Tg) was 2.79 °C, and the decomposition temperature (Td) was 276 °C. Gel permeation chromatography (GPC) was used to determine the molecular mass of the recovered PHB, 642 kDa, and to compare the results with studies using different bacteria and substrates, to enable its usage in many applications. The present study demonstrated the use of an inexpensive substrate for PHB production by gram-positive bacteria, together with characterization of the produced polymer.

  2. How does increasing immunity change spread kernel parameters in subsequent outbreaks? – A simulation study on Bluetongue Virus

    DEFF Research Database (Denmark)

    Græsbøll, Kaare; Bødker, Rene; Enøe, Claes;

    Modelling the spatial spread of vector-borne diseases, one may choose methods ranging from statistical to process-oriented. One often-used statistical tool is the empirical spread kernel. An empirical spread kernel fitted to outbreak data provides hints on the spread mechanisms, and may provide a good...... of such changes are: vaccinations, acquired immunity, vector density and control, meteorological variations, wind pattern, and so on. Including more and more variables leads to a more process-oriented model. A fully process-oriented approach simulates the movement of virus between vectors and hosts, describing...... detailed simulation spread model. And by using empirical spread kernels from past outbreaks we have fitted some of the more uncertain parameters for this case study. A stochastic simulation model was developed for the spread of bluetongue virus. In the model hosts (cattle) and vectors (Culicoides...

  3. Subsampling Realised Kernels

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Hansen, Peter Reinhard; Lunde, Asger

    2011-01-01

    In a recent paper we have introduced the class of realised kernel estimators of the increments of quadratic variation in the presence of noise. We showed that this estimator is consistent and derived its limit distribution under various assumptions on the kernel weights. In this paper we extend our...... analysis, looking at the class of subsampled realised kernels and we derive the limit theory for this class of estimators. We find that subsampling is highly advantageous for estimators based on discontinuous kernels, such as the truncated kernel. For kinked kernels, such as the Bartlett kernel, we show...... that subsampling is impotent, in the sense that subsampling has no effect on the asymptotic distribution. Perhaps surprisingly, for the efficient smooth kernels, such as the Parzen kernel, we show that subsampling is harmful as it increases the asymptotic variance. We also study the performance of subsampled...

  4. Robust Kernel (Cross-) Covariance Operators in Reproducing Kernel Hilbert Space toward Kernel Methods

    OpenAIRE

    Alam, Md. Ashad; Fukumizu, Kenji; Wang, Yu-Ping

    2016-01-01

    To the best of our knowledge, there are no general well-founded robust methods for statistical unsupervised learning. Most of the unsupervised methods explicitly or implicitly depend on the kernel covariance operator (kernel CO) or kernel cross-covariance operator (kernel CCO). They are sensitive to contaminated data, even when using bounded positive definite kernels. First, we propose a robust kernel covariance operator (robust kernel CO) and a robust kernel cross-covariance operator (robust kern...

  5. Nonextensive Entropic Kernels

    Science.gov (United States)

    2008-08-01

    [Berg et al., 1984] has been used in a machine learning context by Cuturi and Vert [2005]. Definition 26. Let (X, +) be a semigroup. A function ϕ : X → R is called pd (in the semigroup sense) if k : X × X → R, defined as k(x, y) = ϕ(x + y), is a pd kernel. Likewise, ϕ is called nd if k is an nd kernel. Accordingly, these are called semigroup kernels. 7.3 Jensen-Shannon and Tsallis kernels. The basic result that allows deriving pd kernels based on
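
    The definitions in this record arrive garbled by extraction; a minimal LaTeX sketch of what they state is given below. The semigroup (pd/nd) definition and the section heading follow the record, while the displayed Jensen-Shannon kernel is an assumed illustration of the machinery and should be checked against the source document.

        \documentclass{article}
        \usepackage{amsmath,amssymb}
        \begin{document}
        % Semigroup kernels (cf. Definition 26 in the record above).
        Let $(\mathcal{X},+)$ be a semigroup. A function $\varphi:\mathcal{X}\to\mathbb{R}$
        is \emph{positive definite} (pd) in the semigroup sense if
        \[
          k:\mathcal{X}\times\mathcal{X}\to\mathbb{R},\qquad k(x,y)=\varphi(x+y),
        \]
        is a pd kernel; $\varphi$ is \emph{negative definite} (nd) if $k$ is an nd kernel.
        Kernels of this form are called \emph{semigroup kernels}.

        % Assumed illustration: the Jensen--Shannon kernel on probability distributions.
        For distributions $p,q$ with Shannon entropy $H$,
        \[
          k_{\mathrm{JS}}(p,q)=\ln 2-\mathrm{JS}(p,q),\qquad
          \mathrm{JS}(p,q)=H\!\Big(\tfrac{p+q}{2}\Big)-\tfrac12 H(p)-\tfrac12 H(q),
        \]
        is positive definite; Tsallis kernels replace $H$ by the nonextensive Tsallis entropy.
        \end{document}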

  6. Approximate kernel competitive learning.

    Science.gov (United States)

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

    Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large-scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be calculated and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large-scale data sets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL) method, which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling would work for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) method based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates approximate kernel competitive learning for large-scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision than related approximate clustering approaches.

  7. Optimized Kernel Entropy Components.

    Science.gov (United States)

    Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau

    2016-02-25

    This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to a sorting of the importance of kernel eigenvectors by entropy instead of variance, as in the kernel principal components analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by means of compacting the information in very few features (often in just one or two). The proposed method produces features which have higher expressive power. In particular, it is based on the independent component analysis framework, and introduces an extra rotation to the eigen decomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both the methods is the selection of the kernel parameter, since it critically affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. The results of both the methods are illustrated in different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, the most successful rule for estimating the kernel parameter is based on maximum likelihood, and OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
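
    To make the entropy-based ordering concrete, here is a minimal sketch of the KECA step that the abstract starts from: Gaussian-kernel eigenpairs are ranked by their contribution to the Rényi entropy estimate rather than by eigenvalue alone. Function names and the toy data are illustrative, and the OKECA rotation optimized by gradient ascent is not included.

        import numpy as np

        def gaussian_kernel(X, sigma):
            # Pairwise squared distances -> Gaussian (RBF) kernel matrix.
            sq = np.sum(X ** 2, axis=1)
            d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
            return np.exp(-d2 / (2.0 * sigma ** 2))

        def keca_scores(X, sigma, n_components):
            # Rank kernel eigenpairs by their contribution to the Renyi entropy
            # estimate V = 1'K1 / N^2 = sum_i lam_i (1'e_i)^2 / N^2, rather than
            # by eigenvalue alone, and return the corresponding training scores.
            K = gaussian_kernel(X, sigma)
            lam, E = np.linalg.eigh(K)                      # ascending eigenvalues
            contrib = lam * (E.T @ np.ones(len(X))) ** 2    # entropy term per eigenpair
            order = np.argsort(contrib)[::-1][:n_components]
            return E[:, order] * np.sqrt(np.clip(lam[order], 0.0, None))

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            X = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(4, 1, (50, 3))])
            print(keca_scores(X, sigma=2.0, n_components=2).shape)  # (100, 2)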

  8. Regularization in kernel learning

    CERN Document Server

    Mendelson, Shahar; 10.1214/09-AOS728

    2010-01-01

    Under mild assumptions on the kernel, we obtain the best known error rates in a regularized learning scenario taking place in the corresponding reproducing kernel Hilbert space (RKHS). The main novelty in the analysis is a proof that one can use a regularization term that grows significantly slower than the standard quadratic growth in the RKHS norm.

  9. Iterative software kernels

    Energy Technology Data Exchange (ETDEWEB)

    Duff, I.

    1994-12-31

    This workshop focuses on kernels for iterative software packages. Specifically, the three speakers discuss various aspects of sparse BLAS kernels. Their topics are: `Current status of user level sparse BLAS`; `Current status of the sparse BLAS toolkit`; and `Adding matrix-matrix and matrix-matrix-matrix multiply to the sparse BLAS toolkit`.

  10. Kernel Affine Projection Algorithms

    Directory of Open Access Journals (Sweden)

    José C. Príncipe

    2008-05-01

    Full Text Available The combination of the famed kernel trick and affine projection algorithms (APAs) yields powerful nonlinear extensions, named collectively here, KAPA. This paper is a follow-up study of the recently introduced kernel least-mean-square algorithm (KLMS). KAPA inherits the simplicity and online nature of KLMS while reducing its gradient noise, boosting performance. More interestingly, it provides a unifying model for several neural network techniques, including kernel least-mean-square algorithms, kernel adaline, sliding-window kernel recursive-least squares (KRLS), and regularization networks. Therefore, many insights can be gained into the basic relations among them and the tradeoff between computation complexity and performance. Several simulations illustrate its wide applicability.

  11. Kernel Affine Projection Algorithms

    Science.gov (United States)

    Liu, Weifeng; Príncipe, José C.

    2008-12-01

    The combination of the famed kernel trick and affine projection algorithms (APAs) yields powerful nonlinear extensions, named collectively here, KAPA. This paper is a follow-up study of the recently introduced kernel least-mean-square algorithm (KLMS). KAPA inherits the simplicity and online nature of KLMS while reducing its gradient noise, boosting performance. More interestingly, it provides a unifying model for several neural network techniques, including kernel least-mean-square algorithms, kernel adaline, sliding-window kernel recursive-least squares (KRLS), and regularization networks. Therefore, many insights can be gained into the basic relations among them and the tradeoff between computation complexity and performance. Several simulations illustrate its wide applicability.
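
    As background for the follow-up relationship described above, a minimal sketch of the KLMS building block that KAPA extends: each incoming sample becomes a kernel center whose coefficient is the step size times the instantaneous prediction error. Class and variable names are illustrative, not the authors' implementation, and no sparsification of the center dictionary is attempted.

        import numpy as np

        def rbf(x, y, sigma=1.0):
            return np.exp(-np.sum((x - y) ** 2) / (2.0 * sigma ** 2))

        class KLMS:
            # Kernel least-mean-square: an online, kernelized LMS filter. Each new
            # sample is stored as a center; its coefficient is step * error, i.e.
            # stochastic gradient descent in the RKHS induced by the kernel.
            def __init__(self, step=0.2, sigma=1.0):
                self.step, self.sigma = step, sigma
                self.centers, self.coeffs = [], []

            def predict(self, x):
                return sum(a * rbf(c, x, self.sigma)
                           for a, c in zip(self.coeffs, self.centers))

            def update(self, x, d):
                err = d - self.predict(x)
                self.centers.append(x)
                self.coeffs.append(self.step * err)
                return err

        if __name__ == "__main__":
            rng = np.random.default_rng(1)
            f = KLMS(step=0.3, sigma=0.5)
            for _ in range(500):
                x = rng.uniform(-1, 1, size=2)
                f.update(x, np.sin(3 * x[0]) * x[1] + 0.01 * rng.normal())
            print(f.predict(np.array([0.2, -0.5])), np.sin(0.6) * -0.5)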

  12. Multivariable Christoffel-Darboux Kernels and Characteristic Polynomials of Random Hermitian Matrices

    Directory of Open Access Journals (Sweden)

    Hjalmar Rosengren

    2006-12-01

    Full Text Available We study multivariable Christoffel-Darboux kernels, which may be viewed as reproducing kernels for antisymmetric orthogonal polynomials, and also as correlation functions for products of characteristic polynomials of random Hermitian matrices. Using their interpretation as reproducing kernels, we obtain simple proofs of Pfaffian and determinant formulas, as well as Schur polynomial expansions, for such kernels. In subsequent work, these results are applied in combinatorics (enumeration of marked shifted tableaux and number theory (representation of integers as sums of squares.

  13. Kernels in circulant digraphs

    Directory of Open Access Journals (Sweden)

    R. Lakshmi

    2014-06-01

    Full Text Available A kernel $J$ of a digraph $D$ is an independent set of vertices of $D$ such that for every vertex $w \in V(D) \setminus J$ there exists an arc from $w$ to a vertex in $J.$ In this paper, among other results, a characterization of $2$-regular circulant digraphs having a kernel is obtained. This characterization is a partial solution to the following problem: Characterize circulant digraphs which have kernels; it appeared in the book Digraphs: Theory, Algorithms and Applications, Second Edition, Springer-Verlag, 2009, by J. Bang-Jensen and G. Gutin.
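
    A small brute-force sketch of the kernel notion used above, an independent and absorbing vertex set, checked on a 2-regular circulant digraph. It does not reproduce the paper's characterization; the jump set in the example is an arbitrary choice.

        from itertools import combinations

        def circulant_digraph(n, jumps):
            # Arcs of C_n(jumps): i -> (i + s) mod n for every jump s.
            return {(i, (i + s) % n) for i in range(n) for s in jumps}

        def is_kernel(n, arcs, J):
            J = set(J)
            independent = all((u, v) not in arcs for u in J for v in J)
            absorbing = all(any((w, v) in arcs for v in J)
                            for w in range(n) if w not in J)
            return independent and absorbing

        def find_kernels(n, jumps):
            arcs = circulant_digraph(n, jumps)
            return [J for r in range(1, n + 1)
                    for J in combinations(range(n), r)
                    if is_kernel(n, arcs, J)]

        if __name__ == "__main__":
            # All kernels of the 2-regular circulant digraph C_6(1, 3).
            print(find_kernels(6, (1, 3)))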

  14. Kernels for structured data

    CERN Document Server

    Gärtner, Thomas

    2009-01-01

    This book provides a unique treatment of an important area of machine learning and answers the question of how kernel methods can be applied to structured data. Kernel methods are a class of state-of-the-art learning algorithms that exhibit excellent learning results in several application domains. Originally, kernel methods were developed with data in mind that can easily be embedded in a Euclidean vector space. Much real-world data does not have this property but is inherently structured. An example of such data, often consulted in the book, is the (2D) graph structure of molecules formed by

  15. Locally linear approximation for Kernel methods : the Railway Kernel

    OpenAIRE

    Muñoz, Alberto; González, Javier

    2008-01-01

    In this paper we present a new kernel, the Railway Kernel, that works properly for general (nonlinear) classification problems, with the interesting property that it acts locally as a linear kernel. In this way, we avoid potential problems due to the use of a general purpose kernel, like the RBF kernel, such as the high dimension of the induced feature space. As a consequence, following our methodology the number of support vectors is much lower and, therefore, the generalization capability of the pr...

  16. Linux Kernel in a Nutshell

    CERN Document Server

    Kroah-Hartman, Greg

    2009-01-01

    Linux Kernel in a Nutshell covers the entire range of kernel tasks, starting with downloading the source and making sure that the kernel is in sync with the versions of the tools you need. In addition to configuration and installation steps, the book offers reference material and discussions of related topics such as control of kernel options at runtime.

  17. Data-variant kernel analysis

    CERN Document Server

    Motai, Yuichi

    2015-01-01

    Describes and discusses the variants of kernel analysis methods for data types that have been intensely studied in recent years This book covers kernel analysis topics ranging from the fundamental theory of kernel functions to its applications. The book surveys the current status, popular trends, and developments in kernel analysis studies. The author discusses multiple kernel learning algorithms and how to choose the appropriate kernels during the learning phase. Data-Variant Kernel Analysis is a new pattern analysis framework for different types of data configurations. The chapters include

  18. Mixture Density Mercer Kernels

    Data.gov (United States)

    National Aeronautics and Space Administration — We present a method of generating Mercer Kernels from an ensemble of probabilistic mixture models, where each mixture model is generated from a Bayesian mixture...

  19. Remarks on kernel Bayes' rule

    OpenAIRE

    Johno, Hisashi; Nakamoto, Kazunori; Saigo, Tatsuhiko

    2015-01-01

    Kernel Bayes' rule has been proposed as a nonparametric kernel-based method to realize Bayesian inference in reproducing kernel Hilbert spaces. However, we demonstrate both theoretically and experimentally that the prediction result by kernel Bayes' rule is in some cases unnatural. We consider that this phenomenon is in part due to the fact that the assumptions in kernel Bayes' rule do not hold in general.

  20. Linearized Kernel Dictionary Learning

    Science.gov (United States)

    Golts, Alona; Elad, Michael

    2016-06-01

    In this paper we present a new approach of incorporating kernels into dictionary learning. The kernel K-SVD algorithm (KKSVD), which has been introduced recently, shows an improvement in classification performance, with relation to its linear counterpart K-SVD. However, this algorithm requires the storage and handling of a very large kernel matrix, which leads to high computational cost, while also limiting its use to setups with a small number of training examples. We address these problems by combining two ideas: first we approximate the kernel matrix using a cleverly sampled subset of its columns using the Nyström method; secondly, as we wish to avoid using this matrix altogether, we decompose it by SVD to form new "virtual samples," on which any linear dictionary learning can be employed. Our method, termed "Linearized Kernel Dictionary Learning" (LKDL), can be seamlessly applied as a pre-processing stage on top of any efficient off-the-shelf dictionary learning scheme, effectively "kernelizing" it. We demonstrate the effectiveness of our method on several tasks of both supervised and unsupervised classification and show the efficiency of the proposed scheme, its easy integration and performance boosting properties.
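
    A minimal sketch of the Nyström "virtual sample" step the abstract describes: approximate the kernel matrix from a sampled subset of its columns and factor the approximation so that ordinary linear dictionary learning can run on the resulting features. Uniform column sampling stands in for the paper's more careful sampling scheme, and all names are illustrative.

        import numpy as np

        def rbf_kernel(A, B, sigma=1.0):
            d2 = np.sum(A ** 2, 1)[:, None] + np.sum(B ** 2, 1)[None, :] - 2.0 * A @ B.T
            return np.exp(-d2 / (2.0 * sigma ** 2))

        def nystroem_virtual_samples(X, n_landmarks, sigma=1.0, seed=0):
            # Return F with F @ F.T = C W^+ C^T ~ K, the Nystroem approximation
            # built from n_landmarks sampled columns of the kernel matrix.
            rng = np.random.default_rng(seed)
            idx = rng.choice(len(X), size=n_landmarks, replace=False)
            C = rbf_kernel(X, X[idx], sigma)      # n x m sampled columns
            W = C[idx, :]                         # m x m landmark block
            lam, U = np.linalg.eigh(W)
            keep = lam > 1e-10
            return C @ U[:, keep] / np.sqrt(lam[keep])   # the "virtual samples"

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            X = rng.normal(size=(300, 5))
            F = nystroem_virtual_samples(X, n_landmarks=50, sigma=1.5)
            K_true = rbf_kernel(X, X, 1.5)
            print(F.shape, float(np.linalg.norm(K_true - F @ F.T) / np.linalg.norm(K_true)))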

  1. Optoacoustic inversion via Volterra kernel reconstruction

    CERN Document Server

    Melchert, O; Roth, B

    2016-01-01

    In this letter we address the numeric inversion of optoacoustic signals to initial stress profiles. Therefore we put under scrutiny the optoacoustic kernel reconstruction problem in the paraxial approximation of the underlying wave-equation. We apply a Fourier-series expansion of the optoacoustic Volterra kernel and obtain the respective expansion coefficients for a given "apparative" setup by performing a gauge procedure using synthetic input data. The resulting effective kernel is subsequently used to solve the optoacoustic source reconstruction problem for general signals. We verify the validity of the proposed inversion protocol for synthetic signals and explore the feasibility of our approach to also account for the diffraction transformation of signals beyond the paraxial approximation.

  2. Higher Order Kernels and Locally Affine LDDMM Registration

    CERN Document Server

    Sommer, Stefan; Darkner, Sune; Pennec, Xavier

    2011-01-01

    To achieve sparse description that allows intuitive analysis, we aim to represent deformation with a basis containing interpretable elements, and we wish to use elements that have the description capacity to represent the deformation compactly. We accomplish this by introducing higher order kernels in the LDDMM registration framework. The kernels allow local description of affine transformations and subsequent compact description of non-translational movement and of the entire non-rigid deformation. This is obtained with a representation that contains directly interpretable information from both mathematical and modeling perspectives. We develop the mathematical construction behind the higher order kernels, we show the implications for sparse image registration and deformation description, and we provide examples of how the capacity of the kernels enables registration with a very low number of parameters. The capacity and interpretability of the kernels lead to natural modeling of articulated movement, and th...

  3. Contingent kernel density estimation.

    Directory of Open Access Journals (Sweden)

    Scott Fortmann-Roe

    Full Text Available Kernel density estimation is a widely used method for estimating a distribution based on a sample of points drawn from that distribution. Generally, in practice some form of error contaminates the sample of observed points. Such error can be the result of imprecise measurements or observation bias. Often this error is negligible and may be disregarded in analysis. In cases where the error is non-negligible, estimation methods should be adjusted to reduce resulting bias. Several modifications of kernel density estimation have been developed to address specific forms of errors. One form of error that has not yet been addressed is the case where observations are nominally placed at the centers of areas from which the points are assumed to have been drawn, where these areas are of varying sizes. In this scenario, the bias arises because the size of the error can vary among points and some subset of points can be known to have smaller error than another subset or the form of the error may change among points. This paper proposes a "contingent kernel density estimation" technique to address this form of error. This new technique adjusts the standard kernel on a point-by-point basis in an adaptive response to changing structure and magnitude of error. In this paper, equations for our contingent kernel technique are derived, the technique is validated using numerical simulations, and an example using the geographic locations of social networking users is worked to demonstrate the utility of the method.
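
    A toy sketch of the per-point adjustment the abstract argues for: each observation, nominally placed at the center of an area of known size, receives its own bandwidth. The particular widening rule used here is an assumption for illustration only, not the paper's contingent kernel.

        import numpy as np

        def contingent_kde(grid, centers, radii, base_bw=0.3):
            # 1-D Gaussian KDE with a per-observation bandwidth that widens with
            # the radius of the area each observation was drawn from (assumed rule).
            bw = np.sqrt(base_bw ** 2 + (radii / 2.0) ** 2)
            z = (grid[:, None] - centers[None, :]) / bw[None, :]
            kernels = np.exp(-0.5 * z ** 2) / (np.sqrt(2 * np.pi) * bw[None, :])
            return kernels.mean(axis=1)

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            centers = rng.normal(0.0, 1.0, size=200)
            radii = rng.uniform(0.0, 1.0, size=200)       # varying area sizes
            grid = np.linspace(-5, 5, 201)
            dens = contingent_kde(grid, centers, radii)
            print(float(np.sum(dens) * (grid[1] - grid[0])))   # roughly 1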

  4. Multidimensional kernel estimation

    CERN Document Server

    Milosevic, Vukasin

    2015-01-01

    Kernel estimation is one of the non-parametric methods used for estimation of probability density functions. Its first ROOT implementation, as part of the RooFit package, has one major issue: its evaluation time is extremely slow, making it almost unusable. The goal of this project was to create a new class (TKNDTree) which will follow the original idea of kernel estimation, greatly improve the evaluation time (using the TKTree class for storing the data and creating different user-controlled modes of evaluation) and add the interpolation option, for the 2D case, with the help of the new Delaunnay2D class.

  5. for palm kernel oil extraction

    African Journals Online (AJOL)

    user

    (OEE), ... designed (CRD) experimental approach with 4 factor levels and 2 replications was used to determine the effect of kernel .... palm kernels in either a continuous or batch mode ... are fed through the hopper; the screw conveys, crushes, ...

  6. Kernel bundle EPDiff

    DEFF Research Database (Denmark)

    Sommer, Stefan Horst; Lauze, Francois Bernard; Nielsen, Mads

    2011-01-01

    In the LDDMM framework, optimal warps for image registration are found as end-points of critical paths for an energy functional, and the EPDiff equations describe the evolution along such paths. The Large Deformation Diffeomorphic Kernel Bundle Mapping (LDDKBM) extension of LDDMM allows scale space...

  7. Multivariate realised kernels

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Hansen, Peter Reinhard; Lunde, Asger

    2011-01-01

    We propose a multivariate realised kernel to estimate the ex-post covariation of log-prices. We show this new consistent estimator is guaranteed to be positive semi-definite and is robust to measurement error of certain types and can also handle non-synchronous trading. It is the first estimator...

  8. Adaptive metric kernel regression

    DEFF Research Database (Denmark)

    Goutte, Cyril; Larsen, Jan

    2000-01-01

    regression by minimising a cross-validation estimate of the generalisation error. This allows to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms...

  9. Adaptive Metric Kernel Regression

    DEFF Research Database (Denmark)

    Goutte, Cyril; Larsen, Jan

    1998-01-01

    by minimising a cross-validation estimate of the generalisation error. This allows one to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms the standard...

  10. A framework for optimal kernel-based manifold embedding of medical image data.

    Science.gov (United States)

    Zimmer, Veronika A; Lekadir, Karim; Hoogendoorn, Corné; Frangi, Alejandro F; Piella, Gemma

    2015-04-01

    Kernel-based dimensionality reduction is a widely used technique in medical image analysis. To fully unravel the underlying nonlinear manifold the selection of an adequate kernel function and of its free parameters is critical. In practice, however, the kernel function is generally chosen as Gaussian or polynomial and such standard kernels might not always be optimal for a given image dataset or application. In this paper, we present a study on the effect of the kernel functions in nonlinear manifold embedding of medical image data. To this end, we first carry out a literature review on existing advanced kernels developed in the statistics, machine learning, and signal processing communities. In addition, we implement kernel-based formulations of well-known nonlinear dimensional reduction techniques such as Isomap and Locally Linear Embedding, thus obtaining a unified framework for manifold embedding using kernels. Subsequently, we present a method to automatically choose a kernel function and its associated parameters from a pool of kernel candidates, with the aim to generate the most optimal manifold embeddings. Furthermore, we show how the calculated selection measures can be extended to take into account the spatial relationships in images, or used to combine several kernels to further improve the embedding results. Experiments are then carried out on various synthetic and phantom datasets for numerical assessment of the methods. Furthermore, the workflow is applied to real data that include brain manifolds and multispectral images to demonstrate the importance of the kernel selection in the analysis of high-dimensional medical images.

  11. Viscosity kernel of molecular fluids

    DEFF Research Database (Denmark)

    Puscasu, Ruslan; Todd, Billy; Daivis, Peter

    2010-01-01

    , temperature, and chain length dependencies of the reciprocal and real-space viscosity kernels are presented. We find that the density has a major effect on the shape of the kernel. The temperature range and chain lengths considered here have by contrast less impact on the overall normalized shape. Functional...... forms that fit the wave-vector-dependent kernel data over a large density and wave-vector range have also been tested. Finally, a structural normalization of the kernels in physical space is considered. Overall, the real-space viscosity kernel has a width of roughly 3–6 atomic diameters, which means...

  12. Multiple Kernel Point Set Registration.

    Science.gov (United States)

    Nguyen, Thanh Minh; Wu, Q M Jonathan

    2016-06-01

    The finite Gaussian mixture model with kernel correlation is a flexible tool that has recently received attention for point set registration. While there are many algorithms for point set registration presented in the literature, an important issue arising from these studies concerns the mapping of data with nonlinear relationships and the ability to select a suitable kernel. Kernel selection is crucial for effective point set registration. We focus here on multiple kernel point set registration. We make several contributions in this paper. First, each observation is modeled using the Student's t-distribution, which is heavily tailed and more robust than the Gaussian distribution. Second, by automatically adjusting the kernel weights, the proposed method allows us to prune the ineffective kernels: after parameter learning, the kernel saliencies of the irrelevant kernels go to zero. This makes the choice of kernels less crucial and makes it easy to include other kinds of kernels. Finally, we show empirically that our model outperforms state-of-the-art methods recently proposed in the literature.

  13. Testing Monotonicity of Pricing Kernels

    OpenAIRE

    Timofeev, Roman

    2007-01-01

    In this master thesis a mechanism to test monotonicity of empirical pricing kernels (EPK) is presented. By testing monotonicity of the pricing kernel we can determine whether the utility function is concave or not. A strictly decreasing pricing kernel corresponds to a concave utility function, while a non-decreasing EPK means that the utility function contains some non-concave regions. Risk averse behavior is usually described by a concave utility function and considered to be a cornerstone of classical behavioral ...

  14. 7 CFR 51.1415 - Inedible kernels.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Inedible kernels. 51.1415 Section 51.1415 Agriculture... Standards for Grades of Pecans in the Shell 1 Definitions § 51.1415 Inedible kernels. Inedible kernels means that the kernel or pieces of kernels are rancid, moldy, decayed, injured by insects or...

  15. 7 CFR 981.8 - Inedible kernel.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.8 Section 981.8 Agriculture... Regulating Handling Definitions § 981.8 Inedible kernel. Inedible kernel means a kernel, piece, or particle of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel,...

  16. 7 CFR 981.7 - Edible kernel.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Edible kernel. 981.7 Section 981.7 Agriculture... Regulating Handling Definitions § 981.7 Edible kernel. Edible kernel means a kernel, piece, or particle of almond kernel that is not inedible....

  17. 7 CFR 981.408 - Inedible kernel.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.408 Section 981.408 Agriculture... Administrative Rules and Regulations § 981.408 Inedible kernel. Pursuant to § 981.8, the definition of inedible kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored...

  18. Clustering via Kernel Decomposition

    DEFF Research Database (Denmark)

    Have, Anna Szynkowiak; Girolami, Mark A.; Larsen, Jan

    2006-01-01

    Methods for spectral clustering have been proposed recently which rely on the eigenvalue decomposition of an affinity matrix. In this work it is proposed that the affinity matrix is created based on the elements of a non-parametric density estimator. This matrix is then decomposed to obtain...... posterior probabilities of class membership using an appropriate form of nonnegative matrix factorization. The troublesome selection of hyperparameters such as kernel width and number of clusters can be obtained using standard cross-validation methods as is demonstrated on a number of diverse data sets....

  19. Kernel Phase and Kernel Amplitude in Fizeau Imaging

    CERN Document Server

    Pope, Benjamin J S

    2016-01-01

    Kernel phase interferometry is an approach to high angular resolution imaging which enhances the performance of speckle imaging with adaptive optics. Kernel phases are self-calibrating observables that generalize the idea of closure phases from non-redundant arrays to telescopes with arbitrarily shaped pupils, by considering a matrix-based approximation to the diffraction problem. In this paper I discuss the recent history of kernel phase, in particular in the matrix-based study of sparse arrays, and propose an analogous generalization of the closure amplitude to kernel amplitudes. This new approach can self-calibrate throughput and scintillation errors in optical imaging, which extends the power of kernel phase-like methods to symmetric targets where amplitude and not phase calibration can be a significant limitation, and will enable further developments in high angular resolution astronomy.

  20. Global Polynomial Kernel Hazard Estimation

    DEFF Research Database (Denmark)

    Hiabu, Munir; Miranda, Maria Dolores Martínez; Nielsen, Jens Perch;

    2015-01-01

    This paper introduces a new bias reducing method for kernel hazard estimation. The method is called global polynomial adjustment (GPA). It is a global correction which is applicable to any kernel hazard estimator. The estimator works well from a theoretical point of view as it asymptotically...

  1. Global Polynomial Kernel Hazard Estimation

    DEFF Research Database (Denmark)

    Hiabu, Munir; Miranda, Maria Dolores Martínez; Nielsen, Jens Perch

    2015-01-01

    This paper introduces a new bias reducing method for kernel hazard estimation. The method is called global polynomial adjustment (GPA). It is a global correction which is applicable to any kernel hazard estimator. The estimator works well from a theoretical point of view as it asymptotically redu...

  2. Representative sets and irrelevant vertices: New tools for kernelization

    CERN Document Server

    Kratsch, Stefan

    2011-01-01

    Recent work of the present authors provided a polynomial kernel for Odd Cycle Transversal by introducing matroid-based tools into kernelization. In the current work we further establish the usefulness of matroid theory to kernelization by showing applications of a result on representative sets due to Lovász (Combinatorial Surveys 1977) and Marx (TCS 2009). We give two types of applications: 1. Direct applications of the representative objects idea. In this direction, we give a polynomial kernel for Almost 2-SAT by reducing the problem to a cut problem with pairs of vertices as sinks, and subsequently reducing the set of pairs to a representative subset of bounded size. This implies polynomial kernels for several other problems, including Vertex Cover parameterized by the size of the LP gap, and the RHorn-Backdoor Deletion Set problem from practical SAT solving. We also get a polynomial kernel for Multiway Cut with deletable terminals, by producing a representative set of vertices, of bounded size, which is ...

  3. Graph kernels between point clouds

    CERN Document Server

    Bach, Francis

    2007-01-01

    Point clouds are sets of points in two or three dimensions. Most kernel methods for learning on sets of points have not yet dealt with the specific geometrical invariances and practical constraints associated with point clouds in computer vision and graphics. In this paper, we present extensions of graph kernels for point clouds, which allow the use of kernel methods for such objects as shapes, line drawings, or any three-dimensional point clouds. In order to design rich and numerically efficient kernels with as few free parameters as possible, we use kernels between covariance matrices and their factorizations on graphical models. We derive polynomial time dynamic programming recursions and present applications to recognition of handwritten digits and Chinese characters from few training examples.

  4. Kernel Generalized Noise Clustering Algorithm

    Institute of Scientific and Technical Information of China (English)

    WU Xiao-hong; ZHOU Jian-jiang

    2007-01-01

    To deal with the nonlinear separable problem, the generalized noise clustering (GNC) algorithm is extended to a kernel generalized noise clustering (KGNC) model. Different from the fuzzy c-means (FCM) model and the GNC model, which are based on Euclidean distance, the presented model is based on kernel-induced distance obtained via the kernel method. With the kernel method the input data are nonlinearly and implicitly mapped into a high-dimensional feature space, where the nonlinear pattern appears linear and the GNC algorithm is performed. It is unnecessary to calculate in the high-dimensional feature space because the kernel function can do it just in input space. The effectiveness of the proposed algorithm is verified by experiments on three data sets. It is concluded that the KGNC algorithm has better clustering accuracy than FCM and GNC in clustering data sets containing noisy data.

  5. Mitigation of aflatoxin contamination in maize kernels is related to the metabolic alternation of reactive oxygen and nitrogen species by relative humidity

    Science.gov (United States)

    Environmental factors have been shown to be linked to exacerbated infection of maize kernels by Aspergillus flavus and subsequent aflatoxin contamination. Kernel resistance to aflatoxin contamination is associated with kernel water content and relative humidity during in vitro assays examining aflat...

  6. Robotic intelligence kernel

    Science.gov (United States)

    Bruemmer, David J.

    2009-11-17

    A robot platform includes perceptors, locomotors, and a system controller. The system controller executes a robot intelligence kernel (RIK) that includes a multi-level architecture and a dynamic autonomy structure. The multi-level architecture includes a robot behavior level for defining robot behaviors, that incorporate robot attributes and a cognitive level for defining conduct modules that blend an adaptive interaction between predefined decision functions and the robot behaviors. The dynamic autonomy structure is configured for modifying a transaction capacity between an operator intervention and a robot initiative and may include multiple levels with at least a teleoperation mode configured to maximize the operator intervention and minimize the robot initiative and an autonomous mode configured to minimize the operator intervention and maximize the robot initiative. Within the RIK at least the cognitive level includes the dynamic autonomy structure.

  7. Flexible kernel memory.

    Science.gov (United States)

    Nowicki, Dimitri; Siegelmann, Hava

    2010-06-11

    This paper introduces a new model of associative memory, capable of both binary and continuous-valued inputs. Based on kernel theory, the memory model is on one hand a generalization of Radial Basis Function networks and, on the other, is in feature space, analogous to a Hopfield network. Attractors can be added, deleted, and updated on-line simply, without harming existing memories, and the number of attractors is independent of input dimension. Input vectors do not have to adhere to a fixed or bounded dimensionality; they can increase and decrease it without relearning previous memories. A memory consolidation process enables the network to generalize concepts and form clusters of input data, which outperforms many unsupervised clustering techniques; this process is demonstrated on handwritten digits from MNIST. Another process, reminiscent of memory reconsolidation is introduced, in which existing memories are refreshed and tuned with new inputs; this process is demonstrated on series of morphed faces.

  8. Flexible kernel memory.

    Directory of Open Access Journals (Sweden)

    Dimitri Nowicki

    Full Text Available This paper introduces a new model of associative memory, capable of both binary and continuous-valued inputs. Based on kernel theory, the memory model is on one hand a generalization of Radial Basis Function networks and, on the other, is in feature space, analogous to a Hopfield network. Attractors can be added, deleted, and updated on-line simply, without harming existing memories, and the number of attractors is independent of input dimension. Input vectors do not have to adhere to a fixed or bounded dimensionality; they can increase and decrease it without relearning previous memories. A memory consolidation process enables the network to generalize concepts and form clusters of input data, which outperforms many unsupervised clustering techniques; this process is demonstrated on handwritten digits from MNIST. Another process, reminiscent of memory reconsolidation is introduced, in which existing memories are refreshed and tuned with new inputs; this process is demonstrated on series of morphed faces.

  9. Mixture Density Mercer Kernels: A Method to Learn Kernels

    Data.gov (United States)

    National Aeronautics and Space Administration — This paper presents a method of generating Mercer Kernels from an ensemble of probabilistic mixture models, where each mixture model is generated from a Bayesian...

  10. 7 CFR 981.9 - Kernel weight.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Kernel weight. 981.9 Section 981.9 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Regulating Handling Definitions § 981.9 Kernel weight. Kernel weight means the weight of kernels,...

  11. (Pre)kernel catchers for cooperative games

    NARCIS (Netherlands)

    Chang, Chih; Driessen, Theo

    1995-01-01

    The paper provides a new (pre)kernel catcher in that the relevant set always contains the (pre)kernel. This new (pre)kernel catcher gives rise to a better lower bound ɛ*** such that the kernel is included in strong ɛ-cores for all real numbers ɛ not smaller than the relevant bound ɛ***.

  12. 7 CFR 51.2295 - Half kernel.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Half kernel. 51.2295 Section 51.2295 Agriculture... Standards for Shelled English Walnuts (Juglans Regia) Definitions § 51.2295 Half kernel. Half kernel means the separated half of a kernel with not more than one-eighth broken off....

  13. Straight-chain halocarbon forming fluids for TRISO fuel kernel production - Tests with yttria-stabilized zirconia microspheres

    Science.gov (United States)

    Baker, M. P.; King, J. C.; Gorman, B. P.; Braley, J. C.

    2015-03-01

    Current methods of TRISO fuel kernel production in the United States use a sol-gel process with trichloroethylene (TCE) as the forming fluid. After contact with radioactive materials, the spent TCE becomes a mixed hazardous waste, and high costs are associated with its recycling or disposal. Reducing or eliminating this mixed waste stream would not only benefit the environment, but would also enhance the economics of kernel production. Previous research yielded three candidates for testing as alternatives to TCE: 1-bromotetradecane, 1-chlorooctadecane, and 1-iodododecane. This study considers the production of yttria-stabilized zirconia (YSZ) kernels in silicone oil and the three chosen alternative formation fluids, with subsequent characterization of the produced kernels and used forming fluid. Kernels formed in silicone oil and bromotetradecane were comparable to those produced by previous kernel production efforts, while those produced in chlorooctadecane and iodododecane experienced gelation issues leading to poor kernel formation and geometry.

  14. A kernel version of spatial factor analysis

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2009-01-01

    . Schölkopf et al. introduce kernel PCA. Shawe-Taylor and Cristianini is an excellent reference for kernel methods in general. Bishop and Press et al. describe kernel methods among many other subjects. Nielsen and Canty use kernel PCA to detect change in univariate airborne digital camera images. The kernel...... version of PCA handles nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. In this paper we shall apply kernel versions of PCA, maximum autocorrelation factor (MAF) analysis...
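
    For reference, a minimal Gaussian-kernel PCA sketch of the kind this record builds on: the Gram matrix is double-centered in feature space, eigendecomposed, and the leading training scores are returned. The toy data and names are illustrative.

        import numpy as np

        def kernel_pca(X, sigma=1.0, n_components=2):
            # Gaussian kernel matrix, double centering in feature space, then the
            # leading eigenpairs give the nonlinear principal component scores.
            n = len(X)
            sq = np.sum(X ** 2, axis=1)
            K = np.exp(-(sq[:, None] + sq[None, :] - 2 * X @ X.T) / (2 * sigma ** 2))
            J = np.eye(n) - np.ones((n, n)) / n
            lam, E = np.linalg.eigh(J @ K @ J)
            order = np.argsort(lam)[::-1][:n_components]
            return E[:, order] * np.sqrt(np.clip(lam[order], 0.0, None))

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            t = rng.uniform(0, 2 * np.pi, 200)
            X = np.c_[np.cos(t), np.sin(t)] + 0.05 * rng.normal(size=(200, 2))
            print(kernel_pca(X, sigma=0.5).shape)   # (200, 2)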

  15. Kernel model-based diagnosis

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    The methods for computing the kernel consistency-based diagnoses and the kernel abductive diagnoses are only suited for the situation where part of the fault behavioral modes of the components are known. The characterization of kernel model-based diagnosis based on the general causal theory is proposed, which can break through the limitation of the above methods when all behavioral modes of each component are known. Using this method, when observation subsets deduced logically are respectively assigned to the empty or the whole observation set, the kernel consistency-based diagnoses and the kernel abductive diagnoses can deal with all situations. The direct relationship between this diagnostic procedure and the prime implicants/implicates is proved, thus linking the theoretical result with implementation.

  16. Notes on the gamma kernel

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole E.

    The density function of the gamma distribution is used as shift kernel in Brownian semistationary processes modelling the timewise behaviour of the velocity in turbulent regimes. This report presents exact and asymptotic properties of the second order structure function under such a model......, and relates these to results of von Kármán and Howarth. But first it is shown that the gamma kernel is interpretable as a Green's function.

  17. Kernel Rootkits Implement and Detection

    Institute of Scientific and Technical Information of China (English)

    LI Xianghe; ZHANG Liancheng; LI Shuo

    2006-01-01

    Rootkits, which unnoticeably reside in your computer, stealthily carry on remote control and software eavesdropping, and are a great threat to network and computer security. It's time to acquaint ourselves with their implementation and detection. This article pays more attention to kernel rootkits, because they are more difficult to compose and to identify than userland rootkits. The latest technologies used to write and detect kernel rootkits, along with their advantages and disadvantages, are presented in this article.

  18. Recurrent kernel machines: computing with infinite echo state networks.

    Science.gov (United States)

    Hermans, Michiel; Schrauwen, Benjamin

    2012-01-01

    Echo state networks (ESNs) are large, random recurrent neural networks with a single trained linear readout layer. Despite the untrained nature of the recurrent weights, they are capable of performing universal computations on temporal input data, which makes them interesting for both theoretical research and practical applications. The key to their success lies in the fact that the network computes a broad set of nonlinear, spatiotemporal mappings of the input data, on which linear regression or classification can easily be performed. One could consider the reservoir as a spatiotemporal kernel, in which the mapping to a high-dimensional space is computed explicitly. In this letter, we build on this idea and extend the concept of ESNs to infinite-sized recurrent neural networks, which can be considered recursive kernels that subsequently can be used to create recursive support vector machines. We present the theoretical framework, provide several practical examples of recursive kernels, and apply them to typical temporal tasks.

  19. Kernel versions of some orthogonal transformations

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    Kernel versions of orthogonal transformations such as principal components are based on a dual formulation also termed Q-mode analysis in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products of the original data are replaced...... by inner products between nonlinear mappings into higher dimensional feature space. Via kernel substitution also known as the kernel trick these inner products between the mappings are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of this kernel...... function. This means that we need not know the nonlinear mappings explicitly. Kernel principal component analysis (PCA) and kernel minimum noise fraction (MNF) analyses handle nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function...

  20. An Approximate Approach to Automatic Kernel Selection.

    Science.gov (United States)

    Ding, Lizhong; Liao, Shizhong

    2016-02-02

    Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.

  1. Model Selection in Kernel Ridge Regression

    DEFF Research Database (Denmark)

    Exterkate, Peter

    Kernel ridge regression is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts. This paper investigates the influence of the choice of kernel and the setting of tuning parameters on forecast accuracy. We review several popular kernels......, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. We interpret the latter two kernels in terms of their smoothing properties, and we relate the tuning parameters associated to all these kernels to smoothness measures of the prediction function and to the signal-to-noise ratio. Based...... on these interpretations, we provide guidelines for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study confirms the practical usefulness of these rules of thumb. Finally, the flexible and smooth functional forms provided by the Gaussian and Sinc kernels makes them widely...
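
    A minimal sketch of the small-grid cross-validation the abstract recommends, restricted to the Gaussian kernel: kernel ridge regression is fitted on each fold and the (length-scale, ridge) pair with the lowest held-out squared error is kept. Grid values and names are placeholders rather than the paper's recommendations.

        import numpy as np

        def rbf(A, B, ls):
            d2 = np.sum(A ** 2, 1)[:, None] + np.sum(B ** 2, 1)[None, :] - 2 * A @ B.T
            return np.exp(-d2 / (2 * ls ** 2))

        def krr_fit_predict(Xtr, ytr, Xte, ls, ridge):
            # alpha = (K + ridge I)^{-1} y ; prediction = K(test, train) alpha
            K = rbf(Xtr, Xtr, ls)
            alpha = np.linalg.solve(K + ridge * np.eye(len(Xtr)), ytr)
            return rbf(Xte, Xtr, ls) @ alpha

        def cv_select(X, y, length_scales, ridges, n_folds=5, seed=0):
            rng = np.random.default_rng(seed)
            folds = np.array_split(rng.permutation(len(X)), n_folds)
            best, best_err = None, np.inf
            for ls in length_scales:
                for ridge in ridges:
                    err = 0.0
                    for k in range(n_folds):
                        te = folds[k]
                        tr = np.hstack([folds[j] for j in range(n_folds) if j != k])
                        pred = krr_fit_predict(X[tr], y[tr], X[te], ls, ridge)
                        err += np.mean((pred - y[te]) ** 2)
                    if err < best_err:
                        best, best_err = (ls, ridge), err
            return best

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            X = rng.uniform(-3, 3, size=(200, 1))
            y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)
            print(cv_select(X, y, length_scales=[0.3, 1.0, 3.0], ridges=[1e-3, 1e-1, 1.0]))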

  2. Integral equations with contrasting kernels

    Directory of Open Access Journals (Sweden)

    Theodore Burton

    2008-01-01

    Full Text Available In this paper we study integral equations of the form $x(t)=a(t)-\int^t_0 C(t,s)x(s)\,ds$ with sharply contrasting kernels typified by $C^*(t,s)=\ln (e+(t-s))$ and $D^*(t,s)=[1+(t-s)]^{-1}$. The kernel assigns a weight to $x(s)$ and these kernels have exactly opposite effects of weighting. Each type is well represented in the literature. Our first project is to show that for $a\in L^2[0,\infty)$, solutions are largely indistinguishable regardless of which kernel is used. This is a surprise and it leads us to study the essential differences. In fact, those differences become large as the magnitude of $a(t)$ increases. The form of the kernel alone projects necessary conditions concerning the magnitude of $a(t)$ which could result in bounded solutions. Thus, the next project is to determine how close we can come to proving that the necessary conditions are also sufficient. The third project is to show that solutions will be bounded for given conditions on $C$ regardless of whether $a$ is chosen large or small; this is important in real-world problems since we would like to have $a(t)$ as the sum of a bounded, but badly behaved function, and a large well behaved function.
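
    To see the two contrasting kernels side by side, the equation above can be discretized with a simple trapezoidal rule and solved forward in time; the forcing term a(t) and the step size below are illustrative choices, with a picked to lie in L^2[0, infinity).

        import numpy as np

        def solve_volterra(a, C, T=10.0, h=0.01):
            # Trapezoidal solver for x(t) = a(t) - int_0^t C(t, s) x(s) ds.
            t = np.arange(0.0, T + h, h)
            x = np.zeros_like(t)
            x[0] = a(t[0])
            for n in range(1, len(t)):
                w = C(t[n], t[:n]) * x[:n]
                w[0] *= 0.5                               # trapezoid end weight at s = 0
                x[n] = (a(t[n]) - h * w.sum()) / (1.0 + 0.5 * h * C(t[n], t[n]))
            return t, x

        C_star = lambda t, s: np.log(np.e + (t - s))      # ln(e + (t - s))
        D_star = lambda t, s: 1.0 / (1.0 + (t - s))       # [1 + (t - s)]^{-1}
        a = lambda t: np.exp(-t) * np.sin(t)              # a forcing term in L^2[0, inf)

        if __name__ == "__main__":
            t, xC = solve_volterra(a, C_star)
            _, xD = solve_volterra(a, D_star)
            print(float(np.max(np.abs(xC - xD))))         # compare the two solutions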

  3. Model selection for Gaussian kernel PCA denoising

    DEFF Research Database (Denmark)

    Jørgensen, Kasper Winther; Hansen, Lars Kai

    2012-01-01

    We propose kernel Parallel Analysis (kPA) for automatic kernel scale and model order selection in Gaussian kernel PCA. Parallel Analysis [1] is based on a permutation test for covariance and has previously been applied for model order selection in linear PCA, we here augment the procedure to also...... tune the Gaussian kernel scale of radial basis function based kernel PCA.We evaluate kPA for denoising of simulated data and the US Postal data set of handwritten digits. We find that kPA outperforms other heuristics to choose the model order and kernel scale in terms of signal-to-noise ratio (SNR...
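
    A rough sketch of the permutation test behind Parallel Analysis, transplanted to the kernel setting as the abstract describes: eigenvalues of the centered Gram matrix are retained only if they exceed the corresponding eigenvalues obtained after independently permuting each input column. The quantile, the permutation count, and all names are assumptions, and the companion search over the kernel scale is omitted.

        import numpy as np

        def centered_gram(X, sigma):
            sq = np.sum(X ** 2, 1)
            K = np.exp(-(sq[:, None] + sq[None, :] - 2 * X @ X.T) / (2 * sigma ** 2))
            J = np.eye(len(X)) - np.ones((len(X), len(X))) / len(X)
            return J @ K @ J

        def kernel_parallel_analysis(X, sigma, n_perm=20, q=95, seed=0):
            # Keep components whose kernel eigenvalue beats the same-rank eigenvalue
            # of column-permuted (structure-destroyed) copies of the data.
            rng = np.random.default_rng(seed)
            ev = np.sort(np.linalg.eigvalsh(centered_gram(X, sigma)))[::-1]
            null = np.empty((n_perm, len(X)))
            for p in range(n_perm):
                Xp = np.column_stack([rng.permutation(X[:, j]) for j in range(X.shape[1])])
                null[p] = np.sort(np.linalg.eigvalsh(centered_gram(Xp, sigma)))[::-1]
            return int(np.sum(ev > np.percentile(null, q, axis=0)))   # retained order

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            signal = rng.normal(size=(150, 2)) @ rng.normal(size=(2, 6))
            X = signal + 0.3 * rng.normal(size=(150, 6))
            print(kernel_parallel_analysis(X, sigma=2.0))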

  4. Kernel learning algorithms for face recognition

    CERN Document Server

    Li, Jun-Bao; Pan, Jeng-Shyang

    2013-01-01

    Kernel Learning Algorithms for Face Recognition covers the framework of kernel-based face recognition. This book discusses advanced kernel learning algorithms and their application to face recognition. The book also focuses on the theoretical derivation, the system framework and experiments involving kernel based face recognition. Included within are algorithms of kernel based face recognition, and also the feasibility of the kernel based face recognition method. This book provides researchers in the pattern recognition and machine learning area with advanced face recognition methods and its new

  5. Semi-Supervised Kernel PCA

    DEFF Research Database (Denmark)

    Walder, Christian; Henao, Ricardo; Mørup, Morten

    We present three generalisations of Kernel Principal Components Analysis (KPCA) which incorporate knowledge of the class labels of a subset of the data points. The first, MV-KPCA, penalises within class variances similar to Fisher discriminant analysis. The second, LSKPCA is a hybrid of least...... squares regression and kernel PCA. The final LR-KPCA is an iteratively reweighted version of the previous which achieves a sigmoid loss function on the labeled points. We provide a theoretical risk bound as well as illustrative experiments on real and toy data sets....

  6. Congruence Kernels of Orthoimplication Algebras

    Directory of Open Access Journals (Sweden)

    I. Chajda

    2007-10-01

    Full Text Available Abstracting from certain properties of the implication operation in Boolean algebras leads to so-called orthoimplication algebras. These are in a natural one-to-one correspondence with families of compatible orthomodular lattices. It is proved that congruence kernels of orthoimplication algebras are in a natural one-to-one correspondence with families of compatible p-filters on the corresponding orthomodular lattices. Finally, it is proved that the lattice of all congruence kernels of an orthoimplication algebra is relatively pseudocomplemented and a simple description of the relative pseudocomplement is given.

  7. Model selection in kernel ridge regression

    DEFF Research Database (Denmark)

    Exterkate, Peter

    2013-01-01

    Kernel ridge regression is a technique to perform ridge regression with a potentially infinite number of nonlinear transformations of the independent variables as regressors. This method is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts....... The influence of the choice of kernel and the setting of tuning parameters on forecast accuracy is investigated. Several popular kernels are reviewed, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. The latter two kernels are interpreted in terms of their smoothing properties......, and the tuning parameters associated to all these kernels are related to smoothness measures of the prediction function and to the signal-to-noise ratio. Based on these interpretations, guidelines are provided for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study...

  8. Bergman kernel on generalized exceptional Hua domain

    Institute of Scientific and Technical Information of China (English)

    YIN Weiping (殷慰萍); ZHAO Zhengang (赵振刚)

    2002-01-01

    We have computed the Bergman kernel functions explicitly for two types of generalized exceptional Hua domains, and also studied the asymptotic behavior of the Bergman kernel function of exceptional Hua domain near boundary points, based on Appell's multivariable hypergeometric function.

  9. A kernel version of multivariate alteration detection

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Vestergaard, Jacob Schack

    2013-01-01

    Based on the established methods kernel canonical correlation analysis and multivariate alteration detection we introduce a kernel version of multivariate alteration detection. A case study with SPOT HRV data shows that the kMAD variates focus on extreme change observations.

  10. Random Feature Maps for Dot Product Kernels

    OpenAIRE

    Kar, Purushottam; Karnick, Harish

    2012-01-01

    Approximating non-linear kernels using feature maps has gained a lot of interest in recent years due to applications in reducing training and testing times of SVM classifiers and other kernel based learning algorithms. We extend this line of work and present low distortion embeddings for dot product kernels into linear Euclidean spaces. We base our results on a classical result in harmonic analysis characterizing all dot product kernels and use it to define randomized feature maps into explicit low dimensional Euclidean spaces in which the native dot product provides an approximation to the dot product kernel with high confidence.

  11. ks: Kernel Density Estimation and Kernel Discriminant Analysis for Multivariate Data in R

    Directory of Open Access Journals (Sweden)

    Tarn Duong

    2007-09-01

    Full Text Available Kernel smoothing is one of the most widely used non-parametric data smoothing techniques. We introduce a new R package ks for multivariate kernel smoothing. Currently it contains functionality for kernel density estimation and kernel discriminant analysis. It is a comprehensive package for bandwidth matrix selection, implementing a wide range of data-driven diagonal and unconstrained bandwidth selectors.
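
    The ks package is R software; as a rough Python analogue (an assumption for illustration, not the package itself), scipy's gaussian_kde performs multivariate kernel density estimation, albeit with a simple rule-of-thumb bandwidth rather than the data-driven bandwidth-matrix selectors that ks provides.

```python
# Python analogue of multivariate kernel density estimation (not the ks package).
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
data = rng.multivariate_normal([0, 0], [[1.0, 0.5], [0.5, 1.0]], size=500)

kde = gaussian_kde(data.T)                      # expects shape (n_dims, n_points)
grid = np.mgrid[-3:3:100j, -3:3:100j]           # evaluation grid
density = kde(grid.reshape(2, -1)).reshape(100, 100)
print(density.max())
```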

  12. On the Diamond Bessel Heat Kernel

    Directory of Open Access Journals (Sweden)

    Wanchak Satsanit

    2011-01-01

    Full Text Available We study the heat equation in n dimensions with the Diamond Bessel operator. We find the solution by the method of convolution and the Fourier transform in distribution theory, and we also obtain an interesting kernel related to the spectrum, which is called the Bessel heat kernel.

  13. Local Observed-Score Kernel Equating

    Science.gov (United States)

    Wiberg, Marie; van der Linden, Wim J.; von Davier, Alina A.

    2014-01-01

    Three local observed-score kernel equating methods that integrate methods from the local equating and kernel equating frameworks are proposed. The new methods were compared with their earlier counterparts with respect to such measures as bias--as defined by Lord's criterion of equity--and percent relative error. The local kernel item response…

  14. Computations of Bergman Kernels on Hua Domains

    Institute of Scientific and Technical Information of China (English)

    殷慰萍; 王安; 赵振刚; 赵晓霞; 管冰辛

    2001-01-01

    The Bergman kernel function plays an important role in several complex variables. The Bergman kernel function exists on any bounded domain in Cn, but explicit formulas for it are known only for a few types of domains, for example, the bounded homogeneous domains and, in some cases, the egg domains.

  15. Veto-Consensus Multiple Kernel Learning

    NARCIS (Netherlands)

    Y. Zhou; N. Hu; C.J. Spanos

    2016-01-01

    We propose Veto-Consensus Multiple Kernel Learning (VCMKL), a novel way of combining multiple kernels such that one class of samples is described by the logical intersection (consensus) of base kernelized decision rules, whereas the other classes by the union (veto) of their complements. The propose

  16. Accelerating the Original Profile Kernel.

    Directory of Open Access Journals (Sweden)

    Tobias Hamp

    Full Text Available One of the most accurate multi-class protein classification systems continues to be the profile-based SVM kernel introduced by the Leslie group. Unfortunately, its CPU requirements render it too slow for practical applications of large-scale classification tasks. Here, we introduce several software improvements that enable significant acceleration. Using various non-redundant data sets, we demonstrate that our new implementation reaches a maximal speed-up as high as 14-fold for calculating the same kernel matrix. Some predictions are over 200 times faster and render the kernel as possibly the top contender in a low ratio of speed/performance. Additionally, we explain how to parallelize various computations and provide an integrative program that reduces creating a production-quality classifier to a single program call. The new implementation is available as a Debian package under a free academic license and does not depend on commercial software. For non-Debian based distributions, the source package ships with a traditional Makefile-based installer. Download and installation instructions can be found at https://rostlab.org/owiki/index.php/Fast_Profile_Kernel. Bugs and other issues may be reported at https://rostlab.org/bugzilla3/enter_bug.cgi?product=fastprofkernel.

  17. Adaptive wiener image restoration kernel

    Science.gov (United States)

    Yuan, Ding

    2007-06-05

    A method and device for restoration of electro-optical image data using an adaptive Wiener filter begins with constructing the imaging system's Optical Transfer Function and the Fourier transforms of the noise and the image. A spatial representation of the imaged object is restored by spatial convolution of the image using a Wiener restoration kernel.
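
    A minimal frequency-domain sketch of Wiener restoration consistent with the description above; the function name, the box point-spread function, and the scalar noise-to-signal ratio are illustrative assumptions, not details of the patented method.

```python
# Frequency-domain Wiener restoration sketch: apply conj(H) / (|H|^2 + NSR)
# to the blurred image spectrum, where H is the optical transfer function.
import numpy as np

def wiener_restore(blurred, psf, nsr=1e-2):
    """Restore `blurred` given the PSF and an assumed noise-to-signal ratio."""
    H = np.fft.fft2(psf, s=blurred.shape)        # optical transfer function
    G = np.fft.fft2(blurred)                     # spectrum of the observed image
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)      # Wiener restoration kernel
    return np.real(np.fft.ifft2(W * G))

# toy usage: blur a random "scene" with a box PSF, then restore it
scene = np.random.default_rng(2).random((64, 64))
psf = np.zeros((64, 64)); psf[:3, :3] = 1 / 9.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(psf)))
print(np.abs(wiener_restore(blurred, psf) - scene).mean())
```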

  18. An Extreme Learning Machine Based on the Mixed Kernel Function of Triangular Kernel and Generalized Hermite Dirichlet Kernel

    Directory of Open Access Journals (Sweden)

    Senyue Zhang

    2016-01-01

    Full Text Available Because the kernel function of an extreme learning machine (ELM) is strongly correlated with its performance, a novel extreme learning machine based on a generalized triangular Hermite kernel function was proposed in this paper. First, the generalized triangular Hermite kernel function was constructed as the product of the triangular kernel and the generalized Hermite Dirichlet kernel, and the proposed kernel function was proved to be a valid kernel function for extreme learning machines. Then, the learning methodology of the extreme learning machine based on the proposed kernel function was presented. The biggest advantage of the proposed kernel is that its kernel parameter takes values only in the natural numbers, which greatly shortens the computational time of parameter optimization and retains more of the structural information of the sample data. Experiments were performed on a number of binary classification, multiclass classification, and regression datasets from the UCI benchmark repository. The experimental results demonstrated that the proposed method outperforms other extreme learning machines with different kernels in robustness and generalization performance. Furthermore, the learning speed of the proposed method is faster than that of support vector machine (SVM) methods.

  19. Towards structural Web Services matching based on Kernel methods

    Institute of Scientific and Technical Information of China (English)

    NAN Kai; YU Jianjun; SU Hao; GUO Shengmin; ZHANG Hui; XU Ke

    2007-01-01

    This paper describes a kernel-methods-based Web Services matching mechanism for Web Services discovery and integration. The matching mechanism tries to exploit the latent semantics in the structure of Web Services. In this paper, Web Services are represented by WSDL (Web Services Description Language) as tree-structured XML documents, and their matching degree is calculated by our novel algorithm designed for loose tree matching, in contrast to traditional methods. In order to achieve this task, we bring forward the concept of path subsequences to model WSDL documents in the vector space. Then, an advanced n-spectrum kernel function is defined, so that the similarity of two WSDL documents can be computed by applying the kernel function in that space. Using textual similarity and n-spectrum kernel values as low-level and mid-level features, we build a model to estimate the functional similarity between Web Services, whose parameters are learned by a ranking SVM. Finally, a set of experiments was designed to verify the model, and the results showed that several metrics for the retrieval of Web Services are improved by our approach.
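
    A hedged sketch of the n-spectrum idea mentioned above: two strings (here, invented WSDL-style element paths) are compared by counting shared length-n substrings. The paper additionally combines such values with textual similarity inside a ranking SVM; only the raw kernel computation is sketched here.

```python
# n-spectrum kernel sketch: count matching length-n substrings of two strings.
from collections import Counter

def n_spectrum_kernel(s, t, n=3):
    """Return the number of matching length-n substring occurrences in s and t."""
    cs = Counter(s[i:i + n] for i in range(len(s) - n + 1))
    ct = Counter(t[i:i + n] for i in range(len(t) - n + 1))
    return sum(cs[g] * ct[g] for g in cs if g in ct)

# hypothetical flattened WSDL element paths
path_a = "definitions/types/schema/element:GetWeather"
path_b = "definitions/types/schema/element:GetForecast"
print(n_spectrum_kernel(path_a, path_b, n=3))
```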

  20. Random Feature Maps for Dot Product Kernels

    CERN Document Server

    Kar, Purushottam

    2012-01-01

    Approximating non-linear kernels using feature maps has gained a lot of interest in recent years due to applications in reducing training and testing times of SVM classifiers and other kernel based learning algorithms. We extend this line of work and present low distortion embeddings for dot product kernels into linear Euclidean spaces. We base our results on a classical result in harmonic analysis characterizing all dot product kernels and use it to define randomized feature maps into explicit low dimensional Euclidean spaces in which the native dot product provides an approximation to the dot product kernel with high confidence.
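
    A sketch of one standard random feature construction for dot product kernels k(x, y) = f(⟨x, y⟩), assuming the Maclaurin-sampling scheme used in this line of work; the concrete choices below (f = exp, sampling parameter p = 2, feature count D) are illustrative assumptions.

```python
# Random feature map for a dot-product kernel k(x, y) = f(<x, y>) with Maclaurin
# coefficients a_n (here f = exp, a_n = 1/n!). Each feature samples a random
# degree N, multiplies N random sign projections, and rescales so that the
# average of z(x) * z(y) over features estimates k(x, y).
import math
import numpy as np

def random_maclaurin_features(X, D=2000, p=2.0, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    Z = np.empty((n, D))
    for j in range(D):
        N = rng.geometric(1.0 / p) - 1            # P[N = k] = p^-(k+1), k >= 0
        a_N = 1.0 / math.factorial(N)             # Maclaurin coefficient of exp
        proj = np.ones(n)
        for _ in range(N):
            w = rng.choice([-1.0, 1.0], size=d)   # Rademacher direction
            proj *= X @ w
        Z[:, j] = math.sqrt(a_N * p ** (N + 1)) * proj
    return Z / math.sqrt(D)

X = np.random.default_rng(3).normal(size=(5, 10)) / np.sqrt(10)
Z = random_maclaurin_features(X)
print(np.abs(Z @ Z.T - np.exp(X @ X.T)).max())    # approximation error
```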

  1. Testing Infrastructure for Operating System Kernel Development

    DEFF Research Database (Denmark)

    Walter, Maxwell; Karlsson, Sven

    2014-01-01

    Testing is an important part of system development, and to test effectively we require knowledge of the internal state of the system under test. Testing an operating system kernel is a challenge as it is the operating system that typically provides access to this internal state information. Multi-core kernels pose an even greater challenge due to concurrency and their shared kernel state. In this paper, we present a testing framework that addresses these challenges by running the operating system in a virtual machine, and using virtual machine introspection to both communicate with the kernel and obtain information about the system. We have also developed an in-kernel testing API that we can use to develop a suite of unit tests in the kernel. We are using our framework for the development of our own multi-core research kernel....

  2. Speech Enhancement Using Kernel and Normalized Kernel Affine Projection Algorithm

    Directory of Open Access Journals (Sweden)

    Bolimera Ravi

    2013-08-01

    Full Text Available The goal of this paper is to investigate speech signal enhancement using the Kernel Affine Projection Algorithm (KAPA) and Normalized KAPA. The removal of background noise is very important in many applications like speech recognition, telephone conversations, hearing aids, forensics, etc. Kernel adaptive filters have shown good performance for removal of noise. If the background noise evolves more slowly than the speech, i.e., the noise signal is more stationary than the speech, we can easily estimate the noise during the pauses in speech. Otherwise it is more difficult to estimate the noise, which results in degradation of the speech. In order to improve the quality and intelligibility of speech, unlike the time and frequency domains, we can process the signal in a new domain like a Reproducing Kernel Hilbert Space (RKHS) for high-dimensional processing to yield more powerful nonlinear extensions. For the experiments, we have used the noisy speech corpus (NOIZEUS) database. From the results, we observed that noise removal in RKHS has great performance in signal-to-noise ratio values in comparison with conventional adaptive filters.

  3. Straight-chain halocarbon forming fluids for TRISO fuel kernel production – Tests with yttria-stabilized zirconia microspheres

    Energy Technology Data Exchange (ETDEWEB)

    Baker, M.P. [Nuclear Science and Engineering Program, Metallurgical and Materials Engineering Department, Colorado School of Mines, 1500 Illinois St., Golden, CO 80401 (United States); King, J.C., E-mail: kingjc@mines.edu [Nuclear Science and Engineering Program, Metallurgical and Materials Engineering Department, Colorado School of Mines, 1500 Illinois St., Golden, CO 80401 (United States); Gorman, B.P. [Metallurgical and Materials Engineering Department, Colorado Center for Advanced Ceramics, Colorado School of Mines, 1500 Illinois St., Golden, CO 80401 (United States); Braley, J.C. [Nuclear Science and Engineering Program, Chemistry and Geochemistry Department, Colorado School of Mines, 1500 Illinois St., Golden, CO 80401 (United States)

    2015-03-15

    Highlights: • YSZ TRISO kernels formed in three alternative, non-hazardous forming fluids. • Kernels characterized for size, shape, pore/grain size, density, and composition. • Bromotetradecane is suitable for further investigation with uranium-based precursor. - Abstract: Current methods of TRISO fuel kernel production in the United States use a sol–gel process with trichloroethylene (TCE) as the forming fluid. After contact with radioactive materials, the spent TCE becomes a mixed hazardous waste, and high costs are associated with its recycling or disposal. Reducing or eliminating this mixed waste stream would not only benefit the environment, but would also enhance the economics of kernel production. Previous research yielded three candidates for testing as alternatives to TCE: 1-bromotetradecane, 1-chlorooctadecane, and 1-iodododecane. This study considers the production of yttria-stabilized zirconia (YSZ) kernels in silicone oil and the three chosen alternative formation fluids, with subsequent characterization of the produced kernels and used forming fluid. Kernels formed in silicone oil and bromotetradecane were comparable to those produced by previous kernel production efforts, while those produced in chlorooctadecane and iodododecane experienced gelation issues leading to poor kernel formation and geometry.

  4. Nonlinear Deep Kernel Learning for Image Annotation.

    Science.gov (United States)

    Jiu, Mingyuan; Sahbi, Hichem

    2017-02-08

    Multiple kernel learning (MKL) is a widely used technique for kernel design. Its principle consists in learning, for a given support vector classifier, the most suitable convex (or sparse) linear combination of standard elementary kernels. However, these combinations are shallow and often powerless to capture the actual similarity between highly semantic data, especially for challenging classification tasks such as image annotation. In this paper, we redefine multiple kernels using deep multi-layer networks. In this new contribution, a deep multiple kernel is recursively defined as a multi-layered combination of nonlinear activation functions, each of which involves a combination of several elementary or intermediate kernels and results in a positive semi-definite deep kernel. We propose four different frameworks in order to learn the weights of these networks: supervised, unsupervised, kernel-based semi-supervised and Laplacian-based semi-supervised. When plugged into support vector machines (SVMs), the resulting deep kernel networks show clear gain, compared to several shallow kernels for the task of image annotation. Extensive experiments and analysis on the challenging ImageCLEF photo annotation benchmark, the COREL5k database and the Banana dataset validate the effectiveness of the proposed method.

  5. kLog: A Language for Logical and Relational Learning with Kernels

    CERN Document Server

    Frasconi, Paolo; De Raedt, Luc; De Grave, Kurt

    2012-01-01

    kLog is a logical and relational language for kernel-based learning. It allows users to specify logical and relational learning problems at a high level in a declarative way. It builds on simple but powerful concepts: learning from interpretations, entity/relationship data modeling, logic programming and deductive databases (Prolog and Datalog), and graph kernels. kLog is a statistical relational learning system but unlike other statistical relational learning models, it does not represent a probability distribution directly. It is rather a kernel-based approach to learning that employs features derived from a grounded entity/relationship diagram. These features are derived using a novel technique called graphicalization: first, relational representations are transformed into graph based representations; subsequently, graph kernels are employed for defining feature spaces. kLog can use numerical and symbolic data, background knowledge in the form of Prolog or Datalog programs (as in inductive logic programmin...

  6. Nonlinear projection trick in kernel methods: an alternative to the kernel trick.

    Science.gov (United States)

    Kwak, Nojun

    2013-12-01

    In kernel methods such as kernel principal component analysis (PCA) and support vector machines, the so called kernel trick is used to avoid direct calculations in a high (virtually infinite) dimensional kernel space. In this brief, based on the fact that the effective dimensionality of a kernel space is less than the number of training samples, we propose an alternative to the kernel trick that explicitly maps the input data into a reduced dimensional kernel space. This is easily obtained by the eigenvalue decomposition of the kernel matrix. The proposed method is named as the nonlinear projection trick in contrast to the kernel trick. With this technique, the applicability of the kernel methods is widened to arbitrary algorithms that do not use the dot product. The equivalence between the kernel trick and the nonlinear projection trick is shown for several conventional kernel methods. In addition, we extend PCA-L1, which uses L1-norm instead of L2-norm (or dot product), into a kernel version and show the effectiveness of the proposed approach.
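
    A minimal numpy sketch of the idea described above: eigendecompose the training kernel matrix to obtain an explicit reduced-dimensional map, and map test points through the same eigenvectors. Centering and the paper's exact normalization are omitted; the RBF kernel and toy data are assumptions for illustration.

```python
# Explicit reduced-dimensional kernel-space map from the eigendecomposition of
# the training kernel matrix (a sketch of the "nonlinear projection trick").
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(4)
X_train, X_test = rng.normal(size=(50, 3)), rng.normal(size=(5, 3))

K = rbf_kernel(X_train, X_train)
lam, U = np.linalg.eigh(K)                    # eigenvalues in ascending order
keep = lam > 1e-10                            # effective dimensionality
lam, U = lam[keep], U[:, keep]

Y_train = U * np.sqrt(lam)                    # explicit map of training data
Y_test = rbf_kernel(X_test, X_train) @ U / np.sqrt(lam)

# inner products in the explicit space reproduce the kernel values
print(np.abs(Y_train @ Y_train.T - K).max())
print(np.abs(Y_test @ Y_train.T - rbf_kernel(X_test, X_train)).max())
```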

  7. Theory of reproducing kernels and applications

    CERN Document Server

    Saitoh, Saburou

    2016-01-01

    This book provides a large extension of the general theory of reproducing kernels published by N. Aronszajn in 1950, with many concrete applications. In Chapter 1, many concrete reproducing kernels are first introduced with detailed information. Chapter 2 presents a general and global theory of reproducing kernels with basic applications in a self-contained way. Many fundamental operations among reproducing kernel Hilbert spaces are dealt with. Chapter 2 is the heart of this book. Chapter 3 is devoted to the Tikhonov regularization using the theory of reproducing kernels with applications to numerical and practical solutions of bounded linear operator equations. In Chapter 4, the numerical real inversion formulas of the Laplace transform are presented by applying the Tikhonov regularization, where the reproducing kernels play a key role in the results. Chapter 5 deals with ordinary differential equations; Chapter 6 includes many concrete results for various fundamental partial differential equations. In Chapt...

  8. Filters, reproducing kernel, and adaptive meshfree method

    Science.gov (United States)

    You, Y.; Chen, J.-S.; Lu, H.

    Reproducing kernel, with its intrinsic feature of moving averaging, can be utilized as a low-pass filter with scale decomposition capability. The discrete convolution of two nth order reproducing kernels with arbitrary support size in each kernel results in a filtered reproducing kernel function that has the same reproducing order. This property is utilized to separate the numerical solution into an unfiltered lower order portion and a filtered higher order portion. As such, the corresponding high-pass filter of this reproducing kernel filter can be used to identify the locations of high gradient, and consequently serves as an operator for error indication in meshfree analysis. In conjunction with the naturally conforming property of the reproducing kernel approximation, a meshfree adaptivity method is also proposed.

  9. Kernel principal component analysis for change detection

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Morton, J.C.

    2008-01-01

    region acquired at two different time points. If change over time does not dominate the scene, the projection of the original two bands onto the second eigenvector will show change over time. In this paper a kernel version of PCA is used to carry out the analysis. Unlike ordinary PCA, kernel PCA with a Gaussian kernel successfully finds the change observations in a case where nonlinearities are introduced artificially....
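
    A toy sketch (not the authors' code) of the workflow suggested above: treat the same band at two time points as a two-variable data set, run Gaussian-kernel PCA with scikit-learn, and inspect the minor components for change pixels; the simulated data and the gamma value are assumptions.

```python
# Kernel PCA change detection on simulated two-date data.
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(5)
t1 = rng.normal(size=2000)
t2 = 0.95 * t1 + 0.05 * rng.normal(size=2000)
t2[:100] += 3.0                                   # simulated change pixels

X = np.column_stack([t1, t2])                     # same band at two time points
kpca = KernelPCA(n_components=3, kernel="rbf", gamma=0.5)
scores = kpca.fit_transform(X)

# in this toy setup the simulated change pixels should stand out on the minor
# components; compare their mean absolute scores with those of the rest
print(np.abs(scores[:100]).mean(axis=0))
print(np.abs(scores[100:]).mean(axis=0))
```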

  10. Tame Kernels of Pure Cubic Fields

    Institute of Scientific and Technical Information of China (English)

    Xiao Yun CHENG

    2012-01-01

    In this paper, we study the p-rank of the tame kernels of pure cubic fields. In particular, we prove that for a fixed positive integer m, there exist infinitely many pure cubic fields whose tame kernel has 3-rank equal to m. As an application, we determine the 3-rank of the tame kernels of some special pure cubic fields.

  11. Kernel Factor Analysis Algorithm with Varimax

    Institute of Scientific and Technical Information of China (English)

    Xia Guoen; Jin Weidong; Zhang Gexiang

    2006-01-01

    Kernel factor analysis (KFA) with varimax was proposed by using a Mercer kernel function which can map the data in the original space to a high-dimensional feature space, and was compared with kernel principal component analysis (KPCA). The results show that the best error rate in handwritten digit recognition by kernel factor analysis with varimax (4.2%) was superior to that of KPCA (4.4%). KFA with varimax could therefore recognize handwritten digits more accurately.

  12. Convergence of barycentric coordinates to barycentric kernels

    KAUST Repository

    Kosinka, Jiří

    2016-02-12

    We investigate the close correspondence between barycentric coordinates and barycentric kernels from the point of view of the limit process when finer and finer polygons converge to a smooth convex domain. We show that any barycentric kernel is the limit of a set of barycentric coordinates and prove that the convergence rate is quadratic. Our convergence analysis extends naturally to barycentric interpolants and mappings induced by barycentric coordinates and kernels. We verify our theoretical convergence results numerically on several examples.

  13. Efficient classification for additive kernel SVMs.

    Science.gov (United States)

    Maji, Subhransu; Berg, Alexander C; Malik, Jitendra

    2013-01-01

    We show that a class of nonlinear kernel SVMs admits approximate classifiers with runtime and memory complexity that is independent of the number of support vectors. This class of kernels, which we refer to as additive kernels, includes widely used kernels for histogram-based image comparison like intersection and chi-squared kernels. Additive kernel SVMs can offer significant improvements in accuracy over linear SVMs on a wide variety of tasks while having the same runtime, making them practical for large-scale recognition or real-time detection tasks. We present experiments on a variety of datasets, including the INRIA person, Daimler-Chrysler pedestrians, UIUC Cars, Caltech-101, MNIST, and USPS digits, to demonstrate the effectiveness of our method for efficient evaluation of SVMs with additive kernels. Since its introduction, our method has become integral to various state-of-the-art systems for PASCAL VOC object detection/image classification, ImageNet Challenge, TRECVID, etc. The techniques we propose can also be applied to settings where evaluation of weighted additive kernels is required, which include kernelized versions of PCA, LDA, regression, k-means, as well as speeding up the inner loop of SVM classifier training algorithms.
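
    A hedged illustration of the same general idea using scikit-learn: the additive chi-squared kernel (one of the additive kernels discussed) is replaced by an explicit approximate feature map so a fast linear SVM can be trained. The toy histograms and parameter values are assumptions, and this is not the authors' evaluation scheme.

```python
# Approximate an additive kernel (chi-squared) by an explicit feature map,
# then train a linear SVM on the mapped histogram features.
import numpy as np
from sklearn.kernel_approximation import AdditiveChi2Sampler
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(6)
# toy "histograms": nonnegative rows summing to one, two classes
X = rng.dirichlet(alpha=np.ones(32), size=400)
y = (X[:, :16].sum(axis=1) > 0.5).astype(int)

clf = make_pipeline(AdditiveChi2Sampler(sample_steps=2), LinearSVC(C=1.0))
clf.fit(X, y)
print(clf.score(X, y))
```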

  14. Molecular hydrodynamics from memory kernels

    CERN Document Server

    Lesnicki, Dominika; Carof, Antoine; Rotenberg, Benjamin

    2016-01-01

    The memory kernel for a tagged particle in a fluid, computed from molecular dynamics simulations, decays algebraically as $t^{-3/2}$. We show how the hydrodynamic Basset-Boussinesq force naturally emerges from this long-time tail and generalize the concept of hydrodynamic added mass. This mass term is negative in the present case of a molecular solute, at odds with incompressible hydrodynamics predictions. We finally discuss the various contributions to the friction, the associated time scales and the cross-over between the molecular and hydrodynamic regimes upon increasing the solute radius.

  15. Hilbertian kernels and spline functions

    CERN Document Server

    Atteia, M

    1992-01-01

    In this monograph, which is an extensive study of Hilbertian approximation, the emphasis is placed on spline function theory. The origin of the book was an effort to show that spline theory parallels Hilbertian kernel theory, not only for splines derived from the minimization of a quadratic functional but more generally for splines considered as piecewise-defined functions. Being as far as possible self-contained, the book may be used as a reference, with information about developments in linear approximation, convex optimization, mechanics and partial differential equations.

  16. Matching Subsequences in Trees

    DEFF Research Database (Denmark)

    Bille, Philip; Gørtz, Inge Li

    2009-01-01

    Given two rooted, labeled trees P and T the tree path subsequence problem is to determine which paths in P are subsequences of which paths in T. Here a path begins at the root and ends at a leaf. In this paper we propose this problem as a useful query primitive for XML data, and provide new...

  17. Differentiable Kernels in Generalized Matrix Learning Vector Quantization

    NARCIS (Netherlands)

    Kästner, M.; Nebel, D.; Riedel, M.; Biehl, M.; Villmann, T.

    2013-01-01

    In the present paper we investigate the application of differentiable kernel for generalized matrix learning vector quantization as an alternative kernel-based classifier, which additionally provides classification dependent data visualization. We show that the concept of differentiable kernels allo

  18. Kernel current source density method.

    Science.gov (United States)

    Potworowski, Jan; Jakuczun, Wit; Lȩski, Szymon; Wójcik, Daniel

    2012-02-01

    Local field potentials (LFP), the low-frequency part of extracellular electrical recordings, are a measure of the neural activity reflecting dendritic processing of synaptic inputs to neuronal populations. To localize synaptic dynamics, it is convenient, whenever possible, to estimate the density of transmembrane current sources (CSD) generating the LFP. In this work, we propose a new framework, the kernel current source density method (kCSD), for nonparametric estimation of CSD from LFP recorded from arbitrarily distributed electrodes using kernel methods. We test specific implementations of this framework on model data measured with one-, two-, and three-dimensional multielectrode setups. We compare these methods with the traditional approach through numerical approximation of the Laplacian and with the recently developed inverse current source density methods (iCSD). We show that iCSD is a special case of kCSD. The proposed method opens up new experimental possibilities for CSD analysis from existing or new recordings on arbitrarily distributed electrodes (not necessarily on a grid), which can be obtained in extracellular recordings of single unit activity with multiple electrodes.

  19. Filtering algorithms using shiftable kernels

    CERN Document Server

    Chaudhury, Kunal Narayan

    2011-01-01

    It was recently demonstrated in [4] [arXiv:1105.4204] that the non-linear bilateral filter [Tomasi] can be efficiently implemented using an O(1) or constant-time algorithm. At the heart of this algorithm was the idea of approximating the Gaussian range kernel of the bilateral filter using trigonometric functions. In this letter, we explain how the idea in [4] can be extended to a few other linear and non-linear filters [18, 21, 2]. While some of these filters have received a lot of attention in recent years, they are known to be computationally intensive. To extend the idea in [4], we identify a central property of trigonometric functions, called shiftability, that allows us to exploit the redundancy inherent in the filtering operations. In particular, using shiftable kernels, we show how certain complex filtering can be reduced to simply that of computing the moving sum of a stack of images. Each image in the stack is obtained through an elementary pointwise transform of the input image. This...

  20. Kernel parameter dependence in spatial factor analysis

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2010-01-01

    feature space via the kernel function and then performing a linear analysis in that space. In this paper we shall apply a kernel version of maximum autocorrelation factor (MAF) [7, 8] analysis to irregularly sampled stream sediment geochemistry data from South Greenland and illustrate the dependence...

  1. Improving the Bandwidth Selection in Kernel Equating

    Science.gov (United States)

    Andersson, Björn; von Davier, Alina A.

    2014-01-01

    We investigate the current bandwidth selection methods in kernel equating and propose a method based on Silverman's rule of thumb for selecting the bandwidth parameters. In kernel equating, the bandwidth parameters have previously been obtained by minimizing a penalty function. This minimization process has been criticized by practitioners…

  2. Ranking Support Vector Machine with Kernel Approximation.

    Science.gov (United States)

    Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi

    2017-01-01

    Learning to rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. The primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves much faster training than kernel RankSVM and comparable or better performance over state-of-the-art ranking algorithms.
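
    A hedged sketch of the kernel-approximation step only: scikit-learn's Nystroem transformer maps data to an explicit feature space in which a linear model is trained. A plain classifier with a squared-hinge loss stands in for the paper's pairwise ranking objective, and all parameter values are illustrative assumptions.

```python
# Replace the exact kernel with a low-rank Nystroem approximation and train a
# linear model in the resulting explicit feature space.
import numpy as np
from sklearn.kernel_approximation import Nystroem
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)
X = rng.normal(size=(1000, 20))
y = (np.sin(X[:, 0]) + X[:, 1] ** 2 > 1.0).astype(int)

model = make_pipeline(
    Nystroem(kernel="rbf", gamma=0.2, n_components=200, random_state=0),
    SGDClassifier(loss="squared_hinge", alpha=1e-4, random_state=0),
)
model.fit(X, y)
print(model.score(X, y))
```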

  3. Generalized Derivative Based Kernelized Learning Vector Quantization

    NARCIS (Netherlands)

    Schleif, Frank-Michael; Villmann, Thomas; Hammer, Barbara; Schneider, Petra; Biehl, Michael; Fyfe, Colin; Tino, Peter; Charles, Darryl; Garcia-Osoro, Cesar; Yin, Hujun

    2010-01-01

    We derive a novel derivative based version of kernelized Generalized Learning Vector Quantization (KGLVQ) as an effective, easy to interpret, prototype based and kernelized classifier. It is called D-KGLVQ and we provide generalization error bounds, experimental results on real world data, showing t

  4. PALM KERNEL SHELL AS AGGREGATE FOR LIGHT

    African Journals Online (AJOL)

    … of cement, sand, gravel and palm kernel shells respectively gave the highest compressive strength of … Keywords: Aggregate, Cement, Concrete, Sand, Palm Kernel Shell.

  5. Panel data specifications in nonparametric kernel regression

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard; Henningsen, Arne

    parametric panel data estimators to analyse the production technology of Polish crop farms. The results of our nonparametric kernel regressions generally differ from the estimates of the parametric models but they only slightly depend on the choice of the kernel functions. Based on economic reasoning, we...

  6. Kernel Model Applied in Kernel Direct Discriminant Analysis for the Recognition of Face with Nonlinear Variations

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    A kernel-based discriminant analysis method called kernel direct discriminant analysis is employed, which combines the merit of direct linear discriminant analysis with that of the kernel trick. In order to demonstrate its better robustness to the complex and nonlinear variations of real face images, such as illumination, facial expression, scale and pose variations, experiments are carried out on the Olivetti Research Laboratory, Yale and self-built face databases. The results indicate that, in contrast to kernel principal component analysis and kernel linear discriminant analysis, the method can achieve a lower error rate (7%) using only a very small set of features. Furthermore, a new corrected kernel model is proposed to improve the recognition performance. Experimental results confirm its superiority (by 1% in terms of recognition rate) over other polynomial kernel models.

  7. Parameter-Free Spectral Kernel Learning

    CERN Document Server

    Mao, Qi

    2012-01-01

    Due to the growing ubiquity of unlabeled data, learning with unlabeled data is attracting increasing attention in machine learning. In this paper, we propose a novel semi-supervised kernel learning method which can seamlessly combine manifold structure of unlabeled data and Regularized Least-Squares (RLS) to learn a new kernel. Interestingly, the new kernel matrix can be obtained analytically with the use of spectral decomposition of graph Laplacian matrix. Hence, the proposed algorithm does not require any numerical optimization solvers. Moreover, by maximizing kernel target alignment on labeled data, we can also learn model parameters automatically with a closed-form solution. For a given graph Laplacian matrix, our proposed method does not need to tune any model parameter including the tradeoff parameter in RLS and the balance parameter for unlabeled data. Extensive experiments on ten benchmark datasets show that our proposed two-stage parameter-free spectral kernel learning algorithm can obtain comparable...

  8. Heat-kernel approach for scattering

    CERN Document Server

    Li, Wen-Du

    2015-01-01

    An approach for solving scattering problems, based on two quantum field theory methods, the heat kernel method and the scattering spectral method, is constructed. This approach has a special advantage: it is not only one single approach; it is indeed a set of approaches for solving scattering problems. Concretely, we build a bridge between a scattering problem and the heat kernel method, so that each method of calculating heat kernels can be converted into a method of solving a scattering problem. As applications, we construct two approaches for solving scattering problems based on two heat-kernel expansions: the Seeley-DeWitt expansion and the covariant perturbation theory. In order to apply the heat kernel method to scattering problems, we also calculate two off-diagonal heat-kernel expansions in the frames of the Seeley-DeWitt expansion and the covariant perturbation theory, respectively. Moreover, as an alternative application of the relation between heat kernels and partial-wave phase shifts presented in...

  9. Ideal regularization for learning kernels from labels.

    Science.gov (United States)

    Pan, Binbin; Lai, Jianhuang; Shen, Lixin

    2014-08-01

    In this paper, we propose a new form of regularization that is able to utilize the label information of a data set for learning kernels. The proposed regularization, referred to as ideal regularization, is a linear function of the kernel matrix to be learned. The ideal regularization allows us to develop efficient algorithms to exploit labels. Three applications of the ideal regularization are considered. Firstly, we use the ideal regularization to incorporate the labels into a standard kernel, making the resulting kernel more appropriate for learning tasks. Next, we employ the ideal regularization to learn a data-dependent kernel matrix from an initial kernel matrix (which contains prior similarity information, geometric structures, and labels of the data). Finally, we incorporate the ideal regularization to some state-of-the-art kernel learning problems. With this regularization, these learning problems can be formulated as simpler ones which permit more efficient solvers. Empirical results show that the ideal regularization exploits the labels effectively and efficiently.

  10. Kernel score statistic for dependent data.

    Science.gov (United States)

    Malzahn, Dörthe; Friedrichs, Stefanie; Rosenberger, Albert; Bickeböller, Heike

    2014-01-01

    The kernel score statistic is a global covariance component test over a set of genetic markers. It provides a flexible modeling framework and does not collapse marker information. We generalize the kernel score statistic to allow for familial dependencies and to adjust for random confounder effects. With this extension, we adjust our analysis of real and simulated baseline systolic blood pressure for polygenic familial background. We find that the kernel score test gains appreciably in power through the use of sequencing compared to tag-single-nucleotide polymorphisms for very rare single nucleotide polymorphisms with <1% minor allele frequency.

  11. Kernel-based Maximum Entropy Clustering

    Institute of Scientific and Technical Information of China (English)

    JIANG Wei; QU Jiao; LI Benxi

    2007-01-01

    With the development of the Support Vector Machine (SVM), the "kernel method" has been studied in a general way. In this paper, we present a novel Kernel-based Maximum Entropy Clustering algorithm (KMEC). By using Mercer kernel functions, the proposed algorithm first maps the data from their original space to a high-dimensional feature space where the data are expected to be more separable, and then performs MEC clustering in the feature space. The experimental results show that the proposed method has better performance on non-hyperspherical and complex data structures.

  12. Kernel adaptive filtering a comprehensive introduction

    CERN Document Server

    Liu, Weifeng; Haykin, Simon

    2010-01-01

    Online learning from a signal processing perspective There is increased interest in kernel learning algorithms in neural networks and a growing need for nonlinear adaptive algorithms in advanced signal processing, communications, and controls. Kernel Adaptive Filtering is the first book to present a comprehensive, unifying introduction to online learning algorithms in reproducing kernel Hilbert spaces. Based on research being conducted in the Computational Neuro-Engineering Laboratory at the University of Florida and in the Cognitive Systems Laboratory at McMaster University, O

  13. Multiple Operator-valued Kernel Learning

    CERN Document Server

    Kadri, Hachem; Bach, Francis; Preux, Philippe

    2012-01-01

    This paper addresses the problem of learning a finite linear combination of operator-valued kernels. We study this problem in the case of kernel ridge regression for functional responses with an ℓr-norm constraint on the combination coefficients. We propose a multiple operator-valued kernel learning algorithm based on solving a system of linear operator equations by using a block coordinate descent procedure. We experimentally validate our approach on a functional regression task in the context of finger movement prediction in Brain-Computer Interface (BCI).

  14. Polynomial Kernelizations for MIN F+Π1 and MAX NP

    CERN Document Server

    Kratsch, Stefan

    2009-01-01

    The relation of constant-factor approximability to fixed-parameter tractability and kernelization is a long-standing open question. We prove that two large classes of constant-factor approximable problems, namely MIN F+Π1 and MAX NP, including the well-known subclass MAX SNP, admit polynomial kernelizations for their natural decision versions. This extends results of Cai and Chen (JCSS 1997), stating that the standard parameterizations of problems in MAX SNP and MIN F+Π1 are fixed-parameter tractable, and complements recent research on problems that do not admit polynomial kernelizations (Bodlaender et al. ICALP 2008).

  15. Approximating W projection as a separable kernel

    OpenAIRE

    Merry, Bruce

    2015-01-01

    W projection is a commonly-used approach to allow interferometric imaging to be accelerated by Fast Fourier Transforms (FFTs), but it can require a huge amount of storage for convolution kernels. The kernels are not separable, but we show that they can be closely approximated by separable kernels. The error scales with the fourth power of the field of view, and so is small enough to be ignored at mid to high frequencies. We also show that hybrid imaging algorithms combining W projection with ...

  16. Approximating W projection as a separable kernel

    Science.gov (United States)

    Merry, Bruce

    2016-02-01

    W projection is a commonly used approach to allow interferometric imaging to be accelerated by fast Fourier transforms, but it can require a huge amount of storage for convolution kernels. The kernels are not separable, but we show that they can be closely approximated by separable kernels. The error scales with the fourth power of the field of view, and so is small enough to be ignored at mid- to high frequencies. We also show that hybrid imaging algorithms combining W projection with either faceting, snapshotting, or W stacking allow the error to be made arbitrarily small, making the approximation suitable even for high-resolution wide-field instruments.

  17. Extension of Wirtinger's Calculus in Reproducing Kernel Hilbert Spaces and the Complex Kernel LMS

    CERN Document Server

    Bouboulis, Pantelis

    2010-01-01

    Over the last decade, kernel methods for nonlinear processing have successfully been used in the machine learning community. The primary mathematical tool employed in these methods is the notion of the Reproducing Kernel Hilbert Space. However, so far, the emphasis has been on batch techniques. It is only recently that online techniques have been considered in the context of adaptive signal processing tasks. Moreover, these efforts have so far been focused only on real-valued data sequences. To the best of our knowledge, no kernel-based strategy has been developed, so far, that is able to deal with complex-valued signals. In this paper, we present a general framework to attack the problem of adaptive filtering of complex signals, using either real reproducing kernels, taking advantage of a technique called complexification of real RKHSs, or complex reproducing kernels, highlighting the use of the complex Gaussian kernel. In order to derive gradients of operators that need to be defined on the associated...

  18. Kernel map compression for speeding the execution of kernel-based methods.

    Science.gov (United States)

    Arif, Omar; Vela, Patricio A

    2011-06-01

    The use of Mercer kernel methods in statistical learning theory provides for strong learning capabilities, as seen in kernel principal component analysis and support vector machines. Unfortunately, after learning, the computational complexity of execution through a kernel is of the order of the size of the training set, which is quite large for many applications. This paper proposes a two-step procedure for arriving at a compact and computationally efficient execution procedure. After learning in the kernel space, the proposed extension exploits the universal approximation capabilities of generalized radial basis function neural networks to efficiently approximate and replace the projections onto the empirical kernel map used during execution. Sample applications demonstrate significant compression of the kernel representation with graceful performance loss.

  19. Anatomically informed convolution kernels for the projection of fMRI data on the cortical surface.

    Science.gov (United States)

    Operto, Grégory; Bulot, Rémy; Anton, Jean-Luc; Coulon, Olivier

    2006-01-01

    We present here a method that aims at producing representations of functional brain data on the cortical surface from functional MRI volumes. Such representations are required for subsequent cortical-based functional analysis. We propose a projection technique based on the definition, around each node of the grey/white matter interface mesh, of convolution kernels whose shape and distribution rely on the geometry of the local anatomy. For one anatomy, a set of convolution kernels is computed that can be used to project any functional data registered with this anatomy. The method is presented together with experiments on synthetic data and real statistical t-maps.

  20. The Linux kernel as flexible product-line architecture

    NARCIS (Netherlands)

    Jonge, M. de

    2002-01-01

    The Linux kernel source tree is huge (> 125 MB) and inflexible (because it is difficult to add new kernel components). We propose to make this architecture more flexible by assembling kernel source trees dynamically from individual kernel components. Users can then select which components they really...

  1. 7 CFR 51.2296 - Three-fourths half kernel.

    Science.gov (United States)

    2010-01-01

    7 CFR 51.2296 (2010), Regulations of the Department of Agriculture, Agricultural Marketing Service (Standards…): Three-fourths half kernel means a portion of a half of a kernel which has more...

  2. 7 CFR 981.401 - Adjusted kernel weight.

    Science.gov (United States)

    2010-01-01

    7 CFR 981.401 (2010), Administrative Rules and Regulations, § 981.401 Adjusted kernel weight. (a) Definition. Adjusted kernel weight... kernels in excess of five percent; less shells, if applicable; less processing loss of one percent...

  3. 7 CFR 51.1441 - Half-kernel.

    Science.gov (United States)

    2010-01-01

    7 CFR 51.1441 (2010), United States Standards for Grades of Shelled Pecans, Definitions: Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume...

  4. 7 CFR 51.1403 - Kernel color classification.

    Science.gov (United States)

    2010-01-01

    7 CFR 51.1403 (2010), United States Standards for Grades of Pecans in the Shell, Kernel Color Classification: The skin color of pecan kernels may be described in terms of the...

  5. NLO corrections to the Kernel of the BKP-equations

    Energy Technology Data Exchange (ETDEWEB)

    Bartels, J. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Fadin, V.S. [Budker Institute of Nuclear Physics, Novosibirsk (Russian Federation); Novosibirskij Gosudarstvennyj Univ., Novosibirsk (Russian Federation); Lipatov, L.N. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Petersburg Nuclear Physics Institute, Gatchina, St. Petersburg (Russian Federation); Vacca, G.P. [INFN, Sezione di Bologna (Italy)

    2012-10-02

    We present results for the NLO kernel of the BKP equations for composite states of three reggeized gluons in the Odderon channel, both in QCD and in N=4 SYM. The NLO kernel consists of the NLO BFKL kernel in the color octet representation and the connected 3→3 kernel, computed in the tree approximation.

  6. Relative n-widths of periodic convolution classes with NCVD-kernel and B-kernel

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    In this paper, we consider the relative n-widths of two kinds of periodic convolution classes, Kp(K) and Bp(G), whose convolution kernels are the NCVD-kernel K and the B-kernel G. The asymptotic estimates of Kn(Kp(K), Kp(K))q and Kn(Bp(G), Bp(G))q are obtained for p = 1 and ∞, 1 ≤ q ≤ ∞.

  7. Reproducing Kernel for D2(Ω, ρ) and Metric Induced by Reproducing Kernel

    Institute of Scientific and Technical Information of China (English)

    ZHAO Zhen Gang

    2009-01-01

    An important property of the reproducing kernel of D2(Ω, ρ) is obtained and the reproducing kernels for D2(Ω, ρ) are calculated when Ω = Bn × Bn and ρ are some special functions. A reproducing kernel is used to construct a semi-positive definite matrix and a distance function defined on Ω×Ω. An inequality is obtained about the distance function and the pseudodistance induced by the matrix.

  8. Discriminant Kernel Assignment for Image Coding.

    Science.gov (United States)

    Deng, Yue; Zhao, Yanyu; Ren, Zhiquan; Kong, Youyong; Bao, Feng; Dai, Qionghai

    2017-06-01

    This paper proposes discriminant kernel assignment (DKA) in the bag-of-features framework for image representation. DKA slightly modifies existing kernel assignment to learn width-variant Gaussian kernel functions to perform discriminant local feature assignment. When directly applying gradient-descent method to solve DKA, the optimization may contain multiple time-consuming reassignment implementations in iterations. Accordingly, we introduce a more practical way to locally linearize the DKA objective and the difficult task is cast as a sequence of easier ones. Since DKA only focuses on the feature assignment part, it seamlessly collaborates with other discriminative learning approaches, e.g., discriminant dictionary learning or multiple kernel learning, for even better performances. Experimental evaluations on multiple benchmark datasets verify that DKA outperforms other image assignment approaches and exhibits significant efficiency in feature coding.

  9. Multiple Kernel Spectral Regression for Dimensionality Reduction

    Directory of Open Access Journals (Sweden)

    Bing Liu

    2013-01-01

    Full Text Available Traditional manifold learning algorithms, such as locally linear embedding, Isomap, and Laplacian eigenmap, only provide the embedding results of the training samples. To solve the out-of-sample extension problem, spectral regression (SR) solves the problem of learning an embedding function by establishing a regression framework, which can avoid eigen-decomposition of dense matrices. Motivated by the effectiveness of SR, we incorporate multiple kernel learning (MKL) into SR for dimensionality reduction. The proposed approach (termed MKL-SR) seeks an embedding function in the Reproducing Kernel Hilbert Space (RKHS) induced by the multiple base kernels. An MKL-SR algorithm is proposed to improve the performance of kernel-based SR (KSR) further. Furthermore, the proposed MKL-SR algorithm can be performed in supervised, unsupervised, and semi-supervised situations. Experimental results on supervised classification and semi-supervised classification demonstrate the effectiveness and efficiency of our algorithm.

  10. Quantum kernel applications in medicinal chemistry.

    Science.gov (United States)

    Huang, Lulu; Massa, Lou

    2012-07-01

    Progress in the quantum mechanics of biological molecules is being driven by computational advances. The notion of quantum kernels can be introduced to simplify the formalism of quantum mechanics, making it especially suitable for parallel computation of very large biological molecules. The essential idea is to mathematically break large biological molecules into smaller kernels that are calculationally tractable, and then to represent the full molecule by a summation over the kernels. The accuracy of the kernel energy method (KEM) is shown by systematic application to a great variety of molecular types found in biology. These include peptides, proteins, DNA and RNA. Examples are given that explore the KEM across a variety of chemical models, and to the outer limits of energy accuracy and molecular size. KEM represents an advance in quantum biology applicable to problems in medicine and drug design.

  11. Kernel method-based fuzzy clustering algorithm

    Institute of Scientific and Technical Information of China (English)

    Wu Zhongdong; Gao Xinbo; Xie Weixin; Yu Jianping

    2005-01-01

    The fuzzy C-means clustering algorithm (FCM) is extended to the fuzzy kernel C-means clustering algorithm (FKCM) to effectively perform cluster analysis on diversiform structures, such as non-hyperspherical data, data with noise, data with mixtures of heterogeneous cluster prototypes, asymmetric data, etc. Based on the Mercer kernel, the FKCM clustering algorithm is derived from the FCM algorithm combined with the kernel method. The results of experiments with synthetic and real data show that, in contrast to the FCM algorithm, the FKCM clustering algorithm is universal and can effectively perform unsupervised analysis of datasets with varied structures. Kernel-based clustering is thus expected to be an important research direction in fuzzy clustering analysis.
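
    A sketch of one common kernelized fuzzy C-means variant (Gaussian kernel, prototypes kept in input space, kernel-induced distance 1 − K); it illustrates the idea but is not necessarily the exact update rule of the paper, and the data and parameters are assumptions.

```python
# Kernelized fuzzy C-means sketch with a Gaussian kernel.
import numpy as np

def kfcm(X, c=2, m=2.0, gamma=1.0, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    V = X[rng.choice(len(X), c, replace=False)]            # initial prototypes
    for _ in range(iters):
        K = np.exp(-gamma * ((X[:, None, :] - V[None]) ** 2).sum(-1))  # (n, c)
        dist = np.clip(1.0 - K, 1e-12, None)                # kernel-induced distance
        U = (1.0 / dist) ** (1.0 / (m - 1))
        U /= U.sum(axis=1, keepdims=True)                   # fuzzy memberships
        W = (U ** m) * K
        V = (W.T @ X) / W.sum(axis=0)[:, None]              # prototype update
    return U, V

X = np.vstack([np.random.default_rng(8).normal(0, 0.3, (100, 2)),
               np.random.default_rng(9).normal(2, 0.3, (100, 2))])
U, V = kfcm(X, c=2)
print(V.round(2))                                           # recovered cluster centers
```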

  12. Kernel representations for behaviors over finite rings

    NARCIS (Netherlands)

    Kuijper, M.; Pinto, R.; Polderman, J.W.; Yamamoto, Y.

    2006-01-01

    In this paper we consider dynamical systems over finite rings. The rings that we study are the integers modulo a power of a given prime. We study the theory of representations for such systems, in particular kernel representations.

  13. Ensemble Approach to Building Mercer Kernels

    Data.gov (United States)

    National Aeronautics and Space Administration — This paper presents a new methodology for automatic knowledge driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive...

  14. Convolution kernels for multi-wavelength imaging

    National Research Council Canada - National Science Library

    Boucaud, Alexandre; Bocchio, Marco; Abergel, Alain; Orieux, François; Dole, Hervé; Hadj-Youcef, Mohamed Amine

    2016-01-01

    .... Given the knowledge of the PSF in each band, a straightforward way of processing images is to homogenise them all to a target PSF using convolution kernels, so that they appear as if they had been...
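
    A rough numpy sketch of PSF homogenisation: the matching kernel is obtained from the ratio of the Fourier transforms of the target and source PSFs, with a small regularisation term. The Gaussian PSFs and the simple Wiener-style regularisation below are assumptions, not the construction used for the released kernels.

```python
# Build a PSF-matching (homogenisation) kernel from two PSFs in Fourier space.
import numpy as np

def gaussian_psf(size, sigma):
    r = np.arange(size) - size // 2
    g = np.exp(-0.5 * (r[:, None] ** 2 + r[None, :] ** 2) / sigma ** 2)
    return g / g.sum()

def matching_kernel(psf_src, psf_tgt, eps=1e-4):
    F_src, F_tgt = np.fft.fft2(psf_src), np.fft.fft2(psf_tgt)
    ratio = F_tgt * np.conj(F_src) / (np.abs(F_src) ** 2 + eps)  # regularised ratio
    return np.real(np.fft.fftshift(np.fft.ifft2(ratio)))

k = matching_kernel(gaussian_psf(65, 2.0), gaussian_psf(65, 4.0))
print(k.sum())   # should be close to 1: flux is preserved
```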

  15. Difference image analysis: Automatic kernel design using information criteria

    CERN Document Server

    Bramich, D M; Alsubai, K A; Bachelet, E; Mislis, D; Parley, N

    2015-01-01

    We present a selection of methods for automatically constructing an optimal kernel model for difference image analysis which require very few external parameters to control the kernel design. Each method consists of two components; namely, a kernel design algorithm to generate a set of candidate kernel models, and a model selection criterion to select the simplest kernel model from the candidate models that provides a sufficiently good fit to the target image. We restricted our attention to the case of solving for a spatially-invariant convolution kernel composed of delta basis functions, and we considered 19 different kernel solution methods including six employing kernel regularisation. We tested these kernel solution methods by performing a comprehensive set of image simulations and investigating how their performance in terms of model error, fit quality, and photometric accuracy depends on the properties of the reference and target images. We find that the irregular kernel design algorithm employing unreg...

  16. Preparing UO2 kernels by gelcasting

    Institute of Scientific and Technical Information of China (English)

    GUO Wenli; LIANG Tongxiang; ZHAO Xingyu; HAO Shaochang; LI Chengliang

    2009-01-01

    A process named gelcasting has been developed for the production of dense UO2 kernels for the high-temperature gas-cooled reactor. Compared with the sol-gel process, the green microspheres can be obtained by dispersing the U3O8 slurry in the gelcasting process, which makes gelcasting a more convenient process with less waste for fabricating UO2 kernels. The heat treatment...

  17. The Bergman kernel functions on Hua domains

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    We get the Bergman kernel functions in explicit formulas on four types of Hua domain. There are two key steps: First, we give the holomorphic automorphism groups of the four types of Hua domain; second, we introduce the concept of semi-Reinhardt domain and give their complete orthonormal systems. Based on these two aspects we obtain the Bergman kernel function in explicit formulas on Hua domains.

  18. Fractal Weyl law for Linux Kernel Architecture

    CERN Document Server

    Ermann, L; Shepelyansky, D L

    2010-01-01

    We study the properties of spectrum and eigenstates of the Google matrix of a directed network formed by the procedure calls in the Linux Kernel. Our results obtained for various versions of the Linux Kernel show that the spectrum is characterized by the fractal Weyl law established recently for systems of quantum chaotic scattering and the Perron-Frobenius operators of dynamical maps. The fractal Weyl exponent is found to be...

  19. Varying kernel density estimation on ℝ+

    Science.gov (United States)

    Mnatsakanov, Robert; Sarkisian, Khachatur

    2015-01-01

    In this article a new nonparametric density estimator based on a sequence of asymmetric kernels is proposed. This method is natural when estimating an unknown density function of a positive random variable. The rates of the Mean Squared Error, the Mean Integrated Squared Error, and the L1-consistency are investigated. Simulation studies are conducted to compare the new estimator and its modified version with traditional kernel density constructions. PMID:26740729
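
    The record above does not name the kernel family, so the sketch below uses gamma kernels, a standard asymmetric choice on the positive half-line, to give the flavour of a varying-kernel density estimate; the bandwidth and data are illustrative assumptions.

```python
# Minimal sketch of an asymmetric (gamma) kernel density estimator on R+;
# the specific kernels of the paper are not reproduced here.
import numpy as np
from scipy.stats import gamma

def gamma_kde(x_eval, data, b=0.1):
    """Gamma-kernel estimate: the kernel shape varies with the evaluation
    point, so no probability mass leaks below zero."""
    x_eval = np.atleast_1d(x_eval)
    dens = np.empty_like(x_eval, dtype=float)
    for i, x in enumerate(x_eval):
        dens[i] = gamma.pdf(data, a=x / b + 1.0, scale=b).mean()
    return dens

data = np.random.default_rng(1).exponential(scale=1.0, size=500)
grid = np.linspace(0.01, 5, 50)
print(gamma_kde(grid, data)[:5])
```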

  20. Adaptively Learning the Crowd Kernel

    CERN Document Server

    Tamuz, Omer; Belongie, Serge; Shamir, Ohad; Kalai, Adam Tauman

    2011-01-01

    We introduce an algorithm that, given n objects, learns a similarity matrix over all n^2 pairs, from crowdsourced data alone. The algorithm samples responses to adaptively chosen triplet-based relative-similarity queries. Each query has the form "is object 'a' more similar to 'b' or to 'c'?" and is chosen to be maximally informative given the preceding responses. The output is an embedding of the objects into Euclidean space (like MDS); we refer to this as the "crowd kernel." The runtime (empirically observed to be linear) and cost (about $0.15 per object) of the algorithm are small enough to permit its application to databases of thousands of objects. The distance matrix provided by the algorithm allows for the development of an intuitive and powerful sequential, interactive search algorithm which we demonstrate for a variety of visual stimuli. We present quantitative results that demonstrate the benefit in cost and time of our approach compared to a nonadaptive approach. We also show the ability of our appr...

  1. Evaluating the Gradient of the Thin Wire Kernel

    Science.gov (United States)

    Wilton, Donald R.; Champagne, Nathan J.

    2008-01-01

    Recently, a formulation for evaluating the thin wire kernel was developed that employed a change of variable to smooth the kernel integrand, canceling the singularity in the integrand. Hence, the typical expansion of the wire kernel in a series for use in the potential integrals is avoided. The new expression for the kernel is exact and may be used directly to determine the gradient of the wire kernel, which consists of components that are parallel and radial to the wire axis.

  2. On the Inclusion Relation of Reproducing Kernel Hilbert Spaces

    OpenAIRE

    Zhang, Haizhang; Zhao, Liang

    2011-01-01

    To help understand various reproducing kernels used in applied sciences, we investigate the inclusion relation of two reproducing kernel Hilbert spaces. Characterizations in terms of feature maps of the corresponding reproducing kernels are established. A full table of inclusion relations among widely-used translation invariant kernels is given. Concrete examples for Hilbert-Schmidt kernels are presented as well. We also discuss the preservation of such a relation under various operations of ...

  3. A Visual Approach to Investigating Shared and Global Memory Behavior of CUDA Kernels

    KAUST Repository

    Rosen, Paul

    2013-06-01

    We present an approach to investigate the memory behavior of a parallel kernel executing on thousands of threads simultaneously within the CUDA architecture. Our top-down approach allows for quickly identifying any significant differences between the execution of the many blocks and warps. As interesting warps are identified, we allow further investigation of memory behavior by visualizing the shared memory bank conflicts and global memory coalescence, first with an overview of a single warp with many operations and, subsequently, with a detailed view of a single warp and a single operation. We demonstrate the strength of our approach in the context of a parallel matrix transpose kernel and a parallel 1D Haar Wavelet transform kernel. © 2013 The Author(s) Computer Graphics Forum © 2013 The Eurographics Association and Blackwell Publishing Ltd.

  4. Kernel abortion in maize : I. Carbohydrate concentration patterns and Acid invertase activity of maize kernels induced to abort in vitro.

    Science.gov (United States)

    Hanft, J M; Jones, R J

    1986-06-01

    Kernels cultured in vitro were induced to abort by high temperature (35 degrees C) and by culturing six kernels/cob piece. Aborting kernels failed to enter a linear phase of dry mass accumulation and had a final mass that was less than 6% of nonaborting field-grown kernels. Kernels induced to abort by high temperature failed to synthesize starch in the endosperm and had elevated sucrose concentrations and low fructose and glucose concentrations in the pedicel during early growth compared to nonaborting kernels. Kernels induced to abort by high temperature also had much lower pedicel soluble acid invertase activities than did nonaborting kernels. These results suggest that high temperature during the lag phase of kernel growth may impair the process of sucrose unloading in the pedicel by indirectly inhibiting soluble acid invertase activity and prevent starch synthesis in the endosperm. Kernels induced to abort by culturing six kernels/cob piece had reduced pedicel fructose, glucose, and sucrose concentrations compared to kernels from field-grown ears. These aborting kernels also had a lower pedicel soluble acid invertase activity compared to nonaborting kernels from the same cob piece and from field-grown ears. The low invertase activity in pedicel tissue of the aborting kernels was probably caused by a lack of substrate (sucrose) for the invertase to cleave due to the intense competition for available assimilates. In contrast to kernels cultured at 35 degrees C, aborting kernels from cob pieces containing all six kernels accumulated starch in a linear fashion. These results indicate that kernels cultured six/cob piece abort because of an inadequate supply of sugar and are similar to apical kernels from field-grown ears that often abort prior to the onset of linear growth.

  5. Maize kernel hardness classification by near infrared (NIR) hyperspectral imaging and multivariate data analysis.

    Science.gov (United States)

    Williams, Paul; Geladi, Paul; Fox, Glen; Manley, Marena

    2009-10-27

    The use of near infrared (NIR) hyperspectral imaging and hyperspectral image analysis for distinguishing between hard, intermediate and soft maize kernels from inbred lines was evaluated. NIR hyperspectral images of two sets (12 and 24 kernels) of whole maize kernels were acquired using a Spectral Dimensions MatrixNIR camera with a spectral range of 960-1662 nm and a sisuChema SWIR (short wave infrared) hyperspectral pushbroom imaging system with a spectral range of 1000-2498 nm. Exploratory principal component analysis (PCA) was used on absorbance images to remove background, bad pixels and shading. On the cleaned images, PCA could be used effectively to find histological classes including glassy (hard) and floury (soft) endosperm. PCA illustrated a distinct difference between glassy and floury endosperm along principal component (PC) three on the MatrixNIR and PC two on the sisuChema with two distinguishable clusters. Subsequently partial least squares discriminant analysis (PLS-DA) was applied to build a classification model. The PLS-DA model from the MatrixNIR image (12 kernels) resulted in root mean square error of prediction (RMSEP) value of 0.18. This was repeated on the MatrixNIR image of the 24 kernels which resulted in RMSEP of 0.18. The sisuChema image yielded RMSEP value of 0.29. The reproducible results obtained with the different data sets indicate that the method proposed in this paper has a real potential for future classification uses.
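
    A minimal sketch of the analysis chain described above (exploratory PCA followed by PLS-DA scored by RMSEP), run on synthetic "spectra" rather than hyperspectral image pixels; the band count, class sizes and component numbers are assumptions for illustration.

```python
# Illustrative PCA + PLS-DA pipeline on synthetic spectra (not the study's data).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(36, 200))                 # 36 kernels x 200 spectral bands
y = np.repeat([0, 1, 2], 12)                   # hard / intermediate / soft classes
X += y[:, None] * 0.3                          # inject a weak class-dependent signal

scores = PCA(n_components=3).fit_transform(X)  # exploratory view of the classes

Y = np.eye(3)[y]                               # one-hot targets for PLS-DA
plsda = PLSRegression(n_components=5).fit(X, Y)
Y_hat = plsda.predict(X)
rmsep = np.sqrt(np.mean((Y - Y_hat) ** 2))     # root mean square error of prediction
print(Y_hat.argmax(axis=1), round(float(rmsep), 3))
```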

  6. Contextual kernel and spectral methods for learning the semantics of images.

    Science.gov (United States)

    Lu, Zhiwu; Ip, Horace H S; Peng, Yuxin

    2011-06-01

    This paper presents contextual kernel and spectral methods for learning the semantics of images that allow us to automatically annotate an image with keywords. First, to exploit the context of visual words within images for automatic image annotation, we define a novel spatial string kernel to quantify the similarity between images. Specifically, we represent each image as a 2-D sequence of visual words and measure the similarity between two 2-D sequences using the shared occurrences of s -length 1-D subsequences by decomposing each 2-D sequence into two orthogonal 1-D sequences. Based on our proposed spatial string kernel, we further formulate automatic image annotation as a contextual keyword propagation problem, which can be solved very efficiently by linear programming. Unlike the traditional relevance models that treat each keyword independently, the proposed contextual kernel method for keyword propagation takes into account the semantic context of annotation keywords and propagates multiple keywords simultaneously. Significantly, this type of semantic context can also be incorporated into spectral embedding for refining the annotations of images predicted by keyword propagation. Experiments on three standard image datasets demonstrate that our contextual kernel and spectral methods can achieve significantly better results than the state of the art.
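
    To make the decomposition idea concrete, here is a much-simplified sketch: an image becomes a 2-D grid of visual-word indices, the grid is read out as row-wise and column-wise 1-D sequences, and similarity is the number of shared contiguous s-length subsequences. The paper's kernel (and its keyword-propagation step) is richer than this; grid sizes, vocabulary and s are invented.

```python
# Simplified spatial string kernel: count shared s-length contiguous
# subsequences in row-wise and column-wise readouts of a visual-word grid.
from collections import Counter
import numpy as np

def sgram_counts(seq, s):
    return Counter(tuple(seq[i:i + s]) for i in range(len(seq) - s + 1))

def string_kernel_1d(a, b, s):
    ca, cb = sgram_counts(a, s), sgram_counts(b, s)
    return sum(ca[g] * cb[g] for g in ca.keys() & cb.keys())

def spatial_string_kernel(img_a, img_b, s=2):
    rows = string_kernel_1d(list(img_a.ravel(order="C")),
                            list(img_b.ravel(order="C")), s)
    cols = string_kernel_1d(list(img_a.ravel(order="F")),
                            list(img_b.ravel(order="F")), s)
    return rows + cols

rng = np.random.default_rng(0)
A = rng.integers(0, 10, size=(8, 8))      # visual-word labels of image A
B = rng.integers(0, 10, size=(8, 8))      # visual-word labels of image B
print(spatial_string_kernel(A, B), spatial_string_kernel(A, A))
```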

  7. Single pass kernel k-means clustering method

    Indian Academy of Sciences (India)

    T Hitendra Sarma; P Viswanath; B Eswara Reddy

    2013-06-01

    In unsupervised classification, the kernel k-means clustering method has been shown to perform better than the conventional k-means clustering method in identifying non-isotropic clusters in a data set. The space and time requirements of this method are $O(n^2)$, where $n$ is the data set size. Because of this quadratic time complexity, the kernel k-means method is not applicable to large data sets. The paper proposes a simple and faster version of the kernel k-means clustering method, called the single pass kernel k-means clustering method. The proposed method works as follows. First, a random sample $\mathcal{S}$ is selected from the data set $\mathcal{D}$. A partition $\Pi_{\mathcal{S}}$ is obtained by applying the conventional kernel k-means method on the random sample $\mathcal{S}$. The novelty of the paper is that, for each cluster in $\Pi_{\mathcal{S}}$, the exact cluster center in the input space is obtained using the gradient descent approach. Finally, each unsampled pattern is assigned to its closest exact cluster center to get a partition of the entire data set. The proposed method needs to scan the data set only once and it is much faster than the conventional kernel k-means method. The time complexity of this method is $O(s^2+t+nk)$, where $s$ is the size of the random sample $\mathcal{S}$, $k$ is the number of clusters required, and $t$ is the time taken by the gradient descent method (to find exact cluster centers). The space complexity of the method is $O(s^2)$. The proposed method can be easily implemented and is suitable for large data sets, like those in data mining applications. Experimental results show that, with a small loss of quality, the proposed method can significantly reduce the time taken compared with the conventional kernel k-means clustering method. The proposed method is also compared with other recent similar methods.
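
    A compact sketch of the single-pass idea: run kernel k-means only on a small random sample, approximate each cluster centre in the input space, and assign every other point to its nearest approximate centre in one pass. The paper refines the centres by gradient descent; here the sample mean of each cluster stands in for that step, and the sample size, data and RBF bandwidth are illustrative.

```python
# Illustrative single-pass scheme: kernel k-means on a sample, then one
# assignment pass over the full data (the gradient-descent centre refinement
# of the paper is replaced by a simple sample mean).
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def kernel_kmeans(K, k, n_iter=20, seed=0):
    """Plain kernel k-means on a precomputed Gram matrix K."""
    labels = np.random.default_rng(seed).integers(0, k, size=K.shape[0])
    for _ in range(n_iter):
        d = np.full((K.shape[0], k), np.inf)
        for c in range(k):
            idx = np.flatnonzero(labels == c)
            if idx.size:
                # ||phi(x) - m_c||^2 up to a constant, from the Gram matrix only
                d[:, c] = -2 * K[:, idx].mean(axis=1) + K[np.ix_(idx, idx)].mean()
        labels = d.argmin(axis=1)
    return labels

rng = np.random.default_rng(1)
D = np.vstack([rng.normal(0, 1, (500, 2)), rng.normal(5, 1, (500, 2))])
sample = D[rng.choice(len(D), size=100, replace=False)]        # the random sample S
lab_s = kernel_kmeans(rbf_kernel(sample, gamma=0.5), k=2)
centres = np.stack([sample[lab_s == c].mean(axis=0) for c in range(2)])
labels = np.linalg.norm(D[:, None, :] - centres[None], axis=2).argmin(axis=1)
print(np.bincount(labels))                                     # sizes of the two clusters
```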

  8. Kernel-Based Reconstruction of Graph Signals

    Science.gov (United States)

    Romero, Daniel; Ma, Meng; Giannakis, Georgios B.

    2017-02-01

    A number of applications in engineering, social sciences, physics, and biology involve inference over networks. In this context, graph signals are widely encountered as descriptors of vertex attributes or features in graph-structured data. Estimating such signals in all vertices given noisy observations of their values on a subset of vertices has been extensively analyzed in the literature of signal processing on graphs (SPoG). This paper advocates kernel regression as a framework generalizing popular SPoG modeling and reconstruction and expanding their capabilities. Formulating signal reconstruction as a regression task on reproducing kernel Hilbert spaces of graph signals permeates benefits from statistical learning, offers fresh insights, and allows for estimators to leverage richer forms of prior information than existing alternatives. A number of SPoG notions such as bandlimitedness, graph filters, and the graph Fourier transform are naturally accommodated in the kernel framework. Additionally, this paper capitalizes on the so-called representer theorem to devise simpler versions of existing Tikhonov regularized estimators, and offers a novel probabilistic interpretation of kernel methods on graphs based on graphical models. Motivated by the challenges of selecting the bandwidth parameter in SPoG estimators or the kernel map in kernel-based methods, the present paper further proposes two multi-kernel approaches with complementary strengths. Whereas the first enables estimation of the unknown bandwidth of bandlimited signals, the second allows for efficient graph filter selection. Numerical tests with synthetic as well as real data demonstrate the merits of the proposed methods relative to state-of-the-art alternatives.
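
    A minimal sketch of kernel-based reconstruction on a graph: a regularised-Laplacian kernel over the vertices plus ridge regression from a handful of noisy vertex observations to the whole graph. The kernel choice, graph and regularisation constants are assumptions for illustration, not the multi-kernel estimators of the paper.

```python
# Illustrative kernel ridge regression of a graph signal from partial,
# noisy vertex observations, using a regularized-Laplacian kernel.
import numpy as np

rng = np.random.default_rng(0)
n = 30
A = (rng.random((n, n)) < 0.15).astype(float)
A = np.triu(A, 1); A = A + A.T                      # undirected adjacency matrix
L = np.diag(A.sum(axis=1)) - A                      # combinatorial graph Laplacian

K = np.linalg.inv(np.eye(n) + 5.0 * L)              # regularized-Laplacian kernel

signal = np.linalg.eigh(L)[1][:, 1]                 # a smooth (low-frequency) graph signal
obs = rng.choice(n, size=10, replace=False)         # observed vertex subset
y = signal[obs] + 0.01 * rng.normal(size=10)        # noisy observations

alpha = np.linalg.solve(K[np.ix_(obs, obs)] + 1e-3 * np.eye(10), y)
estimate = K[:, obs] @ alpha                        # reconstruction on all vertices
print(float(np.mean((estimate - signal) ** 2)))     # mean squared reconstruction error
```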

  9. A new Mercer sigmoid kernel for clinical data classification.

    Science.gov (United States)

    Carrington, André M; Fieguth, Paul W; Chen, Helen H

    2014-01-01

    In classification with Support Vector Machines, only Mercer kernels, i.e. valid kernels, such as the Gaussian RBF kernel, are widely accepted and thus suitable for clinical data. Practitioners would also like to use the sigmoid kernel, a non-Mercer kernel, but its range of validity is difficult to determine, and even within range its validity is in dispute. Despite these shortcomings the sigmoid kernel is used by some, and two kernels in the literature attempt to emulate and improve upon it. We propose the first Mercer sigmoid kernel, that is therefore trustworthy for the classification of clinical data. We show the similarity between the Mercer sigmoid kernel and the sigmoid kernel and, in the process, identify a normalization technique that improves the classification accuracy of the latter. The Mercer sigmoid kernel achieves the best mean accuracy on three clinical data sets, detecting melanoma in skin lesions better than the most popular kernels; while with non-clinical data sets it has no significant difference in median accuracy as compared with the Gaussian RBF kernel. It consistently classifies some points correctly that the Gaussian RBF kernel does not and vice versa.

  10. Pattern Classification of Signals Using Fisher Kernels

    Directory of Open Access Journals (Sweden)

    Yashodhan Athavale

    2012-01-01

    Full Text Available The intention of this study is to gauge the performance of Fisher kernels for dimension simplification and classification of time-series signals. Our research work has indicated that Fisher kernels have shown substantial improvement in signal classification by enabling clearer pattern visualization in three-dimensional space. In this paper, we will exhibit the performance of Fisher kernels for two domains: financial and biomedical. The financial domain study involves identifying the possibility of collapse or survival of a company trading in the stock market. For assessing the fate of each company, we have collected financial time-series composed of weekly closing stock prices in a common time frame, using Thomson Datastream software. The biomedical domain study involves knee signals collected using the vibration arthrometry technique. This study uses the severity of cartilage degeneration for classifying normal and abnormal knee joints. In both studies, we apply Fisher Kernels incorporated with a Gaussian mixture model (GMM for dimension transformation into feature space, which is created as a three-dimensional plot for visualization and for further classification using support vector machines. From our experiments we observe that Fisher Kernel usage fits really well for both kinds of signals, with low classification error rates.

  11. Analog forecasting with dynamics-adapted kernels

    Science.gov (United States)

    Zhao, Zhizhen; Giannakis, Dimitrios

    2016-09-01

    Analog forecasting is a nonparametric technique introduced by Lorenz in 1969 which predicts the evolution of states of a dynamical system (or observables defined on the states) by following the evolution of the sample in a historical record of observations which most closely resembles the current initial data. Here, we introduce a suite of forecasting methods which improve traditional analog forecasting by combining ideas from kernel methods developed in harmonic analysis and machine learning and state-space reconstruction for dynamical systems. A key ingredient of our approach is to replace single-analog forecasting with weighted ensembles of analogs constructed using local similarity kernels. The kernels used here employ a number of dynamics-dependent features designed to improve forecast skill, including Takens’ delay-coordinate maps (to recover information in the initial data lost through partial observations) and a directional dependence on the dynamical vector field generating the data. Mathematically, our approach is closely related to kernel methods for out-of-sample extension of functions, and we discuss alternative strategies based on the Nyström method and the multiscale Laplacian pyramids technique. We illustrate these techniques in applications to forecasting in a low-order deterministic model for atmospheric dynamics with chaotic metastability, and interannual-scale forecasting in the North Pacific sector of a comprehensive climate model. We find that forecasts based on kernel-weighted ensembles have significantly higher skill than the conventional approach following a single analog.
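
    The core replacement of a single analog by a kernel-weighted ensemble can be sketched in a few lines; the delay-coordinate maps, directional kernel terms and Nyström/Laplacian-pyramid machinery discussed above are omitted, and the time series, lead time and bandwidth are invented.

```python
# Illustrative kernel analog forecast: average the futures of all historical
# analogs, weighted by a Gaussian similarity kernel.
import numpy as np

def kernel_analog_forecast(history, x0, lead=10, eps=0.5):
    """history: (T,) samples of a scalar observable; x0: current value."""
    past = history[:-lead]                 # candidate analogs
    future = history[lead:]                # their lead-step-ahead values
    w = np.exp(-(past - x0) ** 2 / eps)    # similarity-kernel weights
    return np.sum(w * future) / np.sum(w)

t = np.linspace(0, 200, 4000)
history = np.sin(t) + 0.05 * np.random.default_rng(0).normal(size=t.size)
print(kernel_analog_forecast(history, x0=0.8, lead=20))
```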

  12. Object classification and detection with context kernel descriptors

    DEFF Research Database (Denmark)

    Pan, Hong; Olsen, Søren Ingvor; Zhu, Yaping

    2014-01-01

    Context information is important in object representation. By embedding context cue of image attributes into kernel descriptors, we propose a set of novel kernel descriptors called Context Kernel Descriptors (CKD) for object classification and detection. The motivation of CKD is to use spatial...... consistency of image attributes or features defined within a neighboring region to improve the robustness of descriptor matching in kernel space. For feature selection, Kernel Entropy Component Analysis (KECA) is exploited to learn a subset of discriminative CKD. Different from Kernel Principal Component...

  13. OS X and iOS Kernel Programming

    CERN Document Server

    Halvorsen, Ole Henry

    2011-01-01

    OS X and iOS Kernel Programming combines essential operating system and kernel architecture knowledge with a highly practical approach that will help you write effective kernel-level code. You'll learn fundamental concepts such as memory management and thread synchronization, as well as the I/O Kit framework. You'll also learn how to write your own kernel-level extensions, such as device drivers for USB and Thunderbolt devices, including networking, storage and audio drivers. OS X and iOS Kernel Programming provides an incisive and complete introduction to the XNU kernel, which runs iPhones, i

  14. The scalar field kernel in cosmological spaces

    Energy Technology Data Exchange (ETDEWEB)

    Koksma, Jurjen F; Prokopec, Tomislav [Institute for Theoretical Physics (ITP) and Spinoza Institute, Utrecht University, Postbus 80195, 3508 TD Utrecht (Netherlands); Rigopoulos, Gerasimos I [Helsinki Institute of Physics, University of Helsinki, PO Box 64, FIN-00014 (Finland)], E-mail: J.F.Koksma@phys.uu.nl, E-mail: T.Prokopec@phys.uu.nl, E-mail: gerasimos.rigopoulos@helsinki.fi

    2008-06-21

    We construct the quantum-mechanical evolution operator in the functional Schroedinger picture-the kernel-for a scalar field in spatially homogeneous FLRW spacetimes when the field is (a) free and (b) coupled to a spacetime-dependent source term. The essential element in the construction is the causal propagator, linked to the commutator of two Heisenberg picture scalar fields. We show that the kernels can be expressed solely in terms of the causal propagator and derivatives of the causal propagator. Furthermore, we show that our kernel reveals the standard light cone structure in FLRW spacetimes. We finally apply the result to Minkowski spacetime, to de Sitter spacetime and calculate the forward time evolution of the vacuum in a general FLRW spacetime.

  15. Robust Visual Tracking via Fuzzy Kernel Representation

    Directory of Open Access Journals (Sweden)

    Zhiqiang Wen

    2013-05-01

    Full Text Available A robust visual kernel tracking approach is presented for addressing the problem of background pixels contaminating the object model. First, after a definition of a fuzzy set on the image is given, a fuzzy factor is embedded into the object model to form the fuzzy kernel representation. Second, fuzzy membership functions are generated by a center-surround approach and the log likelihood ratio of feature distributions. Third, details of the fuzzy kernel tracking algorithm are provided. After that, methods for parameter selection and performance evaluation of the tracking algorithm are proposed. Finally, extensive experimental results show that our method can reduce the influence of the incomplete representation of the object model by integrating both color features and background features.

  16. Fractal Weyl law for Linux Kernel architecture

    Science.gov (United States)

    Ermann, L.; Chepelianskii, A. D.; Shepelyansky, D. L.

    2011-01-01

    We study the properties of spectrum and eigenstates of the Google matrix of a directed network formed by the procedure calls in the Linux Kernel. Our results obtained for various versions of the Linux Kernel show that the spectrum is characterized by the fractal Weyl law established recently for systems of quantum chaotic scattering and the Perron-Frobenius operators of dynamical maps. The fractal Weyl exponent is found to be ν ≈ 0.65 that corresponds to the fractal dimension of the network d ≈ 1.3. An independent computation of the fractal dimension by the cluster growing method, generalized for directed networks, gives a close value d ≈ 1.4. The eigenmodes of the Google matrix of Linux Kernel are localized on certain principal nodes. We argue that the fractal Weyl law should be generic for directed networks with the fractal dimension d < 2.

  17. Tile-Compressed FITS Kernel for IRAF

    Science.gov (United States)

    Seaman, R.

    2011-07-01

    The Flexible Image Transport System (FITS) is a ubiquitously supported standard of the astronomical community. Similarly, the Image Reduction and Analysis Facility (IRAF), developed by the National Optical Astronomy Observatory, is a widely used astronomical data reduction package. IRAF supplies compatibility with FITS format data through numerous tools and interfaces. The most integrated of these is IRAF's FITS image kernel that provides access to FITS from any IRAF task that uses the basic IMIO interface. The original FITS kernel is a complex interface of purpose-built procedures that presents growing maintenance issues and lacks recent FITS innovations. A new FITS kernel is being developed at NOAO that is layered on the CFITSIO library from the NASA Goddard Space Flight Center. The simplified interface will minimize maintenance headaches as well as add important new features such as support for the FITS tile-compressed (fpack) format.

  18. A kernel-based approach for biomedical named entity recognition.

    Science.gov (United States)

    Patra, Rakesh; Saha, Sujan Kumar

    2013-01-01

    Support vector machine (SVM) is one of the popular machine learning techniques used in various text processing tasks including named entity recognition (NER). The performance of the SVM classifier largely depends on the appropriateness of the kernel function. In the last few years a number of task-specific kernel functions have been proposed and used in various text processing tasks, for example, string kernel, graph kernel, tree kernel and so on. So far very few efforts have been devoted to the development of NER task specific kernel. In the literature we found that the tree kernel has been used in NER task only for entity boundary detection or reannotation. The conventional tree kernel is unable to execute the complete NER task on its own. In this paper we have proposed a kernel function, motivated by the tree kernel, which is able to perform the complete NER task. To examine the effectiveness of the proposed kernel, we have applied the kernel function on the openly available JNLPBA 2004 data. Our kernel executes the complete NER task and achieves reasonable accuracy.

  19. Full Waveform Inversion Using Waveform Sensitivity Kernels

    Science.gov (United States)

    Schumacher, Florian; Friederich, Wolfgang

    2013-04-01

    We present a full waveform inversion concept for applications ranging from seismological to engineering contexts, in which the steps of forward simulation, computation of sensitivity kernels, and the actual inversion are kept separate from each other. We derive waveform sensitivity kernels from Born scattering theory, which for unit material perturbations are identical to the Born integrand for the considered path between source and receiver. The evaluation of such a kernel requires the calculation of Green functions and their strains for single forces at the receiver position, as well as displacement fields and strains originating at the seismic source. We compute these quantities in the frequency domain using the 3D spectral element code SPECFEM3D (Tromp, Komatitsch and Liu, 2008) and the 1D semi-analytical code GEMINI (Friederich and Dalkolmo, 1995) in both Cartesian and spherical frameworks. We developed and implemented the modularized software package ASKI (Analysis of Sensitivity and Kernel Inversion) to compute waveform sensitivity kernels from wavefields generated by any of the above methods (support for more methods is planned), where some examples will be shown. As the kernels can be computed independently of any data values, this approach allows a sensitivity and resolution analysis to be done first without inverting any data. In the context of active seismic experiments, this property may be used to investigate optimal acquisition geometry and expectable resolution before actually collecting any data, assuming the background model is known sufficiently well. The actual inversion step can then be repeated at relatively low cost with different (sub)sets of data, adding different smoothing conditions. Using the sensitivity kernels, we expect the waveform inversion to have better convergence properties compared with strategies that use gradients of a misfit function. Also the propagation of the forward wavefield and the backward propagation from the receiver

  20. Inverse of the String Theory KLT Kernel

    CERN Document Server

    Mizera, Sebastian

    2016-01-01

    The field theory Kawai-Lewellen-Tye (KLT) kernel, which relates scattering amplitudes of gravitons and gluons, turns out to be the inverse of a matrix whose components are bi-adjoint scalar partial amplitudes. In this note we propose an analogous construction for the string theory KLT kernel. We present simple diagrammatic rules for the computation of the $\\alpha'$-corrected bi-adjoint scalar amplitudes that are exact in $\\alpha'$. We find compact expressions in terms of graphs, where the standard Feynman propagators $1/p^2$ are replaced by either $1/\\sin (\\pi \\alpha' p^2)$ or $1/\\tan (\\pi \\alpha' p^2)$, which is determined by a recursive procedure.

  1. Reduced multiple empirical kernel learning machine.

    Science.gov (United States)

    Wang, Zhe; Lu, MingZhe; Gao, Daqi

    2015-02-01

    Multiple kernel learning (MKL) is demonstrated to be flexible and effective in depicting heterogeneous data sources since MKL can introduce multiple kernels rather than a single fixed kernel into applications. However, MKL would get a high time and space complexity in contrast to single kernel learning, which is not expected in real-world applications. Meanwhile, it is known that the kernel mapping ways of MKL generally have two forms including implicit kernel mapping and empirical kernel mapping (EKM), where the latter is less attracted. In this paper, we focus on the MKL with the EKM, and propose a reduced multiple empirical kernel learning machine named RMEKLM for short. To the best of our knowledge, it is the first to reduce both time and space complexity of the MKL with EKM. Different from the existing MKL, the proposed RMEKLM adopts the Gauss Elimination technique to extract a set of feature vectors, which is validated that doing so does not lose much information of the original feature space. Then RMEKLM adopts the extracted feature vectors to span a reduced orthonormal subspace of the feature space, which is visualized in terms of the geometry structure. It can be demonstrated that the spanned subspace is isomorphic to the original feature space, which means that the dot product of two vectors in the original feature space is equal to that of the two corresponding vectors in the generated orthonormal subspace. More importantly, the proposed RMEKLM brings a simpler computation and meanwhile needs a less storage space, especially in the processing of testing. Finally, the experimental results show that RMEKLM owns a much efficient and effective performance in terms of both complexity and classification. The contributions of this paper can be given as follows: (1) by mapping the input space into an orthonormal subspace, the geometry of the generated subspace is visualized; (2) this paper first reduces both the time and space complexity of the EKM-based MKL; (3

  2. Volatile compound formation during argan kernel roasting.

    Science.gov (United States)

    El Monfalouti, Hanae; Charrouf, Zoubida; Giordano, Manuela; Guillaume, Dominique; Kartah, Badreddine; Harhar, Hicham; Gharby, Saïd; Denhez, Clément; Zeppa, Giuseppe

    2013-01-01

    Virgin edible argan oil is prepared by cold-pressing argan kernels previously roasted at 110 degrees C for up to 25 minutes. The concentration of 40 volatile compounds in virgin edible argan oil was determined as a function of argan kernel roasting time. Most of the volatile compounds begin to be formed after 15 to 25 minutes of roasting. This suggests that a strictly controlled roasting time should allow the modulation of argan oil taste and thus satisfy different types of consumers. This could be of major importance considering the present booming use of edible argan oil.

  3. Learning Rates for -Regularized Kernel Classifiers

    Directory of Open Access Journals (Sweden)

    Hongzhi Tong

    2013-01-01

    Full Text Available We consider a family of classification algorithms generated from a regularization kernel scheme associated with -regularizer and convex loss function. Our main purpose is to provide an explicit convergence rate for the excess misclassification error of the produced classifiers. The error decomposition includes approximation error, hypothesis error, and sample error. We apply some novel techniques to estimate the hypothesis error and sample error. Learning rates are eventually derived under some assumptions on the kernel, the input space, the marginal distribution, and the approximation error.

  4. Face Recognition Using Kernel Discriminant Analysis

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    Linear Discriminant Analysis (LDA) has demonstrated its success in face recognition, but LDA has difficulty handling highly nonlinear problems, such as large changes of viewpoint and illumination in face recognition. In order to overcome these problems, we investigate Kernel Discriminant Analysis (KDA) for face recognition. This approach adopts kernel functions to replace the dot products of the nonlinear mapping in the high dimensional feature space, so that the nonlinear problem can be solved conveniently in the input space without explicit mapping. Two face databases are used to test the KDA approach. The results show that our approach outperforms the conventional PCA (Eigenface) and LDA (Fisherface) approaches.

  5. Regularization techniques for PSF-matching kernels - I. Choice of kernel basis

    Science.gov (United States)

    Becker, A. C.; Homrighausen, D.; Connolly, A. J.; Genovese, C. R.; Owen, R.; Bickerton, S. J.; Lupton, R. H.

    2012-09-01

    We review current methods for building point spread function (PSF)-matching kernels for the purposes of image subtraction or co-addition. Such methods use a linear decomposition of the kernel on a series of basis functions. The correct choice of these basis functions is fundamental to the efficiency and effectiveness of the matching - the chosen bases should represent the underlying signal using a reasonably small number of shapes, and/or have a minimum number of user-adjustable tuning parameters. We examine methods whose bases comprise multiple Gauss-Hermite polynomials, as well as a form-free basis composed of delta-functions. Kernels derived from delta-functions are unsurprisingly shown to be more expressive; they are able to take more general shapes and perform better in situations where sum-of-Gaussian methods are known to fail. However, due to its many degrees of freedom (the maximum number allowed by the kernel size) this basis tends to overfit the problem and yields noisy kernels having large variance. We introduce a new technique to regularize these delta-function kernel solutions, which bridges the gap between the generality of delta-function kernels and the compactness of sum-of-Gaussian kernels. Through this regularization we are able to create general kernel solutions that represent the intrinsic shape of the PSF-matching kernel with only one degree of freedom, the strength of the regularization λ. The role of λ is effectively to exchange variance in the resulting difference image with variance in the kernel itself. We examine considerations in choosing the value of λ, including statistical risk estimators and the ability of the solution to predict solutions for adjacent areas. Both of these suggest moderate strengths of λ between 0.1 and 1.0, although this optimization is likely data set dependent. This model allows for flexible representations of the convolution kernel that have significant predictive ability and will prove useful in implementing

  6. Kernel methods and minimum contrast estimators for empirical deconvolution

    CERN Document Server

    Delaigle, Aurore

    2010-01-01

    We survey classical kernel methods for providing nonparametric solutions to problems involving measurement error. In particular we outline kernel-based methodology in this setting, and discuss its basic properties. Then we point to close connections that exist between kernel methods and much newer approaches based on minimum contrast techniques. The connections are through use of the sinc kernel for kernel-based inference. This `infinite order' kernel is not often used explicitly for kernel-based deconvolution, although it has received attention in more conventional problems where measurement error is not an issue. We show that in a comparison between kernel methods for density deconvolution, and their counterparts based on minimum contrast, the two approaches give identical results on a grid which becomes increasingly fine as the bandwidth decreases. In consequence, the main numerical differences between these two techniques are arguably the result of different approaches to choosing smoothing parameters.

  7. Kernel methods in orthogonalization of multi- and hypervariate data

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2009-01-01

    A kernel version of maximum autocorrelation factor (MAF) analysis is described very briefly and applied to change detection in remotely sensed hyperspectral image (HyMap) data. The kernel version is based on a dual formulation also termed Q-mode analysis in which the data enter into the analysis...... via inner products in the Gram matrix only. In the kernel version the inner products are replaced by inner products between nonlinear mappings into higher dimensional feature space of the original data. Via kernel substitution also known as the kernel trick these inner products between the mappings...... are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of this kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel PCA and MAF analysis handle nonlinearities by implicitly transforming data into high (even infinite...

  8. Variable kernel density estimation in high-dimensional feature spaces

    CSIR Research Space (South Africa)

    Van der Walt, Christiaan M

    2017-02-01

    Full Text Available Estimating the joint probability density function of a dataset is a central task in many machine learning applications. In this work we address the fundamental problem of kernel bandwidth estimation for variable kernel density estimation in high...

  9. HEAT KERNEL AND HARDY'S THEOREM FOR JACOBI TRANSFORM

    Institute of Scientific and Technical Information of China (English)

    T. KAWAZOE; LIU JIANMING(刘建明)

    2003-01-01

    In this paper, the authors obtain sharp upper and lower bounds for the heat kernel associated with the Jacobi transform, and derive some analogues of Hardy's Theorem for the Jacobi transform by using the sharp estimate of the heat kernel.

  10. Analysis of maize ( Zea mays ) kernel density and volume using microcomputed tomography and single-kernel near-infrared spectroscopy.

    Science.gov (United States)

    Gustin, Jeffery L; Jackson, Sean; Williams, Chekeria; Patel, Anokhee; Armstrong, Paul; Peter, Gary F; Settles, A Mark

    2013-11-20

    Maize kernel density affects milling quality of the grain. Kernel density of bulk samples can be predicted by near-infrared reflectance (NIR) spectroscopy, but no accurate method to measure individual kernel density has been reported. This study demonstrates that individual kernel density and volume are accurately measured using X-ray microcomputed tomography (μCT). Kernel density was significantly correlated with kernel volume, air space within the kernel, and protein content. Embryo density and volume did not influence overall kernel density. Partial least-squares (PLS) regression of μCT traits with single-kernel NIR spectra gave stable predictive models for kernel density (R² = 0.78, SEP = 0.034 g/cm³) and volume (R² = 0.86, SEP = 2.88 cm³). Density and volume predictions were accurate for data collected over 10 months based on kernel weights calculated from predicted density and volume (R² = 0.83, SEP = 24.78 mg). Kernel density was significantly correlated with bulk test weight (r = 0.80), suggesting that selection of dense kernels can translate to improved agronomic performance.

  11. Evaluation of sintering effects on SiC-incorporated UO2 kernels under Ar and Ar-4%H2 environments

    Science.gov (United States)

    Silva, Chinthaka M.; Lindemer, Terrence B.; Hunt, Rodney D.; Collins, Jack L.; Terrani, Kurt A.; Snead, Lance L.

    2013-11-01

    Silicon carbide (SiC) is suggested as an oxygen getter in UO2 kernels used for tristructural isotropic (TRISO) particle fuels and to prevent kernel migration during irradiation. Scanning electron microscopy and X-ray diffractometry analyses performed on sintered kernels verified that an internal gelation process can be used to incorporate SiC in UO2 fuel kernels. Even though the presence of UC in either argon (Ar) or Ar-4%H2 sintered samples suggested a lowering of the SiC up to 3.5-1.4 mol%, respectively, the presence of other silicon-related chemical phases indicates the preservation of silicon in the kernels during sintering process. UC formation was presumed to occur by two reactions. The first was by the reaction of SiC with its protective SiO2 oxide layer on SiC grains to produce volatile SiO and free carbon that subsequently reacted with UO2 to form UC. The second process was direct UO2 reaction with SiC grains to form SiO, CO, and UC. A slightly higher density and UC content were observed in the sample sintered in Ar-4%H2, but both atmospheres produced kernels with ˜95% of theoretical density. It is suggested that incorporating CO in the sintering gas could prevent UC formation and preserve the initial SiC content.

  12. An Adaptive Background Subtraction Method Based on Kernel Density Estimation

    Directory of Open Access Journals (Sweden)

    Mignon Park

    2012-09-01

    Full Text Available In this paper, a pixel-based background modeling method, which uses nonparametric kernel density estimation, is proposed. To reduce the burden of image storage, we modify the original KDE method by using the first frame to initialize it and update it subsequently at every frame by controlling the learning rate according to the situations. We apply an adaptive threshold method based on image changes to effectively subtract the dynamic backgrounds. The devised scheme allows the proposed method to automatically adapt to various environments and effectively extract the foreground. The method presented here exhibits good performance and is suitable for dynamic background environments. The algorithm is tested on various video sequences and compared with other state-of-the-art background subtraction methods so as to verify its performance.
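
    A toy per-pixel version of the scheme above: every pixel keeps a small buffer of past grey values, a new value is declared foreground when its kernel density under that buffer falls below a threshold, and the buffer is refreshed at a learning rate using only background pixels. Buffer size, bandwidth, threshold and learning rate are illustrative assumptions.

```python
# Illustrative per-pixel KDE background model (not the paper's implementation).
import numpy as np

class KDEBackground:
    def __init__(self, first_frame, n_samples=20, bandwidth=10.0, seed=0):
        # initialise the model from the first frame, as in the method above
        self.samples = np.repeat(first_frame[None].astype(float), n_samples, axis=0)
        self.h = bandwidth
        self._rng = np.random.default_rng(seed)

    def apply(self, frame, thresh=1e-3, lr=0.1):
        frame = frame.astype(float)
        diff = (frame[None] - self.samples) / self.h
        # Gaussian kernel density of the new value under each pixel's buffer
        dens = np.exp(-0.5 * diff ** 2).mean(axis=0) / (self.h * np.sqrt(2 * np.pi))
        fg = dens < thresh                      # low density under the model: foreground
        if self._rng.random() < lr:             # occasional buffer update (learning rate)
            self.samples = np.roll(self.samples, 1, axis=0)
            self.samples[0] = np.where(fg, self.samples[0], frame)
        return fg

bg = KDEBackground(np.zeros((48, 64)))
frame = np.zeros((48, 64)); frame[10:20, 10:20] = 255.0    # a bright object appears
print(int(bg.apply(frame).sum()))                          # number of foreground pixels
```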

  13. Kernel Partial Least Squares for Nonlinear Regression and Discrimination

    Science.gov (United States)

    Rosipal, Roman; Clancy, Daniel (Technical Monitor)

    2002-01-01

    This paper summarizes recent results on applying the method of partial least squares (PLS) in a reproducing kernel Hilbert space (RKHS). A previously proposed kernel PLS regression model was proven to be competitive with other regularized regression methods in RKHS. The family of nonlinear kernel-based PLS models is extended by considering the kernel PLS method for discrimination. Theoretical and experimental results on a two-class discrimination problem indicate usefulness of the method.

  14. Mitigation of artifacts in RTM with migration kernel decomposition

    KAUST Repository

    Zhan, Ge

    2012-01-01

    The migration kernel for reverse-time migration (RTM) can be decomposed into four component kernels using Born scattering and migration theory. Each component kernel has a unique physical interpretation and can be interpreted differently. In this paper, we present a generalized diffraction-stack migration approach for reducing RTM artifacts via decomposition of migration kernel. The decomposition leads to an improved understanding of migration artifacts and, therefore, presents us with opportunities for improving the quality of RTM images.

  15. Sparse Event Modeling with Hierarchical Bayesian Kernel Methods

    Science.gov (United States)

    2016-01-05

    …the kernel function, which depends on the application and the model user. This research uses the most popular kernel function, the radial basis… …an important role in the nation's economy. Unfortunately, the system's reliability is declining due to the aging components of the network [Grier… …kernel function. Gaussian Bayesian kernel models became very popular recently and were extended and applied to a number of classification problems. An…

  16. An Extended Ockham Algebra with Endomorphism Kernel Property

    Institute of Scientific and Technical Information of China (English)

    Jie FANG

    2007-01-01

    An algebraic structure L is said to have the endomorphism kernel property if every congruence on L, other than the universal congruence, is the kernel of an endomorphism on L. In this paper, we consider the EKP (that is, the endomorphism kernel property) for an extended Ockham algebra L. In particular, we describe the structure of the finite symmetric extended de Morgan algebras having EKP.

  17. End-use quality of soft kernel durum wheat

    Science.gov (United States)

    Kernel texture is a major determinant of end-use quality of wheat. Durum wheat has very hard kernels. We developed soft kernel durum wheat via Ph1b-mediated homoeologous recombination. The Hardness locus was transferred from Chinese Spring to Svevo durum wheat via back-crossing. ‘Soft Svevo’ had SKC...

  18. 7 CFR 981.61 - Redetermination of kernel weight.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Redetermination of kernel weight. 981.61 Section 981... GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.61 Redetermination of kernel weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of...

  19. Multiple spectral kernel learning and a gaussian complexity computation.

    Science.gov (United States)

    Reyhani, Nima

    2013-07-01

    Multiple kernel learning (MKL) partially solves the kernel selection problem in support vector machines and similar classifiers by minimizing the empirical risk over a subset of the linear combination of given kernel matrices. For large sample sets, the size of the kernel matrices becomes a numerical issue. In many cases, the kernel matrix is effectively of low rank. However, the low-rank property is not efficiently utilized in MKL algorithms. Here, we suggest multiple spectral kernel learning that efficiently uses the low-rank property by finding a kernel matrix from a set of Gram matrices of a few eigenvectors from all given kernel matrices, called a spectral kernel set. We provide a new bound for the gaussian complexity of the proposed kernel set, which depends on both the geometry of the kernel set and the number of Gram matrices. This characterization of the complexity implies that in an MKL setting, adding more kernels may not monotonically increase the complexity, while previous bounds show otherwise.

  20. A Fast and Simple Graph Kernel for RDF

    NARCIS (Netherlands)

    de Vries, G.K.D.; de Rooij, S.

    2013-01-01

    In this paper we study a graph kernel for RDF based on constructing a tree for each instance and counting the number of paths in that tree. In our experiments this kernel shows comparable classification performance to the previously introduced intersection subtree kernel, but is significantly faster.

  1. 7 CFR 981.60 - Determination of kernel weight.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Determination of kernel weight. 981.60 Section 981.60... Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which...

  2. 21 CFR 176.350 - Tamarind seed kernel powder.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 3 2010-04-01 2009-04-01 true Tamarind seed kernel powder. 176.350 Section 176... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in...

  3. Heat kernel analysis for Bessel operators on symmetric cones

    DEFF Research Database (Denmark)

    Möllers, Jan

    2014-01-01

    . The heat kernel is explicitly given in terms of a multivariable $I$-Bessel function on $Ω$. Its corresponding heat kernel transform defines a continuous linear operator between $L^p$-spaces. The unitary image of the $L^2$-space under the heat kernel transform is characterized as a weighted Bergmann space...

  4. Stable Kernel Representations as Nonlinear Left Coprime Factorizations

    NARCIS (Netherlands)

    Paice, A.D.B.; Schaft, A.J. van der

    1994-01-01

    A representation of nonlinear systems based on the idea of representing the input-output pairs of the system as elements of the kernel of a stable operator has been recently introduced. This has been denoted the kernel representation of the system. In this paper it is demonstrated that the kernel

  5. Kernel Temporal Differences for Neural Decoding

    Directory of Open Access Journals (Sweden)

    Jihye Bae

    2015-01-01

    Full Text Available We study the feasibility and capability of the kernel temporal difference (KTD(λ)) algorithm for neural decoding. KTD(λ) is an online, kernel-based learning algorithm, which has been introduced to estimate value functions in reinforcement learning. This algorithm combines kernel-based representations with the temporal difference approach to learning. One of our key observations is that by using strictly positive definite kernels, the algorithm's convergence can be guaranteed for policy evaluation. The algorithm's nonlinear functional approximation capabilities are shown in both simulations of policy evaluation and neural decoding problems (policy improvement). KTD can handle high-dimensional neural states containing spatial-temporal information at a reasonable computational complexity, allowing real-time applications. When the algorithm seeks a proper mapping between a monkey's neural states and desired positions of a computer cursor or a robot arm, in both open-loop and closed-loop experiments, it can effectively learn the neural state to action mapping. Finally, a visualization of the coadaptation process between the decoder and the subject shows the algorithm's capabilities in reinforcement learning brain machine interfaces.

  6. Bergman kernel and complex singularity exponent

    Institute of Scientific and Technical Information of China (English)

    LEE; HanJin

    2009-01-01

    We give a precise estimate of the Bergman kernel for the model domain defined by $\Omega_F = \{(z,w) \in \mathbb{C}^{n+1} : \operatorname{Im} w - |F(z)|^2 > 0\}$, where $F = (f_1,\dots,f_m)$ is a holomorphic map from $\mathbb{C}^n$ to $\mathbb{C}^m$, in terms of the complex singularity exponent of $F$.

  7. Kernel based subspace projection of hyperspectral images

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Nielsen, Allan Aasbjerg; Arngren, Morten

    In hyperspectral image analysis an exploratory approach to analyse the image data is to conduct subspace projections. As linear projections often fail to capture the underlying structure of the data, we present kernel based subspace projections of PCA and Maximum Autocorrelation Factors (MAF...

  8. Analytic properties of the Virasoro modular kernel

    CERN Document Server

    Nemkov, Nikita

    2016-01-01

    On the space of generic conformal blocks the modular transformation of the underlying surface is realized as a linear integral transformation. We show that the analytic properties of conformal block implied by Zamolodchikov's formula are shared by the kernel of the modular transformation and illustrate this by explicit computation in the case of the one-point toric conformal block.

  9. A Cubic Kernel for Feedback Vertex Set

    NARCIS (Netherlands)

    Bodlaender, H.L.

    2006-01-01

    The FEEDBACK VERTEX SET problem on unweighted, undirected graphs is considered. Improving upon a result by Burrage et al. [7], we show that this problem has a kernel with O(κ³) vertices, i.e., there is a polynomial time algorithm that, given a graph G and an integer κ, finds a graph G' and integer

  10. Analytic properties of the Virasoro modular kernel

    Energy Technology Data Exchange (ETDEWEB)

    Nemkov, Nikita [Moscow Institute of Physics and Technology (MIPT), Dolgoprudny (Russian Federation); Institute for Theoretical and Experimental Physics (ITEP), Moscow (Russian Federation); National University of Science and Technology MISIS, The Laboratory of Superconducting metamaterials, Moscow (Russian Federation)

    2017-06-15

    On the space of generic conformal blocks the modular transformation of the underlying surface is realized as a linear integral transformation. We show that the analytic properties of conformal block implied by Zamolodchikov's formula are shared by the kernel of the modular transformation and illustrate this by explicit computation in the case of the one-point toric conformal block. (orig.)

  11. Hyperbolic L2-modules with Reproducing Kernels

    Institute of Scientific and Technical Information of China (English)

    David EELBODE; Frank SOMMEN

    2006-01-01

    In this paper, the Dirac operator on the Klein model for the hyperbolic space is considered. A function space containing L2-functions on the sphere S^(m-1) in ℝ^m, which are boundary values of solutions for this operator, is defined, and it is proved that this gives rise to a Hilbert module with a reproducing kernel.

  12. Protein Structure Prediction Using String Kernels

    Science.gov (United States)

    2006-03-03

    …consists of 4352 sequences from SCOP version 1.53 extracted from the Astral database, grouped into families and superfamilies. The dataset is processed…

  13. Bergman kernel and complex singularity exponent

    Institute of Scientific and Technical Information of China (English)

    CHEN BoYong; LEE HanJin

    2009-01-01

    We give a precise estimate of the Bergman kernel for the model domain defined by $\Omega_F = \{(z,w) \in \mathbb{C}^{n+1} : \operatorname{Im} w - |F(z)|^2 > 0\}$, where $F = (f_1,\dots,f_m)$ is a holomorphic map from $\mathbb{C}^n$ to $\mathbb{C}^m$, in terms of the complex singularity exponent of $F$.

  14. Symbol recognition with kernel density matching.

    Science.gov (United States)

    Zhang, Wan; Wenyin, Liu; Zhang, Kun

    2006-12-01

    We propose a novel approach to similarity assessment for graphic symbols. Symbols are represented as 2D kernel densities and their similarity is measured by the Kullback-Leibler divergence. Symbol orientation is found by gradient-based angle searching or independent component analysis. Experimental results show the outstanding performance of this approach in various situations.
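
    A minimal sketch of the matching step: each symbol is represented as a 2-D kernel density over its stroke points and two symbols are compared with a discretised Kullback-Leibler divergence. The orientation search described above is omitted, and the point sets, grid and bandwidth are invented.

```python
# Illustrative symbol matching by KL divergence between 2-D kernel densities.
import numpy as np
from scipy.stats import gaussian_kde

def symbol_kde(points):
    return gaussian_kde(points.T)          # points: (n, 2) stroke coordinates

def kl_divergence(kde_p, kde_q, grid_pts):
    p = kde_p(grid_pts) + 1e-12
    q = kde_q(grid_pts) + 1e-12
    p /= p.sum(); q /= q.sum()             # discretised densities on the grid
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(0)
square = rng.uniform(0, 1, size=(200, 2))                       # a filled square symbol
circle = rng.normal(size=(200, 2))
circle /= np.linalg.norm(circle, axis=1, keepdims=True)         # a circle symbol
xx, yy = np.meshgrid(np.linspace(-1.5, 1.5, 40), np.linspace(-1.5, 1.5, 40))
grid = np.vstack([xx.ravel(), yy.ravel()])
print(kl_divergence(symbol_kde(square), symbol_kde(circle), grid))
```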

  15. Developing Linux kernel space device driver

    Institute of Scientific and Technical Information of China (English)

    Zheng Wei; Wang Qinruo; Wu Naiyou

    2003-01-01

    This thesis describes in detail how to develop kernel-level device drivers on the Linux platform. On the basis of a comparison between the proc file system and the dev file system, we take PCI devices and USB devices as examples to introduce the method of writing character-device drivers using these two file systems.

  16. Heat Kernel Renormalization on Manifolds with Boundary

    OpenAIRE

    Albert, Benjamin I.

    2016-01-01

    In the monograph Renormalization and Effective Field Theory, Costello gave an inductive position space renormalization procedure for constructing an effective field theory that is based on heat kernel regularization of the propagator. In this paper, we extend Costello's renormalization procedure to a class of manifolds with boundary. In addition, we reorganize the presentation of the preexisting material, filling in details and strengthening the results.

  17. Convolution kernels for multi-wavelength imaging

    Science.gov (United States)

    Boucaud, A.; Bocchio, M.; Abergel, A.; Orieux, F.; Dole, H.; Hadj-Youcef, M. A.

    2016-12-01

    Astrophysical images issued from different instruments and/or spectral bands often need to be processed together, either for fitting or comparison purposes. However, each image is affected by an instrumental response, also known as point-spread function (PSF), that depends on the characteristics of the instrument as well as the wavelength and the observing strategy. Given the knowledge of the PSF in each band, a straightforward way of processing images is to homogenise them all to a target PSF using convolution kernels, so that they appear as if they had been acquired by the same instrument. We propose an algorithm that generates such PSF-matching kernels, based on Wiener filtering with a tunable regularisation parameter. This method ensures that all anisotropic features in the PSFs are taken into account. We compare our method to existing procedures using measured Herschel/PACS and SPIRE PSFs and simulated JWST/MIRI PSFs. Significant gains up to two orders of magnitude are obtained with respect to the use of kernels computed assuming Gaussian or circularised PSFs. Software to compute these kernels is available at https://github.com/aboucaud/pypher
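
    For the full algorithm see the pypher link above; the toy sketch below only shows the underlying operation, a Wiener-regularised deconvolution of the target PSF by the source PSF in Fourier space, with Gaussian stand-in PSFs and an arbitrary regularisation value.

```python
# Illustrative Wiener-regularised PSF-matching kernel (not the pypher code).
import numpy as np

def gaussian_psf(size, sigma):
    y, x = np.mgrid[:size, :size] - size // 2
    psf = np.exp(-(x ** 2 + y ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()

def matching_kernel(psf_source, psf_target, mu=1e-4):
    S = np.fft.fft2(np.fft.ifftshift(psf_source))
    T = np.fft.fft2(np.fft.ifftshift(psf_target))
    K = np.conj(S) * T / (np.abs(S) ** 2 + mu)       # Wiener filter in Fourier space
    return np.fft.fftshift(np.real(np.fft.ifft2(K)))

narrow, broad = gaussian_psf(65, 2.0), gaussian_psf(65, 4.0)
k = matching_kernel(narrow, broad)
# Convolving the narrow PSF with k should approximately reproduce the broad PSF.
check = np.real(np.fft.ifft2(np.fft.fft2(np.fft.ifftshift(narrow)) *
                             np.fft.fft2(np.fft.ifftshift(k))))
print(np.abs(np.fft.fftshift(check) - broad).max())
```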

  18. Covariant derivative expansion of the heat kernel

    Energy Technology Data Exchange (ETDEWEB)

    Salcedo, L.L. [Universidad de Granada, Departamento de Fisica Moderna, Granada (Spain)

    2004-11-01

    Using the technique of labeled operators, compact explicit expressions are given for all traced heat kernel coefficients containing zero, two, four and six covariant derivatives, and for diagonal coefficients with zero, two and four derivatives. The results apply to boundaryless flat space-times and arbitrary non-Abelian scalar and gauge background fields. (orig.)

  19. A Kernel Approach to Multi-Task Learning with Task-Specific Kernels

    Institute of Scientific and Technical Information of China (English)

    Wei Wu; Hang Li; Yun-Hua Hu; Rong Jin

    2012-01-01

    Several kernel-based methods for multi-task learning have been proposed, which leverage relations among tasks as regularization to enhance the overall learning accuracies. These methods assume that the tasks share the same kernel, which could limit their applications because in practice different tasks may need different kernels. The main challenge of introducing multiple kernels into multiple tasks is that models from different reproducing kernel Hilbert spaces (RKHSs) are not comparable, making it difficult to exploit relations among tasks. This paper addresses the challenge by formalizing the problem in the square integrable space (SIS). Specifically, it proposes a kernel-based method which makes use of a regularization term defined in SIS to represent task relations. We prove a new representer theorem for the proposed approach in SIS. We further derive a practical method for solving the learning problem and conduct consistency analysis of the method. We discuss the relationship between our method and an existing method. We also give an SVM (support vector machine)-based implementation of our method for multi-label classification. Experiments on an artificial example and two real-world datasets show that the proposed method performs better than the existing method.

  20. Fast image filters as an alternative to reconstruction kernels in computed tomography

    Science.gov (United States)

    Flohr, Thomas; Schaller, Stefan; Stadler, Alexander; Brandhuber, Wolfgang; Niethammer, Matthias U.; Klingenbeck-Regn, Klaus W.; Steffen, Peter

    2001-07-01

    In Computed Tomography, axial resolution is determined by the slice collimation and the spiral algorithm, while in-plane resolution is determined by the reconstruction kernel. Both choices select a tradeoff between image resolution (sharpness) and pixel noise. We investigated an alternative approach using default settings for image reconstruction which provide narrow reconstructed slice-width and high in-plane resolution. If smoother images are desired, we filter the original (sharp) images, instead of performing a new reconstruction with a smoother kernel. A suitable filter function in the frequency domain is the ratio of smooth and original (sharp) kernel. Efficient implementation was achieved by a Fourier transform of this ratio to the spatial domain. Separating the 2D spatial filtering into two subsequent 1D filtering stages in x- and y-direction further reduces computational complexity. Using this approach, arbitrarily oriented multi-planar reformats (MPRs) can be treated in exactly the same way as axial images. Due to efficient implementation, interactive modification of the filter settings becomes possible, which completely replace the variety of different reconstruction kernels. We implemented a further promising application of the method to thorax imaging, where different regions of the thorax (lungs and mediastinum) are jointly presented in the same images using different filter settings and different windowing.
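
    A hedged sketch of the kernel-ratio filtering idea follows: the frequency response of the post-reconstruction filter is taken as the ratio of a smooth to a sharp kernel response (both made-up Gaussians here, not vendor reconstruction kernels), transformed to a short spatial filter, and applied separably in x and y.

```python
# Rough sketch: images reconstructed with a sharp kernel are smoothed
# afterwards by a 1D filter whose frequency response is smooth/sharp,
# applied separably along both image axes.
import numpy as np
from scipy.ndimage import convolve1d

def ratio_filter_1d(n_taps=31, f_sharp=0.45, f_smooth=0.25):
    freqs = np.fft.rfftfreq(256)
    mtf_sharp = np.exp(-(freqs / f_sharp) ** 2)      # assumed sharp-kernel response
    mtf_smooth = np.exp(-(freqs / f_smooth) ** 2)    # assumed smooth-kernel response
    ratio = mtf_smooth / mtf_sharp                   # target response of the filter
    taps = np.fft.fftshift(np.fft.irfft(ratio))      # back to the spatial domain
    centre, half = taps.size // 2, n_taps // 2
    taps = taps[centre - half:centre + half + 1]
    return taps / taps.sum()

def smooth_like_soft_kernel(image, taps):
    tmp = convolve1d(image, taps, axis=0, mode="nearest")
    return convolve1d(tmp, taps, axis=1, mode="nearest")

# usage: soft_image = smooth_like_soft_kernel(sharp_image, ratio_filter_1d())
```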

  1. Carbothermic Synthesis of ~820-μm UN Kernels. Investigation of Process Variables

    Energy Technology Data Exchange (ETDEWEB)

    Lindemer, Terrence [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Silva, Chinthaka M [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Henry, Jr, John James [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); McMurray, Jake W [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Jolly, Brian C [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Hunt, Rodney Dale [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Terrani, Kurt A [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-06-01

    This report details the continued investigation of process variables involved in converting sol-gel-derived, urania-carbon microspheres to ~820-μm-dia. UN fuel kernels in flow-through, vertical refractory-metal crucibles at temperatures up to 2123 K. Experiments included calcining of air-dried UO3-H2O-C microspheres in Ar and H2-containing gases, conversion of the resulting UO2-C kernels to dense UO2:2UC in the same gases and vacuum, and its conversion in N2 to UC1-xNx. The thermodynamics of the relevant reactions were applied extensively to interpret and control the process variables. Producing the precursor UO2:2UC kernel of ~96% theoretical density was required, but its subsequent conversion to UC1-xNx at 2123 K was not accompanied by sintering and resulted in ~83-86% of theoretical density. Decreasing the UC1-xNx kernel carbide component via HCN evolution was shown to be quantitatively consistent with present and past experiments and the only useful application of H2 in the entire process.

  2. Roles of Carbohydrate Supply and Ethylene,Polyamines in Maize Kernel Set

    Institute of Scientific and Technical Information of China (English)

    Han-Yu Feng; Zhi-Min Wang; Fan-Na Kong; Min-Jie Zhang; Shun-Li Zhou

    2011-01-01

    Glucose appears to have an antagonistic relationship with ethylene, and ethylene and polyamines appear to play antagonistic roles in the abortion of seeds and fruits. Moreover, ethylene, spermidine, and spermine share a common biosynthetic precursor. The synchronous changes of these compounds and their relationships with kernel set are currently unclear. Here, we stimulated maize (Zea mays L.) apical kernel set and studied their changes at 4, 8, 12, and 16 d after pollination (DAP). The status of the apical kernels changed from abortion to set, showing a pattern similar to that of the middle kernels, with a slow decrease in glucose, a rapid decline in ethylene production, and a sharp increase in spermidine and spermine after four DAP. Synchronous changes in ethylene and spermidine were also observed. However, in the aborted apical kernels the ethylene production decreased slowly, and the glucose and polyamine concentrations were lower. Ethephon application did not block the change from abortion to set for the setting apical kernels. These data indicate that the developmental change may be accompanied by an inhibition of ethylene synthesis by adequate glucose and a subsequent promotion of spermidine and spermine synthesis, and that an adequate carbohydrate supply may play a key role in the developmental process.

  3. Kernel based orthogonalization for change detection in hyperspectral images

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    Kernel versions of principal component analysis (PCA) and minimum noise fraction (MNF) analysis are applied to change detection in hyperspectral image (HyMap) data. The kernel versions are based on so-called Q-mode analysis in which the data enter into the analysis via inner products in the Gram...... the kernel function and then performing a linear analysis in that space. An example shows the successful application of (kernel PCA and) kernel MNF analysis to change detection in HyMap data covering a small agricultural area near Lake Waging-Taching, Bavaria, in Southern Germany. In the change detection...

  4. Geodesic exponential kernels: When Curvature and Linearity Conflict

    DEFF Research Database (Denmark)

    Feragen, Aase; Lauze, François; Hauberg, Søren

    2015-01-01

    We consider kernel methods on general geodesic metric spaces and provide both negative and positive results. First we show that the common Gaussian kernel can only be generalized to a positive definite kernel on a geodesic metric space if the space is flat. As a result, for data on a Riemannian...... Laplacian kernel can be generalized while retaining positive definiteness. This implies that geodesic Laplacian kernels can be generalized to some curved spaces, including spheres and hyperbolic spaces. Our theoretical results are verified empirically....
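
    A small numerical illustration of the positive result is sketched below, under the assumption of points drawn on the unit sphere: the geodesic Laplacian kernel exp(-λd) yields a positive semidefinite Gram matrix, while the geodesic Gaussian exp(-λd²) need not. The bandwidth λ = 2 and the number of points are arbitrary choices.

```python
# Quick check of Laplacian vs Gaussian kernels built from geodesic
# (great-circle) distances on the sphere.
import numpy as np

rng = np.random.default_rng(8)
P = rng.normal(size=(40, 3))
P /= np.linalg.norm(P, axis=1, keepdims=True)       # points on the unit sphere
D = np.arccos(np.clip(P @ P.T, -1.0, 1.0))          # geodesic distances

laplacian = np.exp(-2.0 * D)                        # geodesic Laplacian kernel
gaussian = np.exp(-2.0 * D**2)                      # geodesic Gaussian kernel
print(np.linalg.eigvalsh(laplacian).min())          # stays >= 0 up to round-off
print(np.linalg.eigvalsh(gaussian).min())           # may dip below zero on curved spaces
```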

  5. The pre-image problem in kernel methods.

    Science.gov (United States)

    Kwok, James Tin-yau; Tsang, Ivor Wai-hung

    2004-11-01

    In this paper, we address the problem of finding the pre-image of a feature vector in the feature space induced by a kernel. This is of central importance in some kernel applications, such as on using kernel principal component analysis (PCA) for image denoising. Unlike the traditional method which relies on nonlinear optimization, our proposed method directly finds the location of the pre-image based on distance constraints in the feature space. It is noniterative, involves only linear algebra and does not suffer from numerical instability or local minimum problems. Evaluations on performing kernel PCA and kernel clustering on the USPS data set show much improved performance.
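
    To make the distance-constraint idea concrete, here is a minimal, hedged sketch: assuming an RBF kernel, and assuming the feature-space distances from the denoised feature point to a few neighbouring training points are already available (the paper obtains them from the kernel PCA projection), the pre-image location follows from a small linear system rather than an iterative search.

```python
# Minimal sketch of a noniterative pre-image from distance constraints,
# assuming k(x, y) = exp(-||x - y||^2 / (2 * sigma^2)).
import numpy as np

def preimage_from_distances(neighbors, feat_sq_dists, sigma=1.0):
    """neighbors: (n, d) training points; feat_sq_dists: squared feature-space
    distances of the target to each neighbour."""
    k = np.clip(1.0 - 0.5 * np.asarray(feat_sq_dists), 1e-12, 1.0)
    in_sq = -2.0 * sigma**2 * np.log(k)          # input-space squared distances
    x0, d0 = neighbors[0], in_sq[0]
    # ||z - x_i||^2 = d_i^2  =>  linear system after subtracting the first row
    A = 2.0 * (neighbors[1:] - x0)
    b = (d0 - in_sq[1:]) + np.sum(neighbors[1:] ** 2, 1) - np.sum(x0**2)
    z, *_ = np.linalg.lstsq(A, b, rcond=None)
    return z

# Toy usage with three hypothetical neighbours in the plane.
nbrs = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(preimage_from_distances(nbrs, feat_sq_dists=[0.2, 0.8, 0.8]))
```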

  6. Generalization Performance of Regularized Ranking With Multiscale Kernels.

    Science.gov (United States)

    Zhou, Yicong; Chen, Hong; Lan, Rushi; Pan, Zhibin

    2016-05-01

    The regularized kernel method for the ranking problem has attracted increasing attention in machine learning. The previous regularized ranking algorithms are usually based on reproducing kernel Hilbert spaces with a single kernel. In this paper, we go beyond this framework by investigating the generalization performance of the regularized ranking with multiscale kernels. A novel ranking algorithm with multiscale kernels is proposed and its representer theorem is proved. We establish the upper bound of the generalization error in terms of the complexity of hypothesis spaces. It shows that the multiscale ranking algorithm can achieve satisfactory learning rates under mild conditions. Experiments demonstrate the effectiveness of the proposed method for drug discovery and recommendation tasks.

  7. Kernel Methods for Machine Learning with Life Science Applications

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie

    Kernel methods refer to a family of widely used nonlinear algorithms for machine learning tasks like classification, regression, and feature extraction. By exploiting the so-called kernel trick straightforward extensions of classical linear algorithms are enabled as long as the data only appear...... models to kernel learning, and means for restoring the generalizability in both kernel Principal Component Analysis and the Support Vector Machine are proposed. Viability is proved on a wide range of benchmark machine learning data sets....... as innerproducts in the model formulation. This dissertation presents research on improving the performance of standard kernel methods like kernel Principal Component Analysis and the Support Vector Machine. Moreover, the goal of the thesis has been two-fold. The first part focuses on the use of kernel Principal...

  8. Efficient $\\chi ^{2}$ Kernel Linearization via Random Feature Maps.

    Science.gov (United States)

    Yuan, Xiao-Tong; Wang, Zhenzhen; Deng, Jiankang; Liu, Qingshan

    2016-11-01

    Explicit feature mapping is an appealing way to linearize additive kernels, such as the χ² kernel, for training large-scale support vector machines (SVMs). Although accurate in approximation, feature mapping could pose computational challenges in high-dimensional settings as it expands the original features to a higher dimensional space. To handle this issue in the context of χ² kernel SVM learning, we introduce a simple yet efficient method to approximately linearize the χ² kernel through random feature maps. The main idea is to use sparse random projection to reduce the dimensionality of feature maps while preserving their approximation capability to the original kernel. We provide an approximation error bound for the proposed method. Furthermore, we extend our method to χ² multiple kernel SVM learning. Extensive experiments on large-scale image classification tasks confirm that the proposed approach is able to significantly speed up the training process of the χ² kernel SVMs at almost no cost of testing accuracy.

  9. Multiple Kernel Learning in Fisher Discriminant Analysis for Face Recognition

    Directory of Open Access Journals (Sweden)

    Xiao-Zhang Liu

    2013-02-01

    Recent applications and developments based on support vector machines (SVMs) have shown that using multiple kernels instead of a single one can enhance classifier performance. However, there are few reports on the performance of the kernel-based Fisher discriminant analysis (kernel-based FDA) method with multiple kernels. This paper proposes a multiple kernel construction method for kernel-based FDA. The constructed kernel is a linear combination of several base kernels with a constraint on their weights. By maximizing the margin maximization criterion (MMC), we present an iterative scheme for weight optimization. The experiments on the FERET and CMU PIE face databases show that our multiple kernel Fisher discriminant analysis (MKFD) achieves high recognition performance compared with single-kernel-based FDA. The experiments also show that the constructed kernel relaxes parameter selection for kernel-based FDA to some extent.
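
    As a hedged sketch of the kernel-construction step, the snippet below forms a constrained linear combination of base Gaussian kernels; the fixed weights stand in for the MMC-driven optimisation of the paper, and a precomputed-kernel SVM is used in place of kernel FDA purely for brevity.

```python
# Combine several base RBF kernels into one kernel matrix with
# non-negative weights that sum to one, then feed it to a kernel machine.
import numpy as np
from sklearn.svm import SVC

def rbf_kernel(X, Y, gamma):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def combined_kernel(X, Y, gammas, betas):
    betas = np.asarray(betas, float)
    betas = betas / betas.sum()                       # enforce the weight constraint
    return sum(b * rbf_kernel(X, Y, g) for b, g in zip(betas, gammas))

rng = np.random.default_rng(1)
X, y = rng.normal(size=(80, 5)), rng.integers(0, 2, 80)
gammas, betas = [0.1, 1.0, 10.0], [0.2, 0.5, 0.3]     # illustrative values
K = combined_kernel(X, X, gammas, betas)
clf = SVC(kernel="precomputed").fit(K, y)             # any precomputed-kernel method works here
```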

  10. A Novel Framework for Learning Geometry-Aware Kernels.

    Science.gov (United States)

    Pan, Binbin; Chen, Wen-Sheng; Xu, Chen; Chen, Bo

    2016-05-01

    The data from real world usually have nonlinear geometric structure, which are often assumed to lie on or close to a low-dimensional manifold in a high-dimensional space. How to detect this nonlinear geometric structure of the data is important for the learning algorithms. Recently, there has been a surge of interest in utilizing kernels to exploit the manifold structure of the data. Such kernels are called geometry-aware kernels and are widely used in the machine learning algorithms. The performance of these algorithms critically relies on the choice of the geometry-aware kernels. Intuitively, a good geometry-aware kernel should utilize additional information other than the geometric information. In many applications, it is required to compute the out-of-sample data directly. However, most of the geometry-aware kernel methods are restricted to the available data given beforehand, with no straightforward extension for out-of-sample data. In this paper, we propose a framework for more general geometry-aware kernel learning. The proposed framework integrates multiple sources of information and enables us to develop flexible and effective kernel matrices. Then, we theoretically show how the learned kernel matrices are extended to the corresponding kernel functions, in which the out-of-sample data can be computed directly. Under our framework, a novel family of geometry-aware kernels is developed. Especially, some existing geometry-aware kernels can be viewed as instances of our framework. The performance of the kernels is evaluated on dimensionality reduction, classification, and clustering tasks. The empirical results show that our kernels significantly improve the performance.

  11. Kernel Density Estimation, Kernel Methods, and Fast Learning in Large Data Sets.

    Science.gov (United States)

    Wang, Shitong; Wang, Jun; Chung, Fu-lai

    2014-01-01

    Kernel methods such as the standard support vector machine and support vector regression trainings take O(N³) time and O(N²) space complexities in their naïve implementations, where N is the training set size. It is thus computationally infeasible to apply them to large data sets, and a replacement of the naive method for finding the quadratic programming (QP) solutions is highly desirable. By observing that many kernel methods can be linked up with the kernel density estimate (KDE), which can be efficiently implemented by some approximation techniques, a new learning method called fast KDE (FastKDE) is proposed to scale up kernel methods. It is based on establishing a connection between KDE and the QP problems formulated for kernel methods using an entropy-based integrated-squared-error criterion. As a result, FastKDE approximation methods can be applied to solve these QP problems. In this paper, the latest advance in fast data reduction via KDE is exploited. With just a simple sampling strategy, the resulting FastKDE method can be used to scale up various kernel methods with a theoretical guarantee that their performance does not degrade a lot. It has a time complexity of O(m³), where m is the number of data points sampled from the training set. Experiments on different benchmarking data sets demonstrate that the proposed method has comparable performance with the state-of-the-art method and is effective for a wide range of kernel methods to achieve fast learning in large data sets.

  12. A Testbed of Parallel Kernels for Computer Science Research

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, David; Demmel, James; Ibrahim, Khaled; Kaiser, Alex; Koniges, Alice; Madduri, Kamesh; Shalf, John; Strohmaier, Erich; Williams, Samuel

    2010-04-30

    1990s. The initial result of the more modern study was the seven dwarfs, which was subsequently extended to 13 motifs. These motifs have already been useful in defining classes of applications for architecture-software studies. However, these broad-brush problem statements often miss the nuance seen in individual kernels. For example, the computational requirements of particle methods vary greatly between the naive (but more accurate) direct calculations and the particle-mesh and particle-tree codes. Thus we commenced our study with an enumeration of problems, but then proceeded by providing not only reference implementations for each problem, but more importantly a mathematical definition that allows one to escape iterative approaches to software/hardware optimization. To ensure long term value, we have augmented each of our reference implementations with both a scalable problem generator and a verification scheme. In a paper we have prepared that documents our efforts, we describe in detail this process of problem definition, scalable input creation, verification, and implementation of reference codes for the scientific computing domain. Table 1 enumerates and describes the level of support we've developed for each kernel. We group these important kernels using the Berkeley dwarfs/motifs taxonomy using a red box in the appropriate column. As kernels become progressively complex, they build upon other, simpler computational methods. We note this dependency via orange boxes. After enumeration of the important numerical problems, we created a domain-appropriate high-level definition of each problem. To ensure future endeavors are not tainted by existing implementations, we specified the problem definition to be independent of both computer architecture and existing programming languages, models, and data types. Then, to provide context as to how such kernels productively map to existing architectures, languages and programming models, we produced reference

  13. Wilson Dslash Kernel From Lattice QCD Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Joo, Balint [Jefferson Lab, Newport News, VA; Smelyanskiy, Mikhail [Parallel Computing Lab, Intel Corporation, California, USA; Kalamkar, Dhiraj D. [Parallel Computing Lab, Intel Corporation, India; Vaidyanathan, Karthikeyan [Parallel Computing Lab, Intel Corporation, India

    2015-07-01

    Lattice Quantum Chromodynamics (LQCD) is a numerical technique used for calculations in Theoretical Nuclear and High Energy Physics. LQCD is traditionally one of the first applications ported to many new high performance computing architectures and indeed LQCD practitioners have been known to design and build custom LQCD computers. Lattice QCD kernels are frequently used as benchmarks (e.g. 168.wupwise in the SPEC suite) and are generally well understood, and as such are ideal to illustrate several optimization techniques. In this chapter we will detail our work in optimizing the Wilson-Dslash kernels for Intel Xeon Phi, however, as we will show the technique gives excellent performance on regular Xeon Architecture as well.

  14. Learning Potential Energy Landscapes using Graph Kernels

    CERN Document Server

    Ferré, G; Barros, K

    2016-01-01

    Recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab-initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. We show on a standard benchmark that our Graph Approximated Energy (GRAPE) method is competitive with state of the art kernel m...
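
    The random-walk ingredient can be sketched as follows (a generic geometric random-walk graph kernel, not the GRAPE code): two local environments are represented by adjacency matrices, and walks on their direct product graph are summed with a geometric decay factor.

```python
# Geometric random-walk graph kernel; lam must be small enough for the
# series sum_k (lam * Ax)^k over the direct product graph to converge.
import numpy as np

def random_walk_kernel(A1, A2, lam=0.05):
    Ax = np.kron(A1, A2)                   # adjacency of the direct product graph
    n = Ax.shape[0]
    M = np.eye(n) - lam * Ax               # (I - lam*Ax)^-1 = sum_k (lam*Ax)^k
    return float(np.linalg.solve(M, np.ones(n)).sum())

A1 = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)   # triangle
A2 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)   # path
print(random_walk_kernel(A1, A1), random_walk_kernel(A1, A2))
```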

  15. Viability Kernel for Ecosystem Management Models

    CERN Document Server

    Anaya, Eladio Ocana; Oliveros-Ramos, Ricardo; Tam, Jorge

    2009-01-01

    We consider sustainable management issues formulated within the framework of control theory. The problem is one of controlling a discrete-time dynamical system (e.g. population model) in the presence of state and control constraints, representing conflicting economic and ecological issues for instance. The viability kernel is known to play a basic role for the analysis of such problems and the design of viable control feedbacks, but its computation is not an easy task in general. We study the viability of nonlinear generic ecosystem models under preservation and production constraints. Under simple conditions on the growth rates at the boundary constraints, we provide an explicit description of the viability kernel. A numerical illustration is given for the hake-anchovy couple in the Peruvian upwelling ecosystem.

  16. A kernel version of spatial factor analysis

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2009-01-01

    of PCA and related techniques. An interesting dilemma in reduction of dimensionality of data is the desire to obtain simplicity for better understanding, visualization and interpretation of the data on the one hand, and the desire to retain sufficient detail for adequate representation on the other hand......Based on work by Pearson in 1901, Hotelling in 1933 introduced principal component analysis (PCA). PCA is often used for general feature generation and linear orthogonalization or compression by dimensionality reduction of correlated multivariate data, see Jolliffe for a comprehensive description...... version of PCA handles nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. In this paper we shall apply kernel versions of PCA, maximum autocorrelation factor (MAF) analysis...

  17. Quark-hadron duality: pinched kernel approach

    CERN Document Server

    Dominguez, C A; Schilcher, K; Spiesberger, H

    2016-01-01

    Hadronic spectral functions measured by the ALEPH collaboration in the vector and axial-vector channels are used to study potential quark-hadron duality violations (DV). This is done entirely in the framework of pinched kernel finite energy sum rules (FESR), i.e. in a model independent fashion. The kinematical range of the ALEPH data is effectively extended up to $s = 10\\; {\\mbox{GeV}^2}$ by using an appropriate kernel, and assuming that in this region the spectral functions are given by perturbative QCD. Support for this assumption is obtained by using $e^+ e^-$ annihilation data in the vector channel. Results in both channels show a good saturation of the pinched FESR, without further need of explicit models of DV.

  18. Analog Forecasting with Dynamics-Adapted Kernels

    CERN Document Server

    Zhao, Zhizhen

    2014-01-01

    Analog forecasting is a non-parametric technique introduced by Lorenz in 1969 which predicts the evolution of states of a dynamical system (or observables defined on the states) by following the evolution of the sample in a historical record of observations which most closely resembles the current initial data. Here, we introduce a suite of forecasting methods which improve traditional analog forecasting by combining ideas from state-space reconstruction for dynamical systems and kernel methods developed in harmonic analysis and machine learning. The first improvement is to augment the dimension of the initial data using Takens' delay-coordinate maps to recover information in the initial data lost through partial observations. Then, instead of using Euclidean distances between the states, weighted ensembles of analogs are constructed according to similarity kernels in delay-coordinate space, featuring an explicit dependence on the dynamical vector field generating the data. The eigenvalues and eigenfunctions ...
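
    A toy, hedged sketch of the two ingredients named above, delay-coordinate embedding and kernel-weighted analogs, is given below for a scalar time series; the embedding length, bandwidth, and sine test signal are assumptions, and the dynamics-adapted kernels of the paper are replaced by a plain Gaussian kernel.

```python
# Kernel analog forecasting on a scalar series: embed with Takens delay
# coordinates, weight historical analogues by a Gaussian kernel in
# delay-coordinate space, forecast as the weighted average of successors.
import numpy as np

def delay_embed(x, q):
    return np.stack([x[i:len(x) - q + i + 1] for i in range(q)], axis=1)

def analog_forecast(history, query_window, lead=1, q=8, eps=0.5):
    H = delay_embed(history, q)
    X, successors = H[:-lead], history[q - 1 + lead:]
    d2 = ((X - query_window) ** 2).sum(1)
    w = np.exp(-d2 / eps)
    w /= w.sum()
    return float(w @ successors)

t = np.linspace(0, 40 * np.pi, 4000)
series = np.sin(t) + 0.05 * np.random.default_rng(2).normal(size=t.size)
pred = analog_forecast(series[:-50], delay_embed(series, 8)[-1], lead=5)
```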

  19. Searching and Indexing Genomic Databases via Kernelization

    Directory of Open Access Journals (Sweden)

    Travis eGagie

    2015-02-01

    The rapid advance of DNA sequencing technologies has yielded databases of thousands of genomes. To search and index these databases effectively, it is important that we take advantage of the similarity between those genomes. Several authors have recently suggested searching or indexing only one reference genome and the parts of the other genomes where they differ. In this paper we survey the twenty-year history of this idea and discuss its relation to kernelization in parameterized complexity.

  20. Kernel based subspace projection of hyperspectral images

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Nielsen, Allan Aasbjerg; Arngren, Morten

    In hyperspectral image analysis an exploratory approach to analyse the image data is to conduct subspace projections. As linear projections often fail to capture the underlying structure of the data, we present kernel based subspace projections of PCA and Maximum Autocorrelation Factors (MAF). ... The MAF projection exploits the fact that interesting phenomena in images typically exhibit spatial autocorrelation. The analysis is based on near-infrared hyperspectral images of maize grains demonstrating the superiority of the kernel-based MAF method.

  1. Wheat kernel dimensions: how do they contribute to kernel weight at an individual QTL level?

    Indian Academy of Sciences (India)

    Fa Cui; Anming Ding; Jun Li; Chunhua Zhao; Xingfeng Li; Deshun Feng; Xiuqin Wang; Lin Wang; Jurong Gao; Honggang Wang

    2011-12-01

    Kernel dimensions (KD) contribute greatly to thousand-kernel weight (TKW) in wheat. In the present study, quantitative trait loci (QTL) for TKW, kernel length (KL), kernel width (KW) and kernel diameter ratio (KDR) were detected by both conditional and unconditional QTL mapping methods. Two related F8:9 recombinant inbred line (RIL) populations, comprising 485 and 229 lines, respectively, were used in this study, and the trait phenotypes were evaluated in four environments. Unconditional QTL mapping analysis detected 77 additive QTL for the four traits in the two populations. Of these, 24 QTL were verified in at least three trials, and five of them were major QTL, thus being of great value for marker-assisted selection in breeding programmes. Conditional QTL mapping analysis, compared with unconditional QTL mapping analysis, resulted in a reduction in the number of QTL for TKW due to the elimination of TKW variations caused by its conditional traits; based on this, we first dissected the genetic control system involved in the synthetic process between TKW and KD at an individual QTL level. Results indicated that, at the QTL level, KW had the strongest influence on TKW, followed by KL, and KDR had the lowest level of contribution to TKW. In addition, the present study showed that it is not sufficient to determine the genetic relationship of a pair of QTL for two related/causal traits based solely on whether they are co-located. Thus, the conditional QTL mapping method should be used to evaluate possible genetic relationships of two related/causal traits.

  2. Kernel Smoothing Methods for Non-Poissonian Seismic Hazard Analysis

    Science.gov (United States)

    Woo, Gordon

    2017-04-01

    For almost fifty years, the mainstay of probabilistic seismic hazard analysis has been the methodology developed by Cornell, which assumes that earthquake occurrence is a Poisson process, and that the spatial distribution of epicentres can be represented by a set of polygonal source zones, within which seismicity is uniform. Based on Vere-Jones' use of kernel smoothing methods for earthquake forecasting, these methods were adapted in 1994 by the author for application to probabilistic seismic hazard analysis. There is no need for ambiguous boundaries of polygonal source zones, nor for the hypothesis of time independence of earthquake sequences. In Europe, there are many regions where seismotectonic zones are not well delineated, and where there is a dynamic stress interaction between events, so that they cannot be described as independent. Following the Amatrice earthquake of 24 August 2016, the damaging earthquakes that struck Central Italy over the subsequent months were not independent events. Removing foreshocks and aftershocks is not only an ill-defined task, it has a material effect on seismic hazard computation. Because of the spatial dispersion of epicentres, and the clustering of magnitudes for the largest events in a sequence, which might all be around magnitude 6, the specific event causing the highest ground motion can vary from one site location to another. Where significant active faults have been clearly identified geologically, they should be modelled as individual seismic sources. The remaining background seismicity should be modelled as non-Poissonian using statistical kernel smoothing methods. This approach was first applied for seismic hazard analysis at a UK nuclear power plant two decades ago, and should be included within logic-trees for future probabilistic seismic hazard analysis at critical installations within Europe. In this paper, various salient European applications are given.
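
    The kernel-smoothing step can be sketched roughly as follows, with an isotropic Gaussian kernel, a fixed bandwidth, and synthetic epicentres standing in for a real catalogue; operational models typically use adaptive or magnitude-dependent bandwidths.

```python
# Kernel smoothing of epicentres into a spatial activity-rate map.
import numpy as np

def smoothed_rate(epicentres, grid_x, grid_y, bandwidth=10.0):
    """epicentres: (n, 2) array of (x, y) locations in km; returns a rate map."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    rate = np.zeros_like(gx)
    for ex, ey in epicentres:
        d2 = (gx - ex) ** 2 + (gy - ey) ** 2
        rate += np.exp(-d2 / (2 * bandwidth**2)) / (2 * np.pi * bandwidth**2)
    return rate

rng = np.random.default_rng(3)
events = rng.normal(loc=[50, 50], scale=15, size=(200, 2))   # synthetic catalogue
grid = np.linspace(0, 100, 101)
activity = smoothed_rate(events, grid, grid)   # events per unit area (not normalised in time)
```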

  3. Absolute Orientation Based on Distance Kernel Functions

    Directory of Open Access Journals (Sweden)

    Yanbiao Sun

    2016-03-01

    The classical absolute orientation method is capable of transforming tie points (TPs) from a local coordinate system to a global (geodetic) coordinate system. The method is based only on a unique set of similarity transformation parameters estimated by minimizing the total difference between all ground control points (GCPs) and the fitted points. Nevertheless, it often yields a transformation with poor accuracy, especially in large-scale study cases. To address this problem, this study proposes a novel absolute orientation method based on distance kernel functions, in which various sets of similarity transformation parameters instead of only one set are calculated. When estimating the similarity transformation parameters for TPs using the iterative solution of a non-linear least squares problem, we assigned larger weighting matrices for the GCPs for which the distances from the point are short. The weighting matrices can be evaluated using the distance kernel function as a function of the distances between the GCPs and the TPs. Furthermore, we used the exponential function and the Gaussian function to describe the distance kernel functions in this study. To validate and verify the proposed method, six synthetic and two real datasets were tested. The accuracy was significantly improved by the proposed method when compared to the classical method, although a higher computational complexity is experienced.
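
    The weighting idea can be sketched as follows: for each tie point, a diagonal weighting matrix is built from a distance kernel (exponential or Gaussian) over the ground control points. The bandwidth and the three toy GCPs below are assumptions, and the similarity-transform estimation itself is omitted.

```python
# Distance-kernel weights: nearby GCPs get larger weight when estimating
# the similarity transformation for a given tie point.
import numpy as np

def distance_weights(tie_point, gcps, kernel="gaussian", h=100.0):
    d = np.linalg.norm(gcps - tie_point, axis=1)
    if kernel == "gaussian":
        w = np.exp(-(d / h) ** 2)
    else:                                     # exponential kernel
        w = np.exp(-d / h)
    return w / w.sum()

gcps = np.array([[0.0, 0.0, 10.0], [500.0, 0.0, 12.0], [0.0, 500.0, 8.0]])
tp = np.array([50.0, 40.0, 11.0])
W = np.diag(distance_weights(tp, gcps))       # weighting matrix for the least-squares fit
```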

  4. Physicochemical Properties of Palm Kernel Oil

    Directory of Open Access Journals (Sweden)

    Amira P. Olaniyi

    2014-09-01

    Physicochemical analyses were carried out on palm kernel oil (Adin) and the following results were obtained: saponification value, 280.5±56.1 mg KOH/g; acid value, 2.7±0.3 mg KOH/g; free fatty acid (FFA), 1.35±0.15 mg KOH/g; ester value, 277.8±56.4 mg KOH/g; peroxide value, 14.3±0.8 mEq/kg; iodine value, 15.86±4.02 mg KOH/g; specific gravity (S.G.), 0.904; refractive index, 1.412; and inorganic materials, 1.05%. Its odour and colour were a heavy burnt smell and burnt brown, respectively. These values were compared with those obtained for groundnut and coconut oils. It was found that the physico-chemical properties of palm kernel oil are comparable to those of groundnut and coconut oils except for the peroxide value (14.3±0.8 mEq/kg), which was not detectable in groundnut and coconut oils. Also, the odour of both groundnut and coconut oils was pleasant while that of the palm kernel oil was not as pleasant (a heavy burnt smell).

  5. Convolution kernels for multi-wavelength imaging

    CERN Document Server

    Boucaud, Alexandre; Abergel, Alain; Orieux, François; Dole, Hervé; Hadj-Youcef, Mohamed Amine

    2016-01-01

    Astrophysical images issued from different instruments and/or spectral bands often require to be processed together, either for fitting or comparison purposes. However each image is affected by an instrumental response, also known as PSF, that depends on the characteristics of the instrument as well as the wavelength and the observing strategy. Given the knowledge of the PSF in each band, a straightforward way of processing images is to homogenise them all to a target PSF using convolution kernels, so that they appear as if they had been acquired by the same instrument. We propose an algorithm that generates such PSF-matching kernels, based on Wiener filtering with a tunable regularisation parameter. This method ensures all anisotropic features in the PSFs to be taken into account. We compare our method to existing procedures using measured Herschel/PACS and SPIRE PSFs and simulated JWST/MIRI PSFs. Significant gains up to two orders of magnitude are obtained with respect to the use of kernels computed assumin...

  6. A Fast Reduced Kernel Extreme Learning Machine.

    Science.gov (United States)

    Deng, Wan-Yu; Ong, Yew-Soon; Zheng, Qing-Hua

    2016-04-01

    In this paper, we present a fast and accurate kernel-based supervised algorithm referred to as the Reduced Kernel Extreme Learning Machine (RKELM). In contrast to the work on Support Vector Machine (SVM) or Least Square SVM (LS-SVM), which identifies the support vectors or weight vectors iteratively, the proposed RKELM randomly selects a subset of the available data samples as support vectors (or mapping samples). By avoiding the iterative steps of SVM, significant cost savings in the training process can be readily attained, especially on Big datasets. RKELM is established based on the rigorous proof of universal learning involving reduced kernel-based SLFN. In particular, we prove that RKELM can approximate any nonlinear functions accurately under the condition of support vectors sufficiency. Experimental results on a wide variety of real world small instance size and large instance size applications in the context of binary classification, multi-class problem and regression are then reported to show that RKELM can perform at competitive level of generalized performance as the SVM/LS-SVM at only a fraction of the computational effort incurred.

  7. Kernel methods for phenotyping complex plant architecture.

    Science.gov (United States)

    Kawamura, Koji; Hibrand-Saint Oyant, Laurence; Foucher, Fabrice; Thouroude, Tatiana; Loustau, Sébastien

    2014-02-07

    The Quantitative Trait Loci (QTL) mapping of plant architecture is a critical step for understanding the genetic determinism of plant architecture. Previous studies adopted simple measurements, such as plant height, stem diameter and branching intensity, for QTL mapping of plant architecture. Many of these quantitative traits were generally correlated to each other, which gives rise to statistical problems in the detection of QTL. We aim to test the applicability of kernel methods to phenotyping inflorescence architecture and its QTL mapping. We first test Kernel Principal Component Analysis (KPCA) and Support Vector Machines (SVM) over an artificial dataset of simulated inflorescences with different types of flower distribution, which is coded as a sequence of flower-number per node along a shoot. The ability of discriminating the different inflorescence types by SVM and KPCA is illustrated. We then apply the KPCA representation to the real dataset of rose inflorescence shoots (n=1460) obtained from a 98 F1 hybrid mapping population. We find kernel principal components with high heritability (>0.7), and the QTL analysis identifies a new QTL, which was not detected by a trait-by-trait analysis of simple architectural measurements. The main tools developed in this paper could be used to tackle the general problem of QTL mapping of complex (sequences, 3D structure, graphs) phenotypic traits.

  8. Early Detection of Aspergillus parasiticus Infection in Maize Kernels Using Near-Infrared Hyperspectral Imaging and Multivariate Data Analysis

    Directory of Open Access Journals (Sweden)

    Xin Zhao

    2017-01-01

    Fungal infection in maize kernels is a major concern worldwide due to its toxic metabolites such as mycotoxins, thus it is necessary to develop appropriate techniques for early detection of fungal infection in maize kernels. Thirty-six sterilised maize kernels were inoculated each day with Aspergillus parasiticus from one to seven days, and then seven groups (D1, D2, D3, D4, D5, D6, D7) were determined based on the incubation time. Another 36 sterilised kernels without inoculation with fungi were taken as control (DC). Hyperspectral images of all kernels were acquired within the spectral range of 921–2529 nm. Background, labels and bad pixels were removed using principal component analysis (PCA) and masking. Separability computation for discrimination of fungal contamination levels indicated that the model based on the data of the germ region of individual kernels performed more effectively than that based on the whole kernels. Moreover, samples with a two-day interval were separable. Thus, four groups, DC, D1–2 (the group consisting of D1 and D2), D3–4 (D3 and D4), and D5–7 (D5, D6, and D7), were defined for subsequent classification. Two separate sample sets were prepared to verify the influence on a classification model caused by germ orientation, that is, germ up and a 1:1 mixture of germ up and down. Two smoothing preprocessing methods (Savitzky-Golay smoothing, moving average smoothing) and three scatter-correction methods (normalization, standard normal variate, and multiple scatter correction) were compared, according to the performance of the classification model built by support vector machines (SVM). The best model for kernels with germ up showed promising results with accuracies of 97.92% and 91.67% for the calibration and validation data sets, respectively, while accuracies of the best model for samples of the mixed kernels were 95.83% and 84.38%. Moreover, five wavelengths (1145, 1408, 1935, 2103, and 2383 nm) were selected as the key
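
    A hedged sketch of the preprocessing-plus-SVM pipeline is given below with random spectra standing in for the germ-region data; the Savitzky-Golay window, the SNV step, and the SVM hyperparameters are illustrative choices, not those tuned in the study.

```python
# Savitzky-Golay smoothing + standard normal variate (SNV) correction,
# followed by an SVM classifier over the four contamination groups.
import numpy as np
from scipy.signal import savgol_filter
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def preprocess(spectra, window=11, poly=2):
    smoothed = savgol_filter(spectra, window_length=window, polyorder=poly, axis=1)
    return (smoothed - smoothed.mean(1, keepdims=True)) / smoothed.std(1, keepdims=True)

rng = np.random.default_rng(4)
X = preprocess(rng.normal(size=(144, 200)))        # 144 kernels x 200 bands (placeholder)
y = rng.integers(0, 4, 144)                        # DC, D1-2, D3-4, D5-7
print(cross_val_score(SVC(kernel="rbf", C=10, gamma="scale"), X, y, cv=5).mean())
```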

  9. Laguerre Kernels –Based SVM for Image Classification

    Directory of Open Access Journals (Sweden)

    Ashraf Afifi

    2014-01-01

    Support vector machines (SVMs) have been promising methods for classification and regression analysis because of their solid mathematical foundations, which convey several salient properties that other methods hardly provide. However, the performance of SVMs is very sensitive to how the kernel function is selected; the challenge is to choose the kernel function for accurate data classification. In this paper, we introduce a set of new kernel functions derived from the generalized Laguerre polynomials. The proposed kernels could improve the classification accuracy of SVMs for both linear and nonlinear data sets. The proposed kernel functions satisfy Mercer's condition and orthogonality properties, which are important and useful in some applications when the support vector number is needed, as in feature selection. The performance of the generalized Laguerre kernels is evaluated in comparison with the existing kernels. It was found that the choice of the kernel function, and the values of the parameters for that kernel, are critical for a given amount of data. The proposed kernels give good classification accuracy in nearly all the data sets, especially those of high dimensions.

  10. Identification of Fusarium damaged wheat kernels using image analysis

    Directory of Open Access Journals (Sweden)

    Ondřej Jirsa

    2011-01-01

    Visual evaluation of kernels damaged by Fusarium spp. pathogens is labour intensive and, due to a subjective approach, it can lead to inconsistencies. Digital imaging technology combined with appropriate statistical methods can provide much faster and more accurate evaluation of the proportion of visually scabby kernels. The aim of the present study was to develop a discrimination model to identify wheat kernels infected by Fusarium spp. using digital image analysis and statistical methods. Winter wheat kernels from field experiments were evaluated visually as healthy or damaged. Deoxynivalenol (DON) content was determined in individual kernels using an ELISA method. Images of individual kernels were produced using a digital camera on a dark background. Colour and shape descriptors were obtained by image analysis from the area representing the kernel. Healthy and damaged kernels differed significantly in DON content and kernel weight. Various combinations of individual shape and colour descriptors were examined during the development of the model using linear discriminant analysis. In addition to basic descriptors of the RGB colour model (red, green, blue), very good classification was also obtained using hue from the HSL colour model (hue, saturation, luminance). The accuracy of classification using the developed discrimination model based on RGBH descriptors was 85%. The shape descriptors themselves were not specific enough to distinguish individual kernels.

  11. Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.

    Science.gov (United States)

    Kwak, Nojun

    2016-05-20

    Recently, the nonlinear projection trick (NPT) was introduced, enabling direct computation of coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so-called kernel trick. However, NPT is inherently difficult to implement incrementally because an ever-increasing kernel matrix should be treated as additional training samples are introduced. In this paper, an incremental version of the NPT (INPT) is proposed based on the observation that the centerization step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can directly be used in any incremental methods to implement a kernel version of the incremental methods. The effectiveness of the INPT is shown by applying it to implement incremental versions of kernel methods such as kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis, which are utilized for problems of kernel matrix reconstruction, letter classification, and face image retrieval, respectively.

  12. Image quality of mixed convolution kernel in thoracic computed tomography.

    Science.gov (United States)

    Neubauer, Jakob; Spira, Eva Maria; Strube, Juliane; Langer, Mathias; Voss, Christian; Kotter, Elmar

    2016-11-01

    The mixed convolution kernel alters its properties regionally according to the depicted organ structure, especially for the lung. Therefore, we compared the image quality of the mixed convolution kernel to standard soft and hard kernel reconstructions for different organ structures in thoracic computed tomography (CT) images. Our Ethics Committee approved this prospective study. In total, 31 patients who underwent contrast-enhanced thoracic CT studies were included after informed consent. Axial reconstructions were performed with hard, soft, and mixed convolution kernels. Three independent and blinded observers rated the image quality according to the European Guidelines for Quality Criteria of Thoracic CT for 13 organ structures. The observers rated the depiction of the structures in all reconstructions on a 5-point Likert scale. Statistical analysis was performed with the Friedman test and post hoc analysis with the Wilcoxon rank-sum test. Compared to the soft convolution kernel, the mixed convolution kernel was rated with a higher image quality for lung parenchyma, segmental bronchi, and the border between the pleura and the thoracic wall. Compared to the hard convolution kernel, the mixed convolution kernel was rated with a higher image quality for the aorta, anterior mediastinal structures, paratracheal soft tissue, hilar lymph nodes, esophagus, pleuromediastinal border, large and medium sized pulmonary vessels and abdomen. However, the mixed convolution kernel cannot fully substitute for the standard CT reconstructions. Hard and soft convolution kernel reconstructions still seem to be mandatory for thoracic CT.

  13. MULTI-VIEW FACE DETECTION BASED ON KERNEL PRINCIPAL COMPONENT ANALYSIS AND KERNEL SUPPORT VECTOR TECHNIQUES

    Directory of Open Access Journals (Sweden)

    Muzhir Shaban Al-Ani

    2011-05-01

    Detecting faces across multiple views is more challenging than in a frontal view. To address this problem, an efficient approach is presented in this paper using a kernel machine based approach for learning such nonlinear mappings to provide an effective view-based representation for multi-view face detection. In this paper Kernel Principal Component Analysis (KPCA) is used to project data into the view-subspaces, then computed as view-based features. Multi-view face detection is performed by classifying each input image into face or non-face class, by using a two-class Kernel Support Vector Classifier (KSVC). Experimental results demonstrate successful face detection over a wide range of facial variation in color, illumination conditions, position, scale, orientation, 3D pose, and expression in images from several photo collections.
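
    A minimal sketch of the two-stage pipeline, kernel PCA features followed by a kernel SVM classifier, is shown below with random patches as placeholders for the face images; the number of components and the kernel parameters are assumptions.

```python
# Kernel PCA for view-based features, then a kernel SVM for face / non-face.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(5)
X = rng.normal(size=(300, 400))        # 300 flattened 20x20 patches (placeholder)
y = rng.integers(0, 2, 300)            # 1 = face, 0 = non-face

detector = make_pipeline(
    KernelPCA(n_components=40, kernel="rbf", gamma=1e-3),
    SVC(kernel="rbf", C=5.0),
)
detector.fit(X, y)
labels = detector.predict(rng.normal(size=(10, 400)))
```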

  14. Comparing Alternative Kernels for the Kernel Method of Test Equating: Gaussian, Logistic, and Uniform Kernels. Research Report. ETS RR-08-12

    Science.gov (United States)

    Lee, Yi-Hsuan; von Davier, Alina A.

    2008-01-01

    The kernel equating method (von Davier, Holland, & Thayer, 2004) is based on a flexible family of equipercentile-like equating functions that use a Gaussian kernel to continuize the discrete score distributions. While the classical equipercentile, or percentile-rank, equating method carries out the continuization step by linear interpolation,…

  15. Kernels for Vector-Valued Functions: a Review

    CERN Document Server

    Alvarez, Mauricio A; Lawrence, Neil D

    2011-01-01

    Kernel methods are among the most popular techniques in machine learning. From a frequentist/discriminative perspective they play a central role in regularization theory as they provide a natural choice for the hypotheses space and the regularization functional through the notion of reproducing kernel Hilbert spaces. From a Bayesian/generative perspective they are the key in the context of Gaussian processes, where the kernel function is also known as the covariance function. Traditionally, kernel methods have been used in supervised learning problems with scalar outputs and indeed there has been a considerable amount of work devoted to designing and learning kernels. More recently there has been an increasing interest in methods that deal with multiple outputs, motivated partly by frameworks like multitask learning. In this paper, we review different methods to design or learn valid kernel functions for multiple outputs, paying particular attention to the connection between probabilistic and functional method...

  16. FRACTIONAL INTEGRALS WITH VARIABLE KERNELS ON HARDY SPACES

    Institute of Scientific and Technical Information of China (English)

    ZhangPu; DingYong

    2003-01-01

    The fractional integral operators with variable kernels are discussed.It is proved that if the kernel satisfies the Dini-condition,then the fractional integral operators with variable kernels are bounded from Hp(Rn) into Lq(Rn) when 0

  17. Approximating and learning by Lipschitz kernel on the sphere

    Institute of Scientific and Technical Information of China (English)

    CAO Fei-long; WANG Chang-miao

    2014-01-01

    This paper investigates some approximation properties and learning rates of the Lipschitz kernel on the sphere. A perfect convergence rate on the shifts of the Lipschitz kernel on the sphere, which is faster than O(n^{-1/2}), is obtained, where n is the number of parameters needed in the approximation. By means of the approximation, a learning rate of the regularized least squares algorithm with the Lipschitz kernel on the sphere is also deduced.

  18. Kernel based visual tracking with scale invariant features

    Institute of Scientific and Technical Information of China (English)

    Risheng Han; Zhongliang Jing; Yuanxiang Li

    2008-01-01

    The kernel based tracking has two disadvantages: the tracking window size cannot be adjusted efficiently, and the kernel based color distribution may not have enough ability to discriminate the object from a cluttered background. For boosting up the feature's discriminating ability, both scale invariant features and kernel based color distribution features are used as descriptors of the tracked object. The proposed algorithm can keep tracking objects of varying scales even when the surrounding background is similar to the object's appearance.

  19. Classification of maize kernels using NIR hyperspectral imaging

    DEFF Research Database (Denmark)

    Williams, Paul; Kucheryavskiy, Sergey V.

    2016-01-01

    NIR hyperspectral imaging was evaluated to classify maize kernels of three hardness categories: hard, medium and soft. Two approaches, pixel-wise and object-wise, were investigated to group kernels according to hardness. The pixel-wise classification assigned a class to every pixel from individual...... and specificity of 0.95 and 0.93). Both feature extraction methods can be recommended for classification of maize kernels on production scale....

  20. Parameter optimization in the regularized kernel minimum noise fraction transformation

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Vestergaard, Jacob Schack

    2012-01-01

    Based on the original, linear minimum noise fraction (MNF) transformation and kernel principal component analysis, a kernel version of the MNF transformation was recently introduced. Inspired by this, we here give a simple method for finding optimal parameters in a regularized version of kernel MNF ... analysis. We consider the model signal-to-noise ratio (SNR) as a function of the kernel parameters and the regularization parameter. In 2-4 steps of increasingly refined grid searches we find the parameters that maximize the model SNR. An example based on data from the DLR 3K camera system is given.

  1. Reproducing wavelet kernel method in nonlinear system identification

    Institute of Scientific and Technical Information of China (English)

    WEN Xiang-jun; XU Xiao-ming; CAI Yun-ze

    2008-01-01

    By combining the wavelet decomposition with the kernel method, a practical approach of universal multi-scale wavelet kernels constructed in reproducing kernel Hilbert space (RKHS) is discussed, and an identification scheme using a wavelet support vector machines (WSVM) estimator is proposed for nonlinear dynamic systems. The good approximating properties of the wavelet kernel function enhance the generalization ability of the proposed method, and the comparison of some numerical experimental results between the novel approach and some existing methods is encouraging.

  2. Influence of wheat kernel physical properties on the pulverizing process.

    Science.gov (United States)

    Dziki, Dariusz; Cacak-Pietrzak, Grażyna; Miś, Antoni; Jończyk, Krzysztof; Gawlik-Dziki, Urszula

    2014-10-01

    The physical properties of wheat kernels were determined and related to pulverizing performance by correlation analysis. Nineteen samples of wheat cultivars with a similar level of protein content (11.2-12.8% w.b.) obtained from an organic farming system were used for analysis. The kernels (moisture content 10% w.b.) were pulverized using a laboratory hammer mill equipped with a 1.0 mm round-hole screen. The specific grinding energy ranged from 120 kJ/kg to 159 kJ/kg. On the basis of the data obtained, many significant correlations were found between kernel physical properties and the pulverizing process of wheat kernels; in particular, the wheat kernel hardness index (obtained on the basis of the Single Kernel Characterization System) and vitreousness significantly and positively correlated with the grinding energy indices and the mass fraction of coarse particles (> 0.5 mm). Among the kernel mechanical properties determined on the basis of the uniaxial compression test, only the rupture force was correlated with the impact grinding results. The results also showed positive and significant relationships between kernel ash content and grinding energy requirements. On the basis of wheat physical properties, a multiple linear regression was proposed for predicting the average particle size of the pulverized kernel.

  3. A Kernel-based Account of Bibliometric Measures

    Science.gov (United States)

    Ito, Takahiko; Shimbo, Masashi; Kudo, Taku; Matsumoto, Yuji

    The application of kernel methods to citation analysis is explored. We show that a family of kernels on graphs provides a unified perspective on the three bibliometric measures that have been discussed independently: relatedness between documents, global importance of individual documents, and importance of documents relative to one or more (root) documents (relative importance). The framework provided by the kernels establishes relative importance as an intermediate between relatedness and global importance, in which the degree of `relativity,' or the bias between relatedness and importance, is naturally controlled by a parameter characterizing individual kernels in the family.
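
    The interpolation between relatedness and importance can be illustrated with a generic diffusion-style kernel on a tiny citation graph (a plausible member of such a family, not necessarily the exact kernels of the paper): a small decay keeps the kernel close to raw co-citation relatedness, while a decay near the inverse spectral radius lets global importance dominate.

```python
# Diffusion-style graph kernel K = sum_{k>=1} gamma^(k-1) C^k on a toy
# citation network, where C is the co-citation matrix.
import numpy as np

A = np.array([[0, 1, 1, 0],      # A[i, j] = 1 if document i cites document j
              [0, 0, 1, 1],
              [0, 0, 0, 1],
              [0, 0, 0, 0]], float)
C = A.T @ A                       # co-citation counts

def neumann_kernel(C, gamma):
    n = C.shape[0]
    return C @ np.linalg.inv(np.eye(n) - gamma * C)   # C (I - gamma*C)^-1

rho = max(abs(np.linalg.eigvals(C)))
print(neumann_kernel(C, 0.01))            # close to co-citation relatedness
print(neumann_kernel(C, 0.95 / rho))      # global importance dominates
```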

  4. Isolation of bacterial endophytes from germinated maize kernels.

    Science.gov (United States)

    Rijavec, Tomaz; Lapanje, Ales; Dermastia, Marina; Rupnik, Maja

    2007-06-01

    The germination of surface-sterilized maize kernels under aseptic conditions proved to be a suitable method for isolation of kernel-associated bacterial endophytes. Bacterial strains identified by partial 16S rRNA gene sequencing as Pantoea sp., Microbacterium sp., Frigoribacterium sp., Bacillus sp., Paenibacillus sp., and Sphingomonas sp. were isolated from kernels of 4 different maize cultivars. Genus Pantoea was associated with a specific maize cultivar. The kernels of this cultivar were often overgrown with the fungus Lecanicillium aphanocladii; however, those exhibiting Pantoea growth were never colonized with it. Furthermore, the isolated bacterium strain inhibited fungal growth in vitro.

  5. WAVELET KERNEL SUPPORT VECTOR MACHINES FOR SPARSE APPROXIMATION

    Institute of Scientific and Technical Information of China (English)

    Tong Yubing; Yang Dongkai; Zhang Qishan

    2006-01-01

    Wavelets, a powerful tool for signal processing, can be used to approximate the target function. For enhancing the sparse property of wavelet approximation, a new algorithm was proposed by using wavelet kernel Support Vector Machines (SVM), which can converge to minimum error with better sparsity. Here, wavelet functions are first used to construct an admissible kernel for SVM according to Mercer's theorem; then the SVM with this kernel can be used to approximate the target function with better sparsity than wavelet approximation itself. The results obtained by our simulation experiment show the feasibility and validity of wavelet kernel support vector machines.
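
    A hedged sketch of a translation-invariant wavelet kernel (using the commonly cited mother wavelet h(u) = cos(1.75u) exp(-u²/2)) plugged into a support vector regressor is shown below; the dilation parameter and the sinc test function are assumptions, and the sparsity is read off from the number of support vectors.

```python
# Wavelet kernel K(x, z) = prod_i h((x_i - z_i)/a) used with an SVM regressor.
import numpy as np
from sklearn.svm import SVR

def wavelet_kernel(X, Y, a=1.0):
    U = (X[:, None, :] - Y[None, :, :]) / a
    return np.prod(np.cos(1.75 * U) * np.exp(-0.5 * U**2), axis=2)

x = np.linspace(-3, 3, 200)[:, None]
y = np.sinc(x).ravel() + 0.05 * np.random.default_rng(6).normal(size=200)
model = SVR(kernel=lambda A, B: wavelet_kernel(A, B, a=0.8), C=10.0, epsilon=0.05)
model.fit(x, y)
print(len(model.support_), "support vectors")   # sparsity of the approximation
```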

  6. Convolution kernel design and efficient algorithm for sampling density correction.

    Science.gov (United States)

    Johnson, Kenneth O; Pipe, James G

    2009-02-01

    Sampling density compensation is an important step in non-Cartesian image reconstruction. One of the common techniques to determine weights that compensate for differences in sampling density involves a convolution. A new convolution kernel is designed for sampling density compensation, attempting to minimize the error in a fully reconstructed image. The resulting weights obtained using this new kernel are compared with various previous methods, showing a reduction in reconstruction error. A computationally efficient algorithm is also presented that facilitates the calculation of the convolution of finite kernels. Both the kernel and the algorithm are extended to 3D. Copyright 2009 Wiley-Liss, Inc.
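
    A minimal 1-D sketch of the general convolution-based density-compensation iteration that the paper builds on (weights repeatedly divided by the convolution of the current weights with a chosen kernel); the triangular kernel and sample pattern are assumptions, not the kernel designed in the paper.

```python
# Minimal 1-D sketch of iterative sampling-density compensation: weights are
# repeatedly divided by the convolution of the current weights with a chosen
# kernel, evaluated at the sample locations.
import numpy as np

rng = np.random.default_rng(0)
k = np.sort(rng.uniform(-0.5, 0.5, 200))      # non-uniform sample locations

def kernel(dk, width=0.02):
    # simple triangular convolution kernel of finite support (an assumption)
    return np.clip(1.0 - np.abs(dk) / width, 0.0, None)

w = np.ones_like(k)                            # initial density-compensation weights
for _ in range(30):
    conv = (kernel(k[:, None] - k[None, :]) * w[None, :]).sum(axis=1)
    w = w / conv                               # w_{n+1}(k_i) = w_n(k_i) / (w_n * C)(k_i)

# After convergence, (w * C) evaluated at the samples is approximately flat.
check = (kernel(k[:, None] - k[None, :]) * w[None, :]).sum(axis=1)
print("flatness of compensated density:", check.min().round(3), check.max().round(3))
```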

  7. 42 Variability Bugs in the Linux Kernel

    DEFF Research Database (Denmark)

    Abal, Iago; Brabrand, Claus; Wasowski, Andrzej

    2014-01-01

    Feature-sensitive verification pursues effective analysis of the exponentially many variants of a program family. However, researchers lack examples of concrete bugs induced by variability, occurring in real large-scale systems. Such a collection of bugs is a requirement for goal-oriented research, serving to evaluate tool implementations of feature-sensitive analyses by testing them on real bugs. We present a qualitative study of 42 variability bugs collected from bug-fixing commits to the Linux kernel repository. We analyze each of the bugs, and record the results in a database. In addition, we...

  8. 40 Variability Bugs in the Linux Kernel

    DEFF Research Database (Denmark)

    Abal Rivas, Iago; Brabrand, Claus; Wasowski, Andrzej

    2014-01-01

    Feature-sensitive verification is a recent field that pursues the effective analysis of the exponential number of variants of a program family. Today researchers lack examples of concrete bugs induced by variability, occurring in real large-scale software. Such a collection of bugs is a requirement for goal-oriented research, serving to evaluate tool implementations of feature-sensitive analyses by testing them on real bugs. We present a qualitative study of 40 variability bugs collected from bug-fixing commits to the Linux kernel repository. We investigate each of the 40 bugs, recording...

  9. Application of RBAC Model in System Kernel

    Directory of Open Access Journals (Sweden)

    Guan Keqing

    2012-11-01

    Full Text Available With the development of ubiquitous computing technologies, the application of embedded intelligent devices is booming. Meanwhile, information security faces more serious threats than before. To improve the security of an information terminal's operating system, this paper analyzes the threats to the system's information security that come from abnormal operations by processes, and applies the RBAC model to the safety management mechanism of the operating system's kernel. We build an access control model for system processes, propose an implementation framework, and illustrate methods for implementing the model in operating systems.

  10. SVM multiuser detection based on heuristic kernel

    Institute of Scientific and Technical Information of China (English)

    Yang Tao; Hu Bo

    2007-01-01

    A support vector machine (SVM) based multiuser detection (MUD) scheme in code-division multiple-access (CDMA) system is proposed. In this scheme, the equivalent support vector (SV) is obtained through a kernel sparsity approximation algorithm, which avoids the conventional costly quadratic programming (QP) procedure in SVM. Besides, the coefficient of the SV is attained through the solution to a generalized eigenproblem. Simulation results show that the proposed scheme has almost the same bit error rate (BER) as the standard SVM and is better than minimum mean square error (MMSE) scheme. Meanwhile, it has a low computation complexity.

  11. Characterization of Flour from Avocado Seed Kernel

    OpenAIRE

    Macey A. Mahawan; Ma. Francia N. Tenorio; Jaycel A. Gomez; Rosenda A. Bronce

    2015-01-01

    The study focused on the Characterization of Flour from Avocado Seed Kernel. Based on the findings of the study the percentages of crude protein, crude fiber, crude fat, total carbohydrates, ash and moisture were 7.75, 4.91, 0.71, 74.65, 2.83 and 14.05 respectively. On the other hand the falling number was 495 seconds while gluten was below the detection limit of the method used. Moreover, the sensory evaluation in terms of color, texture and aroma in 0% proportion of Avocado seed flour was m...

  12. Fixed kernel regression for voltammogram feature extraction

    Science.gov (United States)

    Acevedo Rodriguez, F. J.; López-Sastre, R. J.; Gil-Jiménez, P.; Ruiz-Reyes, N.; Maldonado Bascón, S.

    2009-12-01

    Cyclic voltammetry is an electroanalytical technique for obtaining information about substances under analysis without the need for complex flow systems. However, classifying the information in voltammograms obtained using this technique is difficult. In this paper, we propose the use of fixed kernel regression as a method for extracting features from these voltammograms, reducing the information to a few coefficients. The proposed approach has been applied to a wine classification problem with accuracy rates of over 98%. Although the method is described here for extracting voltammogram information, it can be used for other types of signals.
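
    A minimal sketch of the fixed-kernel-regression idea: a voltammogram is approximated by a small set of kernels at fixed positions, and the fitted coefficients become a compact feature vector. The Gaussian kernel shape, centres, ridge penalty, and synthetic signal are illustrative assumptions.

```python
# Minimal sketch of fixed kernel regression for feature extraction: a voltammogram
# sampled at potentials `v` is approximated by Gaussian kernels at fixed centres,
# and the fitted coefficients serve as a compact feature vector.
import numpy as np

v = np.linspace(-1.0, 1.0, 400)                       # potential axis
signal = np.exp(-((v - 0.2) / 0.1) ** 2) - 0.5 * np.exp(-((v + 0.4) / 0.15) ** 2)

centres = np.linspace(-1.0, 1.0, 15)                  # fixed kernel positions
width = 0.12
Phi = np.exp(-((v[:, None] - centres[None, :]) / width) ** 2)   # design matrix

lam = 1e-3                                            # ridge regularization
coeffs = np.linalg.solve(Phi.T @ Phi + lam * np.eye(len(centres)), Phi.T @ signal)

reconstruction = Phi @ coeffs
print("feature vector length:", len(coeffs))
print("reconstruction RMSE:", np.sqrt(np.mean((signal - reconstruction) ** 2)).round(4))
```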

  13. Kernel based subspace projection of near infrared hyperspectral images of maize kernels

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Arngren, Morten; Hansen, Per Waaben;

    2009-01-01

    In this paper we present an exploratory analysis of hyperspectral 900-1700 nm images of maize kernels. The imaging device is a line-scanning hyperspectral camera using broadband NIR illumination. In order to explore the hyperspectral data we compare a series of subspace projection methods...

  14. Kernel based subspace projection of near infrared hyperspectral images of maize kernels

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Arngren, Morten; Hansen, Per Waaben

    2009-01-01

    In this paper we present an exploratory analysis of hyperspectral 900-1700 nm images of maize kernels. The imaging device is a line-scanning hyperspectral camera using broadband NIR illumination. In order to explore the hyperspectral data we compare a series of subspace projection methods...

  15. Kernel learning at the first level of inference.

    Science.gov (United States)

    Cawley, Gavin C; Talbot, Nicola L C

    2014-05-01

    Kernel learning methods, whether Bayesian or frequentist, typically involve multiple levels of inference, with the coefficients of the kernel expansion being determined at the first level and the kernel and regularisation parameters carefully tuned at the second level, a process known as model selection. Model selection for kernel machines is commonly performed via optimisation of a suitable model selection criterion, often based on cross-validation or theoretical performance bounds. However, if there are a large number of kernel parameters, as for instance in the case of automatic relevance determination (ARD), there is a substantial risk of over-fitting the model selection criterion, resulting in poor generalisation performance. In this paper we investigate the possibility of learning the kernel, for the Least-Squares Support Vector Machine (LS-SVM) classifier, at the first level of inference, i.e. parameter optimisation. The kernel parameters and the coefficients of the kernel expansion are jointly optimised at the first level of inference, minimising a training criterion with an additional regularisation term acting on the kernel parameters. The key advantage of this approach is that the values of only two regularisation parameters need be determined in model selection, substantially alleviating the problem of over-fitting the model selection criterion. The benefits of this approach are demonstrated using a suite of synthetic and real-world binary classification benchmark problems, where kernel learning at the first level of inference is shown to be statistically superior to the conventional approach, improves on our previous work (Cawley and Talbot, 2007) and is competitive with Multiple Kernel Learning approaches, but with reduced computational expense.

  16. Generalized Langevin equation with tempered memory kernel

    Science.gov (United States)

    Liemert, André; Sandev, Trifce; Kantz, Holger

    2017-01-01

    We study a generalized Langevin equation for a free particle in the presence of truncated power-law and Mittag-Leffler memory kernels. It is shown that, in the presence of truncation, the particle crosses over from subdiffusive behavior in the short-time limit to normal diffusion in the long-time limit. The case of the harmonic oscillator is considered as well, and the relaxation functions and the normalized displacement correlation function are given in exact form. By applying an external time-dependent periodic force we obtain resonant behavior even in the case of a free particle, due to the influence of the environment on the particle's motion. Additionally, a double-peak phenomenon in the imaginary part of the complex susceptibility is observed. The truncation parameter is found to have a strong influence on the behavior of these quantities, and it is shown how it shifts the critical frequencies. The normalized displacement correlation function for a fractional generalized Langevin equation is investigated as well. All results are exact and given in terms of the three-parameter Mittag-Leffler function and the Prabhakar generalized integral operator, whose kernel contains a three-parameter Mittag-Leffler function. This kind of truncated Langevin equation can be highly relevant for describing the lateral diffusion of lipids and proteins in cell membranes.
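
    For orientation, a generic form of a generalized Langevin equation with an exponentially truncated (tempered) power-law memory kernel is sketched below; the paper's exact kernels involve three-parameter Mittag-Leffler functions that generalize this form.

```latex
% Generic form of a generalized Langevin equation with a tempered (exponentially
% truncated) power-law memory kernel; constants are placeholders, and the paper's
% Mittag-Leffler kernels generalize this form.
\begin{align}
  m\ddot{x}(t) + m\int_{0}^{t}\gamma(t-t')\,\dot{x}(t')\,dt' &= \xi(t),\\
  \gamma(t) &= \frac{\gamma_{\alpha}}{\Gamma(1-\alpha)}\, e^{-bt}\, t^{-\alpha},
  \qquad 0<\alpha<1,\ b\ge 0,
\end{align}
where $\xi(t)$ is stationary Gaussian noise obeying the fluctuation--dissipation
relation $\langle \xi(t)\xi(t')\rangle = k_{B}T\, m\,\gamma(|t-t'|)$. For $b=0$ the
kernel is a pure power law (subdiffusion at all times); for $b>0$ the truncation
drives a crossover to normal diffusion in the long-time limit.
```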

  17. Pareto-path multitask multiple kernel learning.

    Science.gov (United States)

    Li, Cong; Georgiopoulos, Michael; Anagnostopoulos, Georgios C

    2015-01-01

    A traditional and intuitively appealing Multitask Multiple Kernel Learning (MT-MKL) method is to optimize the sum (thus, the average) of objective functions with (partially) shared kernel function, which allows information sharing among the tasks. We point out that the obtained solution corresponds to a single point on the Pareto Front (PF) of a multiobjective optimization problem, which considers the concurrent optimization of all task objectives involved in the Multitask Learning (MTL) problem. Motivated by this last observation and arguing that the former approach is heuristic, we propose a novel support vector machine MT-MKL framework that considers an implicitly defined set of conic combinations of task objectives. We show that solving our framework produces solutions along a path on the aforementioned PF and that it subsumes the optimization of the average of objective functions as a special case. Using the algorithms we derived, we demonstrate through a series of experimental results that the framework is capable of achieving a better classification performance, when compared with other similar MTL approaches.

  18. The Kernel Estimation in Biosystems Engineering

    Directory of Open Access Journals (Sweden)

    Esperanza Ayuga Téllez

    2008-04-01

    Full Text Available In many fields of biosystems engineering, it is common to find works in which the statistical information analysed violates the basic hypotheses necessary for conventional forecasting methods. For those situations, it is necessary to find alternative methods that allow statistical analysis despite those infringements. Non-parametric function estimation includes methods that fit a target function locally, using data from a small neighbourhood of the point. Weak assumptions, such as continuity and differentiability of the target function, are used rather than an "a priori" assumption about the global shape of the target function (e.g., linear or quadratic). In this paper a few basic decision rules are stated for the application of non-parametric estimation methods. These statistical rules are the first step towards building a user-method interface for the consistent application of kernel estimation by non-expert users. To this end, univariate and multivariate estimation methods and density functions were analysed, as well as regression estimators. In some cases, the models to be applied in different situations, based on simulations, were defined. Different biosystems engineering applications of kernel estimation are also analysed in this review.
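
    A minimal sketch of the most basic tool in this family, univariate Gaussian kernel density estimation, computed both directly from its definition and with a library routine; the bandwidth rule and sample data are illustrative assumptions.

```python
# Minimal sketch of univariate Gaussian kernel density estimation, "by hand" and
# with scipy, for data that need not satisfy parametric assumptions.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 150), rng.normal(4, 0.5, 50)])  # bimodal sample

h = 1.06 * x.std(ddof=1) * len(x) ** (-1 / 5)        # Silverman-type bandwidth
grid = np.linspace(x.min() - 1, x.max() + 1, 300)

# f_hat(t) = (1/(n h)) * sum_i K((t - x_i)/h), with a Gaussian kernel K
manual = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2)
manual = manual.sum(axis=1) / (len(x) * h * np.sqrt(2 * np.pi))

reference = gaussian_kde(x)(grid)                     # library estimate for comparison
print("max abs difference vs scipy:", np.abs(manual - reference).max().round(4))
```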

  19. Scientific Computing Kernels on the Cell Processor

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Samuel W.; Shalf, John; Oliker, Leonid; Kamil, Shoaib; Husbands, Parry; Yelick, Katherine

    2007-04-04

    The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. As a result, the high performance computing community is examining alternative architectures that address the limitations of modern cache-based designs. In this work, we examine the potential of using the recently-released STI Cell processor as a building block for future high-end computing systems. Our work contains several novel contributions. First, we introduce a performance model for Cell and apply it to several key scientific computing kernels: dense matrix multiply, sparse matrix vector multiply, stencil computations, and 1D/2D FFTs. The difficulty of programming Cell, which requires assembly level intrinsics for the best performance, makes this model useful as an initial step in algorithm design and evaluation. Next, we validate the accuracy of our model by comparing results against published hardware results, as well as our own implementations on a 3.2GHz Cell blade. Additionally, we compare Cell performance to benchmarks run on leading superscalar (AMD Opteron), VLIW (Intel Itanium2), and vector (Cray X1E) architectures. Our work also explores several different mappings of the kernels and demonstrates a simple and effective programming model for Cell's unique architecture. Finally, we propose modest microarchitectural modifications that could significantly increase the efficiency of double-precision calculations. Overall results demonstrate the tremendous potential of the Cell architecture for scientific computations in terms of both raw performance and power efficiency.

  20. Scheduler Activations on BSD: Sharing Thread Management Between Kernel and Application

    OpenAIRE

    Small, Christopher A.; Seltzer, Margo I.

    1995-01-01

    There are two commonly used thread models: kernel level threads and user level threads. Kernel level threads suffer from the cost of frequent user-kernel domain crossings and fixed kernel scheduling priorities. User level threads are not integrated with the kernel, blocking all threads whenever one thread is blocked. The Scheduler Activations model, proposed by Anderson et al. [ANDE91], combines kernel CPU al location decisions with application control over thread scheduling. This paper discu...

  1. A multi-scale kernel bundle for LDDMM

    DEFF Research Database (Denmark)

    Sommer, Stefan Horst; Nielsen, Mads; Lauze, Francois Bernard

    2011-01-01

    The Large Deformation Diffeomorphic Metric Mapping framework constitutes a widely used and mathematically well-founded setup for registration in medical imaging. At its heart lies the notion of the regularization kernel, and the choice of kernel greatly affects the results of registrations...

  2. THE COLLOCATION METHODS FOR SINGULAR INTEGRAL EQUATIONS WITH CAUCHY KERNELS

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    This paper applies the singular integral operators, singular quadrature operators and discretization matrices associated with singular integral equations with Cauchy kernels, which are established in [1], to give a unified framework for various collocation methods of numerical solutions of singular integral equations with Cauchy kernels. Under the framework, the coincidence of the direct quadrature method and the indirect quadrature method is very simple and obvious.

  3. Extracting Feature Model Changes from the Linux Kernel Using FMDiff

    NARCIS (Netherlands)

    Dintzner, N.J.R.; Van Deursen, A.; Pinzger, M.

    2014-01-01

    The Linux kernel feature model has been studied as an example of large scale evolving feature model and yet details of its evolution are not known. We present here a classification of feature changes occurring on the Linux kernel feature model, as well as a tool, FMDiff, designed to automatically ex

  4. KERNEL IDEALS AND CONGRUENCES ON MS-ALGEBRAS

    Institute of Scientific and Technical Information of China (English)

    Luo Congwen; Zeng Yanlu

    2006-01-01

    In this article, the authors describe the largest congruence induced by a kernel ideal of an MS-algebra and characterize those MS-algebras on which all the congruences are in a one-to-one correspondence with the kernel ideals.

  5. Integral Transform Methods: A Critical Review of Various Kernels

    Science.gov (United States)

    Orlandini, Giuseppina; Turro, Francesco

    2017-03-01

    Some general remarks about integral transform approaches to response functions are made. Their advantage for calculating cross sections at energies in the continuum is stressed. In particular we discuss the class of kernels that allow calculations of the transform by matrix diagonalization. A particular set of such kernels, namely the wavelets, is tested in a model study.

  6. Resolvent kernel for the Kohn Laplacian on Heisenberg groups

    Directory of Open Access Journals (Sweden)

    Neur Eddine Askour

    2002-07-01

    Full Text Available We present a formula that relates the Kohn Laplacian on Heisenberg groups and the magnetic Laplacian. Then we obtain the resolvent kernel for the Kohn Laplacian and find its spectral density. We conclude by obtaining the Green kernel for fractional powers of the Kohn Laplacian.

  7. Integrating the Gradient of the Thin Wire Kernel

    Science.gov (United States)

    Champagne, Nathan J.; Wilton, Donald R.

    2008-01-01

    A formulation for integrating the gradient of the thin wire kernel is presented. This approach employs a new expression for the gradient of the thin wire kernel derived from a recent technique for numerically evaluating the exact thin wire kernel. This approach should provide essentially arbitrary accuracy and may be used with higher-order elements and basis functions using the procedure described in [4].When the source and observation points are close, the potential integrals over wire segments involving the wire kernel are split into parts to handle the singular behavior of the integrand [1]. The singularity characteristics of the gradient of the wire kernel are different than those of the wire kernel, and the axial and radial components have different singularities. The characteristics of the gradient of the wire kernel are discussed in [2]. To evaluate the near electric and magnetic fields of a wire, the integration of the gradient of the wire kernel needs to be calculated over the source wire. Since the vector bases for current have constant direction on linear wire segments, these integrals reduce to integrals of the form

  8. ON APPROXIMATION BY SPHERICAL REPRODUCING KERNEL HILBERT SPACES

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The spherical approximation between two nested reproducing kernels Hilbert spaces generated from different smooth kernels is investigated. It is shown that the functions of a space can be approximated by that of the subspace with better smoothness. Furthermore, the upper bound of approximation error is given.

  9. End-use quality of soft kernel durum wheat

    Science.gov (United States)

    Kernel texture is a major determinant of end-use quality of wheat. Durum wheat is known for its very hard texture, which influences how it is milled and for what products it is well suited. We developed soft kernel durum wheat lines via Ph1b-mediated homoeologous recombination with Dr. Leonard Joppa...

  10. Optimal Bandwidth Selection in Observed-Score Kernel Equating

    Science.gov (United States)

    Häggström, Jenny; Wiberg, Marie

    2014-01-01

    The selection of bandwidth in kernel equating is important because it has a direct impact on the equated test scores. The aim of this article is to examine the use of double smoothing when selecting bandwidths in kernel equating and to compare double smoothing with the commonly used penalty method. This comparison was made using both an equivalent…

  11. Comparison of Kernel Equating and Item Response Theory Equating Methods

    Science.gov (United States)

    Meng, Yu

    2012-01-01

    The kernel method of test equating is a unified approach to test equating with some advantages over traditional equating methods. Therefore, it is important to evaluate in a comprehensive way the usefulness and appropriateness of the Kernel equating (KE) method, as well as its advantages and disadvantages compared with several popular item…

  12. Integral transform methods: a critical review of various kernels

    CERN Document Server

    Orlandini, Giuseppina

    2016-01-01

    Some general remarks about integral transform approaches to response functions are made. Their advantage for calculating cross sections at energies in the continuum is stressed. In particular we discuss the class of kernels that allow calculations of the transform by matrix diagonalization. A particular set of such kernels, namely the wavelets, is tested in a model study.

  13. Indigenous Methods in Preserving Bush Mango Kernels in Cameroon

    Directory of Open Access Journals (Sweden)

    Zac Tchoundjeu

    2005-01-01

    Full Text Available Traditional practices for preserving Irvingia wombolu and Irvingia gabonensis (bush mango) kernels were assessed in a survey covering twelve villages (Dongo, Bouno, Gribi [East], Elig-Nkouma, Nkom I, Ngoumou [Centre], Bidjap, Nko'ovos, Ondodo [South], Besong-Abang, Ossing and Kembong [Southwest]) in the humid lowland forest zone of Cameroon. All the interviewed households that own trees of the species were found to preserve kernels in periods of abundance, except in Elig-Nkouma (87.5%). Eighty-nine percent and 85% did so in periods of scarcity for I. wombolu and I. gabonensis, respectively. Seventeen and twenty-nine kernel preservation practices were recorded for I. wombolu and I. gabonensis, respectively. Most were based on continuous heating of the kernels or kernel by-products (cakes). The most common practice involved keeping the sun-dried kernels in a plastic bag on a bamboo rack hung above the fireplace in the kitchen. Seventy-eight percent of interviewed households reported preserving I. wombolu kernels for less than one year, while 22% preserved them for more than one year and 1.9% for two years, the normal length of the off-season period for trees in the wild. Cakes wrapped in leaves and kept on a bamboo rack hung over the fireplace were reported by households in the East and South provinces to store Irvingia gabonensis longer (more than one year). Further studies on the use of heat for preserving and canning bush mango kernels are recommended.

  14. THE HEAT KERNEL ON THE CAYLEY HEISENBERG GROUP

    Institute of Scientific and Technical Information of China (English)

    Luan Jingwen; Zhu Fuliu

    2005-01-01

    The authors obtain an explicit expression of the heat kernel for the Cayley Heisenberg group of order n by using the stochastic integral method of Gaveau. Apart from the standard Heisenberg group and the quaternionic Heisenberg group, this is the only nilpotent Lie group on which an explicit formula for the heat kernel has been obtained.

  15. Oven-drying reduces ruminal starch degradation in maize kernels

    NARCIS (Netherlands)

    Ali, M.; Cone, J.W.; Hendriks, W.H.; Struik, P.C.

    2014-01-01

    The degradation of starch largely determines the feeding value of maize (Zea mays L.) for dairy cows. Normally, maize kernels are dried and ground before chemical analysis and determining degradation characteristics, whereas cows eat and digest fresh material. Drying the moist maize kernels (consist

  16. Open Problem: Kernel methods on manifolds and metric spaces

    DEFF Research Database (Denmark)

    Feragen, Aasa; Hauberg, Søren

    2016-01-01

    Radial kernels are well-suited for machine learning over general geodesic metric spaces, where pairwise distances are often the only computable quantity available. We have recently shown that geodesic exponential kernels are only positive definite for all bandwidths when the input space has strong...

  17. Efficient Kernel-based 2DPCA for Smile Stages Recognition

    Directory of Open Access Journals (Sweden)

    Fitri Damayanti

    2012-03-01

    Full Text Available Recently, an approach called two-dimensional principal component analysis (2DPCA) has been proposed for smile stages representation and recognition. The essence of 2DPCA is that it computes the eigenvectors of the so-called image covariance matrix without matrix-to-vector conversion, so the image covariance matrix is much smaller and easier to evaluate, the computational cost is reduced, and the performance is improved compared with traditional PCA. In an effort to improve the performance of smile stages recognition, in this paper we propose an efficient kernel-based 2DPCA. Kernelizing 2DPCA makes it possible to capture nonlinear structure in the input data. This paper compares standard kernel-based 2DPCA and efficient kernel-based 2DPCA for smile stages recognition. The experimental results show that kernel-based 2DPCA achieves better performance than the other approaches, while the efficient kernel-based 2DPCA speeds up the training procedure of standard kernel-based 2DPCA, achieving much greater computational efficiency and markedly lower memory consumption than the standard variant.
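
    A minimal sketch of the linear 2DPCA step on which the kernelized variants build: the image covariance matrix is formed directly from 2-D images without matrix-to-vector conversion. The image stack and the number of retained axes are illustrative assumptions; the kernelized and efficient versions discussed in the paper are not shown.

```python
# Minimal sketch of plain (linear) 2DPCA on a stack of images.
import numpy as np

rng = np.random.default_rng(0)
images = rng.normal(size=(40, 32, 24))            # 40 toy images of size 32 x 24

mean_image = images.mean(axis=0)
centered = images - mean_image

# Image covariance matrix G = (1/N) * sum_i (A_i - mean)^T (A_i - mean), size 24 x 24
G = np.einsum('nij,nik->jk', centered, centered) / len(images)

eigvals, eigvecs = np.linalg.eigh(G)
W = eigvecs[:, ::-1][:, :5]                       # top-5 projection axes

features = centered @ W                           # each image -> 32 x 5 feature matrix
print("feature matrix shape per image:", features.shape[1:])
```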

  18. Kernel maximum autocorrelation factor and minimum noise fraction transformations

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2010-01-01

    ) dimensional feature space via the kernel function and then performing a linear analysis in that space. Three examples show the very successful application of kernel MAF/MNF analysis to 1) change detection in DLR 3K camera data recorded 0.7 seconds apart over a busy motorway, 2) change detection...

  19. Oven-drying reduces ruminal starch degradation in maize kernels

    NARCIS (Netherlands)

    Ali, M.; Cone, J.W.; Hendriks, W.H.; Struik, P.C.

    2014-01-01

    The degradation of starch largely determines the feeding value of maize (Zea mays L.) for dairy cows. Normally, maize kernels are dried and ground before chemical analysis and determining degradation characteristics, whereas cows eat and digest fresh material. Drying the moist maize kernels

  20. Lp-boundedness of flag kernels on homogeneous groups

    CERN Document Server

    Glowacki, Pawel

    2010-01-01

    We prove that the flag kernel singular integral operators of Nagel-Ricci-Stein on a homogeneous group are bounded on the Lp spaces. The gradation associated with the kernels is the natural gradation of the underlying Lie algebra. Our main tools are the Littlewood-Paley theory and a symbolic calculus combined in the spirit of Duoandikoetxea and Rubio de Francia.

  1. Extracting Feature Model Changes from the Linux Kernel Using FMDiff

    NARCIS (Netherlands)

    Dintzner, N.J.R.; Van Deursen, A.; Pinzger, M.

    2014-01-01

    The Linux kernel feature model has been studied as an example of large scale evolving feature model and yet details of its evolution are not known. We present here a classification of feature changes occurring on the Linux kernel feature model, as well as a tool, FMDiff, designed to automatically

  2. Gaussian kernel operators on white noise functional spaces

    Institute of Scientific and Technical Information of China (English)

    骆顺龙; 严加安

    2000-01-01

    The Gaussian kernel operators on white noise functional spaces, including second quantization, Fourier-Mehler transform, scaling, renormalization, etc. are studied by means of symbol calculus, and characterized by the intertwining relations with annihilation and creation operators. The infinitesimal generators of the Gaussian kernel operators are second order white noise operators of which the number operator and the Gross Laplacian are particular examples.

  3. Evolutionary optimization of kernel weights improves protein complex comembership prediction.

    Science.gov (United States)

    Hulsman, Marc; Reinders, Marcel J T; de Ridder, Dick

    2009-01-01

    In recent years, more and more high-throughput data sources useful for protein complex prediction have become available (e.g., gene sequence, mRNA expression, and interactions). The integration of these different data sources can be challenging. Recently, it has been recognized that kernel-based classifiers are well suited for this task. However, the different kernels (data sources) are often combined using equal weights. Although several methods have been developed to optimize kernel weights, no large-scale example of an improvement in classifier performance has been shown yet. In this work, we employ an evolutionary algorithm to determine weights for a larger set of kernels by optimizing a criterion based on the area under the ROC curve. We show that setting the right kernel weights can indeed improve performance. We compare this to the existing kernel weight optimization methods (i.e., (regularized) optimization of the SVM criterion or aligning the kernel with an ideal kernel) and find that these do not result in a significant performance improvement and can even cause a decrease in performance. Results also show that an expert approach of assigning high weights to features with high individual performance is not necessarily the best strategy.
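
    A minimal sketch of the general idea: kernel weights are evolved so as to maximize the AUC of a kernel-based predictor. The simple (1+1)-style mutation loop, the kernel ridge scorer, and the synthetic data are illustrative assumptions, far simpler than the evolutionary algorithm and SVM setting used in the paper.

```python
# Minimal sketch of evolutionary optimization of kernel weights, scoring each
# candidate combination by the AUC of a kernel ridge predictor.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 10))
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 + 0.3 * rng.normal(size=120) > 0.5).astype(int)
train, test = np.arange(80), np.arange(80, 120)

kernels = [rbf_kernel(X, X, gamma=g) for g in (0.01, 0.1, 1.0)]  # the "data sources"

def auc_for(weights, lam=1.0):
    K = sum(w * Km for w, Km in zip(weights, kernels))
    alpha = np.linalg.solve(K[np.ix_(train, train)] + lam * np.eye(len(train)),
                            y[train].astype(float))
    scores = K[np.ix_(test, train)] @ alpha
    return roc_auc_score(y[test], scores)

w = np.ones(len(kernels))
best = auc_for(w)
for _ in range(200):                       # (1+1)-style search: keep improving mutations
    cand = np.clip(w + rng.normal(0, 0.2, size=w.shape), 0.0, None)
    if cand.sum() == 0:
        continue
    score = auc_for(cand)
    if score >= best:
        w, best = cand, score
print("optimized weights:", w.round(2), "test AUC:", round(best, 3))
```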

  4. Visualization of nonlinear kernel models in neuroimaging by sensitivity maps

    DEFF Research Database (Denmark)

    Rasmussen, Peter Mondrup; Madsen, Kristoffer Hougaard; Lund, Torben Ellegaard

    2011-01-01

    There is significant current interest in decoding mental states from neuroimages. In this context kernel methods, e.g., support vector machines (SVM), are frequently adopted to learn statistical relations between patterns of brain activation and experimental conditions. In this paper we focus..., and conclude that the sensitivity map is a versatile and computationally efficient tool for visualization of nonlinear kernel models in neuroimaging.

  5. A Kernel Time Structure Independent Component Analysis Method for Nonlinear Process Monitoring☆

    Institute of Scientific and Technical Information of China (English)

    Lianfang Cai; Xuemin Tian; Ni Zhang

    2014-01-01

    Kernel independent component analysis (KICA) is a newly emerging nonlinear process monitoring method, which can extract mutually independent latent variables called independent components (ICs) from process variables. However, when more than one IC has a Gaussian distribution, it cannot extract the IC feature effectively and thus its monitoring performance will be degraded drastically. To solve such a problem, a kernel time structure independent component analysis (KTSICA) method is proposed for monitoring nonlinear processes in this paper. The original process data are mapped into a feature space nonlinearly and then the whitened data are calculated in the feature space by the kernel trick. Subsequently, a time structure independent component analysis algorithm, which places no requirement on the distribution of the ICs, is proposed to extract the IC feature. Finally, two monitoring statistics are built to detect process faults. When some fault is detected, a nonlinear fault identification method is developed to identify fault variables based on sensitivity analysis. The proposed monitoring method is applied to the Tennessee Eastman benchmark process. Applications demonstrate the superiority of KTSICA over KICA.

  6. Projection of fMRI data onto the cortical surface using anatomically-informed convolution kernels.

    Science.gov (United States)

    Operto, G; Bulot, R; Anton, J-L; Coulon, O

    2008-01-01

    As surface-based data analysis offers an attractive approach for intersubject matching and comparison, the projection of voxel-based 3D volumes onto the cortical surface is an essential problem. We present here a method that aims at producing representations of functional brain data on the cortical surface from functional MRI volumes. Such representations are, for instance, required for subsequent cortical-based functional analysis. We propose a projection technique based on the definition, around each node of the gray/white matter interface mesh, of convolution kernels whose shape and distribution rely on the geometry of the local anatomy. For one anatomy, a set of convolution kernels is computed that can be used to project any functional data registered with this anatomy. Resulting in anatomically-informed projections of data onto the cortical surface, this kernel-based approach offers better sensitivity and specificity than other classical methods, as well as robustness to misregistration errors. The influence of mesh and volume spatial resolutions was also estimated for various projection techniques, using simulated functional maps.

  7. A constructive approach for discovering new drug leads: Using a kernel methodology for the inverse-QSAR problem

    Directory of Open Access Journals (Sweden)

    Wong William WL

    2009-04-01

    Full Text Available Abstract Background The inverse-QSAR problem seeks to find a new molecular descriptor from which one can recover the structure of a molecule that possesses a desired activity or property. Surprisingly, there are very few papers providing solutions to this problem. It is a difficult problem because the molecular descriptors involved with the inverse-QSAR algorithm must adequately address the forward QSAR problem for a given biological activity if the subsequent recovery phase is to be meaningful. In addition, one should be able to construct a feasible molecule from such a descriptor. The difficulty of recovering the molecule from its descriptor is the major limitation of most inverse-QSAR methods. Results In this paper, we describe the reversibility of our previously reported descriptor, the vector space model molecular descriptor (VSMMD), based on a vector space model that is suitable for kernel studies in QSAR modeling. Our inverse-QSAR approach can be described in five steps: (1) generate the VSMMD for the compounds in the training set; (2) map the VSMMD in the input space to the kernel feature space using an appropriate kernel function; (3) design or generate a new point in the kernel feature space using a kernel feature space algorithm; (4) map the feature space point back to the input space of descriptors using a pre-image approximation algorithm; (5) build the molecular structure template using our VSMMD molecule recovery algorithm. Conclusion The empirical results reported in this paper show that our strategy of using kernel methodology for an inverse-Quantitative Structure-Activity Relationship is sufficiently powerful to find a meaningful solution for practical problems.

  8. Evaluation of sintering effects on SiC-incorporated UO{sub 2} kernels under Ar and Ar–4%H{sub 2} environments

    Energy Technology Data Exchange (ETDEWEB)

    Silva, Chinthaka M., E-mail: silvagw@ornl.gov [Oak Ridge National Laboratory, P.O. Box 2008, Oak Ridge, Tennessee TN 37831-6223 (United States); Materials Science and Engineering, The University of Tennessee Knoxville, TN 37996-2100, United States. (United States); Lindemer, Terrence B.; Hunt, Rodney D.; Collins, Jack L.; Terrani, Kurt A.; Snead, Lance L. [Oak Ridge National Laboratory, P.O. Box 2008, Oak Ridge, Tennessee TN 37831-6223 (United States)

    2013-11-15

    Silicon carbide (SiC) is suggested as an oxygen getter in UO{sub 2} kernels used for tristructural isotropic (TRISO) particle fuels and to prevent kernel migration during irradiation. Scanning electron microscopy and X-ray diffractometry analyses performed on sintered kernels verified that an internal gelation process can be used to incorporate SiC in UO{sub 2} fuel kernels. Even though the presence of UC in either argon (Ar) or Ar–4%H{sub 2} sintered samples suggested a lowering of the SiC up to 3.5–1.4 mol%, respectively, the presence of other silicon-related chemical phases indicates the preservation of silicon in the kernels during sintering process. UC formation was presumed to occur by two reactions. The first was by the reaction of SiC with its protective SiO{sub 2} oxide layer on SiC grains to produce volatile SiO and free carbon that subsequently reacted with UO{sub 2} to form UC. The second process was direct UO{sub 2} reaction with SiC grains to form SiO, CO, and UC. A slightly higher density and UC content were observed in the sample sintered in Ar–4%H{sub 2}, but both atmospheres produced kernels with ∼95% of theoretical density. It is suggested that incorporating CO in the sintering gas could prevent UC formation and preserve the initial SiC content.

  9. Hypothesis testing using pairwise distances and associated kernels

    CERN Document Server

    Sejdinovic, Dino; Sriperumbudur, Bharath; Fukumizu, Kenji

    2012-01-01

    We provide a unifying framework linking two classes of statistics used in two-sample and independence testing: on the one hand, the energy distances and distance covariances from the statistics literature; on the other, distances between embeddings of distributions to reproducing kernel Hilbert spaces (RKHS), as established in machine learning. The equivalence holds when energy distances are computed with semimetrics of negative type, in which case a kernel may be defined such that the RKHS distance between distributions corresponds exactly to the energy distance. We determine the class of probability distributions for which kernels induced by semimetrics are characteristic (that is, for which embeddings of the distributions to an RKHS are injective). Finally, we investigate the performance of this family of kernels in two-sample and independence tests: we show in particular that the energy distance most commonly employed in statistics is just one member of a parametric family of kernels, and that other choic...
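
    The central equivalence can be checked numerically: with the distance-induced kernel, the (biased) squared MMD equals half the energy distance. The sample distributions and the choice of centre point are illustrative assumptions.

```python
# Numerical check: energy distance equals twice the (biased) squared MMD computed
# with the distance-induced kernel k(x, y) = 0.5 * (|x - z0| + |y - z0| - |x - y|).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, (100, 2))
Y = rng.normal(0.5, 1.2, (120, 2))
z0 = np.zeros(2)                         # arbitrary centre point for the induced kernel

def pdist(A, B):
    return np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)

def k(A, B):
    return 0.5 * (np.linalg.norm(A - z0, axis=1)[:, None]
                  + np.linalg.norm(B - z0, axis=1)[None, :]
                  - pdist(A, B))

energy = 2 * pdist(X, Y).mean() - pdist(X, X).mean() - pdist(Y, Y).mean()
mmd2 = k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

print("energy distance:", round(energy, 6))
print("2 * MMD^2      :", round(2 * mmd2, 6))   # matches the energy distance
```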

  10. An iterative modified kernel based on training data

    Institute of Scientific and Technical Information of China (English)

    Zhi-xiang ZHOU; Feng-qing HAN

    2009-01-01

    To improve the performance of support vector regression, a new method for a modified kernel function is proposed. In this method, information from all samples is included in the kernel function via conformal mapping, so the kernel function is data-dependent. Starting from a random initial parameter, the kernel function is modified repeatedly until a satisfactory result is achieved. Compared with the conventional model, the improved approach does not need to select parameters of the kernel function. Simulation is carried out for a one-dimensional continuous function and a case of strong earthquakes. The results show that the improved approach has better learning ability and forecasting precision than the traditional model. As the iteration number increases, the figure of merit decreases and converges. The speed of convergence depends on the parameters used in the algorithm.

  11. Nonlinear Statistical Process Monitoring and Fault Detection Using Kernel ICA

    Institute of Scientific and Technical Information of China (English)

    ZHANG Xi; YAN Wei-wu; ZHAO Xu; SHAO Hui-he

    2007-01-01

    A novel nonlinear process monitoring and fault detection method based on kernel independent component analysis (ICA) is proposed. The kernel ICA method is a two-phase algorithm: whitened kernel principal component (KPCA) plus ICA. KPCA spheres data and makes the data structure become as linearly separable as possible by virtue of an implicit nonlinear mapping determined by kernel. ICA seeks the projection directions in the KPCA whitened space, making the distribution of the projected data as non-gaussian as possible. The application to the fluid catalytic cracking unit (FCCU) simulated process indicates that the proposed process monitoring method based on kernel ICA can effectively capture the nonlinear relationship in process variables. Its performance significantly outperforms monitoring method based on ICA or KPCA.
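
    A minimal sketch of the two-phase structure described above (whitened kernel PCA followed by ICA); the simulated process data, kernel width, and use of scikit-learn's KernelPCA and FastICA are illustrative assumptions, and the monitoring statistics themselves are not shown.

```python
# Minimal sketch of the two-phase idea: kernel PCA scores are whitened, then
# FastICA extracts independent components from the whitened scores.
import numpy as np
from sklearn.decomposition import KernelPCA, FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 20, 500)
sources = np.column_stack([np.sin(2 * t), np.sign(np.cos(3 * t)), rng.laplace(size=500)])
X = np.tanh(sources @ rng.normal(size=(3, 6)))   # nonlinear mixing into 6 "process variables"

# Phase 1: KPCA, then whiten the retained scores
kpca = KernelPCA(n_components=3, kernel='rbf', gamma=0.5)
scores = kpca.fit_transform(X)
whitened = (scores - scores.mean(axis=0)) / scores.std(axis=0)

# Phase 2: ICA on the whitened feature-space scores
ica = FastICA(n_components=3, random_state=0)
ics = ica.fit_transform(whitened)
print("extracted ICs shape:", ics.shape)
```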

  12. Anatomically-aided PET reconstruction using the kernel method

    Science.gov (United States)

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi

    2016-09-01

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.
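
    A minimal sketch of the kernelized ML-EM idea on a toy 1-D problem: the image is parameterized as x = K·alpha with K built from an anatomical prior, and the EM update runs on the coefficients. The geometry, system matrix, and prior are illustrative assumptions.

```python
# Minimal sketch of kernelized ML-EM for emission tomography on a toy 1-D problem.
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_bins = 30, 60
P = rng.uniform(0, 1, (n_bins, n_pix))              # toy nonnegative system matrix

anat = np.where(np.arange(n_pix) < 15, 1.0, 3.0)    # toy anatomical prior image
K = np.exp(-(anat[:, None] - anat[None, :]) ** 2 / 0.5)
K /= K.sum(axis=1, keepdims=True)                   # row-normalized kernel matrix

x_true = np.where(np.arange(n_pix) < 15, 2.0, 5.0)  # toy activity, aligned with anatomy
y = rng.poisson(P @ x_true).astype(float)           # noisy sinogram

alpha = np.ones(n_pix)
sens = K.T @ (P.T @ np.ones(n_bins))                # sensitivity term K^T P^T 1
for _ in range(100):                                # kernelized EM update on alpha
    ratio = y / np.clip(P @ (K @ alpha), 1e-12, None)
    alpha *= (K.T @ (P.T @ ratio)) / sens

x_hat = K @ alpha
print("reconstruction RMSE:", np.sqrt(np.mean((x_hat - x_true) ** 2)).round(3))
```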

  13. OSKI: A Library of Automatically Tuned Sparse Matrix Kernels

    Energy Technology Data Exchange (ETDEWEB)

    Vuduc, R; Demmel, J W; Yelick, K A

    2005-07-19

    The Optimized Sparse Kernel Interface (OSKI) is a collection of low-level primitives that provide automatically tuned computational kernels on sparse matrices, for use by solver libraries and applications. These kernels include sparse matrix-vector multiply and sparse triangular solve, among others. The primary aim of this interface is to hide the complex decision-making process needed to tune the performance of a kernel implementation for a particular user's sparse matrix and machine, while also exposing the steps and potentially non-trivial costs of tuning at run-time. This paper provides an overview of OSKI, which is based on our research on automatically tuned sparse kernels for modern cache-based superscalar machines.

  14. OSKI: A library of automatically tuned sparse matrix kernels

    Science.gov (United States)

    Vuduc, Richard; Demmel, James W.; Yelick, Katherine A.

    2005-01-01

    The Optimized Sparse Kernel Interface (OSKI) is a collection of low-level primitives that provide automatically tuned computational kernels on sparse matrices, for use by solver libraries and applications. These kernels include sparse matrix-vector multiply and sparse triangular solve, among others. The primary aim of this interface is to hide the complex decisionmaking process needed to tune the performance of a kernel implementation for a particular user's sparse matrix and machine, while also exposing the steps and potentially non-trivial costs of tuning at run-time. This paper provides an overview of OSKI, which is based on our research on automatically tuned sparse kernels for modern cache-based superscalar machines.

  15. CRKSPH - A Conservative Reproducing Kernel Smoothed Particle Hydrodynamics Scheme

    CERN Document Server

    Frontiere, Nicholas; Owen, J Michael

    2016-01-01

    We present a formulation of smoothed particle hydrodynamics (SPH) that employs a first-order consistent reproducing kernel function, exactly interpolating linear fields with particle tracers. Previous formulations using reproducing kernel (RK) interpolation have had difficulties maintaining conservation of momentum due to the fact the RK kernels are not, in general, spatially symmetric. Here, we utilize a reformulation of the fluid equations such that mass, momentum, and energy are all manifestly conserved without any assumption about kernel symmetries. Additionally, by exploiting the increased accuracy of the RK method's gradient, we formulate a simple limiter for the artificial viscosity that reduces the excess diffusion normally incurred by the ordinary SPH artificial viscosity. Collectively, we call our suite of modifications to the traditional SPH scheme Conservative Reproducing Kernel SPH, or CRKSPH. CRKSPH retains the benefits of traditional SPH methods (such as preserving Galilean invariance and manif...

  16. A novel extended kernel recursive least squares algorithm.

    Science.gov (United States)

    Zhu, Pingping; Chen, Badong; Príncipe, José C

    2012-08-01

    In this paper, a novel extended kernel recursive least squares algorithm is proposed combining the kernel recursive least squares algorithm and the Kalman filter or its extensions to estimate or predict signals. Unlike the extended kernel recursive least squares (Ex-KRLS) algorithm proposed by Liu, the state model of our algorithm is still constructed in the original state space and the hidden state is estimated using the Kalman filter. The measurement model used in hidden state estimation is learned by the kernel recursive least squares algorithm (KRLS) in reproducing kernel Hilbert space (RKHS). The novel algorithm has more flexible state and noise models. We apply this algorithm to vehicle tracking and the nonlinear Rayleigh fading channel tracking, and compare the tracking performances with other existing algorithms.
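
    For context, a minimal sketch of the plain KRLS building block (without the Kalman-filter state model of the extended algorithm): the regularized kernel matrix inverse is grown one sample at a time by a Schur-complement block update. The data stream and kernel width are illustrative assumptions.

```python
# Minimal sketch of a basic kernel recursive least squares (KRLS) update: the
# regularized kernel matrix inverse is grown sample by sample with a block update.
import numpy as np

def rbf(a, b, gamma=2.0):
    return np.exp(-gamma * np.sum((a - b) ** 2))

rng = np.random.default_rng(0)
lam = 1e-2                                    # regularization
X, alpha, Kinv = [], None, None

for t in range(200):                          # streaming samples from a noisy sine
    x = np.array([t * 0.05])
    y = np.sin(x[0]) + rng.normal(0, 0.05)
    if not X:
        Kinv = np.array([[1.0 / (rbf(x, x) + lam)]])
        X, yv = [x], [y]
    else:
        b = np.array([rbf(x, xi) for xi in X])          # kernel vector vs. stored samples
        d = rbf(x, x) + lam
        s = d - b @ Kinv @ b                             # Schur complement
        top = Kinv + np.outer(Kinv @ b, b @ Kinv) / s    # block-inverse update
        side = -(Kinv @ b) / s
        Kinv = np.block([[top, side[:, None]], [side[None, :], np.array([[1.0 / s]])]])
        X.append(x); yv.append(y)
    alpha = Kinv @ np.array(yv)               # coefficients of the kernel expansion

x_test = np.array([5.0])
pred = sum(a * rbf(x_test, xi) for a, xi in zip(alpha, X))
print("prediction at x=5.0:", round(pred, 3), "true:", round(np.sin(5.0), 3))
```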

  17. Virtual screening with support vector machines and structure kernels

    CERN Document Server

    Mahé, Pierre

    2007-01-01

    Support vector machines and kernel methods have recently gained considerable attention in chemoinformatics. They offer generally good performance for problems of supervised classification or regression, and provide a flexible and computationally efficient framework to include relevant information and prior knowledge about the data and problems to be handled. In particular, with kernel methods molecules do not need to be represented and stored explicitly as vectors or fingerprints, but only to be compared to each other through a comparison function technically called a kernel. While classical kernels can be used to compare vector or fingerprint representations of molecules, completely new kernels were developed in the recent years to directly compare the 2D or 3D structures of molecules, without the need for an explicit vectorization step through the extraction of molecular descriptors. While still in their infancy, these approaches have already demonstrated their relevance on several toxicity prediction and s...

  18. 3-D waveform tomography sensitivity kernels for anisotropic media

    KAUST Repository

    Djebbi, R.

    2014-01-01

    The complications in anisotropic multi-parameter inversion lie in the trade-off between the different anisotropy parameters. We compute the tomographic waveform sensitivity kernels for a VTI acoustic medium perturbation as a tool to investigate this ambiguity between the different parameters. We use dynamic ray tracing to efficiently handle the expensive computational cost for 3-D anisotropic models. Ray tracing also provides the ray-direction information necessary for conditioning the sensitivity kernels to handle anisotropy. The NMO velocity and η parameter kernels showed maximum sensitivity for diving waves, which makes those parameters a relevant choice in wave-equation tomography. The δ parameter kernel showed zero sensitivity; therefore it can serve as a secondary parameter to fit the amplitude in the acoustic anisotropic inversion. Considering the limited penetration depth of diving waves, migration-velocity-analysis-based kernels are introduced to fix the depth ambiguity with reflections and compute sensitivity maps in the deeper parts of the model.

  19. Modeling DNA affinity landscape through two-round support vector regression with weighted degree kernels

    KAUST Repository

    Wang, Xiaolei

    2014-12-12

    Background: A quantitative understanding of interactions between transcription factors (TFs) and their DNA binding sites is key to the rational design of gene regulatory networks. Recent advances in high-throughput technologies have enabled high-resolution measurements of protein-DNA binding affinity. Importantly, such experiments revealed the complex nature of TF-DNA interactions, whereby the effects of nucleotide changes on the binding affinity were observed to be context dependent. A systematic method to give high-quality estimates of such complex affinity landscapes is, thus, essential to the control of gene expression and the advance of synthetic biology. Results: Here, we propose a two-round prediction method that is based on support vector regression (SVR) with weighted degree (WD) kernels. In the first round, a WD kernel with shifts and mismatches is used with SVR to detect the importance of subsequences with different lengths at different positions. The subsequences identified as important in the first round are then fed into a second WD kernel to fit the experimentally measured affinities. To our knowledge, this is the first attempt to increase the accuracy of the affinity prediction by applying two rounds of string kernels and by identifying a small number of crucial k-mers. The proposed method was tested by predicting the binding affinity landscape of Gcn4p in Saccharomyces cerevisiae using datasets from HiTS-FLIP. Our method explicitly identified important subsequences and showed significant performance improvements when compared with other state-of-the-art methods. Based on the identified important subsequences, we discovered two surprisingly stable 10-mers and one sensitive 10-mer which were not reported before. Further test on four other TFs in S. cerevisiae demonstrated the generality of our method. Conclusion: We proposed in this paper a two-round method to quantitatively model the DNA binding affinity landscape. Since the ability to modify
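
    A minimal sketch of the basic (unshifted, mismatch-free) weighted degree string kernel underlying the method: equal-length sequences are compared by counting matching k-mers at corresponding positions, weighted by k-mer length. The example sequences are illustrative assumptions; the shift/mismatch extensions and the two-round SVR are not shown.

```python
# Minimal sketch of the (unshifted) weighted degree string kernel for DNA sequences.
def wd_kernel(s, t, d=3):
    assert len(s) == len(t)
    L = len(s)
    value = 0.0
    for k in range(1, d + 1):
        beta_k = 2.0 * (d - k + 1) / (d * (d + 1))      # standard WD weighting
        matches = sum(1 for i in range(L - k + 1) if s[i:i + k] == t[i:i + k])
        value += beta_k * matches
    return value

a = "ACGTACGTAC"   # illustrative sequences, not data from the paper
b = "ACGAACGTTC"
print("WD kernel value:", round(wd_kernel(a, b, d=3), 3))
print("self-similarity:", round(wd_kernel(a, a, d=3), 3))
```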

  20. Genomic Prediction of Genotype × Environment Interaction Kernel Regression Models.

    Science.gov (United States)

    Cuevas, Jaime; Crossa, José; Soberanis, Víctor; Pérez-Elizalde, Sergio; Pérez-Rodríguez, Paulino; Campos, Gustavo de Los; Montesinos-López, O A; Burgueño, Juan

    2016-11-01

    In genomic selection (GS), genotype × environment interaction (G × E) can be modeled by a marker × environment interaction (M × E). The G × E may be modeled through a linear kernel or a nonlinear (Gaussian) kernel. In this study, we propose using two nonlinear Gaussian kernels: the reproducing kernel Hilbert space with kernel averaging (RKHS KA) and the Gaussian kernel with the bandwidth estimated through an empirical Bayesian method (RKHS EB). We performed single-environment analyses and extended them to account for G × E interaction (GBLUP-G × E, RKHS KA-G × E and RKHS EB-G × E) in wheat and maize data sets. For single-environment analyses of the wheat and maize data sets, RKHS EB and RKHS KA had higher prediction accuracy than GBLUP for all environments. For the wheat data, the RKHS KA-G × E and RKHS EB-G × E models showed up to 60 to 68% superiority over the corresponding single environment for pairs of environments with positive correlations. For the wheat data set, the models with Gaussian kernels had accuracies up to 17% higher than that of GBLUP-G × E. For the maize data set, the prediction accuracy of RKHS EB-G × E and RKHS KA-G × E was, on average, 5 to 6% higher than that of GBLUP-G × E. The superiority of the Gaussian kernel models over the linear kernel is due to more flexible kernels that account for small, more complex marker main effects and marker-specific interaction effects.
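
    A minimal sketch contrasting a linear (GBLUP-like) genomic kernel with a Gaussian kernel built from the same marker matrix, both fitted by kernel ridge regression on a single environment; the toy marker data, bandwidth (median heuristic), and regularization are illustrative assumptions, and the Bayesian estimation and G × E extensions of the paper are not shown.

```python
# Minimal sketch: linear (GBLUP-like) vs. Gaussian genomic kernels with kernel ridge.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
n, p = 200, 500
M = rng.binomial(2, 0.3, size=(n, p)).astype(float)      # toy marker matrix (0/1/2)
effects = rng.normal(0, 0.05, p)
y = M @ effects + 0.5 * np.sin(M @ rng.normal(0, 0.02, p)) + rng.normal(0, 0.3, n)

Z = (M - M.mean(axis=0)) / (M.std(axis=0) + 1e-9)         # centred, scaled markers
G_linear = Z @ Z.T / p                                     # GBLUP-style genomic kernel

sq = (Z ** 2).sum(axis=1)
D2 = np.clip(sq[:, None] + sq[None, :] - 2 * Z @ Z.T, 0, None)
G_gauss = np.exp(-D2 / np.median(D2[D2 > 0]))              # Gaussian kernel, median heuristic

train, test = np.arange(150), np.arange(150, 200)
for name, G in [("linear (GBLUP-like)", G_linear), ("Gaussian (RKHS)", G_gauss)]:
    model = KernelRidge(alpha=1.0, kernel="precomputed")
    model.fit(G[np.ix_(train, train)], y[train])
    pred = model.predict(G[np.ix_(test, train)])
    corr = np.corrcoef(pred, y[test])[0, 1]
    print(f"{name:20s} predictive correlation: {corr:.3f}")
```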

  1. The Palomar Kernel Phase Experiment: Testing Kernel Phase Interferometry for Ground-based Astronomical Observations

    CERN Document Server

    Pope, Benjamin; Hinkley, Sasha; Ireland, Michael J; Greenbaum, Alexandra; Latyshev, Alexey; Monnier, John D; Martinache, Frantz

    2015-01-01

    At present, the principal limitation on the resolution and contrast of astronomical imaging instruments comes from aberrations in the optical path, which may be imposed by the Earth's turbulent atmosphere or by variations in the alignment and shape of the telescope optics. These errors can be corrected physically, with active and adaptive optics, and in post-processing of the resulting image. A recently-developed adaptive optics post-processing technique, called kernel phase interferometry, uses linear combinations of phases that are self-calibrating with respect to small errors, with the goal of constructing observables that are robust against the residual optical aberrations in otherwise well-corrected imaging systems. Here we present a direct comparison between kernel phase and the more established competing techniques, aperture masking interferometry, point spread function (PSF) fitting and bispectral analysis. We resolve the alpha Ophiuchi binary system near periastron, using the Palomar 200-Inch Telesco...

  2. Heat kernel method and its applications

    CERN Document Server

    Avramidi, Ivan G

    2015-01-01

    The heart of the book is the development of a short-time asymptotic expansion for the heat kernel. This is explained in detail and explicit examples of some advanced calculations are given. In addition some advanced methods and extensions, including path integrals, jump diffusion and others are presented. The book consists of four parts: Analysis, Geometry, Perturbations and Applications. The first part briefly reviews some background material and gives an introduction to PDEs. The second part is devoted to a short introduction to various aspects of differential geometry that will be needed later. The third part and heart of the book presents a systematic development of effective methods for various approximation schemes for parabolic differential equations. The last part is devoted to applications in financial mathematics, in particular, stochastic differential equations. Although this book is intended for advanced undergraduate or beginning graduate students, it should also provide a useful reference ...

  3. Image Processing Variations with Analytic Kernels

    CERN Document Server

    Garnett, John B; Vese, Luminita A

    2012-01-01

    Let $f\in L^1(\R^d)$ be real. The Rudin-Osher-Fatemi model is to minimize $\|u\|_{\dot{BV}}+\lambda\|f-u\|_{L^2}^2$, in which one thinks of $f$ as a given image, $\lambda > 0$ as a "tuning parameter", $u$ as an optimal "cartoon" approximation to $f$, and $f-u$ as "noise" or "texture". Here we study variations of the R-O-F model having the form $\inf_u\{\|u\|_{\dot{BV}}+\lambda \|K*(f-u)\|_{L^p}^q\}$ where $K$ is a real analytic kernel such as a Gaussian. For these functionals we characterize the minimizers $u$ and establish several of their properties, including especially their smoothness properties. In particular we prove that on any open set on which $u \in W^{1,1}$ and ...

  4. The Dynamical Kernel Scheduler - Part 1

    CERN Document Server

    Adelmann, Andreas; Suter, Andreas

    2015-01-01

    Emerging processor architectures such as GPUs and Intel MICs provide a huge performance potential for high performance computing. However developing software using these hardware accelerators introduces additional challenges for the developer such as exposing additional parallelism, dealing with different hardware designs and using multiple development frameworks in order to use devices from different vendors. The Dynamic Kernel Scheduler (DKS) is being developed in order to provide a software layer between host application and different hardware accelerators. DKS handles the communication between the host and device, schedules task execution, and provides a library of built-in algorithms. Algorithms available in the DKS library will be written in CUDA, OpenCL and OpenMP. Depending on the available hardware, the DKS can select the appropriate implementation of the algorithm. The first DKS version was created using CUDA for the Nvidia GPUs and OpenMP for Intel MIC. DKS was further integrated in OPAL (Object-or...

  5. Online Learning of Noisy Data with Kernels

    CERN Document Server

    Cesa-Bianchi, Nicolò; Shamir, Ohad

    2010-01-01

    We study online learning when individual instances are corrupted by random noise. We assume the noise distribution is unknown, and may change over time with no restriction other than having zero mean and bounded variance. Our technique relies on a family of unbiased estimators for non-linear functions, which may be of independent interest. We show that a variant of online gradient descent can learn functions in any dot-product (e.g., polynomial) or Gaussian kernel space with any analytic convex loss function. Our variant uses randomized estimates that need to query a random number of noisy copies of each instance, where with high probability this number is upper bounded by a constant. Allowing such multiple queries cannot be avoided: Indeed, we show that online learning is in general impossible when only one noisy copy of each instance can be accessed.

  6. Index-free Heat Kernel Coefficients

    CERN Document Server

    De van Ven, A E M

    1998-01-01

    Using index-free notation, we present the diagonal values of the first five heat kernel coefficients associated with a general Laplace-type operator on a compact Riemannian space without boundary. The fifth coefficient appears here for the first time. For a flat space with a gauge connection, the sixth coefficient is given too. Also provided are the leading terms for any coefficient, both in ascending and descending powers of the Yang-Mills and Riemann curvatures, to the same order as required for the fourth coefficient. These results are obtained by directly solving the relevant recursion relations, working in Fock-Schwinger gauge and Riemann normal coordinates. Our procedure is thus noncovariant, but we show that for any coefficient the `gauged' respectively `curved' version is found from the corresponding `non-gauged' respectively `flat' coefficient by making some simple covariant substitutions. These substitutions being understood, the coefficients retain their `flat' form and size. In this sense the fift...

  7. Kernel density estimation using graphical processing unit

    Science.gov (United States)

    Sunarko, Su'ud, Zaki

    2015-09-01

    Kernel density estimation for particles distributed over a 2-dimensional space is calculated using a single graphical processing unit (GTX 660Ti GPU) and the CUDA-C language. Parallel calculations are done for particles having a bivariate normal distribution, by assigning the calculations for equally-spaced node points to each scalar processor in the GPU. The number of particles, blocks and threads is varied to identify a favorable configuration. Comparisons are obtained by performing the same calculation using 1, 2 and 4 processors on a 3.0 GHz CPU using MPICH 2.0 routines. Speedups attained with the GPU are in the range of 88 to 349 times compared to the multiprocessor CPU. Blocks of 128 threads are found to be the optimum configuration for this case.
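
    A plain NumPy reference version of the computation described above may help fix ideas: every grid node accumulates a Gaussian contribution from every particle, and it is exactly this per-node sum that the record assigns to individual GPU threads. The bandwidth, grid size and toy data below are illustrative choices.

    import numpy as np

    def gaussian_kde_grid(particles, grid_x, grid_y, h=0.1):
        # evaluate a 2-D Gaussian KDE of `particles` (shape (N, 2)) at every grid node
        gx, gy = np.meshgrid(grid_x, grid_y)
        nodes = np.stack([gx.ravel(), gy.ravel()], axis=1)                  # (M, 2)
        d2 = ((nodes[:, None, :] - particles[None, :, :]) ** 2).sum(-1)     # (M, N)
        norm = 1.0 / (2.0 * np.pi * h ** 2 * len(particles))
        return (norm * np.exp(-d2 / (2.0 * h ** 2)).sum(axis=1)).reshape(gx.shape)

    rng = np.random.default_rng(0)
    pts = rng.multivariate_normal([0.0, 0.0], [[1.0, 0.5], [0.5, 1.0]], size=1000)
    density = gaussian_kde_grid(pts, np.linspace(-4, 4, 64), np.linspace(-4, 4, 64))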

  8. Learning RoboCup-Keepaway with Kernels

    CERN Document Server

    Jung, Tobias

    2012-01-01

    We apply kernel-based methods to solve the difficult reinforcement learning problem of 3vs2 keepaway in RoboCup simulated soccer. Key challenges in keepaway are the high-dimensionality of the state space (rendering conventional discretization-based function approximation like tilecoding infeasible), the stochasticity due to noise and multiple learning agents needing to cooperate (meaning that the exact dynamics of the environment are unknown) and real-time learning (meaning that an efficient online implementation is required). We employ the general framework of approximate policy iteration with least-squares-based policy evaluation. As underlying function approximator we consider the family of regularization networks with subset of regressors approximation. The core of our proposed solution is an efficient recursive implementation with automatic supervised selection of relevant basis functions. Simulation results indicate that the behavior learned through our approach clearly outperforms the best results obta...

  9. Heat kernel measures on random surfaces

    CERN Document Server

    Klevtsov, Semyon

    2015-01-01

    The heat kernel on the symmetric space of positive definite Hermitian matrices is used to endow the spaces of Bergman metrics of degree k on a Riemann surface M with a family of probability measures depending on a choice of the background metric. Under a certain matrix-metric correspondence, each positive definite Hermitian matrix corresponds to a Kahler metric on M. The one and two point functions of the random metric are calculated in a variety of limits as k and t tend to infinity. In the limit when the time t goes to infinity the fluctuations of the random metric around the background metric are the same as the fluctuations of random zeros of holomorphic sections. This is due to the fact that the random zeros form the boundary of the space of Bergman metrics.

  10. Pattern Programmable Kernel Filter for Bot Detection

    Directory of Open Access Journals (Sweden)

    Kritika Govind

    2012-05-01

    Full Text Available Bots earn their unique name as they perform a wide variety of automated tasks. These tasks include stealing sensitive user information. Detection of bots using solutions such as behavioral correlation of flow records, group activity in DNS traffic, observing the periodic repeatability in communication, etc., leads to monitoring the network traffic and then classifying it as bot or normal traffic. Other solutions for bot detection include kernel-level keystroke verification, system call initialization, IP blacklisting, etc. In the first two solutions there is no assurance that the packet carrying user information is prevented from being sent to the attacker, and the latter suffers from the problem of IP spoofing. This motivated us to think of a solution that would filter out the malicious packets before they are put onto the network. To arrive at such a solution, a real-time bot attack was generated with the SpyEye exploit kit and the traffic characteristics were analyzed. The analysis revealed the existence of a unique repeated communication between the zombie machine and the botmaster. This motivated us to propose a Pattern Programmable Kernel Filter (PPKF) for filtering out the malicious packets generated by bots. PPKF was developed using the Windows Filtering Platform (WFP) filter engine. PPKF was programmed to filter out the packets with the unique pattern observed in the bot attack experiments. Further, PPKF was found to completely suppress the flow of packets having the programmed uniqueness in them, thus preventing the functioning of bots in terms of user information being sent to the botmaster.

  11. Associative morphological memories based on variations of the kernel and dual kernel methods.

    Science.gov (United States)

    Sussner, Peter

    2003-01-01

    Morphological associative memories (MAMs) belong to the class of morphological neural networks. The recording scheme used in the original MAM models is similar to the correlation recording recipe. Recording is achieved by means of a maximum (MXY model) or minimum (WXY model) of outer products. Notable features of autoassociative morphological memories (AMMs) include optimal absolute storage capacity and one-step convergence. Heteroassociative morphological memories (HMMs) do not have these properties and are not very well understood. The fixed points of AMMs can be characterized exactly in terms of the original patterns. Unfortunately, AMM fixed points include a large number of spurious memories. In this paper, we combine the MXX model and variations of the kernel method to produce new autoassociative and heteroassociative memories. We also introduce a dual kernel method. A new, dual model is given by a combination of the WXX model and a variation of the dual kernel method. The new MAM models exhibit better error correction capabilities than MXX and WXX and a reduced number of spurious memories which can be easily described in terms of the fundamental memories.

  12. Classification of maize kernels using NIR hyperspectral imaging.

    Science.gov (United States)

    Williams, Paul J; Kucheryavskiy, Sergey

    2016-10-15

    NIR hyperspectral imaging was evaluated to classify maize kernels of three hardness categories: hard, medium and soft. Two approaches, pixel-wise and object-wise, were investigated to group kernels according to hardness. The pixel-wise classification assigned a class to every pixel from individual kernels and did not give acceptable results because of high misclassification. However, by using a predefined threshold and classifying entire kernels based on the number of correctly predicted pixels, improved results were achieved (sensitivity and specificity of 0.75 and 0.97). Object-wise classification was performed using two methods for feature extraction - score histograms and mean spectra. The model based on score histograms performed better for hard kernel classification (sensitivity and specificity of 0.93 and 0.97), while that of mean spectra gave better results for medium kernels (sensitivity and specificity of 0.95 and 0.93). Both feature extraction methods can be recommended for classification of maize kernels on a production scale.
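
    The object-wise thresholding rule described above can be sketched in a few lines; the class indices, threshold value and the assumption that background pixels have already been masked out are illustrative, not taken from the paper.

    import numpy as np

    def objectwise_label(pixel_predictions, n_classes=3, threshold=0.6):
        # pixel_predictions: per-pixel class indices for the pixels of one maize kernel;
        # assign the majority class only if its vote fraction exceeds the threshold,
        # otherwise flag the kernel as unclassified (-1)
        counts = np.bincount(np.asarray(pixel_predictions), minlength=n_classes)
        best = int(counts.argmax())
        return best if counts[best] / counts.sum() >= threshold else -1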

  13. Gaussian kernel width optimization for sparse Bayesian learning.

    Science.gov (United States)

    Mohsenzadeh, Yalda; Sheikhzadeh, Hamid

    2015-04-01

    Sparse kernel methods have been widely used in regression and classification applications. The performance and the sparsity of these methods are dependent on the appropriate choice of the corresponding kernel functions and their parameters. Typically, the kernel parameters are selected using a cross-validation approach. In this paper, a learning method that is an extension of the relevance vector machine (RVM) is presented. The proposed method can find the optimal values of the kernel parameters during the training procedure. This algorithm uses an expectation-maximization approach for updating kernel parameters as well as other model parameters; therefore, the speed of convergence and computational complexity of the proposed method are the same as the standard RVM. To control the convergence of this fully parameterized model, the optimization with respect to the kernel parameters is performed using a constraint on these parameters. The proposed method is compared with the typical RVM and other competing methods to analyze the performance. The experimental results on the commonly used synthetic data, as well as benchmark data sets, demonstrate the effectiveness of the proposed method in reducing the performance dependency on the initial choice of the kernel parameters.
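
    For contrast with the method above, the conventional cross-validation baseline it improves upon can be sketched as follows; kernel ridge regression stands in for the RVM here simply because it is readily available, and the toy data and parameter grids are assumed values.

    import numpy as np
    from sklearn.kernel_ridge import KernelRidge
    from sklearn.model_selection import GridSearchCV

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(200, 1))
    y = np.sinc(X).ravel() + 0.1 * rng.standard_normal(200)

    # grid-search the Gaussian kernel width (gamma = 1 / (2 sigma^2)) by cross-validation
    search = GridSearchCV(
        KernelRidge(kernel="rbf"),
        param_grid={"gamma": np.logspace(-2, 2, 20), "alpha": [1e-3, 1e-2, 1e-1]},
        cv=5,
    )
    search.fit(X, y)
    print(search.best_params_)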

  14. Training Lp norm multiple kernel learning in the primal.

    Science.gov (United States)

    Liang, Zhizheng; Xia, Shixiong; Zhou, Yong; Zhang, Lei

    2013-10-01

    Some multiple kernel learning (MKL) models are usually solved by utilizing the alternating optimization method where one alternately solves SVMs in the dual and updates kernel weights. Since the dual and primal optimization can achieve the same aim, it is valuable to explore how to perform Lp norm MKL in the primal. In this paper, we propose an Lp norm multiple kernel learning algorithm in the primal where we resort to the alternating optimization method: one cycle for solving SVMs in the primal by using the preconditioned conjugate gradient method and the other cycle for learning the kernel weights. It is interesting to note that the kernel weights in our method can obtain analytical solutions. Most importantly, the proposed method is well suited for the manifold regularization framework in the primal since solving LapSVMs in the primal is much more effective than solving LapSVMs in the dual. In addition, we also carry out theoretical analysis for multiple kernel learning in the primal in terms of the empirical Rademacher complexity. It is found that optimizing the empirical Rademacher complexity may yield a certain type of kernel weights. The experiments on some datasets are carried out to demonstrate the feasibility and effectiveness of the proposed method.
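
    The kind of analytical kernel-weight update mentioned above can be illustrated with the standard closed form used in the Lp-norm MKL literature, where the weights are proportional to ||w_m||^(2/(p+1)) and normalized so that their p-norm equals one. The sketch below is only that weight update, not the full alternating algorithm of the paper, and the example norms are assumed values.

    import numpy as np

    def lp_mkl_weights(w_norms, p=2.0):
        # closed-form Lp-norm MKL kernel weights from the per-kernel norms ||w_m||;
        # theta_m is proportional to ||w_m||^(2/(p+1)) and ||theta||_p = 1
        w = np.asarray(w_norms, dtype=float)
        num = w ** (2.0 / (p + 1.0))
        denom = (w ** (2.0 * p / (p + 1.0))).sum() ** (1.0 / p)
        return num / denom

    print(lp_mkl_weights([0.5, 1.0, 2.0], p=2.0))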

  15. Spectrum-based kernel length estimation for Gaussian process classification.

    Science.gov (United States)

    Wang, Liang; Li, Chuan

    2014-06-01

    Recent studies have shown that Gaussian process (GP) classification, a discriminative supervised learning approach, has achieved competitive performance in real applications compared with most state-of-the-art supervised learning methods. However, the problem of automatic model selection in GP classification, involving the kernel function form and the corresponding parameter values (which are unknown in advance), remains a challenge. To make GP classification a more practical tool, this paper presents a novel spectrum-analysis-based approach for model selection by refining the GP kernel function to match the given input data. Specifically, we target the problem of GP kernel length scale estimation. Spectra are first calculated analytically from the kernel function itself using the autocorrelation theorem, as well as estimated numerically from the training data themselves. Then, the kernel length scale is automatically estimated by equating the two spectrum values, i.e., the kernel function spectrum equals the estimated training data spectrum. Compared with the classical Bayesian method for kernel length scale estimation via maximizing the marginal likelihood (which is time consuming and could suffer from multiple local optima), extensive experimental results on various data sets show that our proposed method is both efficient and accurate.

  16. Relaxation and diffusion models with non-singular kernels

    Science.gov (United States)

    Sun, HongGuang; Hao, Xiaoxiao; Zhang, Yong; Baleanu, Dumitru

    2017-02-01

    Anomalous relaxation and diffusion processes have been widely quantified by fractional derivative models, where the definition of the fractional-order derivative remains a historical debate due to its limitation in describing different kinds of non-exponential decays (e.g. stretched exponential decay). Meanwhile, many efforts by mathematicians and engineers have been made to overcome the singularity of power function kernel in its definition. This study first explores physical properties of relaxation and diffusion models where the temporal derivative was defined recently using an exponential kernel. Analytical analysis shows that the Caputo type derivative model with an exponential kernel cannot characterize non-exponential dynamics well-documented in anomalous relaxation and diffusion. A legitimate extension of the previous derivative is then proposed by replacing the exponential kernel with a stretched exponential kernel. Numerical tests show that the Caputo type derivative model with the stretched exponential kernel can describe a much wider range of anomalous diffusion than the exponential kernel, implying the potential applicability of the new derivative in quantifying real-world, anomalous relaxation and diffusion processes.

  17. Per-Sample Multiple Kernel Approach for Visual Concept Learning

    Directory of Open Access Journals (Sweden)

    Tian Yonghong

    2010-01-01

    Full Text Available Learning visual concepts from images is an important yet challenging problem in computer vision and multimedia research areas. Multiple kernel learning (MKL) methods have shown great advantages in visual concept learning. As a visual concept often exhibits great appearance variance, a canonical MKL approach may not generate satisfactory results when a uniform kernel combination is applied over the input space. In this paper, we propose a per-sample multiple kernel learning (PS-MKL) approach to take into account intraclass diversity for improving discrimination. PS-MKL determines sample-wise kernel weights according to kernel functions and training samples. Kernel weights as well as kernel-based classifiers are jointly learned. For efficient learning, PS-MKL employs a sample selection strategy. Extensive experiments are carried out over three benchmarking datasets of different characteristics including Caltech101, WikipediaMM, and Pascal VOC'07. PS-MKL has achieved encouraging performance, comparable to the state of the art, and has outperformed a canonical MKL.

  18. Widely Linear Complex-Valued Kernel Methods for Regression

    Science.gov (United States)

    Boloix-Tortosa, Rafael; Murillo-Fuentes, Juan Jose; Santos, Irene; Perez-Cruz, Fernando

    2017-10-01

    Usually, complex-valued RKHS are presented as a straightforward application of the real-valued case. In this paper we prove that this procedure yields a limited solution for regression. We show that another kernel, here denoted the pseudo-kernel, is needed to learn any function in complex-valued fields. Accordingly, we derive a novel RKHS to include it, the widely RKHS (WRKHS). When the pseudo-kernel cancels, WRKHS reduces to the complex-valued RKHS of previous approaches. We address the kernel and pseudo-kernel design, paying attention to the case where the kernel and the pseudo-kernel are complex-valued. In the experiments included we report remarkable improvements in simple scenarios where the real and imaginary parts have different similarity relations for given inputs, or cases where the real and imaginary parts are correlated. In the context of these novel results we revisit the problem of non-linear channel equalization, to show that the WRKHS helps to design more efficient solutions.

  19. Localized Multiple Kernel Learning Via Sample-Wise Alternating Optimization.

    Science.gov (United States)

    Han, Yina; Yang, Kunde; Ma, Yuanliang; Liu, Guizhong

    2014-01-01

    Our objective is to train support vector machines (SVM)-based localized multiple kernel learning (LMKL), using the alternating optimization between the standard SVM solvers with the local combination of base kernels and the sample-specific kernel weights. The advantage of alternating optimization developed from the state-of-the-art MKL is the SVM-tied overall complexity and the simultaneous optimization on both the kernel weights and the classifier. Unfortunately, in LMKL, the sample-specific character makes the updating of kernel weights a difficult quadratic nonconvex problem. In this paper, starting from a new primal-dual equivalence, the canonical objective on which state-of-the-art methods are based is first decomposed into an ensemble of objectives corresponding to each sample, namely, sample-wise objectives. Then, the associated sample-wise alternating optimization method is conducted, in which the localized kernel weights can be independently obtained by solving their exclusive sample-wise objectives, either linear programming (for l1-norm) or with closed-form solutions (for lp-norm). At test time, the learnt kernel weights for the training data are deployed based on the nearest-neighbor rule. Hence, to guarantee their generality among the test part, we introduce the neighborhood information and incorporate it into the empirical loss when deriving the sample-wise objectives. Extensive experiments on four benchmark machine learning datasets and two real-world computer vision datasets demonstrate the effectiveness and efficiency of the proposed algorithm.

  20. Per-Sample Multiple Kernel Approach for Visual Concept Learning

    Directory of Open Access Journals (Sweden)

    Ling-Yu Duan

    2010-01-01

    Full Text Available Learning visual concepts from images is an important yet challenging problem in computer vision and multimedia research areas. Multiple kernel learning (MKL) methods have shown great advantages in visual concept learning. As a visual concept often exhibits great appearance variance, a canonical MKL approach may not generate satisfactory results when a uniform kernel combination is applied over the input space. In this paper, we propose a per-sample multiple kernel learning (PS-MKL) approach to take into account intraclass diversity for improving discrimination. PS-MKL determines sample-wise kernel weights according to kernel functions and training samples. Kernel weights as well as kernel-based classifiers are jointly learned. For efficient learning, PS-MKL employs a sample selection strategy. Extensive experiments are carried out over three benchmarking datasets of different characteristics including Caltech101, WikipediaMM, and Pascal VOC'07. PS-MKL has achieved encouraging performance, comparable to the state of the art, and has outperformed a canonical MKL.

  1. Inverse Integral Kernel for Diffusion in a Harmonic Potential

    Science.gov (United States)

    Kosugi, Taichi

    2014-05-01

    The inverse integral kernel for the diffusion of an overdamped Brownian particle in a harmonic potential is derived in the present study. It is numerically demonstrated that a sufficiently large number of polynomials is needed in the calculation of the inverse integral kernel for the accurate reproduction of a past probability distribution function. The derived inverse integral kernel can be used around each of the minima of a generic potential, provided that the lifetimes of the population in the neighboring higher wells are much longer than the negative time lapse.
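
    For orientation, the forward counterpart of the inverse kernel discussed above is the familiar Ornstein-Uhlenbeck transition density, sketched below; the relaxation time and equilibrium variance are assumed parameters, and the paper's inverse problem amounts to undoing this smoothing, which is why many polynomial terms are required.

    import numpy as np

    def ou_forward_kernel(x, x0, t, tau=1.0, var_eq=1.0):
        # transition density p(x, t | x0) of an overdamped particle in a harmonic potential:
        # Gaussian with mean x0*exp(-t/tau) and variance var_eq*(1 - exp(-2t/tau))
        mean = x0 * np.exp(-t / tau)
        var = var_eq * (1.0 - np.exp(-2.0 * t / tau))
        return np.exp(-(x - mean) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

    def propagate(p0, x_grid, t, tau=1.0, var_eq=1.0):
        # push a distribution p0 defined on x_grid forward by time t with the integral kernel
        dx = x_grid[1] - x_grid[0]
        K = ou_forward_kernel(x_grid[:, None], x_grid[None, :], t, tau, var_eq)
        return K @ p0 * dx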

  2. Non-Rigid Object Tracking by Anisotropic Kernel Mean Shift

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Mean shift, an iterative procedure that shifts each data point to the average of the data points in its neighborhood, has been applied to object tracking. However, the traditional mean shift tracker with an isotropic kernel often loses the object when the object structure changes in video sequences, especially when it varies quickly. This paper proposes a non-rigid object tracker based on anisotropic kernel mean shift, in which the shape, scale, and orientation of the kernels adapt to the changing object structure. The experimental results show that the new tracker is self-adaptive and approximately twice as fast as the traditional tracker, which ensures robust, real-time tracking.
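
    As background for the record above, a minimal isotropic Gaussian-kernel mean shift iteration (the procedure the anisotropic tracker generalizes) might look as follows; the bandwidth, iteration count and tolerance are illustrative choices.

    import numpy as np

    def mean_shift(points, bandwidth=1.0, n_iter=100, tol=1e-5):
        # shift every point to the kernel-weighted average of all points until convergence
        modes = points.astype(float).copy()
        for _ in range(n_iter):
            d2 = ((modes[:, None, :] - points[None, :, :]) ** 2).sum(-1)
            w = np.exp(-d2 / (2.0 * bandwidth ** 2))
            new = (w[:, :, None] * points[None, :, :]).sum(axis=1) / w.sum(axis=1, keepdims=True)
            done = np.abs(new - modes).max() < tol
            modes = new
            if done:
                break
        return modes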

  3. Bergman kernel function on Hua construction of the fourth type

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    This paper introduces the Hua construction and presents the holomorphic automorphism group of the Hua construction of the fourth type. Utilizing the Bergman kernel function, under the condition of holomorphic automorphism and the standard complete orthonormal system of the semi-Reinhardt domain, the infinite series form of the Bergman kernel function is derived. By applying the properties of polynomials and Γ functions, various identities for the aforementioned form are developed and the explicit formula of the Bergman kernel function for the Hua construction of the fourth type is obtained, which suggests that many previously reported results are special cases of our findings.

  4. The Bergman kernel function of some Reinhardt domains (Ⅱ)

    Institute of Scientific and Technical Information of China (English)

    龚昇; 郑学安

    2000-01-01

    The boundary behavior of the Bergman kernel function of a kind of Reinhardt domain is studied, and upper and lower bounds for the Bergman kernel function are found at the diagonal points (Z, Z). Let Ω be a Reinhardt domain defined in terms of the standard Euclidean norm, and let K(Z, W) be the Bergman kernel function of Ω. Then there exist two positive constants m and M and a function F such that the corresponding two-sided estimate holds for every Z ∈ Ω, where F is expressed through the defining function of Ω and the constants m and M depend only on Ω. This result extends some previously known results.

  5. Explicit signal to noise ratio in reproducing kernel Hilbert spaces

    DEFF Research Database (Denmark)

    Gomez-Chova, Luis; Nielsen, Allan Aasbjerg; Camps-Valls, Gustavo

    2011-01-01

    This paper introduces a nonlinear feature extraction method based on kernels for remote sensing data analysis. The proposed approach is based on the minimum noise fraction (MNF) transform, which maximizes the signal variance while also minimizing the estimated noise variance. We here propose...... an alternative kernel MNF (KMNF) in which the noise is explicitly estimated in the reproducing kernel Hilbert space. This enables KMNF to deal with non-linear relations between the noise and the signal features jointly. Results show that the proposed KMNF provides the most noise-free features when confronted...

  6. The Bergman Kernels on Generalized Exceptional Hua Domains

    Institute of Scientific and Technical Information of China (English)

    殷慰萍; 赵振刚

    2001-01-01

    Yin Weiping introduced four types of Hua domain, which are built on the four types of Cartan domain, and the Bergman kernels on these four types of Hua domain can be computed in explicit formulas [1]. In this paper, two types of domains defined by (10), (11) (see below) are introduced which are built on the two exceptional Cartan domains. We compute the Bergman kernels explicitly for these two domains. We also study the asymptotic behavior of the Bergman kernel function near boundary points, drawing on Appell's multivariable hypergeometric function.

  7. Flour quality and kernel hardness connection in winter wheat

    Directory of Open Access Journals (Sweden)

    Szabó B. P.

    2016-12-01

    Full Text Available Kernel hardness is controlled by the friabilin protein and depends on the relation between the protein matrix and the starch granules. Friabilin is present in high concentration in soft grain varieties and in low concentration in hard grain varieties. The high-gluten, hard wheat flour generally contains about 12.0–13.0% crude protein under Mid-European conditions. The relationship between wheat protein content and kernel texture is usually positive, and kernel texture influences the power consumption during milling. Hard-textured wheat grains require more grinding energy than soft-textured grains.

  8. Linear and kernel methods for multi- and hypervariate change detection

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Canty, Morton J.

    2010-01-01

    code exists which allows for fast data exploration and experimentation with smaller datasets. Computationally demanding kernelization of test data with training data and kernel image projections have been programmed to run on massively parallel CUDA-enabled graphics processors, when available, giving...... the primal eigenvectors. IDL (Interactive Data Language) implementations of IR-MAD, automatic radiometric normalization and kernel PCA/MAF/MNF transformations have been written which function as transparent and fully integrated extensions of the ENVI remote sensing image analysis environment. Also, Matlab...

  9. FUZZY PRINCIPAL COMPONENT ANALYSIS AND ITS KERNEL BASED MODEL

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Principal Component Analysis (PCA) is one of the most important feature extraction methods, and Kernel Principal Component Analysis (KPCA) is a nonlinear extension of PCA based on kernel methods. In the real world, an input sample may not be fully assigned to one class and may partially belong to other classes. Based on the theory of fuzzy sets, this paper presents Fuzzy Principal Component Analysis (FPCA) and its nonlinear extension model, i.e., Kernel-based Fuzzy Principal Component Analysis (KFPCA). The experimental results indicate that the proposed algorithms have good performance.
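
    The KPCA step underlying the kernel-based model above reduces to an eigendecomposition of the double-centered Gram matrix; a minimal RBF-kernel sketch is given below, with the kernel parameter and component count as illustrative choices (the fuzzy membership weighting of the paper is not included).

    import numpy as np

    def kernel_pca(X, n_components=2, gamma=1.0):
        # RBF Gram matrix
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        K = np.exp(-gamma * d2)
        # double-center the Gram matrix
        n = K.shape[0]
        one = np.full((n, n), 1.0 / n)
        Kc = K - one @ K - K @ one + one @ K @ one
        # leading eigenpairs (numpy returns ascending order)
        vals, vecs = np.linalg.eigh(Kc)
        vals = vals[::-1][:n_components]
        vecs = vecs[:, ::-1][:, :n_components]
        alphas = vecs / np.sqrt(np.maximum(vals, 1e-12))
        # projections of the training points onto the nonlinear principal components
        return Kc @ alphas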

  10. Mercer Kernel Based Fuzzy Clustering Self-Adaptive Algorithm

    Institute of Scientific and Technical Information of China (English)

    李侃; 刘玉树

    2004-01-01

    A novel Mercer kernel based fuzzy clustering self-adaptive algorithm is presented. The Mercer kernel method is introduced into fuzzy c-means clustering. It implicitly maps the input data into a high-dimensional feature space through a nonlinear transformation. In fuzzy c-means and its variants, the number of clusters must first be determined. Here, a self-adaptive algorithm is proposed in which the number of clusters, which is not given in advance, can be obtained automatically by a validity measure function. Finally, experiments are given to show the better performance of the kernel-based self-adaptive fuzzy c-means algorithm.

  11. Composition Formulas of Bessel-Struve Kernel Function

    Directory of Open Access Journals (Sweden)

    K. S. Nisar

    2016-01-01

    Full Text Available The object of this paper is to study and develop the generalized fractional calculus operators involving Appell's function F3(·) due to Marichev-Saigo-Maeda. Here, we establish the generalized fractional calculus formulas involving the Bessel-Struve kernel function Sαλ(z), λ, z ∈ ℂ, to obtain the results in terms of generalized Wright functions. The representations of the Bessel-Struve kernel function in terms of the exponential function and its relation with the Bessel and Struve functions are also discussed. The pathway integral representations of the Bessel-Struve kernel function are also given in this study.

  12. Capturing Option Anomalies with a Variance-Dependent Pricing Kernel

    DEFF Research Database (Denmark)

    Christoffersen, Peter; Heston, Steven; Jacobs, Kris

    2013-01-01

    We develop a GARCH option model with a new pricing kernel allowing for a variance premium. While the pricing kernel is monotonic in the stock return and in variance, its projection onto the stock return is nonmonotonic. A negative variance premium makes it U shaped. We present new semiparametric...... evidence to confirm this U-shaped relationship between the risk-neutral and physical probability densities. The new pricing kernel substantially improves our ability to reconcile the time-series properties of stock returns with the cross-section of option prices. It provides a unified explanation...

  13. Rare variant testing across methods and thresholds using the multi-kernel sequence kernel association test (MK-SKAT).

    Science.gov (United States)

    Urrutia, Eugene; Lee, Seunggeun; Maity, Arnab; Zhao, Ni; Shen, Judong; Li, Yun; Wu, Michael C

    Analysis of rare genetic variants has focused on region-based analysis wherein a subset of the variants within a genomic region is tested for association with a complex trait. Two important practical challenges have emerged. First, it is difficult to choose which test to use. Second, it is unclear which group of variants within a region should be tested. Both depend on the unknown true state of nature. Therefore, we develop the Multi-Kernel SKAT (MK-SKAT) which tests across a range of rare variant tests and groupings. Specifically, we demonstrate that several popular rare variant tests are special cases of the sequence kernel association test which compares pair-wise similarity in trait value to similarity in the rare variant genotypes between subjects as measured through a kernel function. Choosing a particular test is equivalent to choosing a kernel. Similarly, choosing which group of variants to test also reduces to choosing a kernel. Thus, MK-SKAT uses perturbation to test across a range of kernels. Simulations and real data analyses show that our framework controls type I error while maintaining high power across settings: MK-SKAT loses power when compared to the kernel for a particular scenario but has much greater power than poor choices.
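
    The statistic underlying the SKAT family referred to above is a variance-component score statistic, Q = (y - y_hat)' K (y - y_hat), with K built from weighted genotypes; the sketch below computes Q for a weighted linear kernel only, and omits the mixture-of-chi-square p-value calculation as well as the multi-kernel perturbation step of MK-SKAT. Variable names are illustrative.

    import numpy as np

    def skat_statistic(y, y_hat, G, weights):
        # G: n x m rare-variant genotype matrix, weights: per-variant weights
        # Q = resid' K resid with K = G diag(w)^2 G', evaluated without forming K
        resid = np.asarray(y, dtype=float) - np.asarray(y_hat, dtype=float)
        GW = np.asarray(G, dtype=float) * np.asarray(weights, dtype=float)[None, :]
        s = GW.T @ resid
        return float(s @ s)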

  14. Design and Implementation of the Connectionless Network Protocol (CLNP) as Loadable Kernel Modules in Linux Kernel 2.6

    CERN Document Server

    Sugiarto, Bunga; Rizal, Arra'di Nur; Galinium, Maulahikmah; Atmadiputra, Pradana; Rubianto, Melvin; Fahmi, Husni; Sampurno, Tri; Kisworo, Marsudi

    2012-01-01

    In this paper, we present an implementation of CLNP ground-to-ground packet processing for ATN in Linux kernel version 2.6. We present the big picture of CLNP packet processing, the details of the input, routing, and output processing functions, and the implementation of each function based on ISO 8473-1. The functions implemented in this work are PDU header decomposition, header format analysis, header error detection, error reporting, reassembly, source routing, congestion notification, forwarding, composition, segmentation, and transmit-to-device functions. Each function is initially implemented and tested as a separate loadable kernel module. These modules are successfully loaded into Linux kernel 2.6.

  15. Bioconversion of palm kernel meal for aquaculture: Experiences ...

    African Journals Online (AJOL)

    SERVER

    2008-04-17

    Apr 17, 2008 ... countries where so much agro-industry by-products exist such as palm kernel meal, .... as basic ingredients for margarine production, confectionery, animal ..... Sciences, Universiti Sains Malaysia, Penang 11800, Malaysia.

  16. Linear and kernel methods for multivariate change detection

    DEFF Research Database (Denmark)

    Canty, Morton J.; Nielsen, Allan Aasbjerg

    2012-01-01

    ), as well as maximum autocorrelation factor (MAF) and minimum noise fraction (MNF) analyses of IR-MAD images, both linear and kernel-based (nonlinear), may further enhance change signals relative to no-change background. IDL (Interactive Data Language) implementations of IR-MAD, automatic radiometric...... normalization, and kernel PCA/MAF/MNF transformations are presented that function as transparent and fully integrated extensions of the ENVI remote sensing image analysis environment. The train/test approach to kernel PCA is evaluated against a Hebbian learning procedure. Matlab code is also available...... that allows fast data exploration and experimentation with smaller datasets. New, multiresolution versions of IR-MAD that accelerate convergence and that further reduce no-change background noise are introduced. Computationally expensive matrix diagonalization and kernel image projections are programmed...

  17. OSCILLATORY SINGULAR INTEGRALS WITH VARIABLE ROUGH KERNEL, Ⅱ

    Institute of Scientific and Technical Information of China (English)

    Tang Lin; Yang Dachun

    2003-01-01

    Let n≥2. In this paper, the author establishes the L2(Rn)-boundedness of some oscillatory singular integrals with variable rough kernels by means of some estimates on hypergeometric functions and confluent hypergeometric functions.

  18. A Security Kernel Architecture Based Trusted Computing Platform

    Institute of Scientific and Technical Information of China (English)

    CHEN You-lei; SHEN Chang-xiang

    2005-01-01

    A security kernel architecture built on a trusted computing platform is presented, following the principles of trusted computing. According to this architecture, a new security module, the TCB (Trusted Computing Base), is added to the operating system kernel and two operation interface modes are provided for the sake of self-protection. The security kernel is divided into two parts, and the trusted mechanism is separated from the security functionality. The TCB module implements the trusted mechanisms such as measurement and attestation, while the other components of the security kernel provide security functionality based on these mechanisms. This architecture takes full advantage of the functions provided by the trusted platform and clearly defines the security perimeter of the TCB so as to assure self-security from an architectural point of view. We also present a functional description of the TCB and discuss the strengths and limitations in comparison with other related research.

  19. A Thermodynamic Model for Argon Plasma Kernel Formation

    Directory of Open Access Journals (Sweden)

    James Keck

    2010-11-01

    Full Text Available Plasma kernel formation of argon is studied experimentally and theoretically. The experiments have been performed in a constant volume cylindrical vessel located in a shadowgraph system and have been carried out at constant pressure. The energy of the plasma is supplied by an ignition system through two electrodes located in the vessel. The experiments have been done with two different spark energies to study the effect of input energy on kernel growth and its properties. A thermodynamic model employing mass and energy balance was developed to predict the experimental data. The agreement between the experiments and the model prediction is very good. The effects of various parameters such as the initial temperature, the initial radius of the kernel, and the radiation energy loss have been investigated, and it has been concluded that the initial conditions are very important for the formation and expansion of the kernel.

  20. Kernel based eigenvalue-decomposition methods for analysing ham

    DEFF Research Database (Denmark)

    Christiansen, Asger Nyman; Nielsen, Allan Aasbjerg; Møller, Flemming

    2010-01-01

    conditions and finding useful additives to hinder the color from changing rapidly. To be able to prove which methods of storing and additives work, Danisco wants to monitor the development of the color of meat in a slice of ham as a function of time, environment and ingredients. We have chosen to use multi...... methods, such as PCA, MAF or MNF. We therefore investigated the applicability of kernel based versions of these transformations. This meant implementing the kernel based methods and developing new theory, since kernel based MAF and MNF are not described in the literature yet. The traditional methods only...... have two factors that are useful for segmentation and none of them can be used to segment the two types of meat. The kernel based methods have a lot of useful factors and they are able to capture the subtle differences in the images. This is illustrated in Figure 1. You can see a comparison of the most...

  1. Palmprint Recognition by Applying Wavelet-Based Kernel PCA

    Institute of Scientific and Technical Information of China (English)

    Murat Ekinci; Murat Aykut

    2008-01-01

    This paper presents a wavelet-based kernel Principal Component Analysis (PCA) method by integrating the Daubechies wavelet representation of palm images and the kernel PCA method for palmprint recognition. Kernel PCA is a technique for nonlinear dimension reduction of data with an underlying nonlinear spatial structure. The intensity values of the palmprint image are first normalized by using the mean and standard deviation. The palmprint is then transformed into the wavelet domain to decompose palm images, and the lowest resolution subband coefficients are chosen for palm representation. The kernel PCA method is then applied to extract non-linear features from the subband coefficients. Finally, similarity measurement is accomplished by using a weighted Euclidean distance-based nearest neighbor classifier. Experimental results on the PolyU Palmprint Databases demonstrate that the proposed approach achieves highly competitive performance with respect to the published palmprint recognition approaches.

  2. Screening of the kernels of Pentadesma butyracea from various ...

    African Journals Online (AJOL)

    Gwla10

    The plant producing type 1 kernels (with medium length, low width and high thickness) and ... Recent works on the biological activities of P. butyracea showed that the ... collection sites belong to various agroecological zones of Benin. Thus ...

  3. 438 Adaptive Kernel in Meshsize Boosting Algorithm in KDE (Pp ...

    African Journals Online (AJOL)

    FIRST LADY

    2011-01-18

    Jan 18, 2011 ... classifier is boosted by suitably re-weighting the data. This weight ... Methods. Algorithm on Boosting Kernel Density Estimates and Bias Reduction ... Gaussian (since all distributions tend to be normal as n, the sample size,.

  4. Removal of Lead from Aqueous Solution by Palm Kernel Fibre

    African Journals Online (AJOL)

    NJD

    E. Augustine Ofomaja,a* I. Emmanuel Unuabonaha and N. Abiola Oladojab ... The sorption of lead on palm kernel fibre, an agricultural waste product, has been studied. ... present a cheap and cost-effective alternative for lead removal.

  5. Estimates of Oseen kernels in weighted $L^{p}$ spaces

    OpenAIRE

    KRAČMAR, Stanislav; Novotný, Antonín; Pokorný, Milan

    2001-01-01

    We study convolutions with Oseen kernels (weakly singular and singular) in both two- and three-dimensional space. We give a detailed weighted $L^{p}$ theory for $p \in (1,\infty]$ for anisotropic weights.

  6. INTEGRAL COLLISION KERNEL FOR THE GROWTH OF AEROSOL PARTICLES

    Institute of Scientific and Technical Information of China (English)

    Hongyong Xie

    2005-01-01

    The integral collision kernel is elucidated using experimental results for titania, silica and alumina nanoparticles synthesized by the FCVD process, and for titania submicron particles synthesized in a tube furnace reactor. The integral collision kernel was obtained from a particle number balance equation by integrating the collision rates from the kinetic theory of dilute gases for the free-molecule regime, from the Smoluchowski theory for the continuum regime, and by a semi-empirical interpolation for the transition regime between the two limiting regimes. Comparisons have been made on particle size and the integral collision kernel, showing that the predicted integral collision kernel agrees well with the experimental results in the Knudsen number range from about 1.5 to 20.

  7. Analysis of Kernel Approach in Fuzzy-Based Image Classifications

    Directory of Open Access Journals (Sweden)

    Mragank Singhal

    2013-03-01

    Full Text Available This paper presents a framework for the kernel approach in the field of fuzzy-based image classification in remote sensing. The goal of image classification is to separate images according to their visual content into two or more disjoint classes. Fuzzy logic is a relatively young theory. A major advantage of this theory is that it allows the natural description, in linguistic terms, of problems that should be solved, rather than in terms of relationships between precise numerical values. This paper describes how remote sensing data with uncertainty are handled with fuzzy-based classification using the kernel approach for land use/land cover map generation. The introduction of fuzzification using the kernel approach provides the basis for the development of more robust approaches to the remote sensing classification problem. The kernel explicitly defines a similarity measure between two samples and implicitly represents the mapping of the input space to the feature space.

  8. Digital Communications Channel Equalisation Using the Kernel Adaline.

    OpenAIRE

    Mitchinson, B.; Harrison, R F

    2000-01-01

    For transmission of digital signals over a linear channel with additive white Gaussian noise, it has been shown that the optimal symbol decision equaliser is non-linear. The Kernel Adaline algorithm, a non-linear generalisation of Widrow and Hoff's Adaline, has been shown to be capable of learning arbitrary non-linear decision boundaries, whilst retaining the desirable convergence properties of the linear Adaline. This work investigates the use of the Kernel Adaline as equaliser for such ch...
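
    In the same spirit as the record above, though not its exact algorithm, a growing-dictionary kernel-LMS-style equaliser can be sketched in a few lines; the tap length, learning rate and kernel width are assumed values, and for a binary alphabet the symbol decision would simply be the sign of the predicted output.

    import numpy as np

    def rbf(a, b, gamma=1.0):
        return np.exp(-gamma * np.sum((a - b) ** 2))

    def kernel_lms_equalizer(received, transmitted, taps=5, eta=0.2, gamma=1.0):
        # online training: build tap-delay vectors from the received signal and grow
        # a kernel expansion whose output tracks the known transmitted symbols
        centers, coeffs = [], []
        for n in range(taps - 1, len(received)):
            x = np.asarray(received[n - taps + 1:n + 1], dtype=float)
            y_pred = sum(a * rbf(c, x, gamma) for a, c in zip(coeffs, centers))
            err = transmitted[n] - y_pred
            centers.append(x)
            coeffs.append(eta * err)          # LMS-style stochastic gradient step
        return centers, coeffs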

  9. Music Emotion Detection Using Hierarchical Sparse Kernel Machines

    OpenAIRE

    Yu-Hao Chin; Chang-Hong Lin; Ernestasia Siahaan; Jia-Ching Wang

    2014-01-01

    For music emotion detection, this paper presents a music emotion verification system based on hierarchical sparse kernel machines. With the proposed system, we intend to verify whether or not a music clip conveys the emotion of happiness. There are two levels in the hierarchical sparse kernel machines. In the first level, a set of acoustical features is extracted, and principal component analysis (PCA) is implemented to reduce the dimension. The acoustical features are utilized to generate the first-l...

  10. Reconstructing Concept Lattices Usingnth-Order Context Kernels

    Institute of Scientific and Technical Information of China (English)

    SHEN Xiajiong; XU Bin; LIU Zongtian

    2006-01-01

    Unlike traditional algorithms for concept lattice construction, a method based on nth-order context kernels is suggested in this paper. The context kernels support generating small lattices for sub-contexts split from a given context. The final concept lattice is reconstructed by combining these small lattices. All relevant algorithms are implemented in a system, IsoFCA. Tests show that the method yields concept lattices with lower time complexity than the Godin algorithm in practical cases.

  11. Nonlinear stochastic system identification of skin using volterra kernels.

    Science.gov (United States)

    Chen, Yi; Hunter, Ian W

    2013-04-01

    Volterra kernel stochastic system identification is a technique that can be used to capture and model nonlinear dynamics in biological systems, including the nonlinear properties of skin during indentation. A high bandwidth and high stroke Lorentz force linear actuator system was developed and used to test the mechanical properties of bulk skin and underlying tissue in vivo using a non-white input force and measuring an output position. These short tests (5 s) were conducted in an indentation configuration normal to the skin surface and in an extension configuration tangent to the skin surface. Volterra kernel solution methods were used including a fast least squares procedure and an orthogonalization solution method. The practical modifications, such as frequency domain filtering, necessary for working with low-pass filtered inputs are also described. A simple linear stochastic system identification technique had a variance accounted for (VAF) of less than 75%. Representations using the first and second Volterra kernels had a much higher VAF (90-97%) as well as a lower Akaike information criterion (AICc), indicating that the Volterra kernel models were more efficient. The experimental second Volterra kernel matches well with results from a dynamic-parameter nonlinearity model with fixed mass as a function of depth as well as stiffness and damping that increase with depth into the skin. A study with 16 subjects showed that the kernel peak values have mean coefficients of variation (CV) that ranged from 3 to 8% and showed that the kernel principal components were correlated with location on the body, subject mass, body mass index (BMI), and gender. These fast and robust methods for Volterra kernel stochastic system identification can be applied to the characterization of biological tissues, diagnosis of skin diseases, and determination of consumer product efficacy.
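
    Stripped of the frequency-domain filtering and orthogonalization details, the least-squares identification of a finite-memory, second-order Volterra model reduces to an ordinary linear regression on lagged inputs and their pairwise products; a minimal sketch, with an illustrative memory length, is given below.

    import numpy as np
    from itertools import combinations_with_replacement

    def volterra2_identify(u, y, memory=8):
        # regress y[n] on a constant, the lags u[n], ..., u[n-M+1], and their pairwise
        # products, giving h0, the first-order kernel h1 and the (upper-triangular)
        # second-order kernel h2
        M = memory
        rows, targets = [], []
        for n in range(M - 1, len(u)):
            past = np.asarray(u[n - M + 1:n + 1], dtype=float)[::-1]
            quad = [past[i] * past[j]
                    for i, j in combinations_with_replacement(range(M), 2)]
            rows.append(np.concatenate(([1.0], past, quad)))
            targets.append(y[n])
        theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
        return theta[0], theta[1:1 + M], theta[1 + M:]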

  12. Bergman kernel and metric on non-smooth pseudoconvex domains

    Institute of Scientific and Technical Information of China (English)

    陈伯勇; 张锦豪

    1999-01-01

    A stability theorem of the Bergman kernel and completeness of the Bergman metric have been proved on a type of non-smooth pseudoconvex domains defined in the following way: D={z∈U|r(z)<0} where U is a neighbourhood of (?) and r is a continuous plurisubharmonic function on U. A continuity principle of the Bergman Kernel for pseudoconvex domains with Lipschitz boundary is also given, which answers a problem of Boas.

  13. A compact kernel for the calculus of inductive constructions

    Indian Academy of Sciences (India)

    A Asperti; W Ricciotti; C Sacerdoti Coen; E Tassi

    2009-02-01

    The paper describes the new kernel for the Calculus of Inductive Constructions (CIC) implemented inside the Matita Interactive Theorem Prover. The design of the new kernel has been completely revisited since the first release, resulting in a remarkably compact implementation of about 2300 lines of OCaml code. The work is meant for people interested in implementation aspects of Interactive Provers, and is not self contained. In particular, it requires good acquaintance with Type Theory and functional programming languages.

  14. Kernel approximation for solving few-body integral equations

    Science.gov (United States)

    Christie, I.; Eyre, D.

    1986-06-01

    This paper investigates an approximate method for solving integral equations that arise in few-body problems. The method is to replace the kernel by a degenerate kernel defined on a finite dimensional subspace of piecewise Lagrange polynomials. Numerical accuracy of the method is tested by solving the two-body Lippmann-Schwinger equation with non-separable potentials, and the three-body Amado-Lovelace equation with separable two-body potentials.
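
    The reduction of such an integral equation to a finite linear system can be illustrated with a simple quadrature (Nystrom) discretization of a one-dimensional Fredholm equation of the second kind; this is only a stand-in for the piecewise Lagrange degenerate-kernel scheme of the paper, and the toy kernel and grid size are assumed values.

    import numpy as np

    def solve_fredholm2(kernel, f, a=0.0, b=1.0, n=64):
        # solve u(x) = f(x) + integral_a^b K(x, y) u(y) dy on Gauss-Legendre nodes
        x, w = np.polynomial.legendre.leggauss(n)
        x = 0.5 * (b - a) * x + 0.5 * (b + a)
        w = 0.5 * (b - a) * w
        K = kernel(x[:, None], x[None, :])
        A = np.eye(n) - K * w[None, :]
        return x, np.linalg.solve(A, f(x))

    x, u = solve_fredholm2(lambda s, t: 0.5 * np.exp(-np.abs(s - t)),
                           lambda s: np.ones_like(s))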

  15. GPU Acceleration of Image Convolution using Spatially-varying Kernel

    OpenAIRE

    Hartung, Steven; Shukla, Hemant; Miller, J. Patrick; Pennypacker, Carlton

    2012-01-01

    Image subtraction in astronomy is a tool for transient object discovery such as asteroids, extra-solar planets and supernovae. To match point spread functions (PSFs) between images of the same field taken at different times a convolution technique is used. Particularly suitable for large-scale images is a computationally intensive spatially-varying kernel. The underlying algorithm is inherently massively parallel due to unique kernel generation at every pixel location. The spatially-varying k...

  16. X-Y separable pyramid steerable scalable kernels

    OpenAIRE

    Shy, Douglas; Perona, Pietro

    1994-01-01

    A new method for generating X-Y separable, steerable, scalable approximations of filter kernels is proposed which is based on a generalization of the singular value decomposition (SVD) to three dimensions. This “pseudo-SVD” improves upon a previous scheme due to Perona (1992) in that it reduces convolution time and storage requirements. An adaptation of the pseudo-SVD is proposed to generate steerable and scalable kernels which are suitable for use with a Laplacian pyramid. The properties of ...

  17. ON APPROXIMATION BY REPRODUCING KERNEL SPACES IN WEIGHTED Lp SPACES

    Institute of Scientific and Technical Information of China (English)

    Baohuai SHENG

    2007-01-01

    In this paper, we investigate the order of approximation by reproducing kernel spaces on (-1, 1) in weighted Lp spaces. We first restate the translation network from the point of view of reproducing kernel spaces and then construct a sequence of approximating operators with the help of Jacobi orthogonal polynomials, with which we establish a kind of Jackson inequality to describe the error estimate. Finally, the results are used to discuss an approximation problem arising from learning theory.

  18. Genetic and physiological analysis of iron biofortification in maize kernels.

    Directory of Open Access Journals (Sweden)

    Mercy G Lung'aho

    Full Text Available BACKGROUND: Maize is a major cereal crop widely consumed in developing countries, which have a high prevalence of iron (Fe) deficiency anemia. The major cause of Fe deficiency in these countries is inadequate intake of bioavailable Fe, where poverty is a major factor. Therefore, biofortification of maize by increasing Fe concentration and/or bioavailability has great potential to alleviate this deficiency. Maize is also a model system for genomic research and thus allows the opportunity for gene discovery. Here we describe an integrated genetic and physiological analysis of Fe nutrition in maize kernels, to identify loci that influence grain Fe concentration and bioavailability. METHODOLOGY: Quantitative trait locus (QTL) analysis was used to dissect grain Fe concentration (FeGC) and Fe bioavailability (FeGB) from the Intermated B73 × Mo17 (IBM) recombinant inbred (RI) population. FeGC was determined by inductively coupled argon plasma emission spectroscopy (ICP). FeGB was determined by an in vitro digestion/Caco-2 cell line bioassay. CONCLUSIONS: Three modest QTL for FeGC were detected, in spite of high heritability. This suggests that FeGC is controlled by many small QTL, which may make it a challenging trait to improve by marker-assisted breeding. Ten QTL for FeGB were identified and explained 54% of the variance observed in samples from a single year/location. Three of the largest FeGB QTL were isolated in sister-derived lines and their effect was observed in three subsequent seasons in New York. Single-season evaluations were also made at six other sites around North America, suggesting that the enhancement of FeGB was not specific to our farm site. FeGB was not correlated with FeGC or phytic acid, suggesting that novel regulators of Fe nutrition are responsible for the differences observed. Our results indicate that iron biofortification of maize grain is achievable using specialized phenotyping tools and conventional plant breeding techniques.

  19. The Dynamic Kernel Scheduler-Part 1

    Science.gov (United States)

    Adelmann, Andreas; Locans, Uldis; Suter, Andreas

    2016-10-01

    Emerging processor architectures such as GPUs and Intel MICs provide a huge performance potential for high performance computing. However developing software that uses these hardware accelerators introduces additional challenges for the developer. These challenges may include exposing increased parallelism, handling different hardware designs, and using multiple development frameworks in order to utilise devices from different vendors. The Dynamic Kernel Scheduler (DKS) is being developed in order to provide a software layer between the host application and different hardware accelerators. DKS handles the communication between the host and the device, schedules task execution, and provides a library of built-in algorithms. Algorithms available in the DKS library will be written in CUDA, OpenCL, and OpenMP. Depending on the available hardware, the DKS can select the appropriate implementation of the algorithm. The first DKS version was created using CUDA for the Nvidia GPUs and OpenMP for Intel MIC. DKS was further integrated into OPAL (Object-oriented Parallel Accelerator Library) in order to speed up a parallel FFT based Poisson solver and Monte Carlo simulations for particle-matter interaction used for proton therapy degrader modelling. DKS was also used together with Minuit2 for parameter fitting, where χ2 and max-log-likelihood functions were offloaded to the hardware accelerator. The concepts of the DKS, first results, and plans for the future will be shown in this paper.

  20. Characterization of Flour from Avocado Seed Kernel

    Directory of Open Access Journals (Sweden)

    Macey A. Mahawan

    2015-11-01

    Full Text Available The study focused on the characterization of flour from avocado seed kernel. Based on the findings of the study, the percentages of crude protein, crude fiber, crude fat, total carbohydrates, ash and moisture were 7.75, 4.91, 0.71, 74.65, 2.83 and 14.05, respectively. The falling number was 495 seconds, while gluten was below the detection limit of the method used. Moreover, the sensory evaluation in terms of color, texture and aroma was moderate like for the 0% proportion of avocado seed flour and slight like for the 25% and 50% proportions. On the other hand, the taste of the biscuits prepared with 0% avocado seed flour was moderate like, with 25% avocado seed flour slight like, and with 50% neither liked nor disliked. The overall acceptability was moderate like for the 0% proportion of avocado seed flour and slight like for the 25% and 50% proportions. Furthermore, the computed p values for the comparison of the level of acceptability in terms of color, texture, aroma, taste and overall acceptability of biscuits using 0%, 25%, and 50% avocado seed flour were lower than 0.05. Thus the null hypothesis is rejected.

  1. The spectrum of kernel random matrices

    CERN Document Server

    Karoui, Noureddine El

    2010-01-01

    We place ourselves in the setting of high-dimensional statistical inference where the number of variables $p$ in a dataset of interest is of the same order of magnitude as the number of observations $n$. We consider the spectrum of certain kernel random matrices, in particular $n\times n$ matrices whose $(i,j)$th entry is $f(X_i'X_j/p)$ or $f(\Vert X_i-X_j\Vert^2/p)$ where $p$ is the dimension of the data, and $X_i$ are independent data vectors. Here $f$ is assumed to be a locally smooth function. The study is motivated by questions arising in statistics and computer science where these matrices are used to perform, among other things, nonlinear versions of principal component analysis. Surprisingly, we show that in high-dimensions, and for the models we analyze, the problem becomes essentially linear--which is at odds with heuristics sometimes used to justify the usage of these methods. The analysis also highlights certain peculiarities of models widely studied in random matrix theory and raises some questio...

  2. Local Kernel for Brains Classification in Schizophrenia

    Science.gov (United States)

    Castellani, U.; Rossato, E.; Murino, V.; Bellani, M.; Rambaldelli, G.; Tansella, M.; Brambilla, P.

    In this paper a novel framework for brain classification is proposed in the context of mental health research. A learning-by-example method is introduced by combining local measurements with a nonlinear Support Vector Machine. Instead of considering a voxel-by-voxel comparison between patients and controls, we focus on landmark points which are characterized by local region descriptors, namely the Scale Invariant Feature Transform (SIFT). Then, matching is obtained by introducing the local kernel, for which the samples are represented by unordered sets of features. Moreover, a new weighting approach is proposed to take into account the discriminative relevance of the detected groups of features. Experiments have been performed including a set of 54 patients with schizophrenia and 54 normal controls on which regions of interest (ROIs) have been manually traced by experts. Preliminary results on the Dorso-lateral PreFrontal Cortex (DLPFC) region are promising, since a successful classification rate of up to 75% has been obtained with this technique, and the performance has improved up to 85% when the subjects have been stratified by sex.
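
    The record does not spell out the exact form of the local kernel, but a common way to compare unordered sets of local descriptors is a normalized sum-match kernel, averaging a base kernel over all descriptor pairs. The sketch below uses an RBF base kernel and random vectors standing in for SIFT descriptors; it illustrates the general idea, not the authors' implementation.

```python
import numpy as np

def sum_match_kernel(A, B, gamma=0.1):
    """Similarity between two unordered sets of local descriptors (rows of A
    and B), as the average of an RBF kernel over all descriptor pairs.  This
    is one common form of 'local kernel'; the record does not specify the
    exact variant used."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)   # pairwise squared distances
    return np.exp(-gamma * d2).mean()

rng = np.random.default_rng(1)
patient = rng.standard_normal((40, 128))    # stand-ins for SIFT descriptors of one ROI
control = rng.standard_normal((55, 128))
print(sum_match_kernel(patient, control))
```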

  3. An Ensemble Approach to Building Mercer Kernels with Prior Information

    Science.gov (United States)

    Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd

    2005-01-01

    This paper presents a new methodology for automatic knowledge-driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite-dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn kernel functions directly from data, rather than using pre-defined kernels. These data-adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing for physical information to be encoded in the model. Specifically, we demonstrate the use of the algorithm in situations with extremely small samples of data. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS) and demonstrate the method's superior performance against standard methods. The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code.

  4. Kernel-Based Nonlinear Discriminant Analysis for Face Recognition

    Institute of Scientific and Technical Information of China (English)

    LIU QingShan (刘青山); HUANG Rui (黄锐); LU HanQing (卢汉清); MA SongDe (马颂德)

    2003-01-01

    Linear subspace analysis methods have been successfully applied to extract features for face recognition. But they are inadequate to represent the complex and nonlinear variations of real face images, such as illumination, facial expression and pose variations, because of their linear properties. In this paper, a nonlinear subspace analysis method, Kernel-based Nonlinear Discriminant Analysis (KNDA), is presented for face recognition, which combines the nonlinear kernel trick with the linear subspace analysis method - Fisher Linear Discriminant Analysis (FLDA). First, the kernel trick is used to project the input data into an implicit feature space, then FLDA is performed in this feature space. Thus nonlinear discriminant features of the input data are yielded. In addition, in order to reduce the computational complexity, a geometry-based feature vectors selection scheme is adopted. Another similar nonlinear subspace analysis is Kernel-based Principal Component Analysis (KPCA), which combines the kernel trick with linear Principal Component Analysis (PCA). Experiments are performed with the polynomial kernel, and KNDA is compared with KPCA and FLDA. Extensive experimental results show that KNDA can give a higher recognition rate than KPCA and FLDA.

  5. Nonlinear Forecasting With Many Predictors Using Kernel Ridge Regression

    DEFF Research Database (Denmark)

    Exterkate, Peter; Groenen, Patrick J.F.; Heij, Christiaan

    This paper puts forward kernel ridge regression as an approach for forecasting with many predictors that are related nonlinearly to the target variable. In kernel ridge regression, the observed predictor variables are mapped nonlinearly into a high-dimensional space, where estimation of the predictive regression model is based on a shrinkage estimator to avoid overfitting. We extend the kernel ridge regression methodology to enable its use for economic time-series forecasting, by including lags of the dependent variable or other individual variables as predictors, as typically desired in macroeconomic and financial applications. Monte Carlo simulations as well as an empirical application to various key measures of real economic activity confirm that kernel ridge regression can produce more accurate forecasts than traditional linear and nonlinear methods for dealing with many predictors based...
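
    A minimal sketch of the kernel ridge estimator itself, assuming an RBF kernel and synthetic data (the economic application and the lag construction are not reproduced here): the dual coefficients solve (K + λI)α = y and forecasts are formed as k(x)'α.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
X_train = rng.standard_normal((200, 10))             # many (possibly lagged) predictors
y_train = np.sin(X_train[:, 0]) + 0.1 * rng.standard_normal(200)
X_test = rng.standard_normal((20, 10))

lam = 1.0                                             # ridge (shrinkage) parameter
K = rbf_kernel(X_train, X_train)
alpha = np.linalg.solve(K + lam * np.eye(len(K)), y_train)   # dual coefficients
y_hat = rbf_kernel(X_test, X_train) @ alpha                  # out-of-sample forecasts
print(y_hat[:5])
```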

  6. Searching for efficient Markov chain Monte Carlo proposal kernels.

    Science.gov (United States)

    Yang, Ziheng; Rodríguez, Carlos E

    2013-11-26

    Markov chain Monte Carlo (MCMC) or the Metropolis-Hastings algorithm is a simulation algorithm that has made modern Bayesian statistical inference possible. Nevertheless, the efficiency of different Metropolis-Hastings proposal kernels has rarely been studied except for the Gaussian proposal. Here we propose a unique class of Bactrian kernels, which avoid proposing values that are very close to the current value, and compare their efficiency with a number of proposals for simulating different target distributions, with efficiency measured by the asymptotic variance of a parameter estimate. The uniform kernel is found to be more efficient than the Gaussian kernel, whereas the Bactrian kernel is even better. When optimal scales are used for both, the Bactrian kernel is at least 50% more efficient than the Gaussian. Implementation in a Bayesian program for molecular clock dating confirms the general applicability of our results to generic MCMC algorithms. Our results refute a previous claim that all proposals had nearly identical performance and will prompt further research into efficient MCMC proposals.
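
    One way to see the Bactrian idea is as a symmetric two-humped proposal, a mixture of two normals centred away from the current value. The sketch below (a plain Metropolis sampler on a standard-normal target, with a mixture parametrization assumed for illustration rather than taken from the paper) shows how such a kernel slots into a generic MCMC loop.

```python
import numpy as np

rng = np.random.default_rng(0)

def bactrian_step(scale, m=0.95):
    """Symmetric 'Bactrian' increment: a two-humped mixture of normals centred
    at +/- m*scale, which avoids proposing values very close to the current
    one (m and scale are tuning constants, assumed here)."""
    sign = rng.choice([-1.0, 1.0])
    return sign * m * scale + rng.normal(0.0, scale * np.sqrt(1.0 - m * m))

def metropolis(log_target, x0, scale, n_iter=50_000):
    x, chain = x0, []
    for _ in range(n_iter):
        y = x + bactrian_step(scale)            # symmetric, so no Hastings correction
        if np.log(rng.random()) < log_target(y) - log_target(x):
            x = y
        chain.append(x)
    return np.array(chain)

# Standard-normal target: the sample mean/variance should be close to 0 and 1.
chain = metropolis(lambda z: -0.5 * z * z, x0=0.0, scale=2.0)
print(chain.mean(), chain.var())
```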

  7. Multiple Crop Classification Using Various Support Vector Machine Kernel Functions

    Directory of Open Access Journals (Sweden)

    Rupali R. Surase

    2015-01-01

    Full Text Available This study was carried out with techniques of Remote Sensing (RS) based crop discrimination and area estimation with a single-date approach. Several kernel functions are employed and compared in this study for mapping the input space, including the linear, sigmoid, polynomial and Radial Basis Function (RBF) kernels. The present study highlights the advantages of Remote Sensing (RS) and Geographic Information System (GIS) techniques for analyzing the land use/land cover mapping for the Aurangabad region of Maharashtra, India. Single-date, cloud-free IRS-Resourcesat-1 LISS-III data was used for further classification on a training set for supervised classification. ENVI 4.4 is used for image analysis and interpretation. The experimental tests show that the system achieved 94.82% accuracy using SVM with the polynomial kernel function, compared with the Radial Basis Function, sigmoid and linear kernels. The Overall Accuracy (OA) improved by up to 5.17% in comparison to the sigmoid kernel function, and up to 3.45% in comparison to a 3rd-degree polynomial kernel function and RBF with 200 as a penalty parameter.
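
    The same kind of kernel comparison is easy to reproduce on synthetic data; the scikit-learn sketch below (illustrative only, not the study's data or parameter settings) fits an SVM with the four kernel families mentioned above and reports the overall accuracy for each.

```python
# Sketch (not the study's code): comparing four SVM kernels on a small
# synthetic multi-class problem with scikit-learn.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=8, n_informative=6,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for kernel in ("linear", "poly", "rbf", "sigmoid"):
    clf = SVC(kernel=kernel, degree=3, C=200, gamma="scale").fit(X_tr, y_tr)
    print(f"{kernel:8s} overall accuracy: {clf.score(X_te, y_te):.3f}")
```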

  8. Sparse kernel learning with LASSO and Bayesian inference algorithm.

    Science.gov (United States)

    Gao, Junbin; Kwan, Paul W; Shi, Daming

    2010-03-01

    Kernelized LASSO (Least Absolute Selection and Shrinkage Operator) has been investigated in two separate recent papers [Gao, J., Antolovich, M., & Kwan, P. H. (2008). L1 LASSO and its Bayesian inference. In W. Wobcke, & M. Zhang (Eds.), Lecture notes in computer science: Vol. 5360 (pp. 318-324); Wang, G., Yeung, D. Y., & Lochovsky, F. (2007). The kernel path in kernelized LASSO. In International conference on artificial intelligence and statistics (pp. 580-587). San Juan, Puerto Rico: MIT Press]. This paper is concerned with learning kernels under the LASSO formulation via adopting a generative Bayesian learning and inference approach. A new robust learning algorithm is proposed which produces a sparse kernel model with the capability of learning regularized parameters and kernel hyperparameters. A comparison with state-of-the-art methods for constructing sparse regression models such as the relevance vector machine (RVM) and the local regularization assisted orthogonal least squares regression (LROLS) is given. The new algorithm is also demonstrated to possess considerable computational advantages. Copyright 2009 Elsevier Ltd. All rights reserved.

  9. Silage quality of Piata palisadegrass with palm kernel cake

    Directory of Open Access Journals (Sweden)

    Rângelis de Sousa Figueredo

    2014-02-01

    Full Text Available This study was developed to evaluate the silage quality of Piata palisadegrass with palm kernel cake (Elaeis guineensis Jacq.). The experiment was carried out at the Federal Institute of Goiás State, Campus Rio Verde, in a completely randomized design with four treatments and five repetitions. The treatments consisted of Piata palisadegrass ensiled with palm kernel cake at levels of 0, 5, 10 and 15% on a natural basis of the Piata palisadegrass. The material was minced, mixed, packed into experimental silos and opened after 60 days of fermentation. The palm kernel cake is an agro-industrial by-product that can enrich the silage, increasing its nutritional value. The addition of palm kernel cake improved the fermentative and bromatological parameters of the silage, increasing the dry matter, crude protein, ether extract, and total digestible nutrients, with a reduction in the fiber fraction, values of pH, ammonia nitrogen, and titratable acidity. The use of palm kernel cake in Piata palisadegrass silage increases the fractions A, B1 and B2 and the in vitro dry matter digestibility, and decreases the fractions B3 and C. To achieve the best quality silage, the addition of 15% palm kernel cake is recommended.

  10. Kernels by Monochromatic Paths and Color-Perfect Digraphs

    Directory of Open Access Journals (Sweden)

    Galeana-Sánchez Hortensia

    2016-05-01

    Full Text Available For a digraph D, V(D) and A(D) will denote the sets of vertices and arcs of D, respectively. In an arc-colored digraph, a subset N of V(D) is said to be a kernel by monochromatic paths (mp-kernel) if (1) for any two different vertices x, y in N there is no monochromatic directed path between them (N is mp-independent) and (2) for each vertex u in V(D) \ N there exists v ∈ N such that there is a monochromatic directed path from u to v in D (N is mp-absorbent). If every arc in D has a different color, then a kernel by monochromatic paths is said to be a kernel. Two associated digraphs to an arc-colored digraph are the closure and the color-class digraph CC(D). In this paper we will approach an mp-kernel via the closure of induced subdigraphs of D which have the property of having few colors in their arcs with respect to D. We will introduce the concept of color-perfect digraph and we are going to prove that if D is an arc-colored digraph such that D is a quasi color-perfect digraph and CC(D) is not strong, then D has an mp-kernel. Previous interesting results are generalized, as for example Richardson's Theorem.

  11. Kernel CMAC: an Efficient Neural Network for Classification and Regression

    Directory of Open Access Journals (Sweden)

    Gábor Horváth

    2006-01-01

    Full Text Available Kernel methods in learning machines have been developed in the last decade as new techniques for solving classification and regression problems. Kernel methods have many advantageous properties regarding their learning and generalization capabilities, but for getting the solution usually the computationally complex quadratic programming is required. To reduce computational complexity a lot of different versions have been developed. These versions apply different kernel functions, utilize the training data in different ways or apply different criterion functions. This paper deals with a special kernel network, which is based on the CMAC neural network. Cerebellar Model Articulation Controller (CMAC) has some attractive features: fast learning capability and the possibility of efficient digital hardware implementation. Besides these attractive features the modelling and generalization capabilities of a CMAC may be rather limited. The paper shows that kernel CMAC – an extended version of the classical CMAC network implemented in a kernel form – improves the properties of the classical version significantly. Both the modelling and the generalization capabilities are improved while the limited computational complexity is maintained. The paper shows the architecture of this network and presents the relation between the classical CMAC and the kernel networks. The operation of the proposed architecture is illustrated using some common benchmark problems.

  12. Out-of-Sample Extensions for Non-Parametric Kernel Methods.

    Science.gov (United States)

    Pan, Binbin; Chen, Wen-Sheng; Chen, Bo; Xu, Chen; Lai, Jianhuang

    2017-02-01

    Choosing suitable kernels plays an important role in the performance of kernel methods. Recently, a number of studies were devoted to developing nonparametric kernels. Without assuming any parametric form of the target kernel, nonparametric kernel learning offers a flexible scheme to utilize the information of the data, which may potentially characterize the data similarity better. The kernel methods using nonparametric kernels are referred to as nonparametric kernel methods. However, many nonparametric kernel methods are restricted to transductive learning, where the prediction function is defined only over the data points given beforehand. They have no straightforward extension for the out-of-sample data points, and thus cannot be applied to inductive learning. In this paper, we show how to make the nonparametric kernel methods applicable to inductive learning. The key problem of out-of-sample extension is how to extend the nonparametric kernel matrix to the corresponding kernel function. A regression approach in the hyper reproducing kernel Hilbert space is proposed to solve this problem. Empirical results indicate that the out-of-sample performance is comparable to the in-sample performance in most cases. Experiments on face recognition demonstrate the superiority of our nonparametric kernel method over the state-of-the-art parametric kernel methods.

  13. SVM-based CAD system for early detection of the Alzheimer's disease using kernel PCA and LDA.

    Science.gov (United States)

    López, M M; Ramírez, J; Górriz, J M; Alvarez, I; Salas-Gonzalez, D; Segovia, F; Chaves, R

    2009-10-30

    Single-photon emission tomography (SPECT) imaging has been widely used to guide clinicians in the early Alzheimer's disease (AD) diagnosis challenge. However, AD detection still relies on subjective steps carried out by clinicians, which introduce some subjectivity into the final diagnosis. In this work, kernel principal component analysis (PCA) and linear discriminant analysis (LDA) are applied to functional images as dimension reduction and feature extraction techniques, which are subsequently used to train a supervised support vector machine (SVM) classifier. The complete methodology provides a kernel-based computer-aided diagnosis (CAD) system capable of distinguishing AD from normal subjects with a 92.31% accuracy rate for a SPECT database consisting of 91 patients. The proposed methodology outperforms voxels-as-features (VAF), which was considered as the baseline approach and yields 80.22% for the same SPECT database.
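
    The processing chain itself (kernel PCA for dimension reduction, LDA for feature extraction, then an SVM) is straightforward to assemble; the scikit-learn sketch below does so on synthetic stand-in features, since the SPECT data are not available here, and all parameter choices are illustrative.

```python
# Sketch of the same kind of pipeline (kernel PCA -> LDA -> SVM) on synthetic
# "voxel" features; the SPECT data themselves are of course not reproduced.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((91, 500))           # 91 subjects, 500 voxel-like features
y = rng.integers(0, 2, size=91)              # AD vs. normal labels (synthetic)

cad = make_pipeline(KernelPCA(n_components=20, kernel="rbf"),
                    LinearDiscriminantAnalysis(),
                    SVC(kernel="linear"))
print(cross_val_score(cad, X, y, cv=5).mean())   # chance level here, by construction
```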

  14. Gabor-based kernel PCA with fractional power polynomial models for face recognition.

    Science.gov (United States)

    Liu, Chengjun

    2004-05-01

    This paper presents a novel Gabor-based kernel Principal Component Analysis (PCA) method by integrating the Gabor wavelet representation of face images and the kernel PCA method for face recognition. Gabor wavelets first derive desirable facial features characterized by spatial frequency, spatial locality, and orientation selectivity to cope with the variations due to illumination and facial expression changes. The kernel PCA method is then extended to include fractional power polynomial models for enhanced face recognition performance. A fractional power polynomial, however, does not necessarily define a kernel function, as it might not define a positive semidefinite Gram matrix. Note that the sigmoid kernels, one of the three classes of widely used kernel functions (polynomial kernels, Gaussian kernels, and sigmoid kernels), do not actually define a positive semidefinite Gram matrix either. Nevertheless, the sigmoid kernels have been successfully used in practice, such as in building support vector machines. In order to derive real kernel PCA features, we apply only those kernel PCA eigenvectors that are associated with positive eigenvalues. The feasibility of the Gabor-based kernel PCA method with fractional power polynomial models has been successfully tested on both frontal and pose-angled face recognition, using two data sets from the FERET database and the CMU PIE database, respectively. The FERET data set contains 600 frontal face images of 200 subjects, while the PIE data set consists of 680 images across five poses (left and right profiles, left and right half profiles, and frontal view) with two different facial expressions (neutral and smiling) of 68 subjects. The effectiveness of the Gabor-based kernel PCA method with fractional power polynomial models is shown in terms of both absolute performance indices and comparative performance against the PCA method, the kernel PCA method with polynomial kernels, the kernel PCA method with fractional power
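
    The key numerical point, keeping only the eigenvectors of the (possibly indefinite) fractional power polynomial Gram matrix that have positive eigenvalues, can be illustrated compactly; the sketch below uses random non-negative vectors in place of Gabor features and an assumed degree d = 0.8.

```python
import numpy as np

# Sketch: kernel PCA with a fractional power polynomial k(x, y) = (x.y)^d,
# d < 1, which is not guaranteed to be positive semidefinite -- so, as the
# record describes, only eigenvectors with positive eigenvalues are kept.
rng = np.random.default_rng(0)
X = np.abs(rng.standard_normal((100, 32)))      # stand-ins for Gabor features
d = 0.8                                          # fractional polynomial degree

K = (X @ X.T) ** d
n = len(K)
J = np.eye(n) - np.ones((n, n)) / n
Kc = J @ K @ J                                   # centre the Gram matrix

w, V = np.linalg.eigh(Kc)
keep = w > 1e-10                                 # discard non-positive eigenvalues
features = V[:, keep] * np.sqrt(w[keep])         # kernel PCA projections
print(features.shape)
```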

  15. Local coding based matching kernel method for image classification.

    Directory of Open Access Journals (Sweden)

    Yan Song

    Full Text Available This paper mainly focuses on how to effectively and efficiently measure visual similarity for local feature based representation. Among existing methods, metrics based on Bag of Visual Word (BoV) techniques are efficient and conceptually simple, at the expense of effectiveness. By contrast, kernel based metrics are more effective, but at the cost of greater computational complexity and increased storage requirements. We show that a unified visual matching framework can be developed to encompass both BoV and kernel based metrics, in which local kernel plays an important role between feature pairs or between features and their reconstruction. Generally, local kernels are defined using Euclidean distance or its derivatives, based either explicitly or implicitly on an assumption of Gaussian noise. However, local features such as SIFT and HoG often follow a heavy-tailed distribution which tends to undermine the motivation behind Euclidean metrics. Motivated by recent advances in feature coding techniques, a novel efficient local coding based matching kernel (LCMK) method is proposed. This exploits the manifold structures in Hilbert space derived from local kernels. The proposed method combines advantages of both BoV and kernel based metrics, and achieves a linear computational complexity. This enables efficient and scalable visual matching to be performed on large scale image sets. To evaluate the effectiveness of the proposed LCMK method, we conduct extensive experiments with widely used benchmark datasets, including 15-Scenes, Caltech101/256, PASCAL VOC 2007 and 2011 datasets. Experimental results confirm the effectiveness of the relatively efficient LCMK method.

  16. Language experience changes subsequent learning.

    Science.gov (United States)

    Onnis, Luca; Thiessen, Erik

    2013-02-01

    What are the effects of experience on subsequent learning? We explored the effects of language-specific word order knowledge on the acquisition of sequential conditional information. Korean and English adults were engaged in a sequence learning task involving three different sets of stimuli: auditory linguistic (nonsense syllables), visual non-linguistic (nonsense shapes), and auditory non-linguistic (pure tones). The forward and backward probabilities between adjacent elements generated two equally probable and orthogonal perceptual parses of the elements, such that any significant preference at test must be due to either general cognitive biases, or prior language-induced biases. We found that language modulated parsing preferences with the linguistic stimuli only. Intriguingly, these preferences are congruent with the dominant word order patterns of each language, as corroborated by corpus analyses, and are driven by probabilistic preferences. Furthermore, although the Korean individuals had received extensive formal explicit training in English and lived in an English-speaking environment, they exhibited statistical learning biases congruent with their native language. Our findings suggest that mechanisms of statistical sequential learning are implicated in language across the lifespan, and experience with language may affect cognitive processes and later learning. Copyright © 2012 Elsevier B.V. All rights reserved.

  17. Mean kernels to improve gravimetric geoid determination based on modified Stokes's integration

    Science.gov (United States)

    Hirt, C.

    2011-11-01

    Gravimetric geoid computation is often based on modified Stokes's integration, where Stokes's integral is evaluated with some stochastic or deterministic kernel modification. Accurate numerical evaluation of Stokes's integral requires the modified kernel to be integrated across the area of each discretised grid cell (mean kernel). Evaluating the modified kernel at the center of the cell (point kernel) is an approximation, which may result in larger numerical integration errors near the computation point, where the modified kernel exhibits a strongly nonlinear behavior. The present study deals with the computation of whole-of-the-cell mean values of modified kernels, exemplified here with the Featherstone-Evans-Olliver (1998) kernel modification [Featherstone, W.E., Evans, J.D., Olliver, J.G., 1998. A Meissl-modified Vaníček and Kleusberg kernel to reduce the truncation error in gravimetric geoid computations. Journal of Geodesy 72(3), 154-160]. We investigate two approaches (analytical and numerical integration), which are capable of providing accurate mean kernels. The analytical integration approach is based on kernel weighting factors which are used for the conversion of point to mean kernels. For the efficient numerical integration, Gauss-Legendre quadrature is applied. The comparison of mean kernels from both approaches shows a satisfactory mutual agreement at the level of 10^-4 and better, which is considered to be sufficient for practical geoid computation requirements. Closed-loop tests based on the EGM2008 geopotential model demonstrate that using mean instead of point kernels reduces numerical integration errors by ~65%. The use of mean kernels is recommended in remove-compute-restore geoid determination with the Featherstone-Evans-Olliver (1998) kernel or any other kernel modification under the condition that the kernel changes rapidly across the cells in the neighborhood of the computation point.
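
    For the numerical-integration route, the whole-of-the-cell mean reduces to a tensor-product Gauss-Legendre rule over each grid cell. The sketch below applies the idea to a generic kernel that blows up towards the computation point (a stand-in, not the Featherstone-Evans-Olliver modification itself) and contrasts the cell mean with the point value at the cell centre.

```python
import numpy as np

def cell_mean(kernel, x0, x1, y0, y1, order=6):
    """Whole-of-the-cell mean of kernel(x, y) over the rectangle
    [x0, x1] x [y0, y1], using tensor-product Gauss-Legendre quadrature."""
    t, w = np.polynomial.legendre.leggauss(order)          # nodes/weights on [-1, 1]
    xs = 0.5 * (x1 - x0) * t + 0.5 * (x1 + x0)
    ys = 0.5 * (y1 - y0) * t + 0.5 * (y1 + y0)
    W = np.outer(w, w) * 0.25 * (x1 - x0) * (y1 - y0)      # scaled 2-D weights
    vals = kernel(xs[:, None], ys[None, :])
    return (W * vals).sum() / ((x1 - x0) * (y1 - y0))

# A stand-in kernel that, like a modified Stokes kernel, grows rapidly towards
# the computation point at the origin (this is not the FEO kernel itself).
k = lambda x, y: 1.0 / np.sqrt(x * x + y * y)

print("point value :", k(0.015, 0.015))                    # kernel at the cell centre
print("cell mean   :", cell_mean(k, 0.01, 0.02, 0.01, 0.02))
```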

  18. Capturing Option Anomalies with a Variance-Dependent Pricing Kernel

    DEFF Research Database (Denmark)

    Christoffersen, Peter; Heston, Steven; Jacobs, Kris

    2013-01-01

    We develop a GARCH option model with a new pricing kernel allowing for a variance premium. While the pricing kernel is monotonic in the stock return and in variance, its projection onto the stock return is nonmonotonic. A negative variance premium makes it U shaped. We present new semiparametric evidence to confirm this U-shaped relationship between the risk-neutral and physical probability densities. The new pricing kernel substantially improves our ability to reconcile the time-series properties of stock returns with the cross-section of option prices. It provides a unified explanation for the implied volatility puzzle, the overreaction of long-term options to changes in short-term variance, and the fat tails of the risk-neutral return distribution relative to the physical distribution.

  19. CELLULOSE EXTRACTION FROM PALM KERNEL CAKE USING LIQUID PHASE OXIDATION

    Directory of Open Access Journals (Sweden)

    FARM YAN YAN

    2009-03-01

    Full Text Available Cellulose is widely used in many aspects and industries such as the food industry, pharmaceuticals, paint, polymers, and many more. Due to the increasing demand in the market, studies and work to produce cellulose are still rapidly developing. In this work, liquid phase oxidation was used to extract cellulose from palm kernel cake to separate hemicellulose, cellulose and lignin. The method is basically a two-step process. Palm kernel cake was pretreated in hot water at 180°C and followed by a liquid oxidation process with 30% H2O2 at 60°C at atmospheric pressure. The process parameters are hot water treatment time, ratio of palm kernel cake to H2O2, and liquid oxidation reaction temperature and time. Analysis of the process parameters on cellulose production from palm kernel cake was performed by using Response Surface Methodology. The recovered cellulose was further characterized by Fourier Transform Infrared (FTIR). Through the hot water treatment, hemicellulose in the palm kernel cake was successfully recovered as saccharides, thus leaving lignin and cellulose. Lignin was converted to water soluble compounds in the liquid oxidation step, which contain small molecular weight fatty acids such as HCOOH and CH3COOH, and almost pure cellulose was recovered.

  20. CRKSPH - A Conservative Reproducing Kernel Smoothed Particle Hydrodynamics Scheme

    Science.gov (United States)

    Frontiere, Nicholas; Raskin, Cody D.; Owen, J. Michael

    2017-03-01

    We present a formulation of smoothed particle hydrodynamics (SPH) that utilizes a first-order consistent reproducing kernel, a smoothing function that exactly interpolates linear fields with particle tracers. Previous formulations using reproducing kernel (RK) interpolation have had difficulties maintaining conservation of momentum due to the fact that the RK kernels are not, in general, spatially symmetric. Here, we utilize a reformulation of the fluid equations such that mass, linear momentum, and energy are all rigorously conserved without any assumption about kernel symmetries, while additionally maintaining approximate angular momentum conservation. Our approach starts from a rigorously consistent interpolation theory, where we derive the evolution equations to enforce the appropriate conservation properties, at the sacrifice of full consistency in the momentum equation. Additionally, by exploiting the increased accuracy of the RK method's gradient, we formulate a simple limiter for the artificial viscosity that reduces the excess diffusion normally incurred by the ordinary SPH artificial viscosity. Collectively, we call our suite of modifications to the traditional SPH scheme Conservative Reproducing Kernel SPH, or CRKSPH. CRKSPH retains many benefits of traditional SPH methods (such as preserving Galilean invariance and manifest conservation of mass, momentum, and energy) while improving on many of the shortcomings of SPH, particularly the overly aggressive artificial viscosity and zeroth-order inaccuracy. We compare CRKSPH to two different modern SPH formulations (pressure based SPH and compatibly differenced SPH), demonstrating the advantages of our new formulation when modeling fluid mixing, strong shock, and adiabatic phenomena.

  1. Semi-supervised learning for ordinal Kernel Discriminant Analysis.

    Science.gov (United States)

    Pérez-Ortiz, M; Gutiérrez, P A; Carbonero-Ruz, M; Hervás-Martínez, C

    2016-12-01

    Ordinal classification considers those classification problems where the labels of the variable to predict follow a given order. Naturally, labelled data is scarce or difficult to obtain in this type of problems because, in many cases, ordinal labels are given by a user or expert (e.g. in recommendation systems). Firstly, this paper develops a new strategy for ordinal classification where both labelled and unlabelled data are used in the model construction step (a scheme which is referred to as semi-supervised learning). More specifically, the ordinal version of kernel discriminant learning is extended for this setting considering the neighbourhood information of unlabelled data, which is proposed to be computed in the feature space induced by the kernel function. Secondly, a new method for semi-supervised kernel learning is devised in the context of ordinal classification, which is combined with our developed classification strategy to optimise the kernel parameters. The experiments conducted compare 6 different approaches for semi-supervised learning in the context of ordinal classification in a battery of 30 datasets, showing (1) the good synergy of the ordinal version of discriminant analysis and the use of unlabelled data and (2) the advantage of computing distances in the feature space induced by the kernel function.

  2. Face detection based on multiple kernel learning algorithm

    Science.gov (United States)

    Sun, Bo; Cao, Siming; He, Jun; Yu, Lejun

    2016-09-01

    Face detection is important for face localization in face or facial expression recognition, etc. The basic idea is to determine whether there is a face in an image or not, and also its location and size. It can be seen as a binary classification problem, which can be well solved by support vector machine (SVM). Though SVM has strong model generalization ability, it has some limitations, which will be deeply analyzed in the paper. To address them, we study the principle and characteristics of Multiple Kernel Learning (MKL) and propose a MKL-based face detection algorithm. In the paper, we describe the proposed algorithm in the interdisciplinary research perspective of machine learning and image processing. After analyzing the limitation of describing a face with a single feature, we apply several ones. To fuse them well, we try different kernel functions on different features. By the MKL method, the weight of each single function is determined. Thus, we obtain the face detection model, which is the kernel of the proposed method. Experiments on the public data set and real life face images are performed. We compare the performance of the proposed algorithm with the single kernel-single feature based algorithm and the multiple kernels-single feature based algorithm. The effectiveness of the proposed algorithm is illustrated. Keywords: face detection, feature fusion, SVM, MKL
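
    The fusion step amounts to combining per-feature kernels into one Gram matrix before training the SVM. The sketch below fixes the kernel weights by hand and uses synthetic features, so it shows only the combination mechanics; a real MKL solver would learn the weights as the record describes.

```python
# Sketch of the basic MKL idea: a face/non-face classifier built on a weighted
# sum of kernels computed on different feature types.  The weights are assumed
# here, not learned, and the data are synthetic.
import numpy as np
from sklearn.metrics.pairwise import linear_kernel, rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
hog = rng.standard_normal((200, 36))        # one feature type per sample
lbp = rng.standard_normal((200, 59))        # a second feature type
y = rng.integers(0, 2, size=200)            # face / non-face labels (synthetic)

weights = (0.6, 0.4)                        # per-kernel weights (assumed)
K = weights[0] * rbf_kernel(hog) + weights[1] * linear_kernel(lbp)

clf = SVC(kernel="precomputed").fit(K, y)
print(clf.score(K, y))                      # training accuracy on the combined kernel
```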

  3. A one-class kernel fisher criterion for outlier detection.

    Science.gov (United States)

    Dufrenois, Franck

    2015-05-01

    Recently, Dufrenois and Noyer proposed a one class Fisher's linear discriminant to isolate normal data from outliers. In this paper, a kernelized version of their criterion is presented. Originally on the basis of an iterative optimization process, alternating between subspace selection and clustering, I show here that their criterion has an upper bound making these two problems independent. In particular, the estimation of the label vector is formulated as an unconstrained binary linear problem (UBLP) which can be solved using an iterative perturbation method. Once the label vector is estimated, an optimal projection subspace is obtained by solving a generalized eigenvalue problem. Like many other kernel methods, the performance of the proposed approach depends on the choice of the kernel. Constructed with a Gaussian kernel, I show that the proposed contrast measure is an efficient indicator for selecting an optimal kernel width. This property simplifies the model selection problem which is typically solved by costly (generalized) cross-validation procedures. Initialization, convergence analysis, and computational complexity are also discussed. Lastly, the proposed algorithm is compared with recent novelty detectors on synthetic and real data sets.

  4. A Novel Kernel for Least Squares Support Vector Machine

    Institute of Scientific and Technical Information of China (English)

    FENG Wei; ZHAO Yong-ping; DU Zhong-hua; LI De-cai; WANG Li-feng

    2012-01-01

    Extreme learning machine (ELM) has attracted much attention in recent years due to its fast convergence and good performance. Merging both ELM and support vector machine is an important trend, thus yielding an ELM kernel. ELM kernel based methods are able to solve nonlinear problems by inducing an explicit mapping, compared with the commonly-used kernels such as the Gaussian kernel. In this paper, the ELM kernel is extended to the least squares support vector regression (LSSVR), so ELM-LSSVR is proposed. ELM-LSSVR can be used to reduce the training and test time simultaneously without extra techniques such as sequential minimal optimization and a pruning mechanism. Moreover, the memory space for training and testing is relieved. To confirm the efficacy and feasibility of the proposed ELM-LSSVR, experiments are reported to demonstrate that ELM-LSSVR has an advantage in training and test time with comparable accuracy to other algorithms.
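
    In least squares SVR the quadratic program is replaced by one linear system in the dual variables, which is where the training-time saving comes from. The sketch below implements that system with an ordinary RBF kernel on synthetic data; the record's point is that the RBF kernel can be swapped for an ELM kernel without changing the algebra.

```python
import numpy as np

def rbf(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * d2)

def lssvr_fit(X, y, C=10.0, kernel=rbf):
    """Least squares SVR in its dual form: one linear system instead of a QP.
    (An ELM kernel could replace the RBF kernel; the algebra is unchanged.)"""
    n = len(X)
    K = kernel(X, X)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / C
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    b, alpha = sol[0], sol[1:]
    return lambda Xnew: kernel(Xnew, X) @ alpha + b

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(100, 1))
y = np.sinc(X[:, 0]) + 0.05 * rng.standard_normal(100)
predict = lssvr_fit(X, y)
print(predict(np.array([[0.0], [1.5]])))
```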

  5. Aleurone cell identity is suppressed following connation in maize kernels.

    Science.gov (United States)

    Geisler-Lee, Jane; Gallie, Daniel R

    2005-09-01

    Expression of the cytokinin-synthesizing isopentenyl transferase enzyme under the control of the Arabidopsis (Arabidopsis thaliana) SAG12 senescence-inducible promoter reverses the normal abortion of the lower floret from a maize (Zea mays) spikelet. Following pollination, the upper and lower floret pistils fuse, producing a connated kernel with two genetically distinct embryos and the endosperms fused along their abgerminal face. Therefore, ectopic synthesis of cytokinin was used to position two independent endosperms within a connated kernel to determine how the fused endosperm would affect the development of the two aleurone layers along the fusion plane. Examination of the connated kernel revealed that aleurone cells were present for only a short distance along the fusion plane whereas starchy endosperm cells were present along most of the remainder of the fusion plane, suggesting that aleurone development is suppressed when positioned between independent starchy endosperms. Sporadic aleurone cells along the fusion plane were observed and may have arisen from late or imperfect fusion of the endosperms of the connated kernel, supporting the observation that a peripheral position at the surface of the endosperm and not proximity to maternal tissues such as the testa and pericarp are important for aleurone development. Aleurone mosaicism was observed in the crown region of nonconnated SAG12-isopentenyl transferase kernels, suggesting that cytokinin can also affect aleurone development.

  6. Kernel Methods for Mining Instance Data in Ontologies

    Science.gov (United States)

    Bloehdorn, Stephan; Sure, York

    The amount of ontologies and meta data available on the Web is constantly growing. The successful application of machine learning techniques for learning of ontologies from textual data, i.e. mining for the Semantic Web, contributes to this trend. However, no principled approaches exist so far for mining from the Semantic Web. We investigate how machine learning algorithms can be made amenable for directly taking advantage of the rich knowledge expressed in ontologies and associated instance data. Kernel methods have been successfully employed in various learning tasks and provide a clean framework for interfacing between non-vectorial data and machine learning algorithms. In this spirit, we express the problem of mining instances in ontologies as the problem of defining valid corresponding kernels. We present a principled framework for designing such kernels by means of decomposing the kernel computation into specialized kernels for selected characteristics of an ontology which can be flexibly assembled and tuned. Initial experiments on real world Semantic Web data yield promising results and show the usefulness of our approach.

  7. Spine labeling in axial magnetic resonance imaging via integral kernels.

    Science.gov (United States)

    Miles, Brandon; Ben Ayed, Ismail; Hojjat, Seyed-Parsa; Wang, Michael H; Li, Shuo; Fenster, Aaron; Garvin, Gregory J

    2016-12-01

    This study investigates a fast integral-kernel algorithm for classifying (labeling) the vertebra and disc structures in axial magnetic resonance images (MRI). The method is based on a hierarchy of feature levels, where pixel classifications via non-linear probability product kernels (PPKs) are followed by classifications of 2D slices, individual 3D structures and groups of 3D structures. The algorithm further embeds geometric priors based on anatomical measurements of the spine. Our classifier requires evaluations of computationally expensive integrals at each pixel, and direct evaluations of such integrals would be prohibitively time consuming. We propose an efficient computation of kernel density estimates and PPK evaluations for large images and arbitrary local window sizes via integral kernels. Our method requires a single user click for a whole 3D MRI volume, runs nearly in real-time, and does not require an intensive external training. Comprehensive evaluations over T1-weighted axial lumbar spine data sets from 32 patients demonstrate a competitive structure classification accuracy of 99%, along with a 2D slice classification accuracy of 88%. To the best of our knowledge, such a structure classification accuracy has not been reached by the existing spine labeling algorithms. Furthermore, we believe our work is the first to use integral kernels in the context of medical images. Copyright © 2016 Elsevier Ltd. All rights reserved.
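
    The reason integral kernels make arbitrary local window sizes cheap is the summed-area-table trick: after one cumulative-sum pass, any box sum costs four look-ups. A minimal sketch of that building block (not the full PPK classifier) follows.

```python
import numpy as np

# Sketch of the "integral kernel" trick: once an integral (summed-area) image
# has been built, the sum over any axis-aligned window is four look-ups, so
# window statistics cost the same for every window size.
def integral_image(img):
    return img.cumsum(axis=0).cumsum(axis=1)

def window_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] from the integral image ii (exclusive ends)."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0: total -= ii[r0 - 1, c1 - 1]
    if c0 > 0: total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0: total += ii[r0 - 1, c0 - 1]
    return total

rng = np.random.default_rng(0)
img = rng.random((256, 256))
ii = integral_image(img)
print(window_sum(ii, 10, 20, 50, 80), img[10:50, 20:80].sum())   # should match
```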

  8. 7 CFR 1401.8 - Subsequent holders.

    Science.gov (United States)

    2010-01-01

    7 CFR § 1401.8 (2010-01-01), Subsequent holders. (a) General. A person who acquires a commodity certificate from another person shall be considered to be a “subsequent holder” of the certificate. Subsequent holders...

  9. Lifting kernel-based sprite codec

    Science.gov (United States)

    Dasu, Aravind R.; Panchanathan, Sethuraman

    2000-12-01

    The International Standards Organization (ISO) has proposed a family of standards for compression of image and video sequences, including the JPEG, MPEG-1 and MPEG-2. The latest MPEG-4 standard has many new dimensions to coding and manipulation of visual content. A video sequence usually contains a background object and many foreground objects. Portions of this background may not be visible in certain frames due to the occlusion of the foreground objects or camera motion. MPEG-4 introduces the novel concepts of Video Object Planes (VOPs) and Sprites. A VOP is a visual representation of real world objects with shapes that need not be rectangular. Sprite is a large image composed of pixels belonging to a video object visible throughout a video segment. Since a sprite contains all parts of the background that were at least visible once, it can be used for direct reconstruction of the background Video Object Plane (VOP). Sprite reconstruction is dependent on the mode in which it is transmitted. In the Static sprite mode, the entire sprite is decoded as an Intra VOP before decoding the individual VOPs. Since sprites consist of the information needed to display multiple frames of a video sequence, they are typically much larger than a single frame of video. Therefore a static sprite can be considered as a large static image. In this paper, a novel solution to address the problem of spatial scalability has been proposed, where the sprite is encoded in Discrete Wavelet Transform (DWT). A lifting kernel method of DWT implementation has been used for encoding and decoding sprites. Modifying the existing lifting scheme while maintaining it to be shape adaptive results in a reduced complexity. The proposed scheme has the advantages of (1) avoiding the need for any extensions to image or tile border pixels and is hence superior to the DCT based low latency scheme (used in the current MPEG-4 verification model), (2) mapping the in place computed wavelet coefficients into a zero
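
    The lifting idea itself, splitting samples into even and odd sets and then applying predict and update steps in place, is easy to show in one dimension. The sketch below implements the simplest case, a Haar wavelet via lifting, with an exact inverse; the codec in the record uses a shape-adaptive lifting kernel, which is not reproduced here.

```python
import numpy as np

def haar_lift_forward(x):
    """One level of the Haar wavelet via lifting: split, predict, update.
    (The sprite codec uses a shape-adaptive lifting kernel; this is only the
    simplest possible lifting example.)"""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    d = odd - even            # predict step: detail coefficients
    s = even + 0.5 * d        # update step: approximation coefficients
    return s, d

def haar_lift_inverse(s, d):
    even = s - 0.5 * d        # undo update
    odd = d + even            # undo predict
    x = np.empty(even.size + odd.size)
    x[0::2], x[1::2] = even, odd
    return x

x = np.array([3.0, 5.0, 4.0, 4.0, 10.0, 12.0, 0.0, 2.0])
s, d = haar_lift_forward(x)
print(s, d)
print(haar_lift_inverse(s, d))    # reconstructs x exactly
```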

  10. Determination of saponins in the kernel cake of Balanites aegyptiaca by HPLC-ESI/MS.

    Science.gov (United States)

    Chapagain, Bishnu P; Wiesman, Zeev

    2007-01-01

    The kernel cake produced from Balanites aegyptiaca fruit of Israeli origin was analysed for its saponin constituents using high-performance liquid chromatography-mass spectrometry (HPLC-MS). The HPLC was equipped with a reversed-phase C18 column and a refractive index detector (RID), and elution was isocratic with methanol and water (70:30). The MS system was equipped with electrospray ionisation (ESI). Nine compounds were chromatographically separated, their masses were determined in the negative ion mode and subsequent fragmentation of each component was carried out. From the nine components, six saponins with molecular masses of 1196, 1064, 1210, 1224, 1078 and 1046 Da were identified, with the compound of mass 1210 Da being the main saponin (ca. 36%). Saponins with masses of 1224 and 1046 Da have not been previously reported in B. aegyptiaca. In all saponins, diosgenin was found to be the sole aglycone. This study shows that HPLC-ESI/MS is a quick and reliable technique for characterizing the saponins from kernel cake of B. aegyptiaca.

  11. Auto-Pattern Programmable Kernel Filter (Auto-PPKF) for Suppression of Bot Generated Traffic

    Directory of Open Access Journals (Sweden)

    Kritika Govind

    2013-11-01

    Full Text Available Bots usually vary from their other malicious counterparts by periodically reporting to the botmaster through regular exchange of messages. Our experiments on bot attack generation showed a continuous exchange of packets with similar content between the botmaster and the zombie machine at various time intervals. Though there were also genuine packets with similar content being sent out of the victim machine, the challenge was to differentiate between the two and pass only the genuine ones. In this paper, an algorithm, namely the Auto-Pattern Programmable Kernel Filter (Auto-PPKF), for automatic detection of patterns from packet payload for filtering out malicious packets generated by bots is proposed. The significant feature of our proposed Auto-PPKF algorithm is that the malicious pattern is deduced at kernel level on the fly from the packet payload. Traditional algorithms such as the Boyer-Moore, Knuth-Morris-Pratt, and naive pattern search algorithms require the pattern to be available a priori. Currently, the Longest Common Subsequence (LCS) algorithm stands as the most preferred algorithm for pattern matching. But the disadvantage is that common sequences can also exist in many genuine packets. Hence, the challenge lies in automatic detection of malicious patterns and filtering of the packets having such malicious patterns. This would not only put off the communication between the botmaster and the zombie machine, but will also prevent user information from being sent to the botmaster.
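
    For reference, the Longest Common Subsequence mentioned above is the classical dynamic-programming computation sketched below on two illustrative payload strings; the kernel-level pattern deduction of Auto-PPKF itself is not reproduced here.

```python
# Sketch of the Longest Common Subsequence computation, via standard dynamic
# programming over two (illustrative) payload byte strings.
def lcs(a: bytes, b: bytes) -> bytes:
    m, n = len(a), len(b)
    # dp[i][j] = length of the LCS of a[:i] and b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    # Backtrack to recover one longest common subsequence.
    out, i, j = [], m, n
    while i and j:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1]); i -= 1; j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return bytes(reversed(out))

# Two hypothetical payloads sharing a beacon-like pattern.
print(lcs(b"GET /beacon?id=42", b"POST /beacon?id=99"))
```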

  12. Producing data-based sensitivity kernels from convolution and correlation in exploration geophysics.

    Science.gov (United States)

    Chmiel, M. J.; Roux, P.; Herrmann, P.; Rondeleux, B.

    2016-12-01

    Many studies have shown that seismic interferometry can be used to estimate surface wave arrivals by correlation of seismic signals recorded at a pair of locations. In the case of ambient noise sources, the convergence towards the surface wave Green's functions is obtained with the criterion of equipartitioned energy. However, seismic acquisition with active, controlled sources gives more possibilities when it comes to interferometry. The use of controlled sources makes it possible to recover the surface wave Green's function between two points using either correlation or convolution. We investigate the convolutional and correlational approaches using land active-seismic data from exploration geophysics. The data were recorded on 10,710 vertical receivers using 51,808 sources (seismic vibrator trucks). The source spacing is the same in both the X and Y directions (30 m), which is known as a "carpet shooting". The receivers are placed in parallel lines with a spacing of 150 m in the X direction and 30 m in the Y direction. Invoking spatial reciprocity between sources and receivers, correlation and convolution functions can thus be constructed between either pairs of receivers or pairs of sources. Benefiting from the dense acquisition, we extract sensitivity kernels from correlation and convolution measurements of the seismic data. These sensitivity kernels are subsequently used to produce phase-velocity dispersion curves between two points and to separate the higher mode from the fundamental mode for surface waves. Potential application to surface wave cancellation is also envisaged.

  13. Kernel density estimation of a multidimensional efficiency profile

    CERN Document Server

    Poluektov, Anton

    2014-01-01

    Kernel density estimation is a convenient way to estimate the probability density of a distribution given the sample of data points. However, it has certain drawbacks: proper description of the density using narrow kernels needs large data samples, whereas if the kernel width is large, boundaries and narrow structures tend to be smeared. Here, an approach to correct for such effects is proposed that uses an approximate density to describe narrow structures and boundaries. The approach is shown to be well suited for the description of the efficiency shape over a multidimensional phase space in a typical particle physics analysis. An example is given for the five-dimensional phase space of the $\\Lambda_b^0\\to D^0p\\pi$ decay.
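
    The trade-off the record describes, narrow kernels needing large samples versus wide kernels smearing boundaries, is visible even in a one-dimensional Gaussian KDE; the sketch below evaluates the same sample with three bandwidths chosen purely for illustration.

```python
import numpy as np

# Minimal Gaussian kernel density estimate, illustrating the bandwidth
# trade-off: narrow kernels need lots of data, wide kernels smear boundaries
# and narrow structures.
def kde(sample, grid, bandwidth):
    z = (grid[:, None] - sample[None, :]) / bandwidth
    return np.exp(-0.5 * z * z).sum(axis=1) / (sample.size * bandwidth * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
sample = rng.normal(0.0, 1.0, size=2000)
grid = np.linspace(-4, 4, 9)
for h in (0.05, 0.3, 1.5):                 # too narrow, reasonable, too wide
    print(h, np.round(kde(sample, grid, h), 3))
```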

  14. A method of smoothed particle hydrodynamics using spheroidal kernels

    Science.gov (United States)

    Fulbright, Michael S.; Benz, Willy; Davies, Melvyn B.

    1995-01-01

    We present a new method of three-dimensional smoothed particle hydrodynamics (SPH) designed to model systems dominated by deformation along a preferential axis. These systems cause severe problems for SPH codes using spherical kernels, which are best suited for modeling systems which retain rough spherical symmetry. Our method allows the smoothing length in the direction of the deformation to evolve independently of the smoothing length in the perpendicular plane, resulting in a kernel with a spheroidal shape. As a result the spatial resolution in the direction of deformation is significantly improved. As a test case we present the one-dimensional homologous collapse of a zero-temperature, uniform-density cloud, which serves to demonstrate the advantages of spheroidal kernels. We also present new results on the problem of the tidal disruption of a star by a massive black hole.

  15. Synthesis And Characterization of Biodiesel From Nigerian Palm Kernel oil.

    Directory of Open Access Journals (Sweden)

    IGBOKWE, J. O.

    2016-07-01

    Full Text Available Biodiesel was produced from Nigerian palm kernel oil through a direct base-catalyzed transesterification process using methanol and sodium hydroxide as alcohol and catalyst, respectively. The transesterification process involved 1 liter of palm kernel oil, 200 ml of methanol, 1.0% NaOH, a reaction temperature of 65 degrees Celsius and a reaction time of 90 minutes, and an average biodiesel yield of 87.67% was obtained. The produced biodiesel was blended with diesel fuel at a ratio of 20% biodiesel to 80% diesel fuel (by volume). The neat biodiesel and its blend were characterized using the ASTM methods. The results showed that the properties of the neat palm kernel oil biodiesel and its blend fall within the American Society for Testing and Materials (ASTM) specifications for biodiesel fuels, hence confirming their suitability as alternative fuels for modern diesel engines.

  16. Regularized Embedded Multiple Kernel Dimensionality Reduction for Mine Signal Processing.

    Science.gov (United States)

    Li, Shuang; Liu, Bing; Zhang, Chen

    2016-01-01

    Traditional multiple kernel dimensionality reduction models are generally based on graph embedding and manifold assumption. But such assumption might be invalid for some high-dimensional or sparse data due to the curse of dimensionality, which has a negative influence on the performance of multiple kernel learning. In addition, some models might be ill-posed if the rank of matrices in their objective functions was not high enough. To address these issues, we extend the traditional graph embedding framework and propose a novel regularized embedded multiple kernel dimensionality reduction method. Different from the conventional convex relaxation technique, the proposed algorithm directly takes advantage of a binary search and an alternative optimization scheme to obtain optimal solutions efficiently. The experimental results demonstrate the effectiveness of the proposed method for supervised, unsupervised, and semisupervised scenarios.

  17. Regularized Embedded Multiple Kernel Dimensionality Reduction for Mine Signal Processing

    Directory of Open Access Journals (Sweden)

    Shuang Li

    2016-01-01

    Full Text Available Traditional multiple kernel dimensionality reduction models are generally based on graph embedding and manifold assumption. But such assumption might be invalid for some high-dimensional or sparse data due to the curse of dimensionality, which has a negative influence on the performance of multiple kernel learning. In addition, some models might be ill-posed if the rank of matrices in their objective functions was not high enough. To address these issues, we extend the traditional graph embedding framework and propose a novel regularized embedded multiple kernel dimensionality reduction method. Different from the conventional convex relaxation technique, the proposed algorithm directly takes advantage of a binary search and an alternative optimization scheme to obtain optimal solutions efficiently. The experimental results demonstrate the effectiveness of the proposed method for supervised, unsupervised, and semisupervised scenarios.

  18. Semisupervised Kernel Marginal Fisher Analysis for Face Recognition

    Directory of Open Access Journals (Sweden)

    Ziqiang Wang

    2013-01-01

    Full Text Available Dimensionality reduction is a key problem in face recognition due to the high-dimensionality of face image. To effectively cope with this problem, a novel dimensionality reduction algorithm called semisupervised kernel marginal Fisher analysis (SKMFA for face recognition is proposed in this paper. SKMFA can make use of both labelled and unlabeled samples to learn the projection matrix for nonlinear dimensionality reduction. Meanwhile, it can successfully avoid the singularity problem by not calculating the matrix inverse. In addition, in order to make the nonlinear structure captured by the data-dependent kernel consistent with the intrinsic manifold structure, a manifold adaptive nonparameter kernel is incorporated into the learning process of SKMFA. Experimental results on three face image databases demonstrate the effectiveness of our proposed algorithm.

  19. A multi-label learning based kernel automatic recommendation method for support vector machine.

    Science.gov (United States)

    Zhang, Xueying; Song, Qinbao

    2015-01-01

    Choosing an appropriate kernel is very important and critical when classifying a new problem with Support Vector Machine. So far, more attention has been paid on constructing new kernels and choosing suitable parameter values for a specific kernel function, but less on kernel selection. Furthermore, most of current kernel selection methods focus on seeking a best kernel with the highest classification accuracy via cross-validation, they are time consuming and ignore the differences among the number of support vectors and the CPU time of SVM with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select those appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on the data characteristics. For each data set, the meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge data base with the multi-label classification method. Finally, the appropriate kernel functions are recommended to a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with the existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance.

  20. Single aflatoxin contaminated corn kernel analysis with fluorescence hyperspectral image

    Science.gov (United States)

    Yao, Haibo; Hruska, Zuzana; Kincaid, Russell; Ononye, Ambrose; Brown, Robert L.; Cleveland, Thomas E.

    2010-04-01

    Aflatoxins are toxic secondary metabolites of the fungi Aspergillus flavus and Aspergillus parasiticus, among others. Aflatoxin contaminated corn is toxic to domestic animals when ingested in feed and is a known carcinogen associated with liver and lung cancer in humans. Consequently, aflatoxin levels in food and feed are regulated by the Food and Drug Administration (FDA) in the US, allowing 20 ppb (parts per billion) limits in food and 100 ppb in feed for interstate commerce. Currently, aflatoxin detection and quantification methods are based on analytical tests including thin-layer chromatography (TCL) and high performance liquid chromatography (HPLC). These analytical tests require the destruction of samples, and are costly and time consuming. Thus, the ability to detect aflatoxin in a rapid, nondestructive way is crucial to the grain industry, particularly to corn industry. Hyperspectral imaging technology offers a non-invasive approach toward screening for food safety inspection and quality control based on its spectral signature. The focus of this paper is to classify aflatoxin contaminated single corn kernels using fluorescence hyperspectral imagery. Field inoculated corn kernels were used in the study. Contaminated and control kernels under long wavelength ultraviolet excitation were imaged using a visible near-infrared (VNIR) hyperspectral camera. The imaged kernels were chemically analyzed to provide reference information for image analysis. This paper describes a procedure to process corn kernels located in different images for statistical training and classification. Two classification algorithms, Maximum Likelihood and Binary Encoding, were used to classify each corn kernel into "control" or "contaminated" through pixel classification. The Binary Encoding approach had a slightly better performance with accuracy equals to 87% or 88% when 20 ppb or 100 ppb was used as classification threshold, respectively.

  1. FUV Continuum in Flare Kernels Observed by IRIS

    Science.gov (United States)

    Daw, Adrian N.; Kowalski, Adam; Allred, Joel C.; Cauzzi, Gianna

    2016-05-01

    Fits to Interface Region Imaging Spectrograph (IRIS) spectra observed from bright kernels during the impulsive phase of solar flares are providing long-sought constraints on the UV/white-light continuum emission. Results of fits of continua plus numerous atomic and molecular emission lines to IRIS far ultraviolet (FUV) spectra of bright kernels are presented. Constraints on beam energy and cross sectional area are provided by cotemporaneous RHESSI, FERMI, ROSA/DST, IRIS slit-jaw and SDO/AIA observations, allowing for comparison of the observed IRIS continuum to calculations of non-thermal electron beam heating using the RADYN radiative-hydrodynamic loop model.

  2. Aflatoxin detection in whole corn kernels using hyperspectral methods

    Science.gov (United States)

    Casasent, David; Chen, Xue-Wen

    2004-03-01

    Hyperspectral (HS) data for the inspection of whole corn kernels for aflatoxin is considered. The high-dimensionality of HS data requires feature extraction or selection for good classifier generalization. For fast and inexpensive data collection, only several features (λ responses) can be used. These are obtained by feature selection from the full HS response. A new high dimensionality branch and bound (HDBB) feature selection algorithm is used; it is found to be optimum, fast and very efficient. Initial results indicate that HS data is very promising for aflatoxin detection in whole kernel corn.

  3. Employment of kernel methods on wind turbine power performance assessment

    DEFF Research Database (Denmark)

    Skrimpas, Georgios Alexandros; Sweeney, Christian Walsted; Marhadi, Kun S.

    2015-01-01

    A power performance assessment technique is developed for the detection of power production discrepancies in wind turbines. The method employs a widely used nonparametric pattern recognition technique, kernel methods. The evaluation is based on the trending of an extracted feature from...... the kernel matrix, called the similarity index, which is introduced by the authors for the first time. The operation of the turbine, and consequently the computation of the similarity indexes, is classified into five power bins, offering better resolution and thus more consistent root cause analysis. The accurate...

  4. CELLULOSE EXTRACTION FROM PALM KERNEL CAKE USING LIQUID PHASE OXIDATION

    OpenAIRE

    FARM YAN YAN; DUDUKU KRISHNIAH; MARIANI RAJIN; AWANG BONO

    2009-01-01

    Cellulose is widely used in many aspects and industries such as the food industry, pharmaceuticals, paints, polymers, and many more. Due to increasing market demand, studies and work to produce cellulose are still rapidly developing. In this work, liquid phase oxidation was used to extract cellulose from palm kernel cake by separating hemicellulose, cellulose and lignin. The method is basically a two-step process. Palm kernel cake was pretreated in hot water at 180°C and followed by liquid ...

  5. On the bargaining set, kernel and core of superadditive games

    OpenAIRE

    TamÂs Solymosi

    1999-01-01

    We prove that for superadditive games a necessary and sufficient condition for the bargaining set to coincide with the core is that the monotonic cover of the excess game induced by a payoff be balanced for each imputation in the bargaining set. We present some new results obtained by verifying this condition for specific classes of games. For N-zero-monotonic games we show that the same condition required at each kernel element is also necessary and sufficient for the kernel to be contained ...

  6. Improved Interpolation Kernels for Super-resolution Algorithms

    DEFF Research Database (Denmark)

    Rasti, Pejman; Orlova, Olga; Tamberg, Gert

    2016-01-01

    is usually tuned through different methods, like learning-based or fusion-based methods, to converge the initial guess towards the desired HR output. In this work, it is shown that SR algorithms can result in better performance if more sophisticated kernels than the simple conventional ones are used...... for producing the initial guess. The contribution of this work is to introduce such a set of kernels which can be used in the context of SR. The quantitative and qualitative results on many natural, facial and iris images show the superiority of the generated HR images over two state-of-the-art SR algorithms...

  7. Some remarks about interpolating sequences in reproducing kernel Hilbert spaces

    CERN Document Server

    Raghupathi, Mrinal

    2011-01-01

    In this paper we study two separate problems on interpolation. We first give a new proof of Stout's Theorem on necessary and sufficient conditions for a sequence of points to be an interpolating sequence for the multiplier algebra and for an associated Hilbert space. We next turn our attention to the question of interpolation for reproducing kernel Hilbert spaces on the polydisc and provide a collection of equivalent statements about when it is possible to interpolate in the Schur-Agler class of the associated reproducing kernel Hilbert space.

  8. Denoising by semi-supervised kernel PCA preimaging

    DEFF Research Database (Denmark)

    Hansen, Toke Jansen; Abrahamsen, Trine Julie; Hansen, Lars Kai

    2014-01-01

    Kernel Principal Component Analysis (PCA) has proven a powerful tool for nonlinear feature extraction, and is often applied as a pre-processing step for classification algorithms. In denoising applications Kernel PCA provides the basis for dimensionality reduction, prior to the so-called pre...... by incorporating a loss term, leading to an iterative algorithm for finding orthonormal components biased by the class labels, and (2) a fixed-point iteration for solving the pre-image problem based on a manifold warped RKHS. We prove viability of the proposed methods on both synthetic data and images from...
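
    For context, the classical fixed-point pre-image iteration for a Gaussian kernel (Mika et al.), of which the manifold-warped iteration mentioned above is a variant, can be sketched as follows; obtaining the expansion coefficients from the kernel PCA projection is omitted, and the names are illustrative:

        import numpy as np

        def rbf(z, X, gamma):
            return np.exp(-gamma * np.sum((X - z) ** 2, axis=1))

        def preimage(X, coeffs, gamma, z0, n_iter=200, tol=1e-8):
            """Fixed point of z = sum_i c_i k(z, x_i) x_i / sum_i c_i k(z, x_i).

            X: (n, d) training points; coeffs: coefficients of the denoised feature-space
            point expressed as sum_i c_i phi(x_i); z0: starting guess (e.g. the noisy input).
            """
            z = z0.astype(float).copy()
            for _ in range(n_iter):
                w = coeffs * rbf(z, X, gamma)
                z_new = w @ X / w.sum()
                if np.linalg.norm(z_new - z) < tol:
                    break
                z = z_new
            return z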

  9. Asymptotics of Markov Kernels and the Tail Chain

    CERN Document Server

    Resnick, Sidney I

    2011-01-01

    An asymptotic model for extreme behavior of certain Markov chains is the "tail chain". Generally taking the form of a multiplicative random walk, it is useful in deriving extremal characteristics such as point process limits. We place this model in a more general context, formulated in terms of extreme value theory for transition kernels, and extend it by formalizing the distinction between extreme and non-extreme states. We make the link between the update function and transition kernel forms considered in previous work, and we show that the tail chain model leads to a multivariate regular variation property of the finite-dimensional distributions under assumptions on the marginal tails alone.

  10. Wrapping Brownian motion and heat kernels I: compact Lie groups

    CERN Document Server

    Maher, David G

    2010-01-01

    An important object of study in harmonic analysis is the heat equation. On a Euclidean space, the fundamental solution of the associated semigroup is known as the heat kernel, which is also the law of Brownian motion. Similar statements also hold in the case of a Lie group. By using the wrapping map of Dooley and Wildberger, we show how to wrap a Brownian motion to a compact Lie group from its Lie algebra (viewed as a Euclidean space) and find the heat kernel. This is achieved by considering Itô type stochastic differential equations and applying the Feynman-Kac theorem.

  11. Interior-point algorithm based on general kernel function for monotone linear complementarity problem

    Institute of Scientific and Technical Information of China (English)

    LIU Yong; BAI Yan-qin

    2009-01-01

    A polynomial interior-point algorithm is presented for the monotone linear complementarity problem (MLCP) based on a class of kernel functions with a general barrier term, which are called general kernel functions. Under mild conditions on the barrier term, the complexity bound of the algorithm in terms of such a kernel function and its derivatives is obtained. The approach is actually an extension of existing work which used only specific kernel functions for the MLCP.

  12. First numerical experiences with overlap fermions based on the Brillouin kernel

    CERN Document Server

    Durr, Stephan

    2016-01-01

    Numerical experiences are reported with overlap fermions which employ the Brillouin action as a kernel. After discussing the dispersion relations of both the kernel and the resulting chiral action, some of the physics features are addressed on quenched backgrounds. We find that the overlap with Brillouin kernel is much better localized than the overlap with Wilson kernel. Also a preliminary account is given of the cost of the formulation, in terms of CPU time and memory.

  13. A Kernel-Based Nonlinear Representor with Application to Eigenface Classification

    Institute of Scientific and Technical Information of China (English)

    ZHANG Jing; LIU Ben-yong; TAN Hao

    2004-01-01

    This paper presents a classifier named kernel-based nonlinear representor (KNR) for optimal representation of pattern features. Adopting the Gaussian kernel, with the kernel width adaptively estimated by a simple technique, it is applied to eigenface classification. Experimental results on the ORL face database show that it improves the classification rate by around 6 points over the Euclidean distance classifier.

  14. Genome-wide Association Analysis of Kernel Weight in Hard Winter Wheat

    Science.gov (United States)

    Wheat kernel weight is an important and heritable component of wheat grain yield and a key predictor of flour extraction. Genome-wide association analysis was conducted to identify genomic regions associated with kernel weight and kernel weight environmental response in 8 trials of 299 hard winter ...

  15. The Explicit Computations of the Bergman Kernels on Generalized Hua Domains

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The Bergman kernel function plays an important role in several complex variables. There exists the Bergman kernel function on any bounded domain in C^n. But we can get the Bergman kernel functions in explicit formulas for a few types of domains only, for instance, the bounded homogeneous domains and the egg domains in some cases.

  16. Discriminative kernel feature extraction and learning for object recognition and detection

    DEFF Research Database (Denmark)

    Pan, Hong; Olsen, Søren Ingvor; Zhu, Yaping

    2015-01-01

    Feature extraction and learning is critical for object recognition and detection. By embedding context cue of image attributes into the kernel descriptors, we propose a set of novel kernel descriptors called context kernel descriptors (CKD). The motivation of CKD is to use the spatial consistency...

  17. Kernel based pattern analysis methods using eigen-decompositions for reading Icelandic sagas

    DEFF Research Database (Denmark)

    Christiansen, Asger Nyman; Carstensen, Jens Michael

    We want to test the applicability of kernel based eigen-decomposition methods, compared to the traditional eigen-decomposition methods. We have implemented and tested three kernel based methods, namely PCA, MAF and MNF, all using a Gaussian kernel. We tested the methods on a multispectral...... image of a page in the book 'hauksbok', which contains Icelandic sagas....

  18. System identification via sparse multiple kernel-based regularization using sequential convex optimization techniques

    DEFF Research Database (Denmark)

    Chen, Tianshi; Andersen, Martin Skovgaard; Ljung, Lennart;

    2014-01-01

    Model estimation and structure detection with short data records are two issues that receive increasing interest in System Identification. In this paper, a multiple kernel-based regularization method is proposed to handle those issues. Multiple kernels are conic combinations of fixed kernels...

  19. A Fast Approximation of the Weisfeiler-Lehman Graph Kernel for RDF Data

    NARCIS (Netherlands)

    de Vries, G.K.D.

    2013-01-01

    In this paper we introduce an approximation of the Weisfeiler-Lehman graph kernel algorithm aimed at improving the computation time of the kernel when applied to Resource Description Framework (RDF) data. Typically, applying graph kernels to RDF is done by extracting subgraphs from a large RDF graph

  20. Recent sea level change analysed with kernel EOF

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Andersen, Ole Baltazar; Knudsen, Per

    2009-01-01

    -2008. Preliminary analysis shows some interesting features related to climate change and particularly the pulsing of the El Niño/Southern Oscillation. Large scale ocean events associated with the El Niño/Southern Oscillation related signals are conveniently concentrated in the first SSH kernel EOF modes....

  1. Uniqueness Result in the Cauchy Dirichlet Problem via Mehler Kernel

    Science.gov (United States)

    Dhungana, Bishnu P.

    2014-09-01

    Using the Mehler kernel, a uniqueness theorem in the Cauchy Dirichlet problem for the Hermite heat equation with homogeneous Dirichlet boundary conditions on a class P of bounded functions U(x, t) with certain growth on U_x(x, t) is established.

  2. Characteristics of traditionally processed shea kernels and butter

    NARCIS (Netherlands)

    Honfo, G.F.; Linnemann, A.R.; Akissoe, N.; Soumanou, M.M.; Boekel, van M.A.J.S.

    2013-01-01

    The traditional production of shea butter requires a heat treatment of the nuts. This study compared the end products derived by two commonly used heat treatments, namely smoking and boiling followed by sun-drying. Neither treatment influenced the moisture content of the kernels (8–10%), but the boi

  3. Prediction of peptide bonding affinity: kernel methods for nonlinear modeling

    CERN Document Server

    Bergeron, Charles; Sundling, C Matthew; Krein, Michael; Katt, Bill; Sukumar, Nagamani; Breneman, Curt M; Bennett, Kristin P

    2011-01-01

    This paper presents regression models obtained from a process of blind prediction of peptide binding affinity from provided descriptors for several distinct datasets as part of the 2006 Comparative Evaluation of Prediction Algorithms (COEPRA) contest. This paper finds that kernel partial least squares, a nonlinear partial least squares (PLS) algorithm, outperforms PLS, and that the incorporation of transferable atom equivalent features improves predictive capability.

  4. Notes on a storage manager for the Clouds kernel

    Science.gov (United States)

    Pitts, David V.; Spafford, Eugene H.

    1986-01-01

    The Clouds project is research directed towards producing a reliable distributed computing system. The initial goal is to produce a kernel which provides a reliable environment with which a distributed operating system can be built. The Clouds kernel consists of a set of replicated subkernels, each of which runs on a machine in the Clouds system. Each subkernel is responsible for the management of resources on its machine; the subkernel components communicate to provide the cooperation necessary to meld the various machines into one kernel. The implementation of a kernel-level storage manager that supports reliability is documented. The storage manager is a part of each subkernel and maintains the secondary storage residing at each machine in the distributed system. In addition to providing the usual data transfer services, the storage manager ensures that data being stored survives machine and system crashes, and that the secondary storage of a failed machine is recovered (made consistent) automatically when the machine is restarted. Since the storage manager is part of the Clouds kernel, efficiency of operation is also a concern.

  5. Microwave moisture meter for in-shell peanut kernels

    Science.gov (United States)

    A microwave moisture meter built with off-the-shelf components was developed, calibrated and tested in the laboratory and in the field for nondestructive and instantaneous in-shell peanut kernel moisture content determination from dielectric measurements on unshelled peanut pod samples. The meter ...

  6. Calculation of Volterra kernels for solutions of nonlinear differential equations

    NARCIS (Netherlands)

    van Hemmen, JL; Kistler, WM; Thomas, EGF

    2000-01-01

    We consider vector-valued autonomous differential equations of the form x' = f(x) + phi with analytic f and investigate the nonanticipative solution operator phi bar right arrow A(phi) in terms of its Volterra series. We show that Volterra kernels of order > 1 occurring in the series expansion of

  7. Corruption clubs: empirical evidence from kernel density estimates

    NARCIS (Netherlands)

    Herzfeld, T.; Weiss, Ch.

    2007-01-01

    A common finding of many analytical models is the existence of multiple equilibria of corruption. Countries characterized by the same economic, social and cultural background do not necessarily experience the same levels of corruption. In this article, we use Kernel Density Estimation techniques to

  8. Fast O(1) bilateral filtering using trigonometric range kernels

    CERN Document Server

    Chaudhury, Kunal Narayan; Unser, Michael

    2011-01-01

    It is well-known that spatial averaging can be realized (in space or frequency domain) using algorithms whose complexity does not depend on the size or shape of the filter. These fast algorithms are generally referred to as constant-time or O(1) algorithms in the image processing literature. Along with the spatial filter, the edge-preserving bilateral filter [bilateralFilter] involves an additional range kernel. This is used to restrict the averaging to those neighborhood pixels whose intensities are similar or close to that of the pixel of interest. The range kernel operates by acting on the pixel intensities. This makes the averaging process non-linear and computationally intensive, especially when the spatial filter is large. In this paper, we show how the O(1) averaging algorithms can be leveraged for realizing the bilateral filter in constant-time, by using trigonometric range kernels. This is done by generalizing the idea in [bilateralFilter_fast] of using polynomial range kernels. The class of trigonomet...
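
    The key identity is cos(a - b) = cos(a)cos(b) + sin(a)sin(b): with a cosine range kernel the nonlinear range weighting separates, and the whole bilateral filter reduces to a few spatial convolutions of pointwise-modulated images, each computable in constant time per pixel. A one-term sketch for a grayscale image (the paper uses a raised-cosine series for a better range profile, and scipy's gaussian_filter stands in here for a true O(1) spatial filter):

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def bilateral_cosine(img, sigma_s, T=None):
            """Bilateral filter with range kernel cos(g*(xp - xq)), g = pi/(2T)."""
            img = img.astype(float)
            if T is None:
                T = img.max() - img.min()            # bound on the local intensity range (assumption)
            g = np.pi / (2.0 * T)
            c, s = np.cos(g * img), np.sin(g * img)
            # cos(g(xp - xq)) = cos(g xp)cos(g xq) + sin(g xp)sin(g xq), so both the numerator
            # and the normalization are sums of spatially filtered, modulated images.
            num = c * gaussian_filter(c * img, sigma_s) + s * gaussian_filter(s * img, sigma_s)
            den = c * gaussian_filter(c, sigma_s) + s * gaussian_filter(s, sigma_s)
            return num / np.maximum(den, 1e-12)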

  9. Music emotion detection using hierarchical sparse kernel machines.

    Science.gov (United States)

    Chin, Yu-Hao; Lin, Chang-Hong; Siahaan, Ernestasia; Wang, Jia-Ching

    2014-01-01

    For music emotion detection, this paper presents a music emotion verification system based on hierarchical sparse kernel machines. With the proposed system, we intend to verify whether a music clip possesses the happiness emotion or not. There are two levels in the hierarchical sparse kernel machines. In the first level, a set of acoustical features is extracted, and principal component analysis (PCA) is implemented to reduce the dimension. The acoustical features are utilized to generate the first-level decision vector, which is a vector with each element being a significant value of an emotion. The significant values of eight main emotional classes are utilized in this paper. To calculate the significant value of an emotion, we construct its 2-class SVM with calm emotion as the global (non-target) side of the SVM. The probability distributions of the adopted acoustical features are calculated and the probability product kernel is applied in the first-level SVMs to obtain the first-level decision vector feature. In the second level of the hierarchical system, we merely construct a 2-class relevance vector machine (RVM) with happiness as the target side and other emotions as the background side of the RVM. The first-level decision vector is used as the feature with a conventional radial basis function kernel. The happiness verification threshold is built on the probability value. In the experimental results, the detection error tradeoff (DET) curve shows that the proposed system has a good performance on verifying whether a music clip reveals the happiness emotion.

  11. Bergman kernel function on Hua Construction of the second type

    Institute of Scientific and Technical Information of China (English)

    ZHANG Liyou

    2005-01-01

    In this paper, we give an explicit formula of the Bergman kernel function on Hua Construction of the second type when the parameters 1/p_1, ..., 1/p_{r-1} are positive integers and 1/p_r is an arbitrary positive real number.

  12. Predicting disease trait with genomic data: a composite kernel approach.

    Science.gov (United States)

    Yang, Haitao; Li, Shaoyu; Cao, Hongyan; Zhang, Chichen; Cui, Yuehua

    2016-06-02

    With the advancement of biotechniques, a vast amount of genomic data is generated with no limit. Predicting a disease trait based on these data offers a cost-effective and time-efficient way for early disease screening. Here we proposed a composite kernel partial least squares (CKPLS) regression model for quantitative disease trait prediction focusing on genomic data. It can efficiently capture nonlinear relationships among features compared with linear learning algorithms such as Least Absolute Shrinkage and Selection Operator or ridge regression. We proposed to optimize the kernel parameters and kernel weights with the genetic algorithm (GA). In addition to improved performance for parameter optimization, the proposed GA-CKPLS approach also has better learning capacity and generalization ability compared with single kernel-based KPLS method as well as other nonlinear prediction models such as the support vector regression. Extensive simulation studies demonstrated that GA-CKPLS had better prediction performance than its counterparts under different scenarios. The utility of the method was further demonstrated through two case studies. Our method provides an efficient quantitative platform for disease trait prediction based on increasing volume of omics data.
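
    A rough sketch of the composite-kernel idea (a conic combination of base kernels with weights tuned by a simple evolutionary loop); kernel ridge regression with a precomputed kernel stands in for kernel PLS, the base kernels, their parameters and the toy GA are illustrative, and X, y are hypothetical genomic features and a quantitative trait:

        import numpy as np
        from sklearn.metrics.pairwise import linear_kernel, polynomial_kernel, rbf_kernel
        from sklearn.kernel_ridge import KernelRidge
        from sklearn.model_selection import cross_val_score

        def composite(A, B, w):
            """Weighted sum of base kernels; w is nonnegative and sums to one."""
            Ks = [linear_kernel(A, B), polynomial_kernel(A, B, degree=2), rbf_kernel(A, B, gamma=0.1)]
            return sum(wi * Ki for wi, Ki in zip(w, Ks))

        def fitness(w, X, y):
            K = composite(X, X, w)
            model = KernelRidge(kernel="precomputed", alpha=1.0)   # stand-in for kernel PLS
            return cross_val_score(model, K, y, cv=5, scoring="r2").mean()

        rng = np.random.default_rng(0)
        pop = rng.dirichlet(np.ones(3), size=20)                   # population of weight vectors
        for _ in range(30):                                        # tiny (mu + lambda) evolutionary loop
            scores = np.array([fitness(w, X, y) for w in pop])
            parents = pop[np.argsort(scores)[-10:]]
            children = np.abs(parents + rng.normal(0.0, 0.1, parents.shape))
            children /= children.sum(axis=1, keepdims=True)
            pop = np.vstack([parents, children])
        best_w = max(pop, key=lambda w: fitness(w, X, y))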

  13. An Adaptive Genetic Association Test Using Double Kernel Machines.

    Science.gov (United States)

    Zhan, Xiang; Epstein, Michael P; Ghosh, Debashis

    2015-10-01

    Recently, gene set-based approaches have become very popular in gene expression profiling studies for assessing how genetic variants are related to disease outcomes. Since most genes are not differentially expressed, existing pathway tests considering all genes within a pathway suffer from considerable noise and power loss. Moreover, for a differentially expressed pathway, it is of interest to select important genes that drive the effect of the pathway. In this article, we propose an adaptive association test using double kernel machines (DKM), which can both select important genes within the pathway as well as test for the overall genetic pathway effect. This DKM procedure first uses the garrote kernel machines (GKM) test for the purposes of subset selection and then the least squares kernel machine (LSKM) test for testing the effect of the subset of genes. An appealing feature of the kernel machine framework is that it can provide a flexible and unified method for multi-dimensional modeling of the genetic pathway effect allowing for both parametric and nonparametric components. This DKM approach is illustrated with application to simulated data as well as to data from a neuroimaging genetics study.

  14. Evaluating Equating Results: Percent Relative Error for Chained Kernel Equating

    Science.gov (United States)

    Jiang, Yanlin; von Davier, Alina A.; Chen, Haiwen

    2012-01-01

    This article presents a method for evaluating equating results. Within the kernel equating framework, the percent relative error (PRE) for chained equipercentile equating was computed under the nonequivalent groups with anchor test (NEAT) design. The method was applied to two data sets to obtain the PRE, which can be used to measure equating…

  15. Online multiple kernel similarity learning for visual search.

    Science.gov (United States)

    Xia, Hao; Hoi, Steven C H; Jin, Rong; Zhao, Peilin

    2014-03-01

    Recent years have witnessed a number of studies on distance metric learning to improve visual similarity search in content-based image retrieval (CBIR). Despite their successes, most existing methods on distance metric learning are limited in two aspects. First, they usually assume the target proximity function follows the family of Mahalanobis distances, which limits their capacity of measuring similarity of complex patterns in real applications. Second, they often cannot effectively handle the similarity measure of multimodal data that may originate from multiple resources. To overcome these limitations, this paper investigates an online kernel similarity learning framework for learning kernel-based proximity functions which goes beyond the conventional linear distance metric learning approaches. Based on the framework, we propose a novel online multiple kernel similarity (OMKS) learning method which learns a flexible nonlinear proximity function with multiple kernels to improve visual similarity search in CBIR. We evaluate the proposed technique for CBIR on a variety of image data sets in which encouraging results show that OMKS outperforms the state-of-the-art techniques significantly.

  16. Regularized Kernel Forms of Minimum Squared Error Method

    Institute of Scientific and Technical Information of China (English)

    XU Jian-hua; ZHANG Xue-gong; LI Yan-da

    2006-01-01

    The minimum squared error (MSE) algorithm is one of the classical pattern recognition and regression analysis methods, whose objective is to minimize the squared error summation between the output of a linear function and the desired output. In this paper, the MSE algorithm is modified by using kernel functions satisfying the Mercer condition and a regularization technique; the nonlinear MSE algorithms based on kernels and a regularization term, that is, the regularized kernel forms of the MSE algorithm, are proposed. Their objective functions include the squared error summation between the output of a nonlinear kernel-based function and the desired output, plus a proper regularization term. The regularization technique can handle ill-posed problems, reduce the solution space, and control generalization. Three squared regularization terms are utilized in this paper. In accordance with the probabilistic interpretation of regularization terms, the differences among the three regularization terms are given in detail. Synthetic and real data are used to analyze the algorithm performance.
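
    For the most common choice of penalty (the squared RKHS norm of the kernel expansion), the regularized kernel MSE objective ||K a - y||^2 + lambda a' K a has the closed-form minimizer a = (K + lambda I)^{-1} y, i.e. the familiar kernel ridge solution. A minimal sketch under that assumption (the RBF kernel and parameter values are illustrative, not the paper's settings):

        import numpy as np

        def rbf_kernel(A, B, gamma):
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)

        def fit_kernel_mse(X, y, gamma=0.5, lam=1e-2):
            """Regularized kernel MSE with an RKHS-norm penalty: a = (K + lam*I)^{-1} y."""
            K = rbf_kernel(X, X, gamma)
            return np.linalg.solve(K + lam * np.eye(len(X)), y)

        def predict(X_train, alpha, X_new, gamma=0.5):
            return rbf_kernel(X_new, X_train, gamma) @ alpha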

  18. Viscoelastic behavior of maize kernel studied by dynamic mechanical analyzer.

    Science.gov (United States)

    Sheng, Shao-Yang; Wang, Li-Jun; Li, Dong; Mao, Zhi-Huai; Adhikari, Benu

    2014-11-04

    The creep recovery, stress relaxation, temperature dependence and frequency dependence of maize kernels were determined within a moisture content range of 11.9% to 25.9% (w/w) by using a dynamic mechanical analyzer. The 4-element Burgers model was found to adequately represent the creep behavior of the maize seeds (R^2 > 0.97). The 5-element Maxwell model was able to better predict the stress relaxation behavior of maize kernels than the 3-element Maxwell model. The Tg values for the maize kernels decreased with increased moisture content. For example, the Tg values were 114 °C and 65 °C at moisture content values of 11.9% (w/w) and 25.9% (w/w), respectively. The magnitudes of the loss moduli and loss tangent and their rates of change with frequency were highest at 20.7% and lowest at 11.9% moisture content. The maize kernel structure exhibited an A-type crystalline pattern and the microstructure was found to expand with increase in moisture content.
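
    As a worked example of the kind of fit reported above, the creep compliance of the 4-element Burgers model in one common parameterization is J(t) = 1/E1 + t/eta1 + (1/E2)(1 - exp(-t*E2/eta2)), which can be fitted to a measured creep curve by nonlinear least squares (t and J below are hypothetical measurement arrays and the starting values are arbitrary):

        import numpy as np
        from scipy.optimize import curve_fit

        def burgers_creep(t, E1, eta1, E2, eta2):
            """Creep compliance of the Burgers model: instantaneous elasticity,
            viscous flow, and delayed (retarded) elasticity in series."""
            return 1.0 / E1 + t / eta1 + (1.0 / E2) * (1.0 - np.exp(-t * E2 / eta2))

        # t: times [s], J: measured creep compliance [1/Pa] -- placeholder data
        params, cov = curve_fit(burgers_creep, t, J, p0=[1e8, 1e10, 1e8, 1e9], maxfev=20000)
        E1, eta1, E2, eta2 = params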

  19. PERI - Auto-tuning Memory Intensive Kernels for Multicore

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, David H; Williams, Samuel; Datta, Kaushik; Carter, Jonathan; Oliker, Leonid; Shalf, John; Yelick, Katherine; Bailey, David H

    2008-06-24

    We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to Sparse Matrix Vector Multiplication (SpMV), the explicit heat equation PDE on a regular grid (Stencil), and a lattice Boltzmann application (LBMHD). We explore one of the broadest sets of multicore architectures in the HPC literature, including the Intel Xeon Clovertown, AMD Opteron Barcelona, Sun Victoria Falls, and the Sony-Toshiba-IBM (STI) Cell. Rather than hand-tuning each kernel for each system, we develop a code generator for each kernel that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned kernel applications often achieve a better than 4X improvement compared with the original code. Additionally, we analyze a Roofline performance model for each platform to reveal hardware bottlenecks and software challenges for future multicore systems and applications.

  20. A novel kernel regularized nonhomogeneous grey model and its applications

    Science.gov (United States)

    Ma, Xin; Hu, Yi-sheng; Liu, Zhi-bin

    2017-07-01

    The nonhomogeneous grey model (NGM) is a novel tool for time series forecasting, which has attracted considerable research interest. However, the existing nonhomogeneous grey models may sometimes be inefficient at predicting complex nonlinear time series due to the linearity of the differential or difference equations on which these models are based. In order to enhance the accuracy and applicability of the NGM model, the kernel method from statistical learning theory has been utilized to build a novel kernel regularized nonhomogeneous grey model, abbreviated as the KRNGM model. The KRNGM model is represented by a differential equation which contains a nonlinear function of t. By constructing the regularized problem and using a kernel function which satisfies Mercer's condition, the parameter estimation of the KRNGM model only involves solving a set of linear equations, and the nonlinear function in the KRNGM model can be expressed as a linear combination of the Lagrangian multipliers and the selected kernel function, so that the KRNGM model can be solved numerically. Two case studies of petroleum production forecasting are carried out to illustrate the effectiveness of the KRNGM model, comparing it to the existing nonhomogeneous models. The results show that the KRNGM model significantly outperforms the existing NGM, ONGM and NDGM models.

  1. Kernel-Based Least Squares Temporal Difference With Gradient Correction.

    Science.gov (United States)

    Song, Tianheng; Li, Dazi; Cao, Liulin; Hirasawa, Kotaro

    2016-04-01

    A least squares temporal difference with gradient correction (LS-TDC) algorithm and its kernel-based version, kernel-based LS-TDC (KLS-TDC), are proposed as policy evaluation algorithms for reinforcement learning (RL). LS-TDC is derived from the TDC algorithm. Because TDC is derived by minimizing the mean-square projected Bellman error, LS-TDC has better convergence performance. The least squares technique is used to omit the step-size tuning of the original TDC and enhance robustness. For KLS-TDC, since the kernel method is used, feature vectors can be selected automatically. An approximate linear dependence analysis is performed to realize kernel sparsification. In addition, a policy iteration strategy motivated by KLS-TDC is constructed to solve control learning problems. The convergence and parameter sensitivities of both LS-TDC and KLS-TDC are tested through on-policy learning, off-policy learning, and control learning problems. Experimental results, as compared with a series of corresponding RL algorithms, demonstrate that both LS-TDC and KLS-TDC have better approximation and convergence performance, higher efficiency of sample usage, a smaller burden of parameter tuning, and less sensitivity to parameters.

  2. Sparse kernel orthonormalized PLS for feature extraction in large datasets

    DEFF Research Database (Denmark)

    Arenas-García, Jerónimo; Petersen, Kaare Brandt; Hansen, Lars Kai

    2006-01-01

    In this paper we are presenting a novel multivariate analysis method for large scale problems. Our scheme is based on a novel kernel orthonormalized partial least squares (PLS) variant for feature extraction, imposing sparsity constrains in the solution to improve scalability. The algorithm is te...

  3. Visual category recognition using Spectral Regression and Kernel Discriminant Analysis

    NARCIS (Netherlands)

    Tahir, M.A.; Kittler, J.; Mikolajczyk, K.; Yan, F.; van de Sande, K.E.A.; Gevers, T.

    2009-01-01

    Visual category recognition (VCR) is one of the most important tasks in image and video indexing. Spectral methods have recently emerged as a powerful tool for dimensionality reduction and manifold learning. Recently, Spectral Regression combined with Kernel Discriminant Analysis (SR-KDA) has been s

  4. Tensorial Kernel Principal Component Analysis for Action Recognition

    Directory of Open Access Journals (Sweden)

    Cong Liu

    2013-01-01

    Full Text Available We propose the Tensorial Kernel Principal Component Analysis (TKPCA) for dimensionality reduction and feature extraction from tensor objects, which extends the conventional Principal Component Analysis (PCA) in two perspectives: working directly with multidimensional data (tensors) in their native state and generalizing an existing linear technique to its nonlinear version by applying the kernel trick. Our method aims to remedy the shortcomings of multilinear subspace learning (tensorial PCA) developed recently in modelling the nonlinear manifold of tensor objects and brings together the desirable properties of kernel methods and tensor decompositions for significant performance gain when the data are multidimensional and nonlinear dependencies do exist. Our approach begins by formulating TKPCA as an optimization problem. Then, we develop a kernel function based on the Grassmann Manifold that can directly take tensorial representation as parameters instead of the traditional vectorized representation. Furthermore, a TKPCA-based tensor object recognition is also proposed for application to action recognition. Experiments with real action datasets show that the proposed method is insensitive to both noise and occlusion and performs well compared with state-of-the-art algorithms.

  5. Phase space formalisms of quantum mechanics with singular kernel

    CERN Document Server

    Sala, P R; Muga, J G

    1997-01-01

    The equivalence of the Rivier-Margenau-Hill and Born-Jordan-Shankara phase space formalisms to the conventional operator approach of quantum mechanics is demonstrated. It is shown that in spite of the presence of singular kernels the mappings relating phase space functions and operators back and forth are possible.

  6. The effect of apricot kernel flour incorporation on the ...

    African Journals Online (AJOL)

    STORAGESEVER

    2009-01-05

    Jan 5, 2009 ... noodles were examined by adding apricot kernel flour (AKF) to the noodle formulation at the .... Moisture and protein content (N%x5.7) of wheat flour was deter- .... relatively low level of hydrophilic compounds (Manthey et.

  7. Wrapping Brownian motion and heat kernels II: symmetric spaces

    CERN Document Server

    Maher, David G

    2010-01-01

    In this paper we extend our previous results on wrapping Brownian motion and heat kernels onto compact Lie groups to various symmetric spaces, where a global generalisation of Rouvière's formula and the e-function are considered. Additionally, we extend some of our results to complex Lie groups, and certain non-compact symmetric spaces.

  8. Acute and subchronic toxicity studies of kernel extract of Sclerocarya ...

    African Journals Online (AJOL)

    Administrator

    evaluation was done by oral feeding of the rats with the seed kernel extract daily at doses .... Significantly different from the control (P < 0.05) using one way analysis of variance. Subchronic ... acute toxicity did not produce any grossly negative behavioural changes such .... against garlic-induced oxidative stress. Journal of ...

  9. Evolutionary Optimization of Kernel Weights Improves Protein Complex Comembership Prediction

    NARCIS (Netherlands)

    Hulsman, M.; Reinders, M.J.T.; De Ridder, D.

    2008-01-01

    In recent years, more and more high-throughput data sources useful for protein complex prediction have become available (e.g., gene sequence, mRNA expression, and interactions). The integration of these different data sources can be challenging. Recently, it has been recognized that kernel-based cla

  10. Deep sequencing of RNA from ancient maize kernels

    DEFF Research Database (Denmark)

    Fordyce, Sarah Louise; Avila Arcos, Maria del Carmen; Rasmussen, Morten;

    2013-01-01

    The characterization of biomolecules from ancient samples can shed otherwise unobtainable insights into the past. Despite the fundamental role of transcriptomal change in evolution, the potential of ancient RNA remains unexploited - perhaps due to dogma associated with the fragility of RNA. We...... maize kernels. The results suggest that ancient seed transcriptomics may offer a powerful new tool with which to study plant domestication....

  11. Comparative Analysis of Kernel Methods for Statistical Shape Learning

    Science.gov (United States)

    2006-01-01

    successfully used by the machine learning community for pattern recognition and image denoising [14]. A Gaussian kernel was used by Cremers et al. [8] for...matrix M, where φ_i ∈ R^{Nd}. Using Singular Value Decomposition (SVD), the covariance matrix (1/n)MM^T is decomposed as UΣU^T = (1/n)MM^T, where U is a

  12. Unified heat kernel regression for diffusion, kernel smoothing and wavelets on manifolds and its application to mandible growth modeling in CT images.

    Science.gov (United States)

    Chung, Moo K; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K

    2015-05-01

    We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel method is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, the method is applied to characterize the localized growth pattern of mandible surfaces obtained in CT images between ages 0 and 20 by regressing the length of displacement vectors with respect to a surface template.
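
    In the eigenfunction formulation described above, smoothing a surface signal f with bandwidth t amounts to attenuating each Laplace-Beltrami component by exp(-lambda_i * t). A minimal sketch, assuming the eigenfunctions are already available as columns of Psi, sampled at the mesh vertices and orthonormal under the chosen vertex inner product (computing them, e.g. from a cotangent Laplacian, is omitted):

        import numpy as np

        def heat_kernel_smooth(f, Psi, lam, t):
            """Heat-kernel smoothing f_t = sum_i exp(-lam_i * t) <f, psi_i> psi_i.

            f: (n_vertices,) signal, Psi: (n_vertices, k) eigenfunctions,
            lam: (k,) eigenvalues, t: smoothing bandwidth.
            """
            coeffs = Psi.T @ f                        # expansion coefficients <f, psi_i>
            return Psi @ (np.exp(-lam * t) * coeffs)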

  13. Kernel-based least squares policy iteration for reinforcement learning.

    Science.gov (United States)

    Xu, Xin; Hu, Dewen; Lu, Xicheng

    2007-07-01

    In this paper, we present a kernel-based least squares policy iteration (KLSPI) algorithm for reinforcement learning (RL) in large or continuous state spaces, which can be used to realize adaptive feedback control of uncertain dynamic systems. By using KLSPI, near-optimal control policies can be obtained without much a priori knowledge on dynamic models of control plants. In KLSPI, Mercer kernels are used in the policy evaluation of a policy iteration process, where a new kernel-based least squares temporal-difference algorithm called KLSTD-Q is proposed for efficient policy evaluation. To keep the sparsity and improve the generalization ability of KLSTD-Q solutions, a kernel sparsification procedure based on approximate linear dependency (ALD) is performed. Compared to the previous works on approximate RL methods, KLSPI makes two progresses to eliminate the main difficulties of existing results. One is the better convergence and (near) optimality guarantee by using the KLSTD-Q algorithm for policy evaluation with high precision. The other is the automatic feature selection using the ALD-based kernel sparsification. Therefore, the KLSPI algorithm provides a general RL method with generalization performance and convergence guarantee for large-scale Markov decision problems (MDPs). Experimental results on a typical RL task for a stochastic chain problem demonstrate that KLSPI can consistently achieve better learning efficiency and policy quality than the previous least squares policy iteration (LSPI) algorithm. Furthermore, the KLSPI method was also evaluated on two nonlinear feedback control problems, including a ship heading control problem and the swing up control of a double-link underactuated pendulum called acrobot. Simulation results illustrate that the proposed method can optimize controller performance using little a priori information of uncertain dynamic systems. It is also demonstrated that KLSPI can be applied to online learning control by incorporating

  14. Compressive Strength of Concrete Containing Palm Kernel Shell Ash

    Directory of Open Access Journals (Sweden)

    FADELE Oluwadamilola A

    2016-12-01

    Full Text Available This study examined the influence of varying palm kernel shell ash content, as a supplementary cementitious material (SCM), at specified water/cement ratios and curing ages on the compressive strength of concrete cube samples. Palm kernel shell ash was used as a partial replacement for ordinary Portland cement (OPC) up to 30% at 5% intervals using mix ratio 1:2:4. River sand with particles passing a 4.75 mm BS sieve and crushed aggregate of 20 mm maximum size were used, while the palm kernel shell ash used was of particles passing through a 212 μm BS sieve. The compressive strength of the test cubes (100 mm) was tested at 5 different curing ages of 3, 7, 14, 28 and 56 days. The results showed that test cubes containing palm kernel shell ash gained strength over a longer curing period compared with ordinary Portland cement concrete samples, and that the strength varies with the percentage PKSA content in the cube samples. At 28 days, test cubes containing 5%, 10%, 15%, 20%, 25% and 30% PKSA achieved compressive strengths of 26.1 MPa, 22.53 MPa, 19.43 MPa, 20.43 MPa, 16.97 MPa and 16.5 MPa, compared to 29 MPa for ordinary Portland cement concrete cubes. It was concluded that for structural concrete works requiring a characteristic strength of 25 MPa, 5% palm kernel shell ash can effectively replace ordinary Portland cement, while up to 15% PKSA content can be used for concrete works requiring 20 MPa strength at 28 days.

  15. A kernel for open source drug discovery in tropical diseases.

    Directory of Open Access Journals (Sweden)

    Leticia Ortí

    Full Text Available BACKGROUND: Conventional patent-based drug development incentives work badly for the developing world, where commercial markets are usually small to non-existent. For this reason, the past decade has seen extensive experimentation with alternative R&D institutions ranging from private-public partnerships to development prizes. Despite extensive discussion, however, one of the most promising avenues-open source drug discovery-has remained elusive. We argue that the stumbling block has been the absence of a critical mass of preexisting work that volunteers can improve through a series of granular contributions. Historically, open source software collaborations have almost never succeeded without such "kernels". METHODOLOGY/PRINCIPAL FINDINGS: Here, we use a computational pipeline for: (i) comparative structure modeling of target proteins, (ii) predicting the localization of ligand binding sites on their surfaces, and (iii) assessing the similarity of the predicted ligands to known drugs. Our kernel currently contains 143 and 297 protein targets from ten pathogen genomes that are predicted to bind a known drug or a molecule similar to a known drug, respectively. The kernel provides a source of potential drug targets and drug candidates around which an online open source community can nucleate. Using NMR spectroscopy, we have experimentally tested our predictions for two of these targets, confirming one and invalidating the other. CONCLUSIONS/SIGNIFICANCE: The TDI kernel, which is being offered under the Creative Commons attribution share-alike license for free and unrestricted use, can be accessed on the World Wide Web at http://www.tropicaldisease.org. We hope that the kernel will facilitate collaborative efforts towards the discovery of new drugs against parasites that cause tropical diseases.

  16. Choosing parameters of kernel subspace LDA for recognition of face images under pose and illumination variations.

    Science.gov (United States)

    Huang, Jian; Yuen, Pong C; Chen, Wen-Sheng; Lai, Jian Huang

    2007-08-01

    This paper addresses the problem of automatically tuning multiple kernel parameters for the kernel-based linear discriminant analysis (LDA) method. The kernel approach has been proposed to solve face recognition problems under complex distribution by mapping the input space to a high-dimensional feature space. Some recognition algorithms such as the kernel principal components analysis, kernel Fisher discriminant, generalized discriminant analysis, and kernel direct LDA have been developed in the last five years. The experimental results show that the kernel-based method is a good and feasible approach to tackle the pose and illumination variations. One of the crucial factors in the kernel approach is the selection of kernel parameters, which highly affects the generalization capability and stability of the kernel-based learning methods. In view of this, we propose an eigenvalue-stability-bounded margin maximization (ESBMM) algorithm to automatically tune the multiple parameters of the Gaussian radial basis function kernel for the kernel subspace LDA (KSLDA) method, which is developed based on our previously developed subspace LDA method. The ESBMM algorithm improves the generalization capability of the kernel-based LDA method by maximizing the margin maximization criterion while maintaining the eigenvalue stability of the kernel-based LDA method. An in-depth investigation on the generalization performance on pose and illumination dimensions is performed using the YaleB and CMU PIE databases. The FERET database is also used for benchmark evaluation. Compared with the existing PCA-based and LDA-based methods, our proposed KSLDA method, with the ESBMM kernel parameter estimation algorithm, gives superior performance.

  17. Research on a Security Kernel Mechanism Supporting Dynamic Policies (支持动态策略的安全核机制的研究)

    Institute of Scientific and Technical Information of China (English)

    吴新勇; 熊光泽

    2002-01-01

    The security of an information system requires a secure operating system. A security kernel meets this requirement and provides the bedrock for operating system security. This paper identifies the deficiencies of traditional security kernels and presents a security kernel mechanism that supports policy flexibility through a simplified security interface. It optimizes performance with a reusable policy cache, provides a method to revoke granted permissions, and ensures the atomicity of revoking permissions and granting new permissions. Together, these refinements improve the flexibility, extensibility and portability of the security kernel.

  18. A Randomized Heuristic for Kernel Parameter Selection with Large-scale Multi-class Data

    DEFF Research Database (Denmark)

    Hansen, Toke Jansen; Abrahamsen, Trine Julie; Hansen, Lars Kai

    2011-01-01

    Over the past few years kernel methods have gained a tremendous amount of attention as existing linear algorithms can easily be extended to account for highly non-linear data in a computationally efficient manner. Unfortunately most kernels require careful tuning of intrinsic parameters.... In this contribution we investigate a novel randomized approach for kernel parameter selection in large-scale multi-class data. We fit a minimum enclosing ball to the class means in Reproducing Kernel Hilbert Spaces (RKHS), and use the radius as a quality measure of the space, defined by the kernel parameter. We apply...
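
    A rough sketch of the quality measure described above, under assumptions chosen for brevity: the Gram matrix of the class means is obtained from block averages of the kernel matrix, the means are embedded explicitly via an eigendecomposition, and the minimum-enclosing-ball radius is approximated with Badoiu-Clarkson core-set iterations. The kernel, the parameter grid and the X, y arrays are illustrative; how the radius is turned into a selection rule follows the (truncated) paper:

        import numpy as np
        from sklearn.metrics.pairwise import rbf_kernel

        def class_mean_gram(X, y, gamma):
            """G[c, d] = <mu_c, mu_d> in the RKHS, from block means of the kernel matrix."""
            K = rbf_kernel(X, X, gamma=gamma)
            idx = [np.where(y == c)[0] for c in np.unique(y)]
            return np.array([[K[np.ix_(a, b)].mean() for b in idx] for a in idx])

        def meb_radius(G, n_iter=200):
            """Approximate minimum-enclosing-ball radius of points with inner products G."""
            w, V = np.linalg.eigh(G)
            P = V * np.sqrt(np.clip(w, 0.0, None))      # explicit coordinates of the class means
            c = P.mean(axis=0)
            for k in range(1, n_iter + 1):              # Badoiu-Clarkson: step towards the farthest point
                far = P[np.argmax(np.linalg.norm(P - c, axis=1))]
                c = c + (far - c) / (k + 1)
            return np.linalg.norm(P - c, axis=1).max()

        # Radius of the ball enclosing the class means, per candidate kernel parameter.
        scores = {g: meb_radius(class_mean_gram(X, y, g)) for g in (0.01, 0.1, 1.0)}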

  19. Optimizing Kernel PCA Using Sparse Representation-Based Classifier for MSTAR SAR Image Target Recognition

    Directory of Open Access Journals (Sweden)

    Chuang Lin

    2013-01-01

    Full Text Available Different kernels cause various class discriminations owing to their different geometrical structures of the data in the feature space. In this paper, a method of kernel optimization by maximizing a measure of class separability in the empirical feature space with sparse representation-based classifier (SRC) is proposed to solve the problem of automatically choosing kernel functions and their parameters in kernel learning. The proposed method first adopts a so-called data-dependent kernel to generate an efficient kernel optimization algorithm. Then, a constrained optimization function using general gradient descent method is created to find combination coefficients varied with the input data. After that, optimized kernel PCA (KOPCA) is obtained via combination coefficients to extract features. Finally, the sparse representation-based classifier is used to perform pattern classification task. Experimental results on MSTAR SAR images show the effectiveness of the proposed method.

  20. Defective kernel mutants of maize. I. Genetic and lethality studies.

    Science.gov (United States)

    Neuffer, M G; Sheridan, W F

    1980-08-01

    A planting of 3,919 M(1) kernels from normal ears crossed by EMS-treated pollen produced 3,461 M(1) plants and 3,172 selfed ears. These plants yielded 2,477 (72%) total heritable changes; the selfed ears yielded 2,457 (78%) recessive mutants, including 855 (27%) recessive kernel mutants and 8 (0.23%) viable dominant mutants. The ratio of recessive to dominant mutants was 201:1. The average mutation frequency for four known loci was three per 3,172 genomes analyzed. The estimated total number of loci mutated was 535 and the estimated number of kernel mutant loci mutated was 285. Among the 855 kernel mutants, 432 had a nonviable embryo, and 59 germinated but had a lethal seedling. A sample of 194 of the latter two types was tested for heritability, lethality, chromosome arm location and endosperm-embryo interaction between mutant and nonmutant tissues in special hyper-hypoploid combinations produced by manipulation of B-A translocations. The selected 194 mutants were characterized and catalogued according to endosperm phenotype and investigated to determine their effects on the morphology and development of the associated embryo. The possibility of rescuing some of the lethal mutants by covering the mutant embryo with a normal endosperm was investigated. Ninety of these 194 mutants were located on 17 of the 18 chromosome arms tested. Nineteen of the located mutants were examined to determine the effect of having a normal embryo in the same kernel with a mutant endosperm, and vice versa, as compared to the expression observed in kernels with both embryo and endosperm in a mutant condition. In the first situation, for three of the 19 mutants, the mutant endosperm was less extreme (the embryo helped); for seven cases, the mutant endosperm was more extreme (the embryo hindered); and for nine cases, there was no change. In the reverse situation, for four cases the normal endosperm helped the mutant embryo; for 14 cases there was no change and one case was inconclusive.

  1. Memory Kernel in the Expertise of Chess Players

    CERN Document Server

    Schaigorodsky, Ana L; Billoni, Orlando V

    2015-01-01

    In this work we investigate a mechanism for the emergence of long-range time correlations observed in a chronologically ordered database of chess games. We analyze a modified Yule-Simon preferential growth process proposed by Cattuto et al., which includes memory effects by means of a probabilistic kernel. According to the Hurst exponent of different constructed time series from the record of games, artificially generated databases from the model exhibit similar long-range correlations. In addition, the inter-event time frequency distribution is well reproduced by the model for realistic parameter values. In particular, we find the inter-event time distribution properties to be correlated with the expertise of the chess players through the memory kernel extension. Our work provides new information about the strategies implemented by players with different levels of expertise, showing an interesting example of how popularities and long-range correlations build together during a collective learning process.

  2. Hydroxocobalamin treatment of acute cyanide poisoning from apricot kernels

    Science.gov (United States)

    Cigolini, Davide; Ricci, Giogio; Zannoni, Massimo; Codogni, Rosalia; De Luca, Manuela; Perfetti, Paola; Rocca, Giampaolo

    2011-01-01

    Clinical experience with hydroxocobalamin in acute cyanide poisoning via ingestion remains limited. This case concerns a 35-year-old mentally ill woman who consumed more than 20 apricot kernels. Published literature suggests each kernel would have contained cyanide concentrations ranging from 0.122 to 4.09 mg/g (average 2.92 mg/g). On arrival, the woman appeared asymptomatic with a raised pulse rate and slight metabolic acidosis. Forty minutes after admission (approximately 70 min postingestion), the patient experienced headache, nausea and dyspnoea, and was hypotensive, hypoxic and tachypnoeic. Following treatment with amyl nitrite and sodium thiosulphate, her methaemoglobin level was 10%. This prompted the administration of oxygen, which evoked a slight improvement in her vital signs. Hydroxocobalamin was then administered. After 24 h, she was completely asymptomatic with normalised blood pressure and other haemodynamic parameters. This case reinforces the safety and effectiveness of hydroxocobalamin in acute cyanide poisoning by ingestion. PMID:22694886

  3. Complete parameterization of piecewise-polynomial interpolation kernels.

    Science.gov (United States)

    Blu, Thierry; Thévenaz, Philippe; Unser, Michael

    2003-01-01

    Every now and then, a new design of an interpolation kernel appears in the literature. While interesting results have emerged, the traditional design methodology proves laborious and is riddled with very large systems of linear equations that must be solved analytically. We propose to ease this burden by providing an explicit formula that can generate every possible piecewise-polynomial kernel given its degree, its support, its regularity, and its order of approximation. This formula contains a set of coefficients that can be chosen freely and do not interfere with the four main design parameters; it is thus easy to tune the design to achieve any additional constraints that the designer may care for.

  4. Kernel-based fisher discriminant analysis for hyperspectral target detection

    Institute of Scientific and Technical Information of China (English)

    GU Yan-feng; ZHANG Ye; YOU Di

    2007-01-01

    A new method based on kernel Fisher discriminant analysis (KFDA) is proposed for target detection in hyperspectral images. KFDA combines the kernel mapping used in support vector machines with classical linear Fisher discriminant analysis (LFDA), and is well suited to nonlinear data such as hyperspectral images. Following the Fisher rule, which maximizes the ratio of the between-class to the within-class scatter, KFDA is used to obtain a set of optimal discriminant basis vectors in a high-dimensional feature space. All pixels in the hyperspectral images are projected onto these discriminant basis vectors and target detection is performed on the projection result. Numerical experiments are performed on hyperspectral data with 126 bands collected by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). The experimental results show the effectiveness of the proposed detection method and demonstrate that it copes well with small sample sizes and spectral variability in hyperspectral target detection.
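
    For a two-class setting (target versus background spectra), the kernel Fisher discriminant can be written entirely in terms of the kernel matrix. The sketch below follows the standard dual formulation with an RBF kernel on synthetic data; it is a generic illustration of KFDA, not the detector evaluated on AVIRIS in the paper.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kfda_two_class(X, y, gamma=0.5, reg=1e-3):
    """Two-class kernel Fisher discriminant in dual form: solve
    (N + reg*I) alpha = M_0 - M_1 for the dual coefficients alpha.
    A generic sketch, not the detector evaluated in the paper."""
    K = rbf_kernel(X, X, gamma)
    n = len(y)
    class_means = []
    within = np.zeros((n, n))
    for c in (0, 1):
        idx = np.where(y == c)[0]
        Kc = K[:, idx]                                  # n x l_c block of the Gram matrix
        class_means.append(Kc.mean(axis=1))             # class mean in the dual representation
        l = len(idx)
        within += Kc @ (np.eye(l) - np.ones((l, l)) / l) @ Kc.T   # within-class scatter
    return np.linalg.solve(within + reg * np.eye(n), class_means[0] - class_means[1])

def kfda_project(alpha, X_train, Z, gamma=0.5):
    """Project new samples Z onto the learned discriminant direction."""
    return rbf_kernel(Z, X_train, gamma) @ alpha

# toy usage: two Gaussian blobs standing in for target and background spectra
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (30, 5)), rng.normal(2, 1, (30, 5))])
y = np.array([0] * 30 + [1] * 30)
alpha = kfda_two_class(X, y)
scores = kfda_project(alpha, X, X)     # threshold these scores to declare detections
print(scores[:3], scores[-3:])
```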

  5. Robust visual tracking via adaptive kernelized correlation filter

    Science.gov (United States)

    Wang, Bo; Wang, Desheng; Liao, Qingmin

    2016-10-01

    Correlation filter based trackers have proved to be very efficient and robust in object tracking, with performance competitive with state-of-the-art trackers. In this paper, we propose a novel object tracking method named Adaptive Kernelized Correlation Filter (AKCF), which incorporates the Kernelized Correlation Filter (KCF) and the Structured Output Support Vector Machine (SOSVM) learning method in a collaborative and adaptive way, and can effectively handle severe object appearance changes at low computational cost. AKCF works by dynamically adjusting the learning rate of KCF and verifying the intermediate tracking result with an online SOSVM classifier. We also bring Color Names into this formulation, which effectively boosts performance owing to the rich feature information they encode. Experimental results on several challenging benchmark datasets show that our approach outperforms numerous state-of-the-art trackers.
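
    The KCF component at the core of this tracker is dual ridge regression over all cyclic shifts of a training patch, solved in the Fourier domain. The 1-D sketch below illustrates that machinery with a Gaussian kernel; it is a minimal illustration of plain KCF, not of the adaptive AKCF scheme, and the signal, label and bandwidth choices are assumptions.

```python
import numpy as np

def gaussian_correlation(x, z, sigma=0.2):
    """Kernel correlation k^{xz} between x and all cyclic shifts of z,
    computed in the Fourier domain (1-D version of the KCF formulation)."""
    xf, zf = np.fft.fft(x), np.fft.fft(z)
    cross = np.real(np.fft.ifft(np.conj(xf) * zf))       # circular cross-correlation
    d2 = np.maximum(0.0, np.dot(x, x) + np.dot(z, z) - 2 * cross)
    return np.exp(-d2 / (sigma ** 2 * len(x)))

def kcf_train(x, y, lam=1e-4, sigma=0.2):
    """Dual ridge regression over all cyclic shifts of x with regression
    target y (a minimal 1-D sketch of KCF training, not AKCF itself)."""
    kxx = gaussian_correlation(x, x, sigma)
    return np.fft.fft(y) / (np.fft.fft(kxx) + lam)        # dual coefficients in Fourier domain

def kcf_detect(alphaf, x, z, sigma=0.2):
    """Response map over all cyclic shifts of the test signal z."""
    kxz = gaussian_correlation(x, z, sigma)
    return np.real(np.fft.ifft(np.fft.fft(kxz) * alphaf))

# usage: learn a filter for a template, then locate it in a shifted copy
n = 64
x = np.exp(-0.5 * ((np.arange(n) - 20) / 3.0) ** 2)                       # template with a bump at 20
y = np.roll(np.exp(-0.5 * (np.arange(n) - n // 2) ** 2 / 4.0), n // 2)    # Gaussian label peaked at shift 0
alphaf = kcf_train(x, y)
z = np.roll(x, 7)                                                          # target shifted by 7 samples
print(np.argmax(kcf_detect(alphaf, x, z)))                                 # detected shift (about 7)
```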

  6. Kernel conditional quantile estimator under left truncation for functional regressors

    Directory of Open Access Journals (Sweden)

    Nacéra Helal

    2016-01-01

    Full Text Available Let \(Y\) be a random real response which is subject to left-truncation by another random variable \(T\). In this paper, we study kernel conditional quantile estimation when the covariable \(X\) takes values in an infinite-dimensional space. A kernel conditional quantile estimator is given under some regularity conditions, among them a small-ball probability condition, and its strong uniform almost-sure convergence rate is established. Some special cases are studied to show how our work extends results given in the literature. Simulations are presented to lend further support to the theoretical results and to assess the behavior of the estimator for finite samples with different truncation rates and sample sizes.
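
    The basic estimator behind such results is a kernel-weighted empirical conditional distribution function inverted at level p. The sketch below uses a scalar covariate and a Gaussian kernel, and omits both the left-truncation correction and the functional-covariate semi-metric treated in the paper; it is only meant to show the mechanics of a kernel conditional quantile.

```python
import numpy as np

def kernel_conditional_quantile(x0, X, Y, p=0.5, h=0.5):
    """Nadaraya-Watson estimate of the conditional p-quantile of Y given X = x0.
    Minimal sketch: scalar covariate, Gaussian kernel, no truncation adjustment."""
    w = np.exp(-0.5 * ((X - x0) / h) ** 2)      # kernel weights around x0
    w = w / w.sum()
    order = np.argsort(Y)
    cdf = np.cumsum(w[order])                   # estimated conditional CDF at the sorted Y values
    idx = min(np.searchsorted(cdf, p), len(Y) - 1)
    return Y[order][idx]                        # smallest y with F_hat(y | x0) >= p

# usage on synthetic heteroscedastic data
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, 500)
Y = np.sin(2 * np.pi * X) + rng.normal(0, 0.2 + 0.3 * X)
print(kernel_conditional_quantile(0.5, X, Y, p=0.9))
```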

  7. Kernel PCA for HMM-Based Cursive Handwriting Recognition

    Science.gov (United States)

    Fischer, Andreas; Bunke, Horst

    In this paper, we propose Kernel Principal Component Analysis as a feature selection method for offline cursive handwriting recognition based on Hidden Markov Models. In contrast to formerly used feature selection methods, namely standard Principal Component Analysis and Independent Component Analysis, nonlinearity is achieved by making use of a radial basis function kernel. In an experimental study we demonstrate that the proposed nonlinear method has a great potential to improve cursive handwriting recognition systems and is able to significantly outperform linear feature selection methods. We consider two diverse datasets of isolated handwritten words for the experimental evaluation, the first consisting of modern English words, and the second consisting of medieval Middle High German words.
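
    As a concrete illustration of the feature-extraction step, the snippet below applies kernel PCA with an RBF kernel to a matrix of frame-wise feature vectors of the kind that would be fed to an HMM recognizer. The random placeholder data, the bandwidth and the number of components are assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

# Nonlinear feature extraction with an RBF kernel, standing in for the
# sliding-window handwriting features used in the paper.
rng = np.random.default_rng(0)
frames = rng.normal(size=(200, 32))            # 200 analysis windows, 32 raw features each

kpca = KernelPCA(n_components=10, kernel="rbf", gamma=0.05)
reduced = kpca.fit_transform(frames)           # 200 x 10 nonlinear components for the HMM
print(reduced.shape)
```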

  8. Packet Classification using Support Vector Machines with String Kernels

    Directory of Open Access Journals (Sweden)

    Sarthak Munshi

    2016-08-01

    Full Text Available Since the inception of the internet, many methods have been devised to keep untrusted and malicious packets away from a user's system. Traffic/packet classification can be used as an important tool to detect intrusion into the system. Using machine learning as an efficient statistics-based approach for classifying packets is a novel method in practice today. This paper emphasizes using an advanced string kernel method within a support vector machine to classify packets. A related problem has been studied using machine learning [2], but the research mentioned in that paper is not up to date and does not account for modern string kernels, which are much more efficient. My work extends their research by introducing different approaches to classifying encrypted and unencrypted traffic/packets.

  9. NONLINEAR DATA RECONCILIATION METHOD BASED ON KERNEL PRINCIPAL COMPONENT ANALYSIS

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    In industrial process settings, principal component analysis (PCA) is a common method for data reconciliation. However, PCA is sometimes unsuitable for nonlinear feature analysis and is therefore of limited use for nonlinear industrial processes. Kernel PCA (KPCA) is an extension of PCA that can be used for nonlinear feature analysis. A nonlinear data reconciliation method based on KPCA is proposed. The basic idea of this method is that the original data are first mapped to a high-dimensional feature space by a nonlinear function and PCA is carried out in that feature space. Nonlinear feature analysis is then performed and the data are reconstructed using the kernel. The KPCA-based data reconciliation method is applied to a ternary distillation column. Simulation results show that the method can filter the noise in measurements of a nonlinear process and that the reconciled data represent the true information of the nonlinear process.
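
    The map-project-reconstruct idea can be sketched with an off-the-shelf KPCA implementation that supports pre-image reconstruction. The example below denoises measurements from a synthetic nonlinear process; the RBF bandwidth, the number of retained components and the synthetic data are assumptions standing in for the ternary distillation column of the paper.

```python
import numpy as np
from sklearn.decomposition import KernelPCA

# KPCA-based reconciliation sketch: map noisy measurements into the kernel
# feature space, keep the leading components, and reconstruct (denoise) the
# data through the learned pre-image mapping.
rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 300)
clean = np.column_stack([np.cos(t), np.sin(t), np.cos(t) * np.sin(t)])   # nonlinear process states
noisy = clean + rng.normal(0, 0.05, clean.shape)                         # measurements with noise

kpca = KernelPCA(n_components=2, kernel="rbf", gamma=2.0,
                 fit_inverse_transform=True, alpha=0.1)
scores = kpca.fit_transform(noisy)
reconciled = kpca.inverse_transform(scores)                              # filtered estimates

print("measurement MSE:", np.mean((noisy - clean) ** 2),
      "reconciled MSE:", np.mean((reconciled - clean) ** 2))
```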

  10. Segmented gray-code kernels for fast pattern matching.

    Science.gov (United States)

    Ouyang, Wanli; Zhang, Renqi; Cham, Wai-Kuen

    2013-04-01

    The gray-code kernels (GCK) family, which has the Walsh Hadamard transform on sliding windows as a member, is a family of kernels that can perform image analysis efficiently using a fast algorithm such as the GCK algorithm. The GCK has been successfully used for pattern matching. In this paper, we first propose the G4-GCK algorithm, which is more efficient than the previous algorithm in computing GCK. The G4-GCK algorithm requires four additions per pixel for three basis vectors, independent of transform size and dimension. Based on the G4-GCK algorithm, we then propose the segmented GCK (SegGCK). By segmenting the input data into L(s) parts, the SegGCK requires only four additions per pixel for 3L(s) basis vectors. Experimental results show that the proposed algorithm can significantly accelerate the full-search-equivalent pattern matching process and outperforms state-of-the-art methods.

  11. Human Gait Recognition Based on Kernel PCA Using Projections

    Institute of Scientific and Technical Information of China (English)

    Murat Ekinci; Murat Aykut

    2007-01-01

    This paper presents a novel approach for human identification at a distance using gait recognition. Recognition of a person from their gait is a biometric of increasing interest. The proposed work introduces a nonlinear machine learning method, kernel Principal Component Analysis (PCA), to extract gait features from silhouettes for individual recognition. The binarized silhouette of a moving object is first represented by four 1-D signals, the basic image features called distance vectors. A Fourier transform is performed to achieve translation invariance for the gait patterns accumulated from silhouette sequences extracted under different circumstances. Kernel PCA is then used to extract higher-order relations among the gait patterns for subsequent recognition. A fusion strategy is finally executed to produce a final decision. The experiments are carried out on the CMU and USF gait databases and are reported for different numbers of training gait cycles.

  12. Some physical properties of ginkgo nuts and kernels

    Science.gov (United States)

    Ch'ng, P. E.; Abdullah, M. H. R. O.; Mathai, E. J.; Yunus, N. A.

    2013-12-01

    Some data on the physical properties of ginkgo nuts at a moisture content of 45.53% (±2.07) (wet basis) and of their kernels at 60.13% (±2.00) (wet basis) are presented in this paper. The data comprise estimates of the mean length, width, and thickness, the geometric mean diameter, sphericity, aspect ratio, unit mass, surface area, volume, true density, bulk density, and porosity. The coefficient of static friction for nuts and kernels was determined using plywood, glass, rubber, and galvanized steel sheet. Such data are essential in food engineering, especially for the design and development of machines and equipment for processing and handling agricultural products.

  13. Consistency of the group Lasso and multiple kernel learning

    CERN Document Server

    Bach, Francis

    2007-01-01

    We consider the least-square regression problem with regularization by a block 1-norm, i.e., a sum of Euclidean norms over spaces of dimensions larger than one. This problem, referred to as the group Lasso, extends the usual regularization by the 1-norm where all spaces have dimension one, where it is commonly referred to as the Lasso. In this paper, we study the asymptotic model consistency of the group Lasso. We derive necessary and sufficient conditions for the consistency of group Lasso under practical assumptions, such as model misspecification. When the linear predictors and Euclidean norms are replaced by functions and reproducing kernel Hilbert norms, the problem is usually referred to as multiple kernel learning and is commonly used for learning from heterogeneous data sources and for non linear variable selection. Using tools from functional analysis, and in particular covariance operators, we extend the consistency results to this infinite dimensional case and also propose an adaptive scheme to obt...
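
    The finite-dimensional group Lasso objective studied here can be solved with a simple proximal-gradient loop whose proximal step is block soft-thresholding. The sketch below is a generic ISTA-style solver on synthetic data, shown only to make the estimator concrete; it is not the paper's analysis, and the regularization level and group structure are arbitrary.

```python
import numpy as np

def group_lasso(X, y, groups, lam=1.0, step=None, n_iter=500):
    """Proximal-gradient (ISTA) solver for the group Lasso
        min_w 0.5 * ||y - Xw||^2 + lam * sum_g ||w_g||_2.
    `groups` is a list of index arrays, one per block."""
    n, d = X.shape
    w = np.zeros(d)
    if step is None:
        step = 1.0 / np.linalg.norm(X, 2) ** 2            # 1/L with L = ||X^T X||_2
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y)                          # gradient of the quadratic term
        z = w - step * grad
        for g in groups:                                  # block soft-thresholding
            norm_g = np.linalg.norm(z[g])
            if norm_g > 0:
                w[g] = max(0.0, 1.0 - step * lam / norm_g) * z[g]
            else:
                w[g] = 0.0
    return w

# usage: only the first of three blocks is truly active
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 9))
w_true = np.concatenate([[1.5, -2.0, 1.0], np.zeros(6)])
y = X @ w_true + rng.normal(0, 0.1, 100)
groups = [np.arange(0, 3), np.arange(3, 6), np.arange(6, 9)]
print(np.round(group_lasso(X, y, groups, lam=5.0), 2))    # inactive blocks shrink toward zero
```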

  14. Hardness methods for testing maize kernels.

    Science.gov (United States)

    Fox, Glen; Manley, Marena

    2009-07-08

    Maize is a highly important crop in many countries around the world, both through the sale of the maize crop to domestic processors and the subsequent production of maize products, and as a staple food on subsistence farms in developing countries. In many countries there have been long-term research efforts to develop a suitable hardness method that could assist the maize industry in improving processing efficiency and possibly provide a quality specification for maize growers, which could attract a premium. This paper focuses specifically on hardness and reviews a number of methodologies used internationally, as well as important biochemical aspects of maize that contribute to hardness. Numerous foods are produced from maize, and hardness has been described as having an impact on food quality. However, the basis and measurement of hardness are very general and would apply to any use of maize from any country. From the published literature, it would appear that one of the simpler methods used to measure hardness is a grinding step followed by a sieving step using multiple sieve sizes. This allows the range in hardness within a sample, as well as the average particle size and/or coarse/fine ratio, to be calculated. Any of these parameters could easily be used as reference values for the development of near-infrared (NIR) spectroscopy calibrations. The development of precise NIR calibrations will provide an excellent tool for breeders, handlers, and processors to deliver specific cultivars in the case of growers and bulk loads in the case of handlers, thereby ensuring the most efficient use of maize by domestic and international processors. This paper also considers previous research describing the biochemical aspects of maize that have been related to maize hardness. Both starch and protein affect hardness, with most research focusing on the storage proteins (zeins); both the content and composition of the zein fractions affect hardness.

  15. Biological control of Aspergillus flavus growth and subsequent ...

    African Journals Online (AJOL)

    ONOS

    2010-07-05

    Jul 5, 2010 ... aflatoxin contamination of developing corn kernels. In: Diener UL, Asquith RL, Dickens JW, ... Saccharomyces cerevisiae and lactic acid bacteria as potential mycotoxin decontaminating agents. Trends Food Sci. Technol.

  16. Music Emotion Detection Using Hierarchical Sparse Kernel Machines

    Directory of Open Access Journals (Sweden)

    Yu-Hao Chin

    2014-01-01

    Full Text Available For music emotion detection, this paper presents a music emotion verification system based on hierarchical sparse kernel machines. With the proposed system, we intend to verify whether a music clip conveys the happiness emotion or not. There are two levels in the hierarchical sparse kernel machines. In the first level, a set of acoustical features is extracted and principal component analysis (PCA) is applied to reduce the dimension. The acoustical features are used to generate the first-level decision vector, a vector in which each element is the significance value of an emotion. The significance values of eight main emotional classes are used in this paper. To calculate the significance value of an emotion, we construct its two-class SVM with the calm emotion as the global (non-target) side of the SVM. The probability distributions of the adopted acoustical features are calculated, and the probability product kernel is applied in the first-level SVMs to obtain the first-level decision vector feature. In the second level of the hierarchical system, we construct a single two-class relevance vector machine (RVM) with happiness as the target side and the other emotions as the background side of the RVM. The first-level decision vector is used as the feature with a conventional radial basis function kernel. The happiness verification threshold is set on the probability value. In the experimental results, the detection error tradeoff (DET) curve shows that the proposed system performs well in verifying whether a music clip conveys the happiness emotion.

  17. Bounded symbols and reproducing kernel thesis for truncated Toeplitz operators

    CERN Document Server

    Baranov, A; Fricain, Emmanuel; Mashreghi, Javad; Timotin, Dan

    2009-01-01

    Compressions of Toeplitz operators to coinvariant subspaces of $H^2$ are called \\emph{truncated Toeplitz operators}. We study two questions related to these operators. The first, raised by Sarason, is whether boundedness of the operator implies the existence of a bounded symbol; the second is the reproducing kernel thesis. We show that in general the answer to the first question is negative, and we exhibit some classes of spaces for which the answers to both questions are positive.

  18. A method to measure the rice kernel chalkiness objectively

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    Rice kernel chalkiness is an important quality characteristic. Chalkiness, the opaque portion of the grain endosperm, has traditionally been measured by subjective visual methods, both domestically and internationally. Results obtained by such methods are subjective, inaccurate, and unstable. This research is intended to establish a complete and efficient chalkiness measuring system composed of the Chalkiness 1.0 software, a computer, and a scanner or a digital camera.

  19. Instantaneous Bethe-Salpeter Kernel for the Lightest Pseudoscalar Mesons

    CERN Document Server

    Lucha, Wolfgang

    2016-01-01

    Starting from a phenomenologically successful, numerical solution of the Dyson-Schwinger equation that governs the quark propagator, we reconstruct in detail the interaction kernel that has to enter the instantaneous approximation to the Bethe-Salpeter equation to allow us to describe the lightest pseudoscalar mesons as quark-antiquark bound states exhibiting the (almost) masslessness necessary for them to be interpretable as the (pseudo) Goldstone bosons related to the spontaneous chiral symmetry breaking of quantum chromodynamics.

  20. Extreme values and kernel estimates of point processes boundaries

    CERN Document Server

    Girard, Stéphane

    2011-01-01

    We present a method for estimating the edge of a two-dimensional bounded set, given a finite random set of points drawn from the interior. The estimator is based both on a Parzen-Rosenblatt kernel and extreme values of point processes. We give conditions for various kinds of convergence and asymptotic normality. We propose a method of reducing the negative bias and edge effects, illustrated by a simulation.

  1. Benchmarking NWP Kernels on Multi- and Many-core Processors

    Science.gov (United States)

    Michalakes, J.; Vachharajani, M.

    2008-12-01

    Increased computing power for weather, climate, and atmospheric science has provided direct benefits for defense, agriculture, the economy, the environment, and public welfare and convenience. Today, very large clusters with many thousands of processors are allowing scientists to move forward with simulations of unprecedented size. But time-critical applications such as real-time forecasting or climate prediction need strong scaling: faster nodes and processors, not more of them. Moreover, the need for good cost-performance has never been greater, both in terms of performance per watt and per dollar. For these reasons, the new generations of multi- and many-core processors being mass produced for commercial IT and "graphical computing" (video games) are being scrutinized for their ability to exploit the abundant fine-grain parallelism in atmospheric models. We present results of our work to date identifying key computational kernels within the dynamics and physics of a large community NWP model, the Weather Research and Forecast (WRF) model. We benchmark and optimize these kernels on several different multi- and many-core processors. The goals are to (1) characterize and model performance of the kernels in terms of computational intensity, data parallelism, memory bandwidth pressure, memory footprint, etc., (2) enumerate and classify effective strategies for coding and optimizing for these new processors, (3) assess difficulties and opportunities for tool or higher-level language support, and (4) establish a continuing set of kernel benchmarks that can be used to measure and compare effectiveness of current and future designs of multi- and many-core processors for weather and climate applications.

  2. Deproteinated palm kernel cake-derived oligosaccharides: A preliminary study

    Science.gov (United States)

    Fan, Suet Pin; Chia, Chin Hua; Fang, Zhen; Zakaria, Sarani; Chee, Kah Leong

    2014-09-01

    A preliminary study on the microwave-assisted hydrolysis of deproteinated palm kernel cake (DPKC) to produce oligosaccharides using succinic acid was performed. Three important factors, i.e., temperature, acid concentration and reaction time, were selected for the hydrolysis processes. Results showed that the highest yield of DPKC-derived oligosaccharides was obtained at 170 °C, 0.2 N succinic acid and a reaction time of 20 min.

  3. Reproducing Kernel Method for Fractional Riccati Differential Equations

    Directory of Open Access Journals (Sweden)

    X. Y. Li

    2014-01-01

    Full Text Available This paper is devoted to a new numerical method for fractional Riccati differential equations. The method combines the reproducing kernel method and the quasilinearization technique. Its main advantage is that it can produce good approximations in a larger interval, rather than a local vicinity of the initial position. Numerical results are compared with some existing methods to show the accuracy and effectiveness of the present method.

  4. Landslide: Systematic Dynamic Race Detection in Kernel Space

    Science.gov (United States)

    2012-05-01

    provided the most detailed criticism, many of my friends helped me polish my work: Wolfgang Richter, Carlo Angiuli, Joshua Wise, Ryan Pearl, and... Fritz, which is a stress testing wrapper around a suite of test programs. Some of these tests are Landslide-friendly (Section 6.4.1), and some are... themselves stress tests. The 15-410 course staff also hand-grade every kernel, in part because Fritz is known for only catching race conditions at random

  5. An Investigation of Kernel Data Attacks and Countermeasures

    Science.gov (United States)

    2017-02-14

    Final Technical Report for "An Investigation of Kernel Data Attacks and Countermeasures", Award No. N00014-15-l-2136. Haining Wang, Department of Electrical and Computer Engineering, University of Delaware.

  6. Cassane diterpenes from the seed kernels of Caesalpinia sappan.

    Science.gov (United States)

    Nguyen, Hai Xuan; Nguyen, Nhan Trung; Dang, Phu Hoang; Thi Ho, Phuoc; Nguyen, Mai Thanh Thi; Van Can, Mao; Dibwe, Dya Fita; Ueda, Jun-Ya; Awale, Suresh

    2016-02-01

    Eight structurally diverse cassane diterpenes, named tomocins A-H, were isolated from the seed kernels of Vietnamese Caesalpinia sappan Linn. Their structures were determined by extensive NMR and CD spectroscopic analysis. Among the isolated compounds, tomocin A and phanginin A, F, and H exhibited mild preferential cytotoxicity against PANC-1 human pancreatic cancer cells under nutrient-deprived conditions, without causing toxicity under normal nutrient-rich conditions.

  7. Universal Approximation of Markov Kernels by Shallow Stochastic Feedforward Networks

    OpenAIRE

    Montufar, Guido

    2015-01-01

    We establish upper bounds for the minimal number of hidden units for which a binary stochastic feedforward network with sigmoid activation probabilities and a single hidden layer is a universal approximator of Markov kernels. We show that each possible probabilistic assignment of the states of $n$ output units, given the states of $k\\geq1$ input units, can be approximated arbitrarily well by a network with $2^{k-1}(2^{n-1}-1)$ hidden units.

  8. Heat-kernel coefficients for oblique boundary conditions

    CERN Document Server

    Dowker, John S; Kirsten, Klaus

    1997-01-01

    We calculate the heat-kernel coefficients, up to $a_2$, for a U(1) bundle on the 4-Ball for boundary conditions which are such that the normal derivative of the field at the boundary is related to a first-order operator in boundary derivatives acting on the field. The results are used to place restrictions on the general forms of the coefficients. In the specific case considered, there can be a breakdown of ellipticity.

  9. Harmonic analysis with respect to heat kernel measure

    CERN Document Server

    Hall, B C

    2000-01-01

    I review certain results in harmonic analysis for systems whose configuration space is a compact Lie group. The results described involve a heat kernel measure, which plays the same role as a Gaussian measure on Euclidean space. The main constructions are group analogs of the Hermite expansion, the Segal-Bargmann transform, and the Taylor expansion. The results are related to geometric quantization, to stochastic analysis, and to the quantization of 1+1-dimensional Yang-Mills theory.

  10. Learning the Parameters of Determinantal Point Process Kernels

    OpenAIRE

    Affandi, Raja Hafiz; Fox, Emily; Adams, Ryan Prescott; Taskar, Ben

    2014-01-01

    Determinantal point processes (DPPs) are well-suited for modeling repulsion and have proven useful in many applications where diversity is desired. While DPPs have many appealing properties, such as efficient sampling, learning the parameters of a DPP is still considered a difficult problem due to the non-convex nature of the likelihood function. In this paper, we propose using Bayesian methods to learn the DPP kernel parameters. These methods are applicable in large-scale and continuous DPP ...
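
    The non-convexity mentioned above enters through the DPP log-likelihood. The snippet below evaluates that likelihood for a fixed L-ensemble kernel and a set of observed subsets; it only illustrates the objective being optimized, not the Bayesian learning procedure proposed in the paper.

```python
import numpy as np

def dpp_log_likelihood(L, subsets):
    """Log-likelihood of observed subsets under an L-ensemble DPP:
        log P(A) = log det(L_A) - log det(L + I).
    L is a fixed PSD kernel matrix here, not a parameterized one."""
    n = L.shape[0]
    _, log_norm = np.linalg.slogdet(L + np.eye(n))
    ll = 0.0
    for A in subsets:
        _, logdet_sub = np.linalg.slogdet(L[np.ix_(A, A)])
        ll += logdet_sub - log_norm
    return ll

# usage with a small RBF similarity kernel over 5 items
pts = np.linspace(0, 1, 5)[:, None]
L = np.exp(-((pts - pts.T) ** 2) / 0.1)
print(dpp_log_likelihood(L, subsets=[[0, 2, 4], [1, 3]]))
```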

  11. New classes of domains with explicit Bergman kernel

    Institute of Scientific and Technical Information of China (English)

    YIN Weiping; LU Keping; Roos GUY

    2004-01-01

    We introduce two classes of egg type domains, built on general bounded symmetric domains, for which we obtain the Bergman kernel in explicit formulas. As an auxiliary tool, we compute the integral of complex powers of the generic norm on a bounded symmetric domain using the well-known integral of Selberg. This generalizes matrix integrals of Hua and leads to a special polynomial with integer or half-integer coefficients attached to each irreducible bounded symmetric domain.

  12. A Comparative Study of Kernel and Robust Canonical Correlation Analysis

    Directory of Open Access Journals (Sweden)

    Ashad M. Alam

    2010-02-01

    Full Text Available A number of measures of the canonical correlation coefficient are now used in multimedia-related fields such as object recognition, image segmentation, facial expression recognition and pattern recognition. Some robust forms of the classical canonical correlation coefficient have recently been introduced to address robustness in the presence of outliers and departures from normality. A few kernels have also been used in canonical analysis to capture nonlinear relationships in the data space that are linear in some higher-dimensional feature space. However, not much work has been done to investigate their relative performance through (i) simulation, from the viewpoint of sensitivity and breakdown analysis, and (ii) real data sets. In this paper an attempt has been made to compare the performance of kernel canonical correlation coefficients (Gaussian, Laplacian and polynomial kernel functions) with that of robust and classical canonical correlation measures using simulation with five sample sizes (50, 500, 1000, 1500 and 2000), influence functions, breakdown points, several real data sets and a multi-modal data set, focusing on the specific case of segmented images with associated text. We investigate the bias, mean square error (MISE), qualitative robustness index (RI) and sensitivity curve of each estimator under a variety of situations, and also employ box plots and scatter plots of canonical variates to judge their performance. We observe that the class of kernel estimators generally performs better than the classical and robust estimators, and that the kernel estimator with the Laplacian function shows the best performance for large sample sizes, while breakdown is high in the case of nonlinear data.

  13. Discriminating between HuR and TTP binding sites using the k-spectrum kernel method

    Science.gov (United States)

    Goldberg, Debra S.; Dowell, Robin

    2017-01-01

    Background The RNA binding proteins (RBPs) human antigen R (HuR) and Tristetraprolin (TTP) are known to exhibit competitive binding but have opposing effects on the bound messenger RNA (mRNA). How cells discriminate between the two proteins is an interesting problem. Machine learning approaches, such as support vector machines (SVMs), may be useful in the identification of discriminative features. However, this method has yet to be applied to studies of RNA binding protein motifs. Results Applying the k-spectrum kernel to a support vector machine (SVM), we first verified the published binding sites of both HuR and TTP. Additional feature engineering highlighted the U-rich binding preference of HuR and the AU-rich binding preference of TTP. Domain adaptation along with multi-task learning was used to predict the common binding sites. Conclusion The distinction between HuR and TTP binding appears to lie in subtle content features. HuR prefers strongly U-rich sequences whereas TTP prefers AU-rich sequences; with increasing A content, the sequences are more likely to be bound only by TTP. Our model is consistent with competitive binding of the two proteins, particularly at intermediate AU-balanced sequences. This suggests that fine changes in the A/U balance within an untranslated region (UTR) can alter the binding and subsequent stability of the message. Both feature engineering and domain adaptation emphasized the extent to which these proteins recognize similar general sequence features. This work suggests that the k-spectrum kernel method could be useful when studying RNA binding proteins, and that domain adaptation techniques such as feature augmentation could be employed particularly when examining RBPs with similar binding preferences. PMID:28333956
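
    The k-spectrum kernel itself is simply the inner product of k-mer count vectors, which can be passed to an SVM as a precomputed Gram matrix. The toy sketch below uses made-up U-rich and AU-rich sequences to stand in for the binding sites analyzed in the paper; the sequences, the value of k and the labels are all illustrative assumptions.

```python
import numpy as np
from itertools import product
from sklearn.svm import SVC

def spectrum_features(seqs, k=3, alphabet="ACGU"):
    """Count k-mer occurrences so that the linear inner product of the
    resulting vectors equals the k-spectrum kernel."""
    kmers = {"".join(p): i for i, p in enumerate(product(alphabet, repeat=k))}
    X = np.zeros((len(seqs), len(kmers)))
    for row, s in enumerate(seqs):
        for j in range(len(s) - k + 1):
            X[row, kmers[s[j:j + k]]] += 1
    return X

# toy sequences standing in for HuR-like (U-rich) vs TTP-like (AU-rich) sites
u_rich = ["UUUUUCUUUU", "UUUUUUUAUU", "CUUUUUUUUU"]
au_rich = ["UAUUUAUUUA", "AUUUAUUUAU", "UUAUUUAUAU"]
X = spectrum_features(u_rich + au_rich, k=3)
y = [0, 0, 0, 1, 1, 1]

clf = SVC(kernel="precomputed")
K = X @ X.T                                   # k-spectrum Gram matrix on the training set
clf.fit(K, y)
K_test = spectrum_features(["UUUUAUUUAU"], k=3) @ X.T   # kernel between test and training sequences
print(clf.predict(K_test))
```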

  14. KERNEL WORDS AND GAP SEQUENCE OF THE TRIBONACCI SEQUENCE

    Institute of Scientific and Technical Information of China (English)

    Yuke HUANG; Zhiying WEN

    2016-01-01

    In this paper, we investigate the factor properties and gap sequence of the Tribonacci sequence, the fixed point of the substitution σ(a, b, c) = (ab, ac, a). Let ω_p be the p-th occurrence of ω and G_p(ω) be the gap between ω_p and ω_{p+1}. We introduce a notion of kernel for each factor ω, and then give the decomposition of the factor ω with respect to its kernel. Using the kernel and the decomposition, we prove the main result of this paper: for each factor ω, the gap sequence {G_p(ω)}_{p≥1} is the Tribonacci sequence over the alphabet {G_1(ω), G_2(ω), G_4(ω)}, and the expressions of the gaps are determined completely. As an application, for each factor ω and p ∈ N, we determine the position of ω_p. Finally we introduce a notion of spectrum for studying some typical combinatorial properties of factors, such as power, overlap and separation.
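
    The fixed point itself is easy to generate by iterating the substitution, which is handy for checking such combinatorial statements empirically. The short sketch below builds a finite prefix of the Tribonacci word; the number of iterations is arbitrary.

```python
def tribonacci_word(n_iter=8):
    """Iterate the substitution sigma(a) = ab, sigma(b) = ac, sigma(c) = a
    starting from 'a'; the limit word is the Tribonacci sequence."""
    rules = {"a": "ab", "b": "ac", "c": "a"}
    word = "a"
    for _ in range(n_iter):
        word = "".join(rules[ch] for ch in word)
    return word

w = tribonacci_word()
print(w[:30])       # prefix of the Tribonacci word: 'abacabaabacaba...'
```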

  15. A kernel plus method for quantifying wind turbine performance upgrades

    KAUST Repository

    Lee, Giwhyun

    2014-04-21

    Power curves are commonly estimated using the binning method recommended by the International Electrotechnical Commission, which primarily incorporates wind speed information. When such power curves are used to quantify a turbine's upgrade, the results may not be accurate because many other environmental factors in addition to wind speed, such as temperature, air pressure, turbulence intensity, wind shear and humidity, all potentially affect the turbine's power output. Wind industry practitioners are aware of the need to filter out effects from environmental conditions. Toward that objective, we developed a kernel plus method that allows incorporation of multivariate environmental factors in a power curve model, thereby controlling the effects from environmental factors while comparing power outputs. We demonstrate that the kernel plus method can serve as a useful tool for quantifying a turbine's upgrade because it is sensitive to small and moderate changes caused by certain turbine upgrades. Although we demonstrate the utility of the kernel plus method in this specific application, the resulting method is a general, multivariate model that can connect other physical factors, as long as their measurements are available, with a turbine's power output, which may allow us to explore new physical properties associated with wind turbine performance. © 2014 John Wiley & Sons, Ltd.
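
    The core idea of controlling for several environmental covariates at once can be illustrated with a plain multivariate Nadaraya-Watson regression of power on an environmental-condition vector. The sketch below is a generic kernel regression on synthetic data, in the spirit of the kernel-plus model but not the authors' exact estimator; the covariates, bandwidths and power relation are assumptions.

```python
import numpy as np

def kernel_power_curve(env_query, env_train, power_train, bandwidths):
    """Multivariate Nadaraya-Watson estimate of expected power output given a
    vector of environmental conditions (wind speed, air density, ...)."""
    z = (env_train - env_query) / bandwidths             # scale each covariate by its bandwidth
    w = np.exp(-0.5 * (z ** 2).sum(axis=1))              # product Gaussian kernel weights
    return (w * power_train).sum() / w.sum()

# synthetic example: power driven mainly by wind speed, modulated by air density
rng = np.random.default_rng(0)
wind = rng.uniform(3, 15, 1000)
density = rng.normal(1.225, 0.03, 1000)
power = 0.3 * density * wind ** 3 + rng.normal(0, 20, 1000)
env = np.column_stack([wind, density])

query = np.array([10.0, 1.225])                          # conditions at which to compare turbines
print(kernel_power_curve(query, env, power, bandwidths=np.array([0.5, 0.01])))
```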

  16. Multiple Kernel Learning for adaptive graph regularized nonnegative matrix factorization

    KAUST Repository

    Wang, Jim Jing-Yan

    2012-01-01

    Nonnegative Matrix Factorization (NMF) has been continuously evolving in several areas such as pattern recognition and information retrieval. It factorizes a matrix into the product of two low-rank nonnegative matrices that define a parts-based, linear representation of nonnegative data. Recently, Graph regularized NMF (GrNMF) was proposed to find a compact representation that uncovers the hidden semantics while respecting the intrinsic geometric structure. In GrNMF, an affinity graph is constructed from the original data space to encode the geometric information. In this paper, we propose a novel idea that engages a Multiple Kernel Learning approach to refine the graph structure so that it reflects the factorization of the matrix and the new data space. GrNMF is improved by utilizing the graph refined by kernel learning, and a novel kernel learning method is then introduced under the GrNMF framework. Our approach shows encouraging results in comparison to state-of-the-art clustering algorithms such as NMF, GrNMF and SVD.

  17. KNBD: A Remote Kernel Block Server for Linux

    Science.gov (United States)

    Becker, Jeff

    1999-01-01

    I am developing a prototype of a Linux remote disk block server whose purpose is to serve as a lower-level component of a parallel file system. Parallel file systems are an important component of high performance supercomputers and clusters. Although supercomputer vendors such as SGI and IBM have their own custom solutions, there has been a void, and hence a demand, for such a system on Beowulf-type PC clusters. Recently, the Parallel Virtual File System (PVFS) project at Clemson University has begun to address this need (1). Although their system provides much of the functionality of (and indeed was inspired by) the equivalent file systems in the commercial supercomputer market, their system runs entirely in user space. Migrating their I/O services to the kernel could provide a performance boost by obviating the need for expensive system calls. Thanks to Pavel Machek, the Linux kernel has provided the network block device (2) with kernels 2.1.101 and later. You can configure this block device to redirect reads and writes to a remote machine's disk. This can be used as a building block for constructing a striped file system across several nodes.

  18. Learning Discriminative Stein Kernel for SPD Matrices and Its Applications.

    Science.gov (United States)

    Zhang, Jianjia; Wang, Lei; Zhou, Luping; Li, Wanqing

    2016-05-01

    Stein kernel (SK) has recently shown promising performance on classifying images represented by symmetric positive definite (SPD) matrices. It evaluates the similarity between two SPD matrices through their eigenvalues. In this paper, we argue that directly using the original eigenvalues may be problematic because: 1) eigenvalue estimation becomes biased when the number of samples is inadequate, which may lead to unreliable kernel evaluation, and 2) more importantly, eigenvalues reflect only the property of an individual SPD matrix. They are not necessarily optimal for computing SK when the goal is to discriminate different classes of SPD matrices. To address the two issues, we propose a discriminative SK (DSK), in which an extra parameter vector is defined to adjust the eigenvalues of input SPD matrices. The optimal parameter values are sought by optimizing a proxy of classification performance. To show the generality of the proposed method, three kernel learning criteria that are commonly used in the literature are employed as a proxy. A comprehensive experimental study is conducted on a variety of image classification tasks to compare the proposed DSK with the original SK and other methods for evaluating the similarity between SPD matrices. The results demonstrate that the DSK can attain greater discrimination and better align with classification tasks by altering the eigenvalues. This makes it produce higher classification performance than the original SK and other commonly used methods.
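
    The underlying (non-discriminative) Stein kernel is straightforward to compute from log-determinants, which is where the eigenvalue dependence discussed above enters. The sketch below evaluates it for two random SPD matrices; the paper's DSK would additionally adjust the eigenvalues of the inputs with a learned parameter vector, which is not implemented here.

```python
import numpy as np

def stein_divergence(X, Y):
    """Symmetric Stein (Jensen-Bregman LogDet) divergence between SPD matrices:
        S(X, Y) = log det((X + Y)/2) - 0.5 * (log det X + log det Y)."""
    _, ld_mid = np.linalg.slogdet((X + Y) / 2.0)
    _, ld_x = np.linalg.slogdet(X)
    _, ld_y = np.linalg.slogdet(Y)
    return ld_mid - 0.5 * (ld_x + ld_y)

def stein_kernel(X, Y, sigma=0.5):
    """Original Stein kernel k(X, Y) = exp(-sigma * S(X, Y))."""
    return np.exp(-sigma * stein_divergence(X, Y))

# usage with two random SPD matrices (e.g., covariance descriptors)
rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5)); X = A @ A.T + 1e-3 * np.eye(5)
B = rng.normal(size=(5, 5)); Y = B @ B.T + 1e-3 * np.eye(5)
print(stein_kernel(X, Y))
```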

  19. Reduced-size kernel models for nonlinear hybrid system identification.

    Science.gov (United States)

    Le, Van Luong; Bloch, Gérard; Lauer, Fabien

    2011-12-01

    This brief paper focuses on the identification of nonlinear hybrid dynamical systems, i.e., systems switching between multiple nonlinear dynamical behaviors. Thus the aim is to learn an ensemble of submodels from a single set of input-output data in a regression setting with no prior knowledge on the grouping of the data points into similar behaviors. To be able to approximate arbitrary nonlinearities, kernel submodels are considered. However, in order to maintain efficiency when applying the method to large data sets, a preprocessing step is required in order to fix the submodel sizes and limit the number of optimization variables. This brief paper proposes four approaches, respectively inspired by the fixed-size least-squares support vector machines, the feature vector selection method, the kernel principal component regression and a modification of the latter, in order to deal with this issue and build sparse kernel submodels. These are compared in numerical experiments, which show that the proposed approach achieves the simultaneous classification of data points and approximation of the nonlinear behaviors in an efficient and accurate manner.

  20. Relationship Between Support Vector Set and Kernel Functions in SVM

    Institute of Scientific and Technical Information of China (English)

    张铃; 张钹

    2002-01-01

    Based on a constructive learning approach, covering algorithms, we investigate the relationship between support vector sets and kernel functions in support vector machines (SVM). An interesting result is obtained: in the linearly non-separable case, any sample of a given sample set K can become a support vector under a certain kernel function. The result shows that when the sample set K is linearly non-separable, although the chosen kernel function satisfies Mercer's condition, its corresponding support vector set is not necessarily the subset of K that plays a crucial role in classifying K. For a given sample set, what is the subset that plays the crucial role in classification? In order to explore this problem, a new concept, the boundary or boundary points, is defined and its properties are discussed. Given a sample set K, we show that the decision functions for classifying the boundary points of K are the same as those for classifying K itself. The boundary points of K depend only on K and the structure of the space in which K is located, and are independent of the approach chosen for finding the boundary. Therefore, the boundary point set may become the subset of K that plays a crucial role in classification. These results are of importance for understanding the principle of the support vector machine (SVM) and for developing new learning algorithms.