WorldWideScience

Sample records for reproducing kernel methods

  1. Reproducing Kernel Method for Solving Nonlinear Differential-Difference Equations

    Directory of Open Access Journals (Sweden)

    Reza Mokhtari

    2012-01-01

    Full Text Available On the basis of reproducing kernel Hilbert space theory, an iterative algorithm for solving some nonlinear differential-difference equations (NDDEs) is presented. The analytical solution is given in series form in a reproducing kernel space, and the approximate solution u_n is constructed by truncating the series to n terms. The convergence of u_n to the analytical solution is also proved. Results obtained by the proposed method imply that it can be considered a simple and accurate method for solving such differential-difference problems.
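
    For orientation, the identity behind such series constructions is the reproducing property of the RKHS, standard material rather than something specific to this record: point evaluation is an inner product against a kernel section, and the n-term approximation is a partial sum over a kernel-derived orthonormal system (the coefficients A_i are method-specific). In LaTeX notation:

        \[
          f(x) = \langle f,\; K(\cdot, x) \rangle_{\mathcal{H}},
          \qquad
          u_n(x) = \sum_{i=1}^{n} A_i\, \bar{\psi}_i(x).
        \]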

  2. Aveiro method in reproducing kernel Hilbert spaces under complete dictionary

    Science.gov (United States)

    Mai, Weixiong; Qian, Tao

    2017-12-01

    Aveiro Method is a sparse representation method in reproducing kernel Hilbert spaces (RKHS) that gives orthogonal projections in linear combinations of reproducing kernels over uniqueness sets. It suffers, however, from the need to determine uniqueness sets in the underlying RKHS. In fact, in general spaces uniqueness sets are not easy to identify, let alone to analyze for the convergence speed of the Aveiro Method. To avoid those difficulties we propose a new Aveiro Method based on a dictionary and the matching pursuit idea. In fact, we do more: the new Aveiro Method builds on the recently proposed Pre-Orthogonal Greedy Algorithm (P-OGA), involving completion of a given dictionary. The new method is called Aveiro Method Under Complete Dictionary (AMUCD). The complete dictionary consists of all directional derivatives of the underlying reproducing kernels. We show that, under the boundary vanishing condition, which holds for the classical Hardy and Paley-Wiener spaces, the complete dictionary enables an efficient expansion of any given element in the Hilbert space. The proposed method reveals new and advanced aspects of both the Aveiro Method and the greedy algorithm.

  3. Reproducing kernel method with Taylor expansion for linear Volterra integro-differential equations

    Directory of Open Access Journals (Sweden)

    Azizallah Alvandi

    2017-06-01

    Full Text Available This research presents a new, simple algorithm for linear Volterra integro-differential equations (LIDEs). To apply the reproducing kernel Hilbert space method, an equivalent transformation is made by using Taylor series. The analytical solution is given in series form in the reproducing kernel space, and the approximate solution $ u_{N} $ is constructed by truncating the series to $ N $ terms. It is easy to prove the convergence of $ u_{N} $ to the analytical solution. The numerical solutions obtained with the proposed method indicate that the approach can be implemented easily and shows attractive features.

  4. Theory of reproducing kernels and applications

    CERN Document Server

    Saitoh, Saburou

    2016-01-01

    This book provides a large extension of the general theory of reproducing kernels published by N. Aronszajn in 1950, with many concrete applications. In Chapter 1, many concrete reproducing kernels are first introduced with detailed information. Chapter 2 presents a general and global theory of reproducing kernels with basic applications in a self-contained way. Many fundamental operations among reproducing kernel Hilbert spaces are dealt with. Chapter 2 is the heart of this book. Chapter 3 is devoted to the Tikhonov regularization using the theory of reproducing kernels with applications to numerical and practical solutions of bounded linear operator equations. In Chapter 4, the numerical real inversion formulas of the Laplace transform are presented by applying the Tikhonov regularization, where the reproducing kernels play a key role in the results. Chapter 5 deals with ordinary differential equations; Chapter 6 includes many concrete results for various fundamental partial differential equations. In Chapt...

  5. Application of Reproducing Kernel Method for Solving Nonlinear Fredholm-Volterra Integrodifferential Equations

    Directory of Open Access Journals (Sweden)

    Omar Abu Arqub

    2012-01-01

    Full Text Available This paper investigates the numerical solution of nonlinear Fredholm-Volterra integro-differential equations using the reproducing kernel Hilbert space method. The solution u(x) is represented in the form of a series in the reproducing kernel space. In the meantime, the n-term approximate solution u_n(x) is obtained and proved to converge to the exact solution u(x). Furthermore, the proposed method has the advantage that the approximate solution and its derivative can be evaluated at any point in the interval of integration. Numerical examples are included to demonstrate the accuracy and applicability of the presented technique. The results reveal that the method is very effective and simple.

  6. Robust Kernel (Cross-) Covariance Operators in Reproducing Kernel Hilbert Space toward Kernel Methods

    OpenAIRE

    Alam, Md. Ashad; Fukumizu, Kenji; Wang, Yu-Ping

    2016-01-01

    To the best of our knowledge, there are no general well-founded robust methods for statistical unsupervised learning. Most of the unsupervised methods explicitly or implicitly depend on the kernel covariance operator (kernel CO) or kernel cross-covariance operator (kernel CCO). They are sensitive to contaminated data, even when using bounded positive definite kernels. First, we propose robust kernel covariance operator (robust kernel CO) and robust kernel cross-covariance operator (robust kern...

  7. On the solutions of electrohydrodynamic flow with fractional differential equations by reproducing kernel method

    Directory of Open Access Journals (Sweden)

    Akgül Ali

    2016-01-01

    Full Text Available In this manuscript we investigate electrohydrodynamic flow. For several values of the pertinent parameters, we compute approximate solutions based on a reproducing kernel model. The results obtained show that the reproducing kernel method (RKM) is very effective. We obtain good results without any transformation or discretization. Numerical experiments on test examples show that our proposed schemes are of high accuracy and strongly support the theoretical results.

  8. Enriched reproducing kernel particle method for fractional advection-diffusion equation

    Science.gov (United States)

    Ying, Yuping; Lian, Yanping; Tang, Shaoqiang; Liu, Wing Kam

    2018-06-01

    The reproducing kernel particle method (RKPM) has been applied efficiently to problems with large deformations, high gradients and high modal density. In this paper, it is extended to solve a nonlocal problem modeled by a fractional advection-diffusion equation (FADE), which exhibits a boundary layer with low regularity. We formulate the method in a moving least-squares framework. By enriching the traditional integer-order basis for RKPM with fractional-order power functions, leading terms of the solution to the FADE can be exactly reproduced, which guarantees a good approximation to the boundary layer. Numerical tests are performed to verify the proposed approach.

  9. The construction of a two-dimensional reproducing kernel function and its application in a biomedical model.

    Science.gov (United States)

    Guo, Qi; Shen, Shu-Ting

    2016-04-29

    There are two major classes of cardiac tissue models: the ionic model and the FitzHugh-Nagumo model. During computer simulation, each model entails solving a system of complex ordinary differential equations and a partial differential equation with non-flux boundary conditions. The reproducing kernel method has significant applications in solving partial differential equations. The derivative of the reproducing kernel function is a wavelet function, which has local properties and sensitivity to singularities; studying the application of reproducing kernels is therefore advantageous. The aim is to apply new mathematical theory to the numerical solution of the ventricular muscle model so as to improve its precision in comparison with other current methods. A two-dimensional reproducing kernel function is constructed in the relevant function space and applied to computing the solution of the two-dimensional cardiac tissue model, using the difference method in time and the reproducing kernel method in space. Compared with other methods, this method holds several advantages such as high accuracy in computing solutions, insensitivity to different time steps and a slow propagation speed of error. It is suitable for disorderly scattered node systems without meshing, and can arbitrarily change the location and density of the solution on different time layers. The reproducing kernel method has higher solution accuracy and stability in the solutions of the two-dimensional cardiac tissue model.

  10. A Novel Approach to Calculation of Reproducing Kernel on Infinite Interval and Applications to Boundary Value Problems

    Directory of Open Access Journals (Sweden)

    Jing Niu

    2013-01-01

    The reproducing kernel on an infinite interval is obtained concisely in polynomial form for the first time. Furthermore, as a particularly effective application of this method, we give an explicit representation formula for calculating the reproducing kernel in a reproducing kernel space with boundary value conditions.

  11. Reproducing Kernels and Coherent States on Julia Sets

    Energy Technology Data Exchange (ETDEWEB)

    Thirulogasanthar, K., E-mail: santhar@cs.concordia.ca; Krzyzak, A. [Concordia University, Department of Computer Science and Software Engineering (Canada)], E-mail: krzyzak@cs.concordia.ca; Honnouvo, G. [Concordia University, Department of Mathematics and Statistics (Canada)], E-mail: g_honnouvo@yahoo.fr

    2007-11-15

    We construct classes of coherent states on domains arising from dynamical systems. An orthonormal family of vectors associated to the generating transformation of a Julia set is found as a family of square integrable vectors, and, thereby, reproducing kernels and reproducing kernel Hilbert spaces are associated to Julia sets. We also present analogous results on domains arising from iterated function systems.

  12. Reproducing Kernels and Coherent States on Julia Sets

    International Nuclear Information System (INIS)

    Thirulogasanthar, K.; Krzyzak, A.; Honnouvo, G.

    2007-01-01

    We construct classes of coherent states on domains arising from dynamical systems. An orthonormal family of vectors associated to the generating transformation of a Julia set is found as a family of square integrable vectors, and, thereby, reproducing kernels and reproducing kernel Hilbert spaces are associated to Julia sets. We also present analogous results on domains arising from iterated function systems.

  13. Explicit signal to noise ratio in reproducing kernel Hilbert spaces

    DEFF Research Database (Denmark)

    Gomez-Chova, Luis; Nielsen, Allan Aasbjerg; Camps-Valls, Gustavo

    2011-01-01

    This paper introduces a nonlinear feature extraction method based on kernels for remote sensing data analysis. The proposed approach is based on the minimum noise fraction (MNF) transform, which maximizes the signal variance while also minimizing the estimated noise variance. We here propose an alternative kernel MNF (KMNF) in which the noise is explicitly estimated in the reproducing kernel Hilbert space. This enables KMNF to deal jointly with non-linear relations between the noise and the signal features. Results show that the proposed KMNF provides the most noise-free features when confronted...

  14. Reproducing kernel Hilbert spaces of Gaussian priors

    NARCIS (Netherlands)

    Vaart, van der A.W.; Zanten, van J.H.; Clarke, B.; Ghosal, S.

    2008-01-01

    We review definitions and properties of reproducing kernel Hilbert spaces attached to Gaussian variables and processes, with a view to applications in nonparametric Bayesian statistics using Gaussian priors. The rate of contraction of posterior distributions based on Gaussian priors can be described

  15. An iterative kernel based method for fourth order nonlinear equation with nonlinear boundary condition

    Science.gov (United States)

    Azarnavid, Babak; Parand, Kourosh; Abbasbandy, Saeid

    2018-06-01

    This article discusses an iterative reproducing kernel method with respect to its effectiveness and capability of solving a fourth-order boundary value problem with nonlinear boundary conditions modeling beams on elastic foundations. Since there is no method of obtaining a reproducing kernel that satisfies nonlinear boundary conditions, standard reproducing kernel methods cannot be used directly to solve boundary value problems with nonlinear boundary conditions, as there is no knowledge about the existence and uniqueness of the solution. The aim of this paper is, therefore, to construct an iterative method combining the reproducing kernel Hilbert space method and a shooting-like technique to solve the mentioned problems. Error estimation for reproducing kernel Hilbert space methods for nonlinear boundary value problems has yet to be discussed in the literature; in this paper, we present error estimation for the reproducing kernel method for nonlinear boundary value problems, probably for the first time. Numerical results are given to demonstrate the applicability of the method.

  16. Numerical method in reproducing kernel space for an inverse source problem for the fractional diffusion equation

    International Nuclear Information System (INIS)

    Wang, Wenyan; Han, Bo; Yamamoto, Masahiro

    2013-01-01

    We propose a new numerical method in a reproducing kernel Hilbert space to solve an inverse source problem for a two-dimensional fractional diffusion equation, where we are required to determine an x-dependent function in a source term from data at the final time. The exact solution is represented in the form of a series and the approximate solution is obtained by truncating the series. Furthermore, a technique is proposed to improve some of the existing methods. We prove that the numerical method is convergent under an a priori assumption on the regularity of solutions. The method is simple to implement. Our numerical results show that the method is effective and robust against noise in L^2-space in reconstructing a source function. (paper)

  17. On weights which admit the reproducing kernel of Bergman type

    Directory of Open Access Journals (Sweden)

    Zbigniew Pasternak-Winiarski

    1992-01-01

    Full Text Available In this paper we consider (1) the weights of integration for which the reproducing kernel of the Bergman type can be defined, i.e., the admissible weights, and (2) the kernels defined by such weights. It is verified that the weighted Bergman kernel has properties analogous to the classical one. We prove several sufficient conditions and necessary and sufficient conditions for a weight to be an admissible weight. We also give an example of a weight which is not of this class. As a positive example we consider the weight μ(z) = (Im z)^2 defined on the unit disk in ℂ.

  18. Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.

    Science.gov (United States)

    Kwak, Nojun

    2016-05-20

    Recently, the nonlinear projection trick (NPT) was introduced, enabling direct computation of coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so-called kernel trick. However, NPT is inherently difficult to implement incrementally because an ever-increasing kernel matrix must be handled as additional training samples are introduced. In this paper, an incremental version of the NPT (INPT) is proposed based on the observation that the centerization step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can directly be used in any incremental method to implement its kernel version. The effectiveness of the INPT is shown by applying it to implement incremental versions of kernel methods such as kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis, which are utilized for problems of kernel matrix reconstruction, letter classification, and face image retrieval, respectively.
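
    As a concrete illustration of the NPT idea described above, the following is a minimal batch sketch in Python: given a positive semidefinite kernel matrix, explicit sample coordinates are recovered from its eigendecomposition so that any linear algorithm can run on them. Variable names are ours, and the incremental bookkeeping of INPT is omitted.

        import numpy as np

        def npt_coordinates(K, tol=1e-10):
            """Explicit coordinates X with X @ X.T ~= K (PSD kernel matrix K)."""
            w, V = np.linalg.eigh(K)           # K = V diag(w) V^T
            w = np.clip(w, 0.0, None)          # clip tiny negative eigenvalues
            keep = w > tol
            return V[:, keep] * np.sqrt(w[keep])

        # toy usage with a Gaussian kernel on random data
        rng = np.random.default_rng(0)
        Z = rng.standard_normal((50, 3))
        sq = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
        K = np.exp(-sq / 2.0)
        X = npt_coordinates(K)
        print(np.allclose(X @ X.T, K))         # True: linear methods on X become kernel methods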

  19. INFORMATIVE ENERGY METRIC FOR SIMILARITY MEASURE IN REPRODUCING KERNEL HILBERT SPACES

    Directory of Open Access Journals (Sweden)

    Songhua Liu

    2012-02-01

    Full Text Available In this paper, an information energy metric (IEM) is obtained by similarity computing for high-dimensional samples in a reproducing kernel Hilbert space (RKHS). Firstly, similar/dissimilar subsets and their corresponding informative energy functions are defined. Secondly, IEM is proposed for measuring the similarity of those subsets, converting the non-metric distances into metric ones. Finally, applications of this metric, such as classification problems, are introduced. Experimental results validate the effectiveness of the proposed method.

  20. Soft and hard classification by reproducing kernel Hilbert space methods.

    Science.gov (United States)

    Wahba, Grace

    2002-12-24

    Reproducing kernel Hilbert space (RKHS) methods provide a unified context for solving a wide variety of statistical modelling and function estimation problems. We consider two such problems: We are given a training set {y_i, t_i, i = 1, ..., n}, where y_i is the response for the i-th subject, and t_i is a vector of attributes for this subject. The value of y_i is a label that indicates which category it came from. For the first problem, we wish to build a model from the training set that assigns to each t in an attribute domain of interest an estimate of the probability p_j(t) that a (future) subject with attribute vector t is in category j. The second problem is in some sense less ambitious; it is to build a model that assigns to each t a label, which classifies a future subject with that t into one of the categories or possibly "none of the above." The approach to the first of these two problems discussed here is a special case of what is known as penalized likelihood estimation. The approach to the second problem is known as the support vector machine. We also note some alternate but closely related approaches to the second problem. These approaches are all obtained as solutions to optimization problems in RKHS. Many other problems, in particular the solution of ill-posed inverse problems, can be obtained as solutions to optimization problems in RKHS and are mentioned in passing. We caution the reader that although a large literature exists in all of these topics, in this inaugural article we are selectively highlighting work of the author, former students, and other collaborators.
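
    A hedged sketch of the two problem types in Python with scikit-learn, where kernel logistic regression (via a Nystroem feature map) stands in for penalized likelihood and an SVM gives hard labels; this is a generic illustration on synthetic data, not Wahba's exact RKHS formulation:

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.kernel_approximation import Nystroem
        from sklearn.linear_model import LogisticRegression
        from sklearn.svm import SVC

        X, y = make_classification(n_samples=300, n_features=5, random_state=0)
        phi = Nystroem(kernel="rbf", n_components=100, random_state=0).fit_transform(X)

        soft = LogisticRegression(C=1.0).fit(phi, y)   # estimates p_j(t)
        hard = SVC(kernel="rbf", C=1.0).fit(X, y)      # assigns a label

        print(soft.predict_proba(phi[:3]))             # class probabilities
        print(hard.predict(X[:3]))                     # hard labels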

  1. Kernel Multivariate Analysis Framework for Supervised Subspace Learning: A Tutorial on Linear and Kernel Multivariate Methods

    DEFF Research Database (Denmark)

    Arenas-Garcia, J.; Petersen, K.; Camps-Valls, G.

    2013-01-01

    correlation analysis (CCA), and orthonormalized PLS (OPLS), as well as their nonlinear extensions derived by means of the theory of reproducing kernel Hilbert spaces (RKHSs). We also review their connections to other methods for classification and statistical dependence estimation and introduce some recent...

  2. Adaptive Learning in Cartesian Product of Reproducing Kernel Hilbert Spaces

    OpenAIRE

    Yukawa, Masahiro

    2014-01-01

    We propose a novel adaptive learning algorithm based on iterative orthogonal projections in the Cartesian product of multiple reproducing kernel Hilbert spaces (RKHSs). The task is estimating/tracking nonlinear functions that are supposed to contain multiple components such as (i) linear and nonlinear components, (ii) high- and low-frequency components, etc. In this case, the use of multiple RKHSs permits a compact representation of multicomponent functions. The proposed algorithm is where t...

  3. Improved dynamical scaling analysis using the kernel method for nonequilibrium relaxation.

    Science.gov (United States)

    Echinaka, Yuki; Ozeki, Yukiyasu

    2016-10-01

    The dynamical scaling analysis for the Kosterlitz-Thouless transition in the nonequilibrium relaxation method is improved by the use of Bayesian statistics and the kernel method. This allows data to be fitted to a scaling function without using any parametric model function, which makes the results more reliable and reproducible and enables automatic and faster parameter estimation. Applying this method, we introduce the bootstrap method and propose a numerical discrimination scheme for the transition type.

  4. Parallel magnetic resonance imaging as approximation in a reproducing kernel Hilbert space

    International Nuclear Information System (INIS)

    Athalye, Vivek; Lustig, Michael; Martin Uecker

    2015-01-01

    In magnetic resonance imaging, data samples are collected in the spatial frequency domain (k-space), typically by time-consuming line-by-line scanning on a Cartesian grid. Scans can be accelerated by simultaneous acquisition of data using multiple receivers (parallel imaging), and by using more efficient non-Cartesian sampling schemes. To understand and design k-space sampling patterns, a theoretical framework is needed to analyze how well arbitrary sampling patterns reconstruct unsampled k-space using receive coil information. As shown here, reconstruction from samples at arbitrary locations can be understood as approximation of vector-valued functions from the acquired samples and formulated using a reproducing kernel Hilbert space with a matrix-valued kernel defined by the spatial sensitivities of the receive coils. This establishes a formal connection between approximation theory and parallel imaging. Theoretical tools from approximation theory can then be used to understand reconstruction in k-space and to extend the analysis of the effects of sample selection beyond the traditional image-domain g-factor noise analysis to both noise amplification and approximation errors in k-space. This is demonstrated with numerical examples. (paper)

  5. A Unified and Comprehensible View of Parametric and Kernel Methods for Genomic Prediction with Application to Rice.

    Science.gov (United States)

    Jacquin, Laval; Cao, Tuong-Vi; Ahmadi, Nourollah

    2016-01-01

    One objective of this study was to provide readers with a clear and unified understanding of parametric statistical and kernel methods used for genomic prediction, and to compare some of these in the context of rice breeding for quantitative traits. Another objective was to provide a simple and user-friendly R package, named KRMM, which allows users to perform RKHS regression with several kernels. After introducing the concept of regularized empirical risk minimization, the connections between well-known parametric and kernel methods such as ridge regression [i.e., genomic best linear unbiased predictor (GBLUP)] and reproducing kernel Hilbert space (RKHS) regression were reviewed. Ridge regression was then reformulated so as to show and emphasize the advantage of the kernel "trick" concept, exploited by kernel methods in the context of epistatic genetic architectures, over the parametric frameworks used by conventional methods. Several parametric and kernel methods, namely least absolute shrinkage and selection operator (LASSO), GBLUP, support vector machine regression (SVR) and RKHS regression, were then compared for their genomic predictive ability in the context of rice breeding using three real data sets. Among the compared methods, RKHS regression and SVR were often the most accurate, followed by GBLUP and LASSO. An R function which allows users to perform RR-BLUP of marker effects, GBLUP and RKHS regression, with a Gaussian, Laplacian, polynomial or ANOVA kernel, in a reasonable computation time has been developed. Moreover, a modified version of this function, which allows users to tune kernels for RKHS regression, has also been developed and parallelized for HPC Linux clusters. The corresponding KRMM package and all scripts have been made publicly available.
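
    To make the GBLUP/RKHS contrast concrete, here is a small Python sketch using kernel ridge regression: GBLUP corresponds to a linear kernel on the marker matrix, RKHS regression to a nonlinear (here Gaussian) kernel. The markers and trait are synthetic stand-ins, not the rice data, and this does not use the KRMM package itself.

        import numpy as np
        from sklearn.kernel_ridge import KernelRidge

        rng = np.random.default_rng(1)
        M = rng.integers(0, 3, size=(200, 500)).astype(float)  # SNP codes in {0, 1, 2}
        y = M[:, :10].sum(axis=1) + rng.standard_normal(200)   # toy additive trait

        gblup = KernelRidge(kernel="linear", alpha=1.0).fit(M, y)
        rkhs = KernelRidge(kernel="rbf", gamma=1.0 / M.shape[1], alpha=1.0).fit(M, y)

        print(gblup.predict(M[:3]))
        print(rkhs.predict(M[:3]))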

  6. Digital signal processing with kernel methods

    CERN Document Server

    Rojo-Alvarez, José Luis; Muñoz-Marí, Jordi; Camps-Valls, Gustavo

    2018-01-01

    A realistic and comprehensive review of joint approaches to machine learning and signal processing algorithms, with application to communications, multimedia, and biomedical engineering systems Digital Signal Processing with Kernel Methods reviews the milestones in the mixing of classical digital signal processing models and advanced kernel machines statistical learning tools. It explains the fundamental concepts from both fields of machine learning and signal processing so that readers can quickly get up to speed in order to begin developing the concepts and application software in their own research. Digital Signal Processing with Kernel Methods provides a comprehensive overview of kernel methods in signal processing, without restriction to any application field. It also offers example applications and detailed benchmarking experiments with real and synthetic datasets throughout. Readers can find further worked examples with Matlab source code on a website developed by the authors. * Presents the necess...

  7. Generalization Performance of Regularized Ranking With Multiscale Kernels.

    Science.gov (United States)

    Zhou, Yicong; Chen, Hong; Lan, Rushi; Pan, Zhibin

    2016-05-01

    The regularized kernel method for the ranking problem has attracted increasing attention in machine learning. Previous regularized ranking algorithms are usually based on reproducing kernel Hilbert spaces with a single kernel. In this paper, we go beyond this framework by investigating the generalization performance of regularized ranking with multiscale kernels. A novel ranking algorithm with multiscale kernels is proposed and its representer theorem is proved. We establish the upper bound of the generalization error in terms of the complexity of hypothesis spaces. It shows that the multiscale ranking algorithm can achieve satisfactory learning rates under mild conditions. Experiments demonstrate the effectiveness of the proposed method for drug discovery and recommendation tasks.

  8. Anatomically-aided PET reconstruction using the kernel method.

    Science.gov (United States)

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T; Catana, Ciprian; Qi, Jinyi

    2016-09-21

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.

  9. A novel adaptive kernel method with kernel centers determined by a support vector regression approach

    NARCIS (Netherlands)

    Sun, L.G.; De Visser, C.C.; Chu, Q.P.; Mulder, J.A.

    2012-01-01

    The optimality of the kernel number and kernel centers plays a significant role in determining the approximation power of nearly all kernel methods. However, the process of choosing optimal kernels is always formulated as a global optimization task, which is hard to accomplish. Recently, an

  10. Kernel based eigenvalue-decomposition methods for analysing ham

    DEFF Research Database (Denmark)

    Christiansen, Asger Nyman; Nielsen, Allan Aasbjerg; Møller, Flemming

    2010-01-01

    methods, such as PCA, MAF or MNF. We therefore investigated the applicability of kernel based versions of these transformations. This meant implementing the kernel based methods and developing new theory, since kernel based MAF and MNF are not yet described in the literature. The traditional methods only have two factors that are useful for segmentation, and none of them can be used to segment the two types of meat. The kernel based methods have many useful factors and are able to capture the subtle differences in the images. This is illustrated in Figure 1. A comparison of the most useful factor of PCA and kernel based PCA, respectively, is shown in Figure 2. The factor of the kernel based PCA turned out to be able to segment the two types of meat, and in general that factor is much more distinct than the traditional factor. After the orthogonal transformation a simple thresholding...
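
    A minimal kernel PCA sketch in Python, in the spirit of the kernelized transforms discussed above (scikit-learn's KernelPCA with a Gaussian kernel; the multispectral ham images are replaced by random stand-in pixel spectra):

        import numpy as np
        from sklearn.decomposition import KernelPCA

        rng = np.random.default_rng(0)
        pixels = rng.random((1000, 8))             # stand-in: 1000 pixels x 8 bands

        kpca = KernelPCA(n_components=3, kernel="rbf", gamma=0.5)
        factors = kpca.fit_transform(pixels)       # nonlinear "factors" per pixel
        print(factors.shape)                       # (1000, 3)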

  11. Single pass kernel k-means clustering method

    Indian Academy of Sciences (India)

    paper proposes a simple and faster version of the kernel k-means clustering method. ... It has been considered as an important tool ... On the other hand, kernel-based clustering methods, like kernel k-means clustering ... available at the UCI machine learning repository (Murphy 1994). ... All the data sets have only numeric valued features.

  12. Mixture Density Mercer Kernels: A Method to Learn Kernels

    Data.gov (United States)

    National Aeronautics and Space Administration — This paper presents a method of generating Mercer Kernels from an ensemble of probabilistic mixture models, where each mixture model is generated from a Bayesian...

  13. Locally linear approximation for Kernel methods : the Railway Kernel

    OpenAIRE

    Muñoz, Alberto; González, Javier

    2008-01-01

    In this paper we present a new kernel, the Railway Kernel, that works properly for general (nonlinear) classification problems, with the interesting property that it acts locally as a linear kernel. In this way, we avoid potential problems due to the use of a general purpose kernel like the RBF kernel, such as the high dimension of the induced feature space. As a consequence, following our methodology the number of support vectors is much lower and, therefore, the generalization capab...

  14. On-line quantile regression in the RKHS (Reproducing Kernel Hilbert Space) for operational probabilistic forecasting of wind power

    International Nuclear Information System (INIS)

    Gallego-Castillo, Cristobal; Bessa, Ricardo; Cavalcante, Laura; Lopez-Garcia, Oscar

    2016-01-01

    Wind power probabilistic forecasts are being used as input in several decision-making problems, such as stochastic unit commitment, operating reserve setting and electricity market bidding. This work introduces a new on-line quantile regression model based on the Reproducing Kernel Hilbert Space (RKHS) framework. Its application to the field of wind power forecasting involves a discussion on the choice of the bias term of the quantile models, and the consideration of the operational framework in order to mimic real conditions. Benchmarking against linear and spline quantile regression models was performed on a real case study covering an 18-month period. Model parameter selection was based on k-fold cross-validation. Results showed a noticeable improvement in terms of calibration, a key criterion for the wind power industry. Modest improvements in terms of Continuous Ranked Probability Score (CRPS) were also observed for prediction horizons between 6 and 20 h ahead. - Highlights: • New on-line quantile regression model based on the Reproducing Kernel Hilbert Space. • First application to operational probabilistic wind power forecasting. • Modest improvements of CRPS for prediction horizons between 6 and 20 h ahead. • Noticeable improvements in terms of calibration due to on-line learning.
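
    A toy sketch of online quantile regression in an RKHS via stochastic subgradient descent on the pinball loss, to convey the flavor of the approach; this is a generic textbook-style construction, not the authors' model (their bias-term treatment and operational details are omitted):

        import numpy as np

        def rbf(a, b, gamma=10.0):
            return np.exp(-gamma * (a - b) ** 2)

        def online_quantile(stream, tau=0.9, eta=0.1):
            """Each sample adds one kernel term; the step follows the pinball subgradient."""
            centers, alphas = [], []
            for x, y in stream:
                f = sum(a * rbf(c, x) for c, a in zip(centers, alphas))
                step = tau if y > f else tau - 1.0
                centers.append(x)
                alphas.append(eta * step)
            return centers, alphas

        rng = np.random.default_rng(0)
        xs = rng.random(500)
        ys = np.sin(2 * np.pi * xs) + 0.3 * rng.standard_normal(500)
        centers, alphas = online_quantile(zip(xs, ys))
        f90 = lambda x: sum(a * rbf(c, x) for c, a in zip(centers, alphas))
        print(f90(0.25))   # rough 0.9-quantile estimate of y at x = 0.25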

  15. Biologically-Inspired Spike-Based Automatic Speech Recognition of Isolated Digits Over a Reproducing Kernel Hilbert Space

    Directory of Open Access Journals (Sweden)

    Kan Li

    2018-04-01

    Full Text Available This paper presents a novel real-time dynamic framework for quantifying time-series structure in spoken words using spikes. Audio signals are converted into multi-channel spike trains using a biologically-inspired leaky integrate-and-fire (LIF) spike generator. These spike trains are mapped into a function space of infinite dimension, i.e., a Reproducing Kernel Hilbert Space (RKHS), using point-process kernels, where a state-space model learns the dynamics of the multidimensional spike input using gradient descent learning. This kernelized recurrent system is very parsimonious and achieves the necessary memory depth via feedback of its internal states when trained discriminatively, utilizing the full context of the phoneme sequence. A main advantage of modeling nonlinear dynamics using state-space trajectories in the RKHS is that it imposes no restriction on the relationship between the exogenous input and its internal state. We are free to choose the input representation with an appropriate kernel, and changing the kernel does not impact the system nor the learning algorithm. Moreover, we show that this novel framework can outperform both traditional hidden Markov model (HMM) speech processing as well as neuromorphic implementations based on spiking neural networks (SNN), yielding accurate and ultra-low power word spotters. As a proof of concept, we demonstrate its capabilities using the benchmark TI-46 digit corpus for isolated-word automatic speech recognition (ASR) or keyword spotting. Compared to HMM using Mel-frequency cepstral coefficient (MFCC) front-end without time-derivatives, our MFCC-KAARMA offered improved performance. For spike-train front-end, spike-KAARMA also outperformed state-of-the-art SNN solutions. Furthermore, compared to MFCCs, spike trains provided enhanced noise robustness in certain low signal-to-noise ratio (SNR) regimes.

  16. Biologically-Inspired Spike-Based Automatic Speech Recognition of Isolated Digits Over a Reproducing Kernel Hilbert Space.

    Science.gov (United States)

    Li, Kan; Príncipe, José C

    2018-01-01

    This paper presents a novel real-time dynamic framework for quantifying time-series structure in spoken words using spikes. Audio signals are converted into multi-channel spike trains using a biologically-inspired leaky integrate-and-fire (LIF) spike generator. These spike trains are mapped into a function space of infinite dimension, i.e., a Reproducing Kernel Hilbert Space (RKHS) using point-process kernels, where a state-space model learns the dynamics of the multidimensional spike input using gradient descent learning. This kernelized recurrent system is very parsimonious and achieves the necessary memory depth via feedback of its internal states when trained discriminatively, utilizing the full context of the phoneme sequence. A main advantage of modeling nonlinear dynamics using state-space trajectories in the RKHS is that it imposes no restriction on the relationship between the exogenous input and its internal state. We are free to choose the input representation with an appropriate kernel, and changing the kernel does not impact the system nor the learning algorithm. Moreover, we show that this novel framework can outperform both traditional hidden Markov model (HMM) speech processing as well as neuromorphic implementations based on spiking neural network (SNN), yielding accurate and ultra-low power word spotters. As a proof of concept, we demonstrate its capabilities using the benchmark TI-46 digit corpus for isolated-word automatic speech recognition (ASR) or keyword spotting. Compared to HMM using Mel-frequency cepstral coefficient (MFCC) front-end without time-derivatives, our MFCC-KAARMA offered improved performance. For spike-train front-end, spike-KAARMA also outperformed state-of-the-art SNN solutions. Furthermore, compared to MFCCs, spike trains provided enhanced noise robustness in certain low signal-to-noise ratio (SNR) regime.

  17. Sparse Event Modeling with Hierarchical Bayesian Kernel Methods

    Science.gov (United States)

    2016-01-05

    The research objective of this proposal was to develop a predictive Bayesian kernel approach to model count data based on several predictive variables. Such an approach, which we refer to as the Poisson Bayesian kernel model, is able to model the rate of occurrence of ... kernel methods made use of: (i) the Bayesian property of improving predictive accuracy as data are dynamically obtained, and (ii) the kernel function

  18. Dense Medium Machine Processing Method for Palm Kernel/ Shell ...

    African Journals Online (AJOL)

    ADOWIE PERE

    Cracked palm kernel is a mixture of kernels, broken shells, dust and other impurities. In ... machine processing method using dense medium, a separator, a shell collector and a kernel ... efficiency, ease of maintenance and uniformity of ...

  19. Point kernels and superposition methods for scatter dose calculations in brachytherapy

    International Nuclear Information System (INIS)

    Carlsson, A.K.

    2000-01-01

    Point kernels have been generated and applied for calculation of scatter dose distributions around monoenergetic point sources for photon energies ranging from 28 to 662 keV. Three different approaches for dose calculations have been compared: a single-kernel superposition method, a single-kernel superposition method where the point kernels are approximated as isotropic and a novel 'successive-scattering' superposition method for improved modelling of the dose from multiply scattered photons. An extended version of the EGS4 Monte Carlo code was used for generating the kernels and for benchmarking the absorbed dose distributions calculated with the superposition methods. It is shown that dose calculation by superposition at and below 100 keV can be simplified by using isotropic point kernels. Compared to the assumption of full in-scattering made by algorithms currently in clinical use, the single-kernel superposition method improves dose calculations in a half-phantom consisting of air and water. Further improvements are obtained using the successive-scattering superposition method, which reduces the overestimates of dose close to the phantom surface usually associated with kernel superposition methods at brachytherapy photon energies. It is also shown that scatter dose point kernels can be parametrized to biexponential functions, making them suitable for use with an effective implementation of the collapsed cone superposition algorithm. (author)
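
    The closing remark, that scatter dose point kernels can be parametrized by biexponential functions, is easy to illustrate with a generic curve fit; the radii and coefficients below are synthetic stand-ins, not values from the thesis:

        import numpy as np
        from scipy.optimize import curve_fit

        def biexp(r, A, a, B, b):
            return A * np.exp(-a * r) + B * np.exp(-b * r)

        r = np.linspace(0.1, 20.0, 100)                  # radius grid (cm)
        kernel = biexp(r, 1.0, 0.5, 0.1, 0.05)           # synthetic "kernel" values
        noise = 0.01 * np.random.default_rng(0).standard_normal(r.size)
        noisy = kernel * (1.0 + noise)

        params, _ = curve_fit(biexp, r, noisy, p0=(1.0, 1.0, 0.1, 0.1))
        print(params)                                    # recovered (A, a, B, b)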

  20. Local coding based matching kernel method for image classification.

    Directory of Open Access Journals (Sweden)

    Yan Song

    Full Text Available This paper mainly focuses on how to effectively and efficiently measure visual similarity for local feature based representation. Among existing methods, metrics based on Bag of Visual Words (BoV) techniques are efficient and conceptually simple, at the expense of effectiveness. By contrast, kernel based metrics are more effective, but at the cost of greater computational complexity and increased storage requirements. We show that a unified visual matching framework can be developed to encompass both BoV and kernel based metrics, in which the local kernel plays an important role between feature pairs or between features and their reconstruction. Generally, local kernels are defined using Euclidean distance or its derivatives, based either explicitly or implicitly on an assumption of Gaussian noise. However, local features such as SIFT and HoG often follow a heavy-tailed distribution, which tends to undermine the motivation behind Euclidean metrics. Motivated by recent advances in feature coding techniques, a novel efficient local coding based matching kernel (LCMK) method is proposed. This exploits the manifold structures in Hilbert space derived from local kernels. The proposed method combines advantages of both BoV and kernel based metrics, and achieves a linear computational complexity. This enables efficient and scalable visual matching to be performed on large scale image sets. To evaluate the effectiveness of the proposed LCMK method, we conduct extensive experiments with widely used benchmark datasets, including 15-Scenes, Caltech101/256, PASCAL VOC 2007 and 2011 datasets. Experimental results confirm the effectiveness of the relatively efficient LCMK method.

  1. A kernel adaptive algorithm for quaternion-valued inputs.

    Science.gov (United States)

    Paul, Thomas K; Ogunfunmi, Tokunbo

    2015-10-01

    The use of quaternion data can provide benefit in applications like robotics and image recognition, and particularly for performing transforms in 3-D space. Here, we describe a kernel adaptive algorithm for quaternions. A least mean square (LMS)-based method was used, resulting in the derivation of the quaternion kernel LMS (Quat-KLMS) algorithm. Deriving this algorithm required describing the idea of a quaternion reproducing kernel Hilbert space (RKHS), as well as kernel functions suitable for quaternions. A modified HR calculus for Hilbert spaces was used to find the gradient of cost functions defined on a quaternion RKHS. In addition, the use of widely linear (or augmented) filtering is proposed to improve performance. The benefit of the Quat-KLMS and widely linear forms in learning nonlinear transformations of quaternion data is illustrated with simulations.
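
    For readers unfamiliar with KLMS, the real-valued version that Quat-KLMS generalizes admits a very short sketch: the filter is a growing kernel expansion and each prediction error adds one weighted kernel unit. This is plain KLMS on synthetic data, not the quaternion algorithm:

        import numpy as np

        def rbf(a, b, gamma=1.0):
            return np.exp(-gamma * np.sum((a - b) ** 2))

        def klms(X, d, eta=0.5):
            centers, coeffs, preds = [], [], []
            for x, y in zip(X, d):
                f = sum(c * rbf(z, x) for z, c in zip(centers, coeffs))
                preds.append(f)
                centers.append(x)
                coeffs.append(eta * (y - f))   # LMS update in the RKHS
            return np.array(preds)

        rng = np.random.default_rng(0)
        X = rng.standard_normal((300, 2))
        d = np.tanh(X[:, 0]) + 0.1 * rng.standard_normal(300)  # nonlinear target
        preds = klms(X, d)
        print(np.mean((d[200:] - preds[200:]) ** 2))           # late-stage MSE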

  2. Kernel methods for deep learning

    OpenAIRE

    Cho, Youngmin

    2012-01-01

    We introduce a new family of positive-definite kernels that mimic the computation in large neural networks. We derive the different members of this family by considering neural networks with different activation functions. Using these kernels as building blocks, we also show how to construct other positive-definite kernels by operations such as composition, multiplication, and averaging. We explore the use of these kernels in standard models of supervised learning, such as support vector mach...

  3. Comparative Analysis of Kernel Methods for Statistical Shape Learning

    National Research Council Canada - National Science Library

    Rathi, Yogesh; Dambreville, Samuel; Tannenbaum, Allen

    2006-01-01

    .... In this work, we perform a comparative analysis of shape learning techniques such as linear PCA, kernel PCA, locally linear embedding and propose a new method, kernelized locally linear embedding...

  4. A multi-label learning based kernel automatic recommendation method for support vector machine.

    Science.gov (United States)

    Zhang, Xueying; Song, Qinbao

    2015-01-01

    Choosing an appropriate kernel is very important and critical when classifying a new problem with a Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function, but less to kernel selection. Furthermore, most current kernel selection methods focus on seeking the best kernel with the highest classification accuracy via cross-validation; they are time-consuming and ignore the differences among the number of support vectors and the CPU time of SVMs with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on data characteristics. For each data set, the meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge data base with the multi-label classification method. Finally, appropriate kernel functions are recommended for a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance.

  5. Deep kernel learning method for SAR image target recognition

    Science.gov (United States)

    Chen, Xiuyuan; Peng, Xiyuan; Duan, Ran; Li, Junbao

    2017-10-01

    With the development of deep learning, research on image target recognition has made great progress in recent years. Remote sensing detection urgently requires target recognition for military, geographic, and other scientific research. This paper aims to solve the synthetic aperture radar image target recognition problem by combining deep and kernel learning. The model, which has a multilayer multiple kernel structure, is optimized layer by layer with the parameters of Support Vector Machine and a gradient descent algorithm. This new deep kernel learning method improves accuracy and achieves competitive recognition results compared with other learning methods.

  6. Multivariable Christoffel-Darboux Kernels and Characteristic Polynomials of Random Hermitian Matrices

    Directory of Open Access Journals (Sweden)

    Hjalmar Rosengren

    2006-12-01

    Full Text Available We study multivariable Christoffel-Darboux kernels, which may be viewed as reproducing kernels for antisymmetric orthogonal polynomials, and also as correlation functions for products of characteristic polynomials of random Hermitian matrices. Using their interpretation as reproducing kernels, we obtain simple proofs of Pfaffian and determinant formulas, as well as Schur polynomial expansions, for such kernels. In subsequent work, these results are applied in combinatorics (enumeration of marked shifted tableaux and number theory (representation of integers as sums of squares.
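
    For orientation, the classical one-variable Christoffel-Darboux formula that these multivariable kernels generalize reads, for orthonormal polynomials p_k with leading coefficients k_n (standard material, stated here for reference):

        \[
          K_n(x, y) = \sum_{k=0}^{n} p_k(x)\, p_k(y)
                    = \frac{k_n}{k_{n+1}}\,
                      \frac{p_{n+1}(x)\, p_n(y) - p_n(x)\, p_{n+1}(y)}{x - y}.
        \]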

  7. Kernel based pattern analysis methods using eigen-decompositions for reading Icelandic sagas

    DEFF Research Database (Denmark)

    Christiansen, Asger Nyman; Carstensen, Jens Michael

    We want to test the applicability of kernel based eigen-decomposition methods, compared to the traditional eigen-decomposition methods. We have implemented and tested three kernel based methods, namely PCA, MAF and MNF, all using a Gaussian kernel. We tested the methods on a multispectral image of a page in the book 'hauksbok', which contains Icelandic sagas.

  8. On methods to increase the security of the Linux kernel

    International Nuclear Information System (INIS)

    Matvejchikov, I.V.

    2014-01-01

    Methods to increase the security of the Linux kernel for the implementation of imposed protection tools have been examined. The methods of incorporation into various subsystems of the kernel on the x86 architecture have been described [ru]

  9. A One-Sample Test for Normality with Kernel Methods

    OpenAIRE

    Kellner , Jérémie; Celisse , Alain

    2015-01-01

    We propose a new one-sample test for normality in a Reproducing Kernel Hilbert Space (RKHS). Namely, we test the null-hypothesis of belonging to a given family of Gaussian distributions. Hence our procedure may be applied either to test data for normality or to test parameters (mean and covariance) if data are assumed Gaussian. Our test is based on the same principle as the MMD (Maximum Mean Discrepancy) which is usually used for two-sample tests such as homogeneity or independence testing. O...
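
    A hedged sketch of the MMD principle the abstract builds on: the (biased) two-sample MMD estimate with a Gaussian kernel, used here to compare data against draws from a fitted Gaussian. This illustrates the general mechanism only, not the authors' one-sample statistic or its calibration:

        import numpy as np

        def mmd2(X, Y, gamma=0.5):
            """Biased estimate of squared MMD with an RBF kernel."""
            def K(A, B):
                sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
                return np.exp(-gamma * sq)
            return K(X, X).mean() + K(Y, Y).mean() - 2.0 * K(X, Y).mean()

        rng = np.random.default_rng(0)
        X = rng.standard_normal((200, 2))                           # data to test
        ref = rng.multivariate_normal(X.mean(0), np.cov(X.T), 200)  # fitted Gaussian
        print(mmd2(X, ref))   # values near 0 are consistent with normality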

  10. Comparison of Kernel Equating and Item Response Theory Equating Methods

    Science.gov (United States)

    Meng, Yu

    2012-01-01

    The kernel method of test equating is a unified approach to test equating with some advantages over traditional equating methods. Therefore, it is important to evaluate in a comprehensive way the usefulness and appropriateness of the Kernel equating (KE) method, as well as its advantages and disadvantages compared with several popular item…

  11. Kernel Methods for Machine Learning with Life Science Applications

    DEFF Research Database (Denmark)

    Abrahamsen, Trine Julie

    Kernel methods refer to a family of widely used nonlinear algorithms for machine learning tasks like classification, regression, and feature extraction. By exploiting the so-called kernel trick straightforward extensions of classical linear algorithms are enabled as long as the data only appear a...

  12. Kernel Methods for Mining Instance Data in Ontologies

    Science.gov (United States)

    Bloehdorn, Stephan; Sure, York

    The amount of ontologies and meta data available on the Web is constantly growing. The successful application of machine learning techniques for learning of ontologies from textual data, i.e. mining for the Semantic Web, contributes to this trend. However, no principal approaches exist so far for mining from the Semantic Web. We investigate how machine learning algorithms can be made amenable for directly taking advantage of the rich knowledge expressed in ontologies and associated instance data. Kernel methods have been successfully employed in various learning tasks and provide a clean framework for interfacing between non-vectorial data and machine learning algorithms. In this spirit, we express the problem of mining instances in ontologies as the problem of defining valid corresponding kernels. We present a principled framework for designing such kernels by means of decomposing the kernel computation into specialized kernels for selected characteristics of an ontology which can be flexibly assembled and tuned. Initial experiments on real world Semantic Web data enjoy promising results and show the usefulness of our approach.

  13. Improved modeling of clinical data with kernel methods.

    Science.gov (United States)

    Daemen, Anneleen; Timmerman, Dirk; Van den Bosch, Thierry; Bottomley, Cecilia; Kirk, Emma; Van Holsbeke, Caroline; Valentin, Lil; Bourne, Tom; De Moor, Bart

    2012-02-01

    Despite the rise of high-throughput technologies, clinical data such as age, gender and medical history guide clinical management for most diseases and examinations. To improve clinical management, available patient information should be fully exploited. This requires appropriate modeling of relevant parameters. When kernel methods are used, traditional kernel functions such as the linear kernel are often applied to the set of clinical parameters. These kernel functions, however, have their disadvantages due to the specific characteristics of clinical data, which are a mix of variable types, each variable having its own range. We propose a new kernel function specifically adapted to the characteristics of clinical data. The clinical kernel function provides a better representation of patients' similarity by equalizing the influence of all variables and taking into account the range r of the variables. Moreover, it is robust with respect to changes in r. Incorporated in a least squares support vector machine, the new kernel function results in significantly improved diagnosis, prognosis and prediction of therapy response. This is illustrated on four clinical data sets within gynecology, with an average increase in test area under the ROC curve (AUC) of 0.023, 0.021, 0.122 and 0.019, respectively. Moreover, when combining clinical parameters and expression data in three case studies on breast cancer, results improved overall with use of the new kernel function and when considering both data types in a weighted fashion, with a larger weight assigned to the clinical parameters. The increase in AUC with respect to a standard kernel function and/or unweighted data combination was at most 0.127, 0.042 and 0.118 for the three case studies. For clinical data consisting of variables of different types, the proposed kernel function, which takes into account the type and range of each variable, has been shown to be a better alternative for linear and non-linear classification problems.
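
    A sketch of a clinical-style kernel along the lines described above: every variable contributes on an equal footing, continuous variables are scaled by their range r and categorical variables by exact match. This is our simplified reading for illustration; consult the paper for the exact definition:

        import numpy as np

        def clinical_kernel(X, Z, ranges, nominal):
            """K[i, j] = average over variables of per-variable similarities."""
            p = X.shape[1]
            K = np.zeros((X.shape[0], Z.shape[0]))
            for v in range(p):
                x, z = X[:, v, None], Z[None, :, v]
                if nominal[v]:
                    K += (x == z).astype(float)                   # categorical: match
                else:
                    K += (ranges[v] - np.abs(x - z)) / ranges[v]  # continuous: range-scaled
            return K / p

        patients = np.array([[35, 0], [62, 1], [47, 0]], dtype=float)  # [age, gender]
        ranges = np.array([50.0, 1.0])   # e.g. the observed range of each variable
        print(clinical_kernel(patients, patients, ranges, nominal=[False, True]))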

  14. Single pass kernel k-means clustering method

    Indian Academy of Sciences (India)

    In unsupervised classification, the kernel k-means clustering method has been shown to perform better than the conventional k-means clustering method in ... 518501, India; Department of Computer Science and Engineering, Jawaharlal Nehru Technological University, Anantapur College of Engineering, Anantapur 515002, India ...
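
    For reference, a compact batch kernel k-means sketch in Python (not the single-pass variant, which adds its own bookkeeping): distances to cluster means in feature space are computed from kernel evaluations alone, dropping the constant K[x, x] term that does not affect the argmin:

        import numpy as np

        def kernel_kmeans(K, k, iters=20, seed=0):
            n = K.shape[0]
            labels = np.random.default_rng(seed).integers(0, k, n)
            for _ in range(iters):
                D = np.full((n, k), np.inf)
                for c in range(k):
                    idx = labels == c
                    nc = idx.sum()
                    if nc == 0:
                        continue
                    # ||phi(x) - mean_c||^2 up to the constant K[x, x] term
                    D[:, c] = (-2.0 * K[:, idx].sum(1) / nc
                               + K[np.ix_(idx, idx)].sum() / nc ** 2)
                labels = D.argmin(1)
            return labels

        rng = np.random.default_rng(1)
        X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(2, 0.3, (30, 2))])
        sq = ((X[:, None] - X[None]) ** 2).sum(-1)
        print(kernel_kmeans(np.exp(-sq), 2))   # two recovered clusters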

  15. A method for manufacturing kernels of metallic oxides and the thus obtained kernels

    International Nuclear Information System (INIS)

    Lelievre Bernard; Feugier, Andre.

    1973-01-01

    A method is described for manufacturing fissile or fertile metal oxide kernels, which consists of adding at least one chemical compound capable of releasing ammonia to an aqueous solution of actinide nitrates, dispersing the solution thus obtained dropwise in a hot organic phase so as to gelify the drops and transform them into solid particles, then washing, drying and treating said particles so as to transform them into oxide kernels. The method is characterized in that the organic phase used in the gel-forming reactions comprises a mixture of two organic liquids, one of which acts as a solvent, whereas the other is a product capable of extracting the metal-salt anions from the drops while the gel-forming reaction is taking place. This can be applied to the so-called high temperature nuclear reactors [fr]

  16. A Fourier-series-based kernel-independent fast multipole method

    International Nuclear Information System (INIS)

    Zhang Bo; Huang Jingfang; Pitsianis, Nikos P.; Sun Xiaobai

    2011-01-01

    We present in this paper a new kernel-independent fast multipole method (FMM), named FKI-FMM, for pairwise particle interactions with translation-invariant kernel functions. FKI-FMM creates, using numerical techniques, sufficiently accurate and compressive representations of a given kernel function over multi-scale interaction regions in the form of a truncated Fourier series. It also provides economical operators for the multipole-to-multipole, multipole-to-local, and local-to-local translations that are typical and essential in FMM algorithms. The multipole-to-local translation operator, in particular, is readily diagonal and does not dominate in arithmetic operations. FKI-FMM provides an alternative and competitive option, among other kernel-independent FMM algorithms, for an efficient application of the FMM, especially for applications where the kernel function consists of multi-physics and multi-scale components such as those arising in recent studies of biological systems. We present the complexity analysis and demonstrate with experimental results the FKI-FMM performance in accuracy and efficiency.

  17. Anatomical image-guided fluorescence molecular tomography reconstruction using kernel method

    Science.gov (United States)

    Baikejiang, Reheman; Zhao, Yue; Fite, Brett Z.; Ferrara, Katherine W.; Li, Changqing

    2017-01-01

    Fluorescence molecular tomography (FMT) is an important in vivo imaging modality to visualize physiological and pathological processes in small animals. However, FMT reconstruction is ill-posed and ill-conditioned due to strong optical scattering in deep tissues, which results in poor spatial resolution. It is well known that FMT image quality can be improved substantially by applying structural guidance in the FMT reconstruction. An approach to introducing anatomical information into the FMT reconstruction is presented using the kernel method. In contrast to conventional methods that incorporate anatomical information with a Laplacian-type regularization matrix, the proposed method introduces the anatomical guidance into the projection model of FMT. The primary advantage of the proposed method is that it does not require segmentation of targets in the anatomical images. Numerical simulations and phantom experiments have been performed to demonstrate the proposed approach's feasibility. Numerical simulation results indicate that the proposed kernel method can separate two FMT targets with an edge-to-edge distance of 1 mm and is robust to false-positive guidance and inhomogeneity in the anatomical image. For the phantom experiments with two FMT targets, the kernel method has reconstructed both targets successfully, which further validates the proposed kernel method.

  18. The integral first collision kernel method for gamma-ray skyshine analysis

    Energy Technology Data Exchange (ETDEWEB)

    Sheu, R.-D.; Chui, C.-S.; Jiang, S.-H. E-mail: shjiang@mx.nthu.edu.tw

    2003-12-01

    A simplified method, based on the integral of the first collision kernel, is presented for performing gamma-ray skyshine calculations for collimated sources. The first collision kernels were calculated in air for a reference air density by use of the EGS4 Monte Carlo code. These kernels can be applied to other air densities by applying density corrections. The integral first collision kernel (IFCK) method has been used to calculate two of the ANSI/ANS skyshine benchmark problems and the results were compared with those of a number of other commonly used codes. Our results were generally in good agreement with the others but required only a small fraction of the computation time of the Monte Carlo calculations. The scheme of the IFCK method for dealing with a variety of source collimation geometries is also presented in this study.

  19. Dose calculation methods in photon beam therapy using energy deposition kernels

    International Nuclear Information System (INIS)

    Ahnesjoe, A.

    1991-01-01

    The problem of calculating accurate dose distributions in treatment planning of megavoltage photon radiation therapy has been studied. New dose calculation algorithms using energy deposition kernels have been developed. The kernels describe the transfer of energy by secondary particles from a primary photon interaction site to its surroundings. Monte Carlo simulations of particle transport have been used for derivation of kernels for primary photon energies from 0.1 MeV to 50 MeV. The trade-off between accuracy and calculational speed has been addressed by the development of two algorithms: one point-oriented with low computational overhead for interactive use, and one for fast and accurate calculation of dose distributions in a 3-dimensional lattice. The latter algorithm models secondary particle transport in heterogeneous tissue by scaling energy deposition kernels with the electron density of the tissue. The accuracy of the methods has been tested using full Monte Carlo simulations for different geometries, and found to be superior to conventional algorithms based on scaling of broad beam dose distributions. Methods have also been developed for characterization of clinical photon beams in entities appropriate for kernel based calculation models. By approximating the spectrum as laterally invariant, an effective spectrum and a dose distribution for contaminating charged particles are derived from depth dose distributions measured in water, using analytical constraints. The spectrum is used to calculate kernels by superposition of monoenergetic kernels. The lateral energy fluence distribution is determined by deconvolving measured lateral dose distributions with a corresponding pencil beam kernel. Dose distributions for contaminating photons are described using two different methods, one for estimation of the dose outside of the collimated beam, and the other for calibration of output factors derived from kernel based dose calculations. (au)
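
    As a toy illustration of the convolution idea behind such kernel-based algorithms (not the thesis code; the 1D depth-only geometry, exponential TERMA, and Gaussian stand-in kernel are assumptions):

    ```python
    import numpy as np

    nz = 256
    z = np.linspace(0.0, 30.0, nz)   # depth grid in water, cm
    mu = 0.05                        # effective attenuation coefficient, 1/cm (assumed)
    terma = np.exp(-mu * z)          # energy released by primary photon interactions

    # stand-in energy deposition kernel, normalized to conserve energy
    kernel = np.exp(-0.5 * ((z - z[nz // 2]) / 1.5)**2)
    kernel /= kernel.sum()

    # dose = TERMA convolved with the kernel (circular FFT convolution for brevity)
    dose = np.real(np.fft.ifft(np.fft.fft(terma) *
                               np.fft.fft(np.fft.ifftshift(kernel))))
    ```

    In heterogeneous tissue the thesis scales the kernel by electron density, which breaks shift invariance and turns the plain convolution into a superposition sum.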

  20. Dynamic PET Image reconstruction for parametric imaging using the HYPR kernel method

    Science.gov (United States)

    Spencer, Benjamin; Qi, Jinyi; Badawi, Ramsey D.; Wang, Guobao

    2017-03-01

    Dynamic PET image reconstruction is a challenging problem because of the ill-conditioned nature of PET and the low counting statistics resulting from short time frames in dynamic imaging. The kernel method for image reconstruction has been developed to improve image reconstruction of low-count PET data by incorporating prior information derived from high-count composite data. In contrast to most of the existing regularization-based methods, the kernel method embeds image prior information in the forward projection model and does not require an explicit regularization term in the reconstruction formula. Inspired by the existing highly constrained back-projection (HYPR) algorithm for dynamic PET image denoising, we propose in this work a new type of kernel that is simpler to implement and further improves kernel-based dynamic PET image reconstruction. Our evaluation study using a physical phantom scan with synthetic FDG tracer kinetics has demonstrated that the new HYPR kernel-based reconstruction can achieve a better region-of-interest (ROI) bias versus standard deviation trade-off for dynamic PET parametric imaging than the post-reconstruction HYPR denoising method and the previously used nonlocal-means kernel.
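
    For orientation, here is the basic post-reconstruction HYPR ratio that motivates the proposed kernel (a sketch on toy 2D frames; the paper instead builds this idea into the reconstruction forward model, and the filter size is an assumption):

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    rng = np.random.default_rng(1)
    frames = rng.poisson(5.0, size=(20, 64, 64)).astype(float)  # noisy dynamic frames
    composite = frames.mean(axis=0)                             # high-count composite

    def hypr_denoise(frame, composite, size=7):
        # I_HYPR = I_composite * (F * I_frame) / (F * I_composite), F a smoothing filter
        ratio = uniform_filter(frame, size) / (uniform_filter(composite, size) + 1e-12)
        return composite * ratio

    denoised = np.stack([hypr_denoise(f, composite) for f in frames])
    ```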

  1. A Comparative Study of Pairwise Learning Methods Based on Kernel Ridge Regression.

    Science.gov (United States)

    Stock, Michiel; Pahikkala, Tapio; Airola, Antti; De Baets, Bernard; Waegeman, Willem

    2018-06-12

    Many machine learning problems can be formulated as predicting labels for a pair of objects. Problems of that kind are often referred to as pairwise learning, dyadic prediction, or network inference problems. During the past decade, kernel methods have played a dominant role in pairwise learning. They still deliver state-of-the-art predictive performance, but a theoretical analysis of their behavior has been underexplored in the machine learning literature. In this work we review and unify kernel-based algorithms that are commonly used in different pairwise learning settings, ranging from matrix filtering to zero-shot learning. To this end, we focus on closed-form efficient instantiations of Kronecker kernel ridge regression. We show that independent task kernel ridge regression, two-step kernel ridge regression, and a linear matrix filter arise naturally as special cases of Kronecker kernel ridge regression, implying that all these methods implicitly minimize a squared loss. In addition, we analyze universality, consistency, and spectral filtering properties. Our theoretical results provide valuable insights into assessing the advantages and limitations of existing pairwise learning methods.
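
    A compact sketch of closed-form Kronecker kernel ridge regression (hypothetical data; this follows the standard eigendecomposition shortcut rather than any specific code from the paper):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n, m = 30, 20
    U = rng.standard_normal((n, 5)); G = U @ U.T + 1e-6 * np.eye(n)  # row-object kernel
    V = rng.standard_normal((m, 4)); K = V @ V.T + 1e-6 * np.eye(m)  # column-object kernel
    Y = rng.standard_normal((n, m))                                  # pairwise labels
    lam = 0.1

    # dual coefficients solve (K kron G + lam I) vec(A) = vec(Y); eigendecompositions
    # of G and K bring the cost to O(n^3 + m^3) instead of O((n m)^3)
    sg, Qg = np.linalg.eigh(G)
    sk, Qk = np.linalg.eigh(K)
    S = 1.0 / (np.outer(sg, sk) + lam)        # spectral filter 1 / (sg_i * sk_j + lam)
    A = Qg @ (S * (Qg.T @ Y @ Qk)) @ Qk.T
    Y_hat = G @ A @ K                         # fitted pairwise predictions
    ```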

  2. An Extreme Learning Machine Based on the Mixed Kernel Function of Triangular Kernel and Generalized Hermite Dirichlet Kernel

    Directory of Open Access Journals (Sweden)

    Senyue Zhang

    2016-01-01

    Full Text Available Because the kernel function of an extreme learning machine (ELM) and its performance are strongly correlated, a novel extreme learning machine based on a generalized triangle Hermitian kernel function is proposed in this paper. First, the generalized triangle Hermitian kernel function was constructed as the product of the triangular kernel and the generalized Hermite Dirichlet kernel, and the proposed kernel function was proved to be a valid kernel function for extreme learning machines. Then, the learning methodology of the extreme learning machine based on the proposed kernel function is presented. The biggest advantage of the proposed kernel is that its kernel parameter takes values only in the natural numbers, which greatly shortens the computational time of parameter optimization and retains more of the sample data structure information. Experiments were performed on a number of binary classification, multiclassification, and regression datasets from the UCI benchmark repository. The experimental results demonstrate that the robustness and generalization performance of the proposed method outperform those of other extreme learning machines with different kernels. Furthermore, the learning speed of the proposed method is faster than that of support vector machine (SVM) methods.

  3. Efficient Online Subspace Learning With an Indefinite Kernel for Visual Tracking and Recognition

    NARCIS (Netherlands)

    Liwicki, Stephan; Zafeiriou, Stefanos; Tzimiropoulos, Georgios; Pantic, Maja

    2012-01-01

    We propose an exact framework for online learning with a family of indefinite (not positive) kernels. As we study the case of nonpositive kernels, we first show how to extend kernel principal component analysis (KPCA) from a reproducing kernel Hilbert space to Krein space. We then formulate an

  4. Method for calculating anisotropic neutron transport using scattering kernel without polynomial expansion

    International Nuclear Information System (INIS)

    Takahashi, Akito; Yamamoto, Junji; Ebisuya, Mituo; Sumita, Kenji

    1979-01-01

    A new method for calculating the anisotropic neutron transport is proposed for the angular spectral analysis of D-T fusion reactor neutronics. The method is based on the transport equation with a new type of anisotropic scattering kernel formulated by a single function I_i(μ′, μ) instead of a polynomial expansion, for instance, Legendre polynomials. In the calculation of angular flux spectra using scattering kernels with the Legendre polynomial expansion, we often observe oscillation with negative flux. In principle, this oscillation disappears with the new method. In this work, we discuss anisotropic scattering kernels for elastic scattering and for the inelastic scatterings which excite discrete energy levels. The other scatterings were included in isotropic scattering kernels. An approximation method, using a first-collision source written in terms of the I_i(μ′, μ) function, was introduced to attenuate the ''oscillations'' when we are obliged to use the scattering kernels with the Legendre polynomial expansion. Calculated results with this approximation showed remarkable improvement for the analysis of the angular flux spectra in a slab system of lithium metal with the D-T neutron source. (author)

  5. Epileptic Seizure Detection with Log-Euclidean Gaussian Kernel-Based Sparse Representation.

    Science.gov (United States)

    Yuan, Shasha; Zhou, Weidong; Wu, Qi; Zhang, Yanli

    2016-05-01

    Epileptic seizure detection plays an important role in the diagnosis of epilepsy and reducing the massive workload of reviewing electroencephalography (EEG) recordings. In this work, a novel algorithm is developed to detect seizures employing log-Euclidean Gaussian kernel-based sparse representation (SR) in long-term EEG recordings. Unlike the traditional SR for vector data in Euclidean space, the log-Euclidean Gaussian kernel-based SR framework is proposed for seizure detection in the space of symmetric positive definite (SPD) matrices, which form a Riemannian manifold. Since the Riemannian manifold is nonlinear, the log-Euclidean Gaussian kernel function is applied to embed it into a reproducing kernel Hilbert space (RKHS) for performing SR. The EEG signals of all channels are divided into epochs and the SPD matrices representing EEG epochs are generated by covariance descriptors. Then, the testing samples are sparsely coded over the dictionary composed of training samples utilizing log-Euclidean Gaussian kernel-based SR. The classification of testing samples is achieved by computing the minimal reconstruction residuals. The proposed method is evaluated on the Freiburg EEG dataset of 21 patients and shows notable performance on both epoch-based and event-based assessments. Moreover, this method handles multiple channels of EEG recordings synchronously, which is faster and more efficient than traditional seizure detection methods.
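
    A minimal sketch of the log-Euclidean Gaussian kernel between SPD covariance descriptors, the building block of the method (the toy epochs, regularization, and sigma are assumptions):

    ```python
    import numpy as np
    from scipy.linalg import logm

    rng = np.random.default_rng(3)

    def spd_from_epoch(epoch):
        """Covariance descriptor of one EEG epoch (channels x samples)."""
        c = np.cov(epoch)
        return c + 1e-6 * np.eye(c.shape[0])   # keep it safely SPD

    def log_euclid_gauss(X, Y, sigma=1.0):
        # k(X, Y) = exp(-||logm(X) - logm(Y)||_F^2 / (2 sigma^2))
        d = np.linalg.norm(logm(X) - logm(Y), "fro")
        return np.exp(-d**2 / (2 * sigma**2))

    epochs = [rng.standard_normal((6, 256)) for _ in range(4)]  # 6-channel toy EEG
    mats = [spd_from_epoch(e) for e in epochs]
    Kmat = np.array([[log_euclid_gauss(a, b) for b in mats] for a in mats])
    ```

    The resulting Gram matrix can then feed a sparse-coding solver over the training dictionary, as described above.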

  6. Parametric output-only identification of time-varying structures using a kernel recursive extended least squares TARMA approach

    Science.gov (United States)

    Ma, Zhi-Sai; Liu, Li; Zhou, Si-Da; Yu, Lei; Naets, Frank; Heylen, Ward; Desmet, Wim

    2018-01-01

    The problem of parametric output-only identification of time-varying structures in a recursive manner is considered. A kernelized time-dependent autoregressive moving average (TARMA) model is proposed by expanding the time-varying model parameters onto the basis set of kernel functions in a reproducing kernel Hilbert space. An exponentially weighted kernel recursive extended least squares TARMA identification scheme is proposed, and a sliding-window technique is subsequently applied to fix the computational complexity for each consecutive update, allowing the method to operate online in time-varying environments. The proposed sliding-window exponentially weighted kernel recursive extended least squares TARMA method is employed for the identification of a laboratory time-varying structure consisting of a simply supported beam and a moving mass sliding on it. The proposed method is comparatively assessed against an existing recursive pseudo-linear regression TARMA method via Monte Carlo experiments and shown to be capable of accurately tracking the time-varying dynamics. Furthermore, the comparisons demonstrate the superior achievable accuracy, lower computational complexity and enhanced online identification capability of the proposed kernel recursive extended least squares TARMA approach.

  7. A comprehensive benchmark of kernel methods to extract protein-protein interactions from literature.

    Directory of Open Access Journals (Sweden)

    Domonkos Tikk

    Full Text Available The most important way of conveying new findings in biomedical research is scientific publication. Extraction of protein-protein interactions (PPIs reported in scientific publications is one of the core topics of text mining in the life sciences. Recently, a new class of such methods has been proposed - convolution kernels that identify PPIs using deep parses of sentences. However, comparing published results of different PPI extraction methods is impossible due to the use of different evaluation corpora, different evaluation metrics, different tuning procedures, etc. In this paper, we study whether the reported performance metrics are robust across different corpora and learning settings and whether the use of deep parsing actually leads to an increase in extraction quality. Our ultimate goal is to identify the one method that performs best in real-life scenarios, where information extraction is performed on unseen text and not on specifically prepared evaluation data. We performed a comprehensive benchmarking of nine different methods for PPI extraction that use convolution kernels on rich linguistic information. Methods were evaluated on five different public corpora using cross-validation, cross-learning, and cross-corpus evaluation. Our study confirms that kernels using dependency trees generally outperform kernels based on syntax trees. However, our study also shows that only the best kernel methods can compete with a simple rule-based approach when the evaluation prevents information leakage between training and test corpora. Our results further reveal that the F-score of many approaches drops significantly if no corpus-specific parameter optimization is applied and that methods reaching a good AUC score often perform much worse in terms of F-score. We conclude that for most kernels no sensible estimation of PPI extraction performance on new text is possible, given the current heterogeneity in evaluation data. Nevertheless, our study

  8. A Spectral-Texture Kernel-Based Classification Method for Hyperspectral Images

    Directory of Open Access Journals (Sweden)

    Yi Wang

    2016-11-01

    Full Text Available Classification of hyperspectral images always suffers from high dimensionality and very limited labeled samples. Recently, spectral-spatial classification has attracted considerable attention and can achieve higher classification accuracy and smoother classification maps. In this paper, a novel spectral-spatial classification method for hyperspectral images using kernel methods is investigated. For a given hyperspectral image, the principal component analysis (PCA) transform is first performed. Then, the first principal component of the input image is segmented into non-overlapping homogeneous regions by using the entropy rate superpixel (ERS) algorithm. Next, the local spectral histogram model is applied to each homogeneous region to obtain the corresponding texture features. Because this step is performed within each homogeneous region, instead of within a fixed-size image window, the obtained local texture features are more accurate, which effectively benefits the improvement of classification accuracy. In the following step, a contextual spectral-texture kernel is constructed by combining spectral information in the image and the extracted texture information using the linearity property of kernel methods. Finally, the classification map is produced by a support vector machine (SVM) classifier using the proposed spectral-texture kernel. Experiments on two benchmark airborne hyperspectral datasets demonstrate that our method can effectively improve classification accuracy, even though only a very limited training sample is available. Specifically, our method achieves an overall accuracy 8.26% to 15.1% higher than the traditional SVM classifier. The performance of our method was further compared to several state-of-the-art classification methods for hyperspectral images using objective quantitative measures and a visual qualitative evaluation.

  9. Employment of kernel methods on wind turbine power performance assessment

    DEFF Research Database (Denmark)

    Skrimpas, Georgios Alexandros; Sweeney, Christian Walsted; Marhadi, Kun S.

    2015-01-01

    A power performance assessment technique is developed for the detection of power production discrepancies in wind turbines. The method employs a widely used nonparametric pattern recognition technique, the kernel methods. The evaluation is based on the trending of an extracted feature from the kernel matrix, called the similarity index, which is introduced by the authors for the first time. The operation of the turbine and consequently the computation of the similarity indexes is classified into five power bins, offering better resolution and thus more consistent root cause analysis. The accurate...

  10. Genomic similarity and kernel methods I: advancements by building on mathematical and statistical foundations.

    Science.gov (United States)

    Schaid, Daniel J

    2010-01-01

    Measures of genomic similarity are the basis of many statistical analytic methods. We review the mathematical and statistical basis of similarity methods, particularly based on kernel methods. A kernel function converts information for a pair of subjects to a quantitative value representing either similarity (larger values meaning more similar) or distance (smaller values meaning more similar), with the requirement that it must create a positive semidefinite matrix when applied to all pairs of subjects. This review emphasizes the wide range of statistical methods and software that can be used when similarity is based on kernel methods, such as nonparametric regression, linear mixed models and generalized linear mixed models, hierarchical models, score statistics, and support vector machines. The mathematical rigor for these methods is summarized, as is the mathematical framework for making kernels. This review provides a framework to move from intuitive and heuristic approaches to define genomic similarities to more rigorous methods that can take advantage of powerful statistical modeling and existing software. A companion paper reviews novel approaches to creating kernels that might be useful for genomic analyses, providing insights with examples [1]. Copyright © 2010 S. Karger AG, Basel.

  11. Kernel-based tests for joint independence

    DEFF Research Database (Denmark)

    Pfister, Niklas; Bühlmann, Peter; Schölkopf, Bernhard

    2018-01-01

    We investigate the problem of testing whether $d$ random variables, which may or may not be continuous, are jointly (or mutually) independent. Our method builds on ideas of the two variable Hilbert-Schmidt independence criterion (HSIC) but allows for an arbitrary number of variables. We embed the $d$-dimensional joint distribution and the product of the marginals into a reproducing kernel Hilbert space and define the $d$-variable Hilbert-Schmidt independence criterion (dHSIC) as the squared distance between the embeddings. In the population case, the value of dHSIC is zero if and only if the $d$ variables are jointly independent, as long as the kernel is characteristic. Based on an empirical estimate of dHSIC, we define three different non-parametric hypothesis tests: a permutation test, a bootstrap test and a test based on a Gamma approximation. We prove that the permutation test...
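
    A small sketch of the empirical dHSIC statistic with Gaussian kernels (bandwidths and data are assumptions; the paper's tests add permutation, bootstrap, or Gamma calibration on top of this statistic):

    ```python
    import numpy as np

    def gauss_gram(x, sigma=1.0):
        d2 = (x[:, None] - x[None, :])**2
        return np.exp(-d2 / (2 * sigma**2))

    def dhsic(grams):
        """Empirical dHSIC from a list of n x n Gram matrices, one per variable."""
        term1 = np.mean(np.prod(grams, axis=0))                  # mean_ij prod_k K^k_ij
        term2 = np.prod([g.mean() for g in grams])               # prod_k mean_ij K^k_ij
        term3 = np.mean(np.prod([g.mean(axis=1) for g in grams], axis=0))
        return term1 + term2 - 2 * term3

    rng = np.random.default_rng(4)
    x, y, z = rng.standard_normal((3, 200))
    print(dhsic([gauss_gram(x), gauss_gram(y), gauss_gram(z)]))  # near 0: independent
    print(dhsic([gauss_gram(x), gauss_gram(x), gauss_gram(z)]))  # larger: dependent
    ```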

  12. A kernel plus method for quantifying wind turbine performance upgrades

    KAUST Repository

    Lee, Giwhyun

    2014-04-21

    Power curves are commonly estimated using the binning method recommended by the International Electrotechnical Commission, which primarily incorporates wind speed information. When such power curves are used to quantify a turbine's upgrade, the results may not be accurate because many other environmental factors in addition to wind speed, such as temperature, air pressure, turbulence intensity, wind shear and humidity, all potentially affect the turbine's power output. Wind industry practitioners are aware of the need to filter out effects from environmental conditions. Toward that objective, we developed a kernel plus method that allows incorporation of multivariate environmental factors in a power curve model, thereby controlling the effects from environmental factors while comparing power outputs. We demonstrate that the kernel plus method can serve as a useful tool for quantifying a turbine's upgrade because it is sensitive to small and moderate changes caused by certain turbine upgrades. Although we demonstrate the utility of the kernel plus method in this specific application, the resulting method is a general, multivariate model that can connect other physical factors, as long as their measurements are available, with a turbine's power output, which may allow us to explore new physical properties associated with wind turbine performance. © 2014 John Wiley & Sons, Ltd.

  13. Development of nondestructive screening methods for single kernel characterization of wheat

    DEFF Research Database (Denmark)

    Nielsen, J.P.; Pedersen, D.K.; Munck, L.

    2003-01-01

    The development of nondestructive screening methods for single seed protein, vitreousness, density, and hardness index has been studied for single kernels of European wheat. A single kernel procedure was applied involving image analysis, near-infrared transmittance (NIT) spectroscopy, laboratory ... predictability. However, by applying an averaging approach, in which single seed replicate measurements are mathematically simulated, a very good NIT prediction model was achieved. This suggests that the single seed NIT spectra contain hardness information, but that a single seed hardness method with higher...

  14. Kernel methods in orthogonalization of multi- and hypervariate data

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2009-01-01

    A kernel version of maximum autocorrelation factor (MAF) analysis is described very briefly and applied to change detection in remotely sensed hyperspectral image (HyMap) data. The kernel version is based on a dual formulation, also termed Q-mode analysis, in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products are replaced by inner products between nonlinear mappings into a higher dimensional feature space of the original data. Via kernel substitution, also known as the kernel trick, these inner products between the mappings are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of this kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel PCA and MAF analysis handle nonlinearities by implicitly transforming data into high (even infinite...

  15. Exact Heat Kernel on a Hypersphere and Its Applications in Kernel SVM

    Directory of Open Access Journals (Sweden)

    Chenchao Zhao

    2018-01-01

    Full Text Available Many contemporary statistical learning methods assume a Euclidean feature space. This paper presents a method for defining similarity based on hyperspherical geometry and shows that it often improves the performance of support vector machine compared to other competing similarity measures. Specifically, the idea of using heat diffusion on a hypersphere to measure similarity has been previously proposed and tested by Lafferty and Lebanon [1], demonstrating promising results based on a heuristic heat kernel obtained from the zeroth order parametrix expansion; however, how well this heuristic kernel agrees with the exact hyperspherical heat kernel remains unknown. This paper presents a higher order parametrix expansion of the heat kernel on a unit hypersphere and discusses several problems associated with this expansion method. We then compare the heuristic kernel with an exact form of the heat kernel expressed in terms of a uniformly and absolutely convergent series in high-dimensional angular momentum eigenmodes. Being a natural measure of similarity between sample points dwelling on a hypersphere, the exact kernel often shows superior performance in kernel SVM classifications applied to text mining, tumor somatic mutation imputation, and stock market analysis.
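
    A sketch of the exact series form described above, truncated for computation (the overall surface-area normalization is omitted since a kernel SVM only needs the kernel up to a positive constant; the truncation level, diffusion time t, and dimension d are assumptions):

    ```python
    import numpy as np
    from scipy.special import eval_gegenbauer

    def heat_kernel_sphere(x, y, t, d, n_modes=60):
        """Unnormalized heat kernel on S^{d-1} between unit vectors x, y at time t (d >= 3)."""
        cos_theta = np.clip(np.dot(x, y), -1.0, 1.0)
        alpha = (d - 2) / 2.0
        total = 0.0
        for l in range(n_modes):
            mult = (2 * l + d - 2) / (d - 2)   # angular-momentum eigenspace factor
            total += (np.exp(-l * (l + d - 2) * t) * mult
                      * eval_gegenbauer(l, alpha, cos_theta))
        return total

    rng = np.random.default_rng(5)
    u, v = rng.standard_normal((2, 10))
    u, v = u / np.linalg.norm(u), v / np.linalg.norm(v)   # map features onto the sphere
    print(heat_kernel_sphere(u, v, t=0.1, d=10))
    ```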

  16. Generic primal-dual interior point methods based on a new kernel function

    NARCIS (Netherlands)

    EL Ghami, M.; Roos, C.

    2008-01-01

    In this paper we present generic primal-dual interior point methods (IPMs) for linear optimization in which the search direction depends on a univariate kernel function which is also used as a proximity measure in the analysis of the algorithm. The proposed kernel function does not satisfy all the

  17. On convergence of kernel learning estimators

    NARCIS (Netherlands)

    Norkin, V.I.; Keyzer, M.A.

    2009-01-01

    The paper studies convex stochastic optimization problems in a reproducing kernel Hilbert space (RKHS). The objective (risk) functional depends on functions from this RKHS and takes the form of a mathematical expectation (integral) of a nonnegative integrand (loss function) over a probability

  18. A laser optical method for detecting corn kernel defects

    Energy Technology Data Exchange (ETDEWEB)

    Gunasekaran, S.; Paulsen, M. R.; Shove, G. C.

    1984-01-01

    An opto-electronic instrument was developed to examine individual corn kernels and detect various kernel defects according to reflectance differences. A low power helium-neon (He-Ne) laser (632.8 nm, red light) was used as the light source in the instrument. Reflectance from good and defective parts of corn kernel surfaces differed by approximately 40%. Broken, chipped, and starch-cracked kernels were detected with nearly 100% accuracy; while surface-split kernels were detected with about 80% accuracy. (author)

  19. CLAss-Specific Subspace Kernel Representations and Adaptive Margin Slack Minimization for Large Scale Classification.

    Science.gov (United States)

    Yu, Yinan; Diamantaras, Konstantinos I; McKelvey, Tomas; Kung, Sun-Yuan

    2018-02-01

    In kernel-based classification models, given limited computational power and storage capacity, operations over the full kernel matrix become prohibitive. In this paper, we propose a new supervised learning framework using kernel models for sequential data processing. The framework is based on two components that both aim at enhancing the classification capability with a subset selection scheme. The first part is a subspace projection technique in the reproducing kernel Hilbert space using a CLAss-specific Subspace Kernel representation for kernel approximation. In the second part, we propose a novel structural risk minimization algorithm called adaptive margin slack minimization, which iteratively improves the classification accuracy by adaptive data selection. We motivate each part separately, and then integrate them into learning frameworks for large scale data. We propose two such frameworks: memory efficient sequential processing for sequential data processing, and parallelized sequential processing for distributed computing with sequential data acquisition. We test our methods on several benchmark data sets and compare them with state-of-the-art techniques to verify the validity of the proposed techniques.

  20. Parameter Selection Method for Support Vector Regression Based on Adaptive Fusion of the Mixed Kernel Function

    Directory of Open Access Journals (Sweden)

    Hailun Wang

    2017-01-01

    Full Text Available The support vector regression algorithm is widely used in fault diagnosis of rolling bearings. A new model parameter selection method for support vector regression, based on adaptive fusion of the mixed kernel function, is proposed in this paper. We choose the mixed kernel function as the kernel function of support vector regression. The fusion coefficients of the mixed kernel function, the kernel function parameters, and the regression parameters are combined together as the state vector. Thus, the model selection problem is transformed into a nonlinear system state estimation problem. We use a 5th-degree cubature Kalman filter to estimate the parameters. In this way, we realize the adaptive selection of the mixed kernel function weighting coefficients, the kernel parameters, and the regression parameters. Compared with a single kernel function, unscented Kalman filter (UKF) support vector regression algorithms, and genetic algorithms, the decision regression function obtained by the proposed method has better generalization ability and higher prediction accuracy.
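
    The mixed-kernel mechanics can be sketched as follows (with a fixed, assumed weight w; the paper's contribution is to estimate w and the kernel and regression parameters jointly with a 5th-degree cubature Kalman filter, which is not reproduced here):

    ```python
    import numpy as np
    from sklearn.svm import SVR

    rng = np.random.default_rng(6)
    X = rng.uniform(-3, 3, size=(80, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(80)

    def mixed_kernel(A, B, w=0.7, gamma=0.5, degree=2, c0=1.0):
        d2 = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
        k_rbf = np.exp(-gamma * d2)          # local (RBF) component
        k_poly = (A @ B.T + c0)**degree      # global (polynomial) component
        return w * k_rbf + (1 - w) * k_poly

    model = SVR(kernel="precomputed", C=10.0)
    model.fit(mixed_kernel(X, X), y)
    y_fit = model.predict(mixed_kernel(X, X))   # in-sample sanity check
    ```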

  1. Optimized Kernel Entropy Components.

    Science.gov (United States)

    Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau

    2017-06-01

    This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to a sorting of the importance of kernel eigenvectors by entropy instead of variance, as in kernel principal component analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by compacting the information into very few features (often just one or two). The proposed method produces features which have higher expressive power. In particular, it is based on the independent component analysis framework, and introduces an extra rotation to the eigendecomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both methods is the selection of the kernel parameter, since it critically affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. The results of both methods are illustrated on different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, that the most successful rule for estimating the kernel parameter is based on maximum likelihood, and that OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
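
    The entropy-based ranking at the heart of KECA can be sketched in a few lines (toy data; the Gaussian sigma is an assumption, and OKECA's extra ICA-style rotation is not shown):

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    X = rng.standard_normal((100, 4))
    d2 = ((X[:, None, :] - X[None, :, :])**2).sum(-1)
    K = np.exp(-d2 / (2 * 1.0**2))                  # Gaussian Gram matrix

    lam, E = np.linalg.eigh(K)                      # eigenvalues in ascending order
    entropy = lam * E.sum(axis=0)**2                # Renyi entropy contribution per pair
    order = np.argsort(entropy)[::-1]               # rank by entropy, not by variance

    k = 2                                           # keep the top-k entropy components
    lam_top = np.clip(lam[order[:k]], 0.0, None)
    features = E[:, order[:k]] * np.sqrt(lam_top)   # KECA projection of the data
    ```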

  2. Data-variant kernel analysis

    CERN Document Server

    Motai, Yuichi

    2015-01-01

    Describes and discusses the variants of kernel analysis methods for data types that have been intensely studied in recent years This book covers kernel analysis topics ranging from the fundamental theory of kernel functions to its applications. The book surveys the current status, popular trends, and developments in kernel analysis studies. The author discusses multiple kernel learning algorithms and how to choose the appropriate kernels during the learning phase. Data-Variant Kernel Analysis is a new pattern analysis framework for different types of data configurations. The chapters include

  3. Sensitivity kernels for viscoelastic loading based on adjoint methods

    Science.gov (United States)

    Al-Attar, David; Tromp, Jeroen

    2014-01-01

    Observations of glacial isostatic adjustment (GIA) allow for inferences to be made about mantle viscosity, ice sheet history and other related parameters. Typically, this inverse problem can be formulated as minimizing the misfit between the given observations and a corresponding set of synthetic data. When the number of parameters is large, solution of such optimization problems can be computationally challenging. A practical, albeit non-ideal, solution is to use gradient-based optimization. Although the gradient of the misfit required in such methods could be calculated approximately using finite differences, the necessary computation time grows linearly with the number of model parameters, and so this is often infeasible. A far better approach is to apply the 'adjoint method', which allows the exact gradient to be calculated from a single solution of the forward problem, along with one solution of the associated adjoint problem. As a first step towards applying the adjoint method to the GIA inverse problem, we consider its application to a simpler viscoelastic loading problem in which gravitationally self-consistent ocean loading is neglected. The earth model considered is non-rotating, self-gravitating, compressible, hydrostatically pre-stressed, laterally heterogeneous and possesses a Maxwell solid rheology. We determine adjoint equations and Fréchet kernels for this problem based on a Lagrange multiplier method. Given an objective functional $J$ defined in terms of the surface deformation fields, we show that its first-order perturbation can be written $\delta J = \int_{M_S} K_{\eta}\, \delta \ln \eta \, \mathrm{d}V + \int_{t_0}^{t_1} \int_{\partial M} K_{\dot{\sigma}}\, \delta \dot{\sigma} \, \mathrm{d}S \, \mathrm{d}t$, where $\delta \ln \eta = \delta\eta/\eta$ denotes relative viscosity variations in solid regions $M_S$, $\mathrm{d}V$ is the volume element, $\delta \dot{\sigma}$ is the perturbation to the time derivative of the surface load, which is defined on the earth model's surface $\partial M$ and for times $[t_0, t_1]$, and $\mathrm{d}S$ is the surface element on $\partial M$. The 'viscosity

  4. Linear and kernel methods for multi- and hypervariate change detection

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Canty, Morton J.

    2010-01-01

    Principal component analysis (PCA) as well as maximum autocorrelation factor (MAF) and minimum noise fraction (MNF) analyses of IR-MAD images, both linear and kernel-based (which are nonlinear), may further enhance change signals relative to no-change background. The kernel versions are based on a dual formulation, also termed Q-mode analysis, in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products of the original data are replaced by inner products between nonlinear mappings into higher dimensional feature space. Via kernel substitution, also known as the kernel trick, these inner products between the mappings are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of the kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel principal component...

  5. Protein Subcellular Localization with Gaussian Kernel Discriminant Analysis and Its Kernel Parameter Selection.

    Science.gov (United States)

    Wang, Shunfang; Nie, Bing; Yue, Kun; Fei, Yu; Li, Wenjia; Xu, Dongshu

    2017-12-15

    Kernel discriminant analysis (KDA) is a dimension reduction and classification algorithm based on the nonlinear kernel trick, which can be used to treat high-dimensional and complex biological data before undergoing classification processes such as protein subcellular localization. Kernel parameters make a great impact on the performance of the KDA model. Specifically, for KDA with the popular Gaussian kernel, selecting the scale parameter is still a challenging problem. This paper introduces the KDA method and proposes a new method for Gaussian kernel parameter selection, based on the observation that suitable kernel parameters should maximize the differences between the reconstruction errors of edge normal samples and those of interior normal samples. Experiments with various standard data sets of protein subcellular localization show that the overall accuracy of protein classification prediction with KDA is much higher than that without KDA. Meanwhile, the kernel parameter of KDA has a great impact on efficiency, and the proposed method can produce an optimum parameter, which makes the new algorithm not only perform as effectively as the traditional ones, but also reduce the computational time and thus improve efficiency.

  6. Local Observed-Score Kernel Equating

    Science.gov (United States)

    Wiberg, Marie; van der Linden, Wim J.; von Davier, Alina A.

    2014-01-01

    Three local observed-score kernel equating methods that integrate methods from the local equating and kernel equating frameworks are proposed. The new methods were compared with their earlier counterparts with respect to such measures as bias--as defined by Lord's criterion of equity--and percent relative error. The local kernel item response…

  7. Linear and kernel methods for multivariate change detection

    DEFF Research Database (Denmark)

    Canty, Morton J.; Nielsen, Allan Aasbjerg

    2012-01-01

    Principal component analysis (PCA), as well as maximum autocorrelation factor (MAF) and minimum noise fraction (MNF) analyses of IR-MAD images, both linear and kernel-based (nonlinear), may further enhance change signals relative to no-change background. IDL (Interactive Data Language) implementations of IR-MAD, automatic radiometric normalization, and kernel PCA/MAF/MNF transformations are presented that function as transparent and fully integrated extensions of the ENVI remote sensing image analysis environment. The train/test approach to kernel PCA is evaluated against a Hebbian learning procedure. Matlab code is also available that allows fast data exploration and experimentation with smaller datasets. New, multiresolution versions of IR-MAD that accelerate convergence and further reduce no-change background noise are introduced. Computationally expensive matrix diagonalization and kernel image projections are programmed...

  8. A Fast Multiple-Kernel Method With Applications to Detect Gene-Environment Interaction.

    Science.gov (United States)

    Marceau, Rachel; Lu, Wenbin; Holloway, Shannon; Sale, Michèle M; Worrall, Bradford B; Williams, Stephen R; Hsu, Fang-Chi; Tzeng, Jung-Ying

    2015-09-01

    Kernel machine (KM) models are a powerful tool for exploring associations between sets of genetic variants and complex traits. Although most KM methods use a single kernel function to assess the marginal effect of a variable set, KM analyses involving multiple kernels have become increasingly popular. Multikernel analysis allows researchers to study more complex problems, such as assessing gene-gene or gene-environment interactions, incorporating variance-component based methods for population substructure into rare-variant association testing, and assessing the conditional effects of a variable set adjusting for other variable sets. The KM framework is robust, powerful, and provides efficient dimension reduction for multifactor analyses, but requires the estimation of high dimensional nuisance parameters. Traditional estimation techniques, including regularization and the "expectation-maximization (EM)" algorithm, have a large computational cost and are not scalable to large sample sizes needed for rare variant analysis. Therefore, under the context of gene-environment interaction, we propose a computationally efficient and statistically rigorous "fastKM" algorithm for multikernel analysis that is based on a low-rank approximation to the nuisance effect kernel matrices. Our algorithm is applicable to various trait types (e.g., continuous, binary, and survival traits) and can be implemented using any existing single-kernel analysis software. Through extensive simulation studies, we show that our algorithm has similar performance to an EM-based KM approach for quantitative traits while running much faster. We also apply our method to the Vitamin Intervention for Stroke Prevention (VISP) clinical trial, examining gene-by-vitamin effects on recurrent stroke risk and gene-by-age effects on change in homocysteine level. © 2015 WILEY PERIODICALS, INC.
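
    The low-rank approximation idea can be illustrated with a basic Nyström sketch (hypothetical data; fastKM's actual algorithm wraps such an approximation of the nuisance-effect kernel inside score-type tests):

    ```python
    import numpy as np

    rng = np.random.default_rng(8)
    n, r = 500, 30
    X = rng.standard_normal((n, 8))          # e.g. coded genotypes or covariates

    def rbf(A, B, gamma=0.1):
        d2 = ((A[:, None, :] - B[None, :, :])**2).sum(-1)
        return np.exp(-gamma * d2)

    idx = rng.choice(n, size=r, replace=False)   # landmark samples
    C = rbf(X, X[idx])                           # n x r slice of the kernel matrix
    W = C[idx]                                   # r x r landmark block
    K_lowrank = C @ np.linalg.pinv(W) @ C.T      # rank-r surrogate for the full K
    ```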

  9. Open Problem: Kernel methods on manifolds and metric spaces

    DEFF Research Database (Denmark)

    Feragen, Aasa; Hauberg, Søren

    2016-01-01

    Radial kernels are well-suited for machine learning over general geodesic metric spaces, where pairwise distances are often the only computable quantity available. We have recently shown that geodesic exponential kernels are only positive definite for all bandwidths when the input space has strong...... linear properties. This negative result hints that radial kernel are perhaps not suitable over geodesic metric spaces after all. Here, however, we present evidence that large intervals of bandwidths exist where geodesic exponential kernels have high probability of being positive definite over finite...... datasets, while still having significant predictive power. From this we formulate conjectures on the probability of a positive definite kernel matrix for a finite random sample, depending on the geometry of the data space and the spread of the sample....

  10. Kernel learning algorithms for face recognition

    CERN Document Server

    Li, Jun-Bao; Pan, Jeng-Shyang

    2013-01-01

    Kernel Learning Algorithms for Face Recognition covers the framework of kernel based face recognition. This book discusses the advanced kernel learning algorithms and its application on face recognition. This book also focuses on the theoretical deviation, the system framework and experiments involving kernel based face recognition. Included within are algorithms of kernel based face recognition, and also the feasibility of the kernel based face recognition method. This book provides researchers in pattern recognition and machine learning area with advanced face recognition methods and its new

  11. Rare variant testing across methods and thresholds using the multi-kernel sequence kernel association test (MK-SKAT).

    Science.gov (United States)

    Urrutia, Eugene; Lee, Seunggeun; Maity, Arnab; Zhao, Ni; Shen, Judong; Li, Yun; Wu, Michael C

    Analysis of rare genetic variants has focused on region-based analysis wherein a subset of the variants within a genomic region is tested for association with a complex trait. Two important practical challenges have emerged. First, it is difficult to choose which test to use. Second, it is unclear which group of variants within a region should be tested. Both depend on the unknown true state of nature. Therefore, we develop the Multi-Kernel SKAT (MK-SKAT) which tests across a range of rare variant tests and groupings. Specifically, we demonstrate that several popular rare variant tests are special cases of the sequence kernel association test which compares pair-wise similarity in trait value to similarity in the rare variant genotypes between subjects as measured through a kernel function. Choosing a particular test is equivalent to choosing a kernel. Similarly, choosing which group of variants to test also reduces to choosing a kernel. Thus, MK-SKAT uses perturbation to test across a range of kernels. Simulations and real data analyses show that our framework controls type I error while maintaining high power across settings: MK-SKAT loses power when compared to the kernel for a particular scenario but has much greater power than poor choices.

  12. SU-E-T-209: Independent Dose Calculation in FFF Modulated Fields with Pencil Beam Kernels Obtained by Deconvolution

    International Nuclear Information System (INIS)

    Azcona, J; Burguete, J

    2014-01-01

    Purpose: To obtain the pencil beam kernels that characterize a megavoltage photon beam generated in a FFF linac by experimental measurements, and to apply them for dose calculation in modulated fields. Methods: Several Kodak EDR2 radiographic films were irradiated with a 10 MV FFF photon beam from a Varian True Beam (Varian Medical Systems, Palo Alto, CA) linac, at the depths of 5, 10, 15, and 20cm in polystyrene (RW3 water equivalent phantom, PTW Freiburg, Germany). The irradiation field was a 50 mm diameter circular field, collimated with a lead block. Measured dose leads to the kernel characterization, assuming that the energy fluence exiting the linac head and further collimated is originated on a point source. The three-dimensional kernel was obtained by deconvolution at each depth using the Hankel transform. A correction on the low dose part of the kernel was performed to reproduce accurately the experimental output factors. The kernels were used to calculate modulated dose distributions in six modulated fields and compared through the gamma index to their absolute dose measured by film in the RW3 phantom. Results: The resulting kernels properly characterize the global beam penumbra. The output factor-based correction was carried out adding the amount of signal necessary to reproduce the experimental output factor in steps of 2mm, starting at a radius of 4mm. There the kernel signal was in all cases below 10% of its maximum value. With this correction, the number of points that pass the gamma index criteria (3%, 3mm) in the modulated fields for all cases are at least 99.6% of the total number of points. Conclusion: A system for independent dose calculations in modulated fields from FFF beams has been developed. Pencil beam kernels were obtained and their ability to accurately calculate dose in homogeneous media was demonstrated

  13. A kernel version of spatial factor analysis

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2009-01-01

    Schölkopf et al. introduce kernel PCA. Shawe-Taylor and Cristianini is an excellent reference for kernel methods in general. Bishop and Press et al. describe kernel methods among many other subjects. Nielsen and Canty use kernel PCA to detect change in univariate airborne digital camera images. The kernel version of PCA handles nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. In this paper we shall apply kernel versions of PCA, maximum autocorrelation factor (MAF) analysis...

  14. Classification With Truncated Distance Kernel.

    Science.gov (United States)

    Huang, Xiaolin; Suykens, Johan A K; Wang, Shuning; Hornegger, Joachim; Maier, Andreas

    2018-05-01

    This brief proposes a truncated distance (TL1) kernel, which results in a classifier that is nonlinear in the global region but linear in each subregion. With this kernel, the subregion structure can be trained using all the training data and local linear classifiers can be established simultaneously. The TL1 kernel has good adaptiveness to nonlinearity and is suitable for problems which require different nonlinearities in different areas. Though the TL1 kernel is not positive semidefinite, some classical kernel learning methods are still applicable, which means that the TL1 kernel can be directly used in standard toolboxes by replacing the kernel evaluation. In numerical experiments, the TL1 kernel with a pregiven parameter achieves similar or better performance than the radial basis function kernel with the parameter tuned by cross validation, implying that the TL1 kernel is a promising nonlinear kernel for classification tasks.
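
    The kernel itself is simple enough to code directly; a sketch (rho is the pregiven parameter mentioned above, and its value here is an assumption):

    ```python
    import numpy as np

    def tl1_kernel(A, B, rho=2.0):
        # k(x, z) = max(rho - ||x - z||_1, 0): zero outside an L1 ball of radius rho,
        # piecewise linear inside it, hence the locally linear classifier behavior
        d1 = np.abs(A[:, None, :] - B[None, :, :]).sum(-1)
        return np.maximum(rho - d1, 0.0)

    rng = np.random.default_rng(9)
    X = rng.standard_normal((5, 3))
    K = tl1_kernel(X, X)   # not PSD in general, but usable as a drop-in kernel
    ```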

  15. An asymptotic expression for the eigenvalues of the normalization kernel of the resonating group method

    International Nuclear Information System (INIS)

    Lomnitz-Adler, J.; Brink, D.M.

    1976-01-01

    A generating function for the eigenvalues of the RGM Normalization Kernel is expressed in terms of the diagonal matrix elements of the GCM Overlap Kernel. An asymptotic expression for the eigenvalues is obtained by using the Method of Steepest Descent. (Auth.)

  16. Kernels for structured data

    CERN Document Server

    Gärtner, Thomas

    2009-01-01

    This book provides a unique treatment of an important area of machine learning and answers the question of how kernel methods can be applied to structured data. Kernel methods are a class of state-of-the-art learning algorithms that exhibit excellent learning results in several application domains. Originally, kernel methods were developed with data in mind that can easily be embedded in a Euclidean vector space. Much real-world data does not have this property but is inherently structured. An example of such data, often consulted in the book, is the (2D) graph structure of molecules formed by

  17. LZW-Kernel: fast kernel utilizing variable length code blocks from LZW compressors for protein sequence classification.

    Science.gov (United States)

    Filatov, Gleb; Bauwens, Bruno; Kertész-Farkas, Attila

    2018-05-07

    Bioinformatics studies often rely on similarity measures between sequence pairs, which often pose a bottleneck in large-scale sequence analysis. Here, we present a new convolutional kernel function for protein sequences called the LZW-Kernel. It is based on code words identified with the Lempel-Ziv-Welch (LZW) universal text compressor. The LZW-Kernel is an alignment-free method, it is always symmetric, is positive, always provides 1.0 for self-similarity and it can directly be used with Support Vector Machines (SVMs) in classification problems, contrary to normalized compression distance (NCD), which often violates the distance metric properties in practice and requires further techniques to be used with SVMs. The LZW-Kernel is a one-pass algorithm, which makes it particularly plausible for big data applications. Our experimental studies on remote protein homology detection and protein classification tasks reveal that the LZW-Kernel closely approaches the performance of the Local Alignment Kernel (LAK) and the SVM-pairwise method combined with Smith-Waterman (SW) scoring at a fraction of the time. Moreover, the LZW-Kernel outperforms the SVM-pairwise method when combined with BLAST scores, which indicates that the LZW code words might be a better basis for similarity measures than local alignment approximations found with BLAST. In addition, the LZW-Kernel outperforms n-gram based mismatch kernels, hidden Markov model based SAM and Fisher kernel, and protein family based PSI-BLAST, among others. Further advantages include the LZW-Kernel's reliance on a simple idea, its ease of implementation, and its high speed, three times faster than BLAST and several magnitudes faster than SW or LAK in our tests. LZW-Kernel is implemented as a standalone C code and is a free open-source program distributed under GPLv3 license and can be downloaded from https://github.com/kfattila/LZW-Kernel. akerteszfarkas@hse.ru. Supplementary data are available at Bioinformatics Online.

  18. Ranking Support Vector Machine with Kernel Approximation.

    Science.gov (United States)

    Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi

    2017-01-01

    Learning to rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, and computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. The primal truncated Newton method is used to optimize the pairwise L2-loss (squared Hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms.

  19. Ranking Support Vector Machine with Kernel Approximation

    Directory of Open Access Journals (Sweden)

    Kai Chen

    2017-01-01

    Full Text Available Learning to rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, and computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. The primal truncated Newton method is used to optimize the pairwise L2-loss (squared Hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms.
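
    One of the two approximations the paper explores, random Fourier features, can be sketched as follows (the feature count D and gamma are assumptions; the ranking-specific truncated Newton solver is not shown):

    ```python
    import numpy as np

    rng = np.random.default_rng(10)

    def rff_map(X, D=500, gamma=0.5):
        """Map n x d inputs to D random features with phi(x).phi(z) ~ exp(-gamma ||x-z||^2)."""
        d = X.shape[1]
        W = rng.normal(scale=np.sqrt(2 * gamma), size=(d, D))
        b = rng.uniform(0, 2 * np.pi, size=D)
        return np.sqrt(2.0 / D) * np.cos(X @ W + b)

    X = rng.standard_normal((6, 3))
    Phi = rff_map(X)
    K_exact = np.exp(-0.5 * ((X[:, None] - X[None, :])**2).sum(-1))
    print(np.abs(Phi @ Phi.T - K_exact).max())   # error shrinks as D grows
    ```

    A linear ranking model trained on Phi then stands in for kernel RankSVM at a fraction of the cost.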

  20. PKI, Gamma Radiation Reactor Shielding Calculation by Point-Kernel Method

    International Nuclear Information System (INIS)

    Li Chunhuai; Zhang Liwu; Zhang Yuqin; Zhang Chuanxu; Niu Xihua

    1990-01-01

    1 - Description of program or function: This code calculates gamma-ray radiation shielding problems in geometric space. 2 - Method of solution: PKI uses a point kernel integration technique, describes the radiation shielding geometry by using a geometric space configuration method and coordinate conversion, and makes use of the calculated results of reactor primary shielding and of the coolant flow regularity in the loop system
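
    A toy point-kernel integration in the spirit of such codes (not PKI itself; the attenuation coefficient, linear buildup model, and source grid are assumptions):

    ```python
    import numpy as np

    mu = 0.06  # linear attenuation coefficient of the shield, 1/cm (assumed)

    def point_kernel(src, det, strength=1.0):
        # attenuated point kernel with a crude linear buildup factor B = 1 + mu*r
        r = np.linalg.norm(det - src)
        mu_r = mu * r
        return strength * (1.0 + mu_r) * np.exp(-mu_r) / (4 * np.pi * r**2)

    # discretize a slab source into point sources and sum their kernel contributions
    src_pts = np.array([[x, y, 0.0] for x in np.linspace(-5, 5, 11)
                                    for y in np.linspace(-5, 5, 11)])
    det = np.array([0.0, 0.0, 100.0])
    dose = sum(point_kernel(s, det) for s in src_pts)
    print(f"point-kernel response at detector: {dose:.3e}")
    ```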

  1. A unified and comprehensible view of parametric and kernel methods for genomic prediction with application to rice

    Directory of Open Access Journals (Sweden)

    Laval Jacquin

    2016-08-01

    Full Text Available One objective of this study was to provide readers with a clear and unified understanding of parametric statistical and kernel methods, used for genomic prediction, and to compare some of these in the context of rice breeding for quantitative traits. Furthermore, another objective was to provide a simple and user-friendly R package, named KRMM, which allows users to perform RKHS regression with several kernels. After introducing the concept of regularized empirical risk minimization, the connections between well-known parametric and kernel methods such as Ridge regression (i.e. genomic best linear unbiased predictor (GBLUP)) and reproducing kernel Hilbert space (RKHS) regression were reviewed. Ridge regression was then reformulated so as to show and emphasize the advantage of the kernel trick concept, exploited by kernel methods in the context of epistatic genetic architectures, over the parametric frameworks used by conventional methods. Some parametric and kernel methods, namely least absolute shrinkage and selection operator (LASSO), GBLUP, support vector machine regression (SVR) and RKHS regression, were thereupon compared for their genomic predictive ability in the context of rice breeding using three real data sets. Among the compared methods, RKHS regression and SVR were often the most accurate methods for prediction, followed by GBLUP and LASSO. An R function which allows users to perform RR-BLUP of marker effects, GBLUP and RKHS regression, with a Gaussian, Laplacian, polynomial or ANOVA kernel, in a reasonable computation time has been developed. Moreover, a modified version of this function, which allows users to tune kernels for RKHS regression, has also been developed and parallelized for HPC Linux clusters. The corresponding KRMM package and all scripts have been made publicly available.
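
    For readers who want the GBLUP/RKHS connection in code, here is a minimal kernel ridge regression sketch on toy marker data (the SNP simulation, Gaussian bandwidth heuristic, and lambda are assumptions, not the KRMM implementation):

    ```python
    import numpy as np

    rng = np.random.default_rng(11)
    n, p = 120, 300
    Mk = rng.integers(0, 3, size=(n, p)).astype(float)    # SNP codes 0/1/2
    y = Mk[:, :10].sum(axis=1) + rng.standard_normal(n)   # trait with additive signal

    d2 = ((Mk[:, None, :] - Mk[None, :, :])**2).sum(-1)
    K = np.exp(-d2 / d2.mean())          # Gaussian kernel on markers (bandwidth heuristic)

    lam = 1.0                            # ridge penalty
    alpha = np.linalg.solve(K + lam * np.eye(n), y)   # RKHS / GBLUP-style dual solution
    y_hat = K @ alpha                    # genomic predictions
    ```

    Swapping K for the linear marker kernel Mk @ Mk.T / p recovers the GBLUP flavor of the same closed form.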

  2. Approximate kernel competitive learning.

    Science.gov (United States)

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

    Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large-scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be calculated and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large-scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL) method, which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling works for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallel approximate kernel competitive learning (PAKCL) method based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates approximate kernel competitive learning for large-scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision than related approximate clustering approaches. Copyright © 2014 Elsevier Ltd. All rights reserved.
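
    Learning in a sampled subspace is in the same family as the Nyström low-rank approximation; a generic sketch of that idea (not the authors' AKCL algorithm):

    ```python
    import numpy as np

    def nystrom_factor(X, kernel, m=100, seed=0):
        """Return F such that F @ F.T approximates the full kernel matrix K,
        using only m sampled landmark points instead of all n."""
        rng = np.random.default_rng(seed)
        idx = rng.choice(len(X), size=m, replace=False)
        K_nm = kernel(X, X[idx])          # n x m cross-kernel block
        K_mm = kernel(X[idx], X[idx])     # m x m landmark block
        w, V = np.linalg.eigh(K_mm)       # symmetric inverse square root of K_mm
        w = np.clip(w, 1e-10, None)
        return K_nm @ (V / np.sqrt(w)) @ V.T
    ```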

  3. Determination of the Iodine Value of Hydrogenated Palm Kernel Oil (HPKO) and Refined Bleached Deodorized Palm Kernel Oil (RBDPKO)

    OpenAIRE

    Sitompul, Monica Angelina

    2015-01-01

    The iodine value of several samples of Hydrogenated Palm Kernel Oil (HPKO) and Refined Bleached Deodorized Palm Kernel Oil (RBDPKO) was determined by titration. The analysis gave iodine values for Hydrogenated Palm Kernel Oil of (A) 0.16 g I2/100 g, (B) 0.20 g I2/100 g, and (C) 0.24 g I2/100 g, and for Refined Bleached Deodorized Palm Kernel Oil of (A) 17.51 g I2/100 g, Refined Bleached Deodorized Palm Kernel ...

  4. Kernel parameter dependence in spatial factor analysis

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2010-01-01

    kernel PCA. Shawe-Taylor and Cristianini [4] is an excellent reference for kernel methods in general. Bishop [5] and Press et al. [6] describe kernel methods among many other subjects. The kernel version of PCA handles nonlinearities by implicitly transforming data into high (even infinite) dimensional...... feature space via the kernel function and then performing a linear analysis in that space. In this paper we shall apply a kernel version of maximum autocorrelation factor (MAF) [7, 8] analysis to irregularly sampled stream sediment geochemistry data from South Greenland and illustrate the dependence...... of the kernel width. The 2,097 samples each covering on average 5 km2 are analyzed chemically for the content of 41 elements....

  5. Scatter kernel estimation with an edge-spread function method for cone-beam computed tomography imaging

    International Nuclear Information System (INIS)

    Li Heng; Mohan, Radhe; Zhu, X Ronald

    2008-01-01

    The clinical applications of kilovoltage x-ray cone-beam computed tomography (CBCT) have been compromised by the limited quality of CBCT images, which typically is due to a substantial scatter component in the projection data. In this paper, we describe an experimental method of deriving the scatter kernel of a CBCT imaging system. The estimated scatter kernel can be used to remove the scatter component from the CBCT projection images, thus improving the quality of the reconstructed image. The scattered radiation was approximated as depth-dependent, pencil-beam kernels, which were derived using an edge-spread function (ESF) method. The ESF geometry was achieved with a half-beam block created by a 3 mm thick lead sheet placed on a stack of slab solid-water phantoms. Measurements for ten water-equivalent thicknesses (WET) ranging from 0 cm to 41 cm were taken with (half-blocked) and without (unblocked) the lead sheet, and corresponding pencil-beam scatter kernels or point-spread functions (PSFs) were then derived without assuming any empirical trial function. The derived scatter kernels were verified with phantom studies. Scatter correction was then incorporated into the reconstruction process to improve image quality. For a 32 cm diameter cylinder phantom, the flatness of the reconstructed image was improved from 22% to 5%. When the method was applied to CBCT images for patients undergoing image-guided therapy of the pelvis and lung, the variation in selected regions of interest (ROIs) was reduced from >300 HU to <100 HU. We conclude that the scatter reduction technique utilizing the scatter kernel effectively suppresses the artifact caused by scatter in CBCT.
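
    In one dimension, the core of an edge-spread-function method is differentiation: the line-spread (pencil-beam) profile is the derivative of the measured edge profile. A toy sketch with an invented profile (shapes and spacings are hypothetical):

    ```python
    import numpy as np

    # esf: stand-in for the measured edge-spread profile across the half-beam
    # block edge; x is a hypothetical detector coordinate in mm
    x = np.linspace(-50.0, 50.0, 501)
    esf = 0.5 * (1.0 + np.tanh(x / 8.0))

    lsf = np.gradient(esf, x)    # line-spread function = d(ESF)/dx
    lsf /= lsf.sum()             # normalize to unit area before use as a kernel
    ```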

  6. Kernel methods for large-scale genomic data analysis

    Science.gov (United States)

    Xing, Eric P.; Schaid, Daniel J.

    2015-01-01

    Machine learning, particularly kernel methods, has been demonstrated as a promising new tool to tackle the challenges imposed by today's explosive data growth in genomics. They provide a practical and principled approach to learning how a large number of genetic variants are associated with complex phenotypes, and help reveal the complexity in the relationship between the genetic markers and the outcome of interest. In this review, we highlight the potential key role they will have in modern genomic data processing, especially with regard to integration with classical methods for gene prioritization, prediction and data fusion. PMID:25053743

  7. Application of learning techniques based on kernel methods for the fault diagnosis in industrial processes

    Directory of Open Access Journals (Sweden)

    Jose M. Bernal-de-Lázaro

    2016-05-01

    Full Text Available This article summarizes the main contributions of the PhD thesis titled "Application of learning techniques based on kernel methods for fault diagnosis in industrial processes". This thesis focuses on the analysis and design of fault diagnosis systems (DDF) based on historical data. Specifically, this thesis provides: (1) new criteria for the adjustment of the kernel methods used to select features with a high discriminative capacity for fault diagnosis tasks; (2) a proposed process-monitoring approach using multivariate statistical techniques that incorporates reinforced information concerning the dynamics of the Hotelling's T2 and SPE statistics, whose combination with kernel methods improves the detection of small-magnitude faults; and (3) a robustness index for comparing the performance of diagnosis classifiers, taking into account their insensitivity to possible noise and disturbances in the historical data.

  8. Credit scoring analysis using kernel discriminant

    Science.gov (United States)

    Widiharih, T.; Mukid, M. A.; Mustafid

    2018-05-01

    Credit scoring models are an important tool for reducing the risk of wrong decisions when granting credit facilities to applicants. This paper investigates the performance of the kernel discriminant model in assessing customer credit risk. Kernel discriminant analysis is a non-parametric method, which means that it does not require any assumptions about the probability distribution of the input. The main ingredient is a kernel that allows an efficient computation of the Fisher discriminant. We use several kernels, such as the normal, Epanechnikov, biweight, and triweight kernels. The models' accuracies were compared with each other using data from a financial institution in Indonesia. The results show that kernel discriminant analysis can be an alternative method for determining who is eligible for a credit loan. For the data we use, a normal kernel is the relevant choice for credit scoring with the kernel discriminant model. Sensitivity and specificity reach 0.5556 and 0.5488, respectively.
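
    Such a kernel discriminant can be viewed as comparing per-class kernel density estimates; a minimal univariate sketch with the Epanechnikov kernel, one of the kernels compared (data names are hypothetical):

    ```python
    import numpy as np

    def epanechnikov(u):
        return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)

    def kde(x, sample, h):
        # Univariate kernel density estimate at points x from a training sample
        return epanechnikov((x[:, None] - sample[None, :]) / h).mean(axis=1) / h

    def classify_good(x, good_scores, bad_scores, h=0.5, prior_good=0.5):
        # Label an applicant "good" when the prior-weighted class density dominates
        return prior_good * kde(x, good_scores, h) >= \
               (1.0 - prior_good) * kde(x, bad_scores, h)
    ```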

  9. Influence Function and Robust Variant of Kernel Canonical Correlation Analysis

    OpenAIRE

    Alam, Md. Ashad; Fukumizu, Kenji; Wang, Yu-Ping

    2017-01-01

    Many unsupervised kernel methods rely on the estimation of the kernel covariance operator (kernel CO) or kernel cross-covariance operator (kernel CCO). Both kernel CO and kernel CCO are sensitive to contaminated data, even when bounded positive definite kernels are used. To the best of our knowledge, there are few well-founded robust kernel methods for statistical unsupervised learning. In addition, while the influence function (IF) of an estimator can characterize its robustness, asymptotic ...

  10. Kernel method for clustering based on optimal target vector

    International Nuclear Information System (INIS)

    Angelini, Leonardo; Marinazzo, Daniele; Pellicoro, Mario; Stramaglia, Sebastiano

    2006-01-01

    We introduce Ising models, suitable for dichotomic clustering, with couplings that are (i) both ferro- and anti-ferromagnetic and (ii) dependent on the whole data set and not only on pairs of samples. Couplings are determined by exploiting the notion of an optimal target vector, introduced here, which provides a link between kernel supervised and unsupervised learning. The effectiveness of the method is shown in the case of the well-known iris data set and on benchmarks of gene expression levels, where it works better than existing methods for dichotomic clustering

  11. Metabolic network prediction through pairwise rational kernels.

    Science.gov (United States)

    Roche-Lima, Abiel; Domaratzki, Michael; Fristensky, Brian

    2014-09-26

    Metabolic networks are represented by sets of metabolic pathways. Metabolic pathways are series of biochemical reactions in which the product (output) of one reaction serves as the substrate (input) to another reaction. Many pathways remain incompletely characterized, and one of the major challenges of computational biology is to obtain better models of metabolic pathways. Existing models depend on the annotation of the genes, which propagates error accumulation when pathways are predicted from incorrectly annotated genes. Pairwise classification methods are supervised learning methods used to classify new pairs of entities. Some of these classification methods, e.g., pairwise support vector machines (SVMs), use pairwise kernels, which describe similarity measures between two pairs of entities. Using pairwise kernels to handle sequence data requires long processing times and large storage. Rational kernels are kernels based on weighted finite-state transducers that represent similarity measures between sequences or automata. They have been used effectively in problems that handle large amounts of sequence information, such as protein essentiality, natural language processing and machine translation. We create a new family of pairwise kernels using weighted finite-state transducers (called Pairwise Rational Kernels (PRKs)) to predict metabolic pathways from a variety of biological data. PRKs take advantage of the simpler representations and faster algorithms of transducers. Because raw sequence data can be used, the predictor model avoids the errors introduced by incorrect gene annotations. We then developed several experiments with PRKs and pairwise SVMs to validate our methods using the metabolic network of Saccharomyces cerevisiae. As a result, when PRKs are used, our method executes faster in comparison with other pairwise kernels. Also, when we use PRKs combined with other simple kernels that include evolutionary information, the accuracy

  12. Global Polynomial Kernel Hazard Estimation

    DEFF Research Database (Denmark)

    Hiabu, Munir; Miranda, Maria Dolores Martínez; Nielsen, Jens Perch

    2015-01-01

    This paper introduces a new bias reducing method for kernel hazard estimation. The method is called global polynomial adjustment (GPA). It is a global correction which is applicable to any kernel hazard estimator. The estimator works well from a theoretical point of view as it asymptotically redu...

  13. A high-order SPH method by introducing inverse kernels

    Directory of Open Access Journals (Sweden)

    Le Fang

    2017-02-01

    Full Text Available The smoothed particle hydrodynamics (SPH) method is usually expected to be an efficient numerical tool for calculating fluid-structure interactions in compressors; however, an inherent restriction is its low-order consistency. A high-order SPH method obtained by introducing inverse kernels, which is easy to implement yet efficient, is proposed to remove this restriction. The basic inverse method and the special treatment near boundaries are introduced, together with a discussion of the combination of the Least-Square (LS) and Moving-Least-Square (MLS) methods. A detailed analysis in spectral space is then presented to clarify the behavior of the method. Finally, three test examples are shown to verify the method's behavior.

  14. Clinical lymph node staging-Influence of slice thickness and reconstruction kernel on volumetry and RECIST measurements

    Energy Technology Data Exchange (ETDEWEB)

    Fabel, M., E-mail: m.fabel@rad.uni-kiel.de [Department of Diagnostic Radiology, University Hospital Schleswig-Holstein, Campus Kiel, Arnold-Heller-Str. 3, Haus 23, D-24105 Kiel (Germany); Wulff, A., E-mail: a.wulff@rad.uni-kiel.de [Department of Diagnostic Radiology, University Hospital Schleswig-Holstein, Campus Kiel, Arnold-Heller-Str. 3, Haus 23, D-24105 Kiel (Germany); Heckel, F., E-mail: frank.heckel@mevis.fraunhofer.de [Fraunhofer MeVis, Universitaetsallee 29, 28359 Bremen (Germany); Bornemann, L., E-mail: lars.bornemann@mevis.fraunhofer.de [Fraunhofer MeVis, Universitaetsallee 29, 28359 Bremen (Germany); Freitag-Wolf, S., E-mail: freitag@medinfo.uni-kiel.de [Institute of Medical Informatics and Statistics, Brunswiker Strasse 10, 24105 Kiel (Germany); Heller, M., E-mail: martin.heller@rad.uni-kiel.de [Department of Diagnostic Radiology, University Hospital Schleswig-Holstein, Campus Kiel, Arnold-Heller-Str. 3, Haus 23, D-24105 Kiel (Germany); Biederer, J., E-mail: juergen.biederer@rad.uni-kiel.de [Department of Diagnostic Radiology, University Hospital Schleswig-Holstein, Campus Kiel, Arnold-Heller-Str. 3, Haus 23, D-24105 Kiel (Germany); Bolte, H., E-mail: hendrik.bolte@ukmuenster.de [Department of Nuclear Medicine, University Hospital Muenster, Albert-Schweitzer-Campus 1, Gebaeude A1, D-48149 Muenster (Germany)

    2012-11-15

    Purpose: Therapy response evaluation in oncological patient care requires reproducible and accurate image evaluation. Today, the common standard in the measurement of tumour growth or shrinkage is one-dimensional RECIST 1.1. A proposed alternative method for therapy monitoring is computer-aided volumetric analysis. In lung metastases, volumetry has proven highly reliable and accurate in experimental studies. However, other metastatic lesions, such as enlarged lymph nodes, are far more challenging. The aim of this study was to investigate the reproducibility of semi-automated volumetric analysis of lymph node metastases as a function of both slice thickness and reconstruction kernel. In addition, manual long axis diameters (LAD) as well as short axis diameters (SAD) were compared to automated RECIST measurements. Materials and methods: Multislice CT of the chest, abdomen and pelvis of 15 patients with lymph node metastases of malignant melanoma were included. Raw data were reconstructed using different slice thicknesses (1-5 mm) and varying reconstruction kernels (B20f, B40f, B60f). Volume and RECIST measurements were performed for 85 lymph nodes between 10 and 60 mm using Oncology Prototype Software (Fraunhofer MEVIS, Siemens, Germany) and were compared to a defined reference volume and diameter by calculating absolute percentage errors (APE). Variability of the lymph node sizes was computed as relative measurement differences; precision of measurements was computed as relative measurement deviation. Results: The mean absolute percentage error (APE) for volumetric analysis varied between 3.95% and 13.8% and increased significantly with slice thickness. Differences between reconstruction kernels were not significant; however, a trend towards the middle soft tissue kernel could be observed. Between automated and manual short axis diameter (SAD, RECIST 1.1) and long axis diameter (LAD, RECIST 1.0) no

  15. Clinical lymph node staging—Influence of slice thickness and reconstruction kernel on volumetry and RECIST measurements

    International Nuclear Information System (INIS)

    Fabel, M.; Wulff, A.; Heckel, F.; Bornemann, L.; Freitag-Wolf, S.; Heller, M.; Biederer, J.; Bolte, H.

    2012-01-01

    Purpose: Therapy response evaluation in oncological patient care requires reproducible and accurate image evaluation. Today, the common standard in the measurement of tumour growth or shrinkage is one-dimensional RECIST 1.1. A proposed alternative method for therapy monitoring is computer-aided volumetric analysis. In lung metastases, volumetry has proven highly reliable and accurate in experimental studies. However, other metastatic lesions, such as enlarged lymph nodes, are far more challenging. The aim of this study was to investigate the reproducibility of semi-automated volumetric analysis of lymph node metastases as a function of both slice thickness and reconstruction kernel. In addition, manual long axis diameters (LAD) as well as short axis diameters (SAD) were compared to automated RECIST measurements. Materials and methods: Multislice CT of the chest, abdomen and pelvis of 15 patients with lymph node metastases of malignant melanoma were included. Raw data were reconstructed using different slice thicknesses (1–5 mm) and varying reconstruction kernels (B20f, B40f, B60f). Volume and RECIST measurements were performed for 85 lymph nodes between 10 and 60 mm using Oncology Prototype Software (Fraunhofer MEVIS, Siemens, Germany) and were compared to a defined reference volume and diameter by calculating absolute percentage errors (APE). Variability of the lymph node sizes was computed as relative measurement differences; precision of measurements was computed as relative measurement deviation. Results: The mean absolute percentage error (APE) for volumetric analysis varied between 3.95% and 13.8% and increased significantly with slice thickness. Differences between reconstruction kernels were not significant; however, a trend towards the middle soft tissue kernel could be observed. Between automated and manual short axis diameter (SAD, RECIST 1.1) and long axis diameter (LAD, RECIST 1.0) no

  16. Model selection in kernel ridge regression

    DEFF Research Database (Denmark)

    Exterkate, Peter

    2013-01-01

    Kernel ridge regression is a technique to perform ridge regression with a potentially infinite number of nonlinear transformations of the independent variables as regressors. This method is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts....... The influence of the choice of kernel and the setting of tuning parameters on forecast accuracy is investigated. Several popular kernels are reviewed, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. The latter two kernels are interpreted in terms of their smoothing properties......, and the tuning parameters associated to all these kernels are related to smoothness measures of the prediction function and to the signal-to-noise ratio. Based on these interpretations, guidelines are provided for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study...

  17. Generation of gamma-ray streaming kernels through cylindrical ducts via Monte Carlo method

    International Nuclear Information System (INIS)

    Kim, Dong Su

    1992-02-01

    Since radiation streaming through penetrations is often the critical consideration in protecting personnel from exposure in a nuclear facility, it has been of great concern in radiation shielding design and analysis. Several methods have been developed and applied to the analysis of radiation streaming in the past, such as the ray analysis method, the single scattering method, the albedo method, and the Monte Carlo method. However, the first three are suitable only for order-of-magnitude calculations where sufficient margin is available, while the Monte Carlo method is accurate but requires a lot of computing time. This study developed a Monte Carlo method and constructed a data library of Monte Carlo solutions for radiation streaming through a straight cylindrical duct in a concrete wall for a broad, mono-directional, monoenergetic gamma-ray beam of unit intensity. The solution, named the plane streaming kernel, is the average dose rate at the duct outlet and was evaluated for 20 source energies from 0 to 10 MeV, 36 source incident angles from 0 to 70 degrees, 5 duct radii from 10 to 30 cm, and 16 wall thicknesses from 0 to 100 cm. It was demonstrated that the average dose rate due to an isotropic point source at an arbitrary position can be well approximated using the plane streaming kernels with acceptable error. Thus, the library of plane streaming kernels can be used for accurate and efficient analysis of radiation streaming through a straight cylindrical duct in concrete walls due to arbitrary distributions of gamma-ray sources

  18. A kernel version of multivariate alteration detection

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Vestergaard, Jacob Schack

    2013-01-01

    Based on the established methods of kernel canonical correlation analysis and multivariate alteration detection, we introduce a kernel version of multivariate alteration detection (kMAD). A case study with SPOT HRV data shows that the kMAD variates focus on extreme change observations.

  19. Detecting generalized synchronization of chaotic dynamical systems. A kernel-based method and choice of its parameter

    International Nuclear Information System (INIS)

    Suetani, Hiromichi; Iba, Yukito; Aihara, Kazuyuki

    2006-01-01

    An approach based on kernel methods for capturing the nonlinear interdependence between two signals is introduced. It is demonstrated, with a simple but successful example, that the proposed approach is useful for characterizing generalized synchronization. An attempt to choose an optimal kernel parameter based on cross-validation is also discussed. (author)

  20. A Temperature Compensation Method for Piezo-Resistive Pressure Sensor Utilizing Chaotic Ions Motion Algorithm Optimized Hybrid Kernel LSSVM

    Directory of Open Access Journals (Sweden)

    Ji Li

    2016-10-01

    Full Text Available A piezo-resistive pressure sensor is made of silicon, the properties of which are considerably influenced by ambient temperature. The effect of temperature should be eliminated during operation if a linear output is expected. To deal with this issue, an approach consisting of a hybrid kernel Least Squares Support Vector Machine (LSSVM) optimized by a chaotic ions motion algorithm is presented. To achieve excellent learning and generalization performance, a hybrid kernel function, constructed from a local kernel (the Radial Basis Function (RBF) kernel) and a global kernel (the polynomial kernel), is incorporated into the Least Squares Support Vector Machine. The chaotic ions motion algorithm is introduced to find the best hyper-parameters of the Least Squares Support Vector Machine. Temperature data from a calibration experiment are used to validate the proposed method. With attention to algorithm robustness and engineering applications, the compensation results show that the proposed scheme outperforms the other compared methods on several performance measures, such as the maximum absolute relative error, the minimum absolute relative error, and the mean and variance of the averaged value over fifty runs. Furthermore, the proposed temperature compensation approach lays a foundation for more extensive research.

  1. A Temperature Compensation Method for Piezo-Resistive Pressure Sensor Utilizing Chaotic Ions Motion Algorithm Optimized Hybrid Kernel LSSVM.

    Science.gov (United States)

    Li, Ji; Hu, Guoqing; Zhou, Yonghong; Zou, Chong; Peng, Wei; Alam Sm, Jahangir

    2016-10-14

    A piezo-resistive pressure sensor is made of silicon, the properties of which are considerably influenced by ambient temperature. The effect of temperature should be eliminated during operation if a linear output is expected. To deal with this issue, an approach consisting of a hybrid kernel Least Squares Support Vector Machine (LSSVM) optimized by a chaotic ions motion algorithm is presented. To achieve excellent learning and generalization performance, a hybrid kernel function, constructed from a local kernel (the Radial Basis Function (RBF) kernel) and a global kernel (the polynomial kernel), is incorporated into the Least Squares Support Vector Machine. The chaotic ions motion algorithm is introduced to find the best hyper-parameters of the Least Squares Support Vector Machine. Temperature data from a calibration experiment are used to validate the proposed method. With attention to algorithm robustness and engineering applications, the compensation results show that the proposed scheme outperforms the other compared methods on several performance measures, such as the maximum absolute relative error, the minimum absolute relative error, and the mean and variance of the averaged value over fifty runs. Furthermore, the proposed temperature compensation approach lays a foundation for more extensive research.
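
    A minimal sketch of the two core ingredients, a convex RBF-polynomial hybrid kernel and the standard LSSVM linear system (the chaotic-ions-motion hyper-parameter search is omitted; all names and values are illustrative):

    ```python
    import numpy as np

    def hybrid_kernel(X, Y, w=0.7, sigma=1.0, degree=2, c=1.0):
        """Convex mix of a local RBF kernel and a global polynomial kernel."""
        d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        rbf = np.exp(-d2 / (2.0 * sigma**2))
        poly = (X @ Y.T + c) ** degree
        return w * rbf + (1.0 - w) * poly

    def lssvm_fit(X, y, gamma=10.0, **kw):
        # Standard LSSVM system: [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
        n = len(y)
        K = hybrid_kernel(X, X, **kw)
        A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
                      [np.ones((n, 1)), K + np.eye(n) / gamma]])
        sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
        return sol[0], sol[1:]   # bias b, coefficients alpha
    ```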

  2. Coupling individual kernel-filling processes with source-sink interactions into GREENLAB-Maize.

    Science.gov (United States)

    Ma, Yuntao; Chen, Youjia; Zhu, Jinyu; Meng, Lei; Guo, Yan; Li, Baoguo; Hoogenboom, Gerrit

    2018-02-13

    Failure to account for the variation of kernel growth in a cereal crop simulation model may cause serious deviations in the estimates of crop yield. The goal of this research was to revise the GREENLAB-Maize model to incorporate source- and sink-limited allocation approaches to simulate the dry matter accumulation of individual kernels of an ear (GREENLAB-Maize-Kernel). The model used potential individual kernel growth rates to characterize the individual potential sink demand. The remobilization of non-structural carbohydrates from reserve organs to kernels was also incorporated. Two years of field experiments were conducted to determine the model parameter values and to evaluate the model using two maize hybrids with different plant densities and pollination treatments. Detailed observations were made on the dimensions and dry weights of individual kernels and other above-ground plant organs throughout the seasons. Three basic traits characterizing an individual kernel were compared on simulated and measured individual kernels: (1) final kernel size; (2) kernel growth rate; and (3) duration of kernel filling. Simulations of individual kernel growth closely corresponded to experimental data. The model was able to reproduce the observed dry weight of plant organs well. Then, the source-sink dynamics and the remobilization of carbohydrates for kernel growth were quantified to show that remobilization processes accompanied source-sink dynamics during the kernel-filling process. We conclude that the model may be used to explore options for optimizing plant kernel yield by matching maize management to the environment, taking into account responses at the level of individual kernels. © The Author(s) 2018. Published by Oxford University Press on behalf of the Annals of Botany Company. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
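
    The source- and sink-limited allocation logic can be caricatured in a few lines: each kernel receives its potential demand when the assimilate supply suffices, and a proportional share otherwise (a hedged sketch, not the GREENLAB-Maize-Kernel code):

    ```python
    def allocate_assimilate(supply, potential_demands):
        """Each kernel receives its potential sink demand if the assimilate supply
        suffices (sink-limited); otherwise supply is shared in proportion to the
        individual potential demands (source-limited)."""
        total = sum(potential_demands)
        if supply >= total:
            return list(potential_demands)
        return [supply * d / total for d in potential_demands]

    # Example: 10 units of assimilate over kernels demanding 4, 5 and 6 units
    print(allocate_assimilate(10.0, [4.0, 5.0, 6.0]))  # proportional shares
    ```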

  3. Partial Deconvolution with Inaccurate Blur Kernel.

    Science.gov (United States)

    Ren, Dongwei; Zuo, Wangmeng; Zhang, David; Xu, Jun; Zhang, Lei

    2017-10-17

    Most non-blind deconvolution methods are developed under the error-free kernel assumption, and are not robust to inaccurate blur kernels. Unfortunately, despite the great progress in blind deconvolution, estimation error remains inevitable during blur kernel estimation. Consequently, severe artifacts such as ringing effects and distortions are likely to be introduced in the non-blind deconvolution stage. In this paper, we tackle this issue by suggesting: (i) a partial map in the Fourier domain for modeling kernel estimation error, and (ii) a partial deconvolution model for robust deblurring with an inaccurate blur kernel. The partial map is constructed by detecting the reliable Fourier entries of the estimated blur kernel. And partial deconvolution is applied to wavelet-based and learning-based models to suppress the adverse effect of kernel estimation error. Furthermore, an E-M algorithm is developed for estimating the partial map and recovering the latent sharp image alternately. Experimental results show that our partial deconvolution model is effective in relieving artifacts caused by inaccurate blur kernels, and can achieve favorable deblurring quality on synthetic and real blurry images.

  4. Fault feature extraction method based on local mean decomposition Shannon entropy and improved kernel principal component analysis model

    Directory of Open Access Journals (Sweden)

    Jinlu Sheng

    2016-07-01

    Full Text Available To effectively extract the typical features of a bearing, a new method relating local mean decomposition Shannon entropy and an improved kernel principal component analysis model was proposed. First, features are extracted by a time–frequency domain method, local mean decomposition, and the Shannon entropy is used to process the original separated product functions so as to obtain the original features. However, the extracted features still contain superfluous information, so a nonlinear multi-feature processing technique, kernel principal component analysis, is introduced to fuse the features. The kernel principal component analysis is improved by a weight factor. The extracted features were input into a Morlet wavelet kernel support vector machine to obtain the bearing running-state classification model, and the bearing running state was thereby identified. Both test and actual cases were analyzed.
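
    The Shannon-entropy step condenses each separated product function into a single feature; one plausible reading of that step (a hedged sketch, since the abstract does not spell out the exact binning):

    ```python
    import numpy as np

    def shannon_entropy_feature(pf, n_bins=64):
        """Shannon entropy of the normalized energy distribution of one product
        function (PF) obtained from local mean decomposition."""
        energy, _ = np.histogram(pf**2, bins=n_bins)
        p = energy / energy.sum()
        p = p[p > 0]                     # drop empty bins to avoid log(0)
        return -(p * np.log(p)).sum()
    ```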

  5. Learning with Generalization Capability by Kernel Methods of Bounded Complexity

    Czech Academy of Sciences Publication Activity Database

    Kůrková, Věra; Sanguineti, M.

    2005-01-01

    Roč. 21, č. 3 (2005), s. 350-367 ISSN 0885-064X R&D Projects: GA AV ČR 1ET100300419 Institutional research plan: CEZ:AV0Z10300504 Keywords : supervised learning * generalization * model complexity * kernel methods * minimization of regularized empirical errors * upper bounds on rates of approximate optimization Subject RIV: BA - General Mathematics Impact factor: 1.186, year: 2005

  6. WASTK: A Weighted Abstract Syntax Tree Kernel Method for Source Code Plagiarism Detection

    Directory of Open Access Journals (Sweden)

    Deqiang Fu

    2017-01-01

    Full Text Available In this paper, we introduce a source code plagiarism detection method, named WASTK (Weighted Abstract Syntax Tree Kernel), for computer science education. Different from other plagiarism detection methods, WASTK takes into account aspects other than the raw similarity between programs. WASTK first transforms the source code of a program into an abstract syntax tree and then obtains the similarity by calculating the tree kernel of the two abstract syntax trees. To avoid misjudgment caused by trivial code snippets or frameworks given by instructors, an idea similar to TF-IDF (Term Frequency-Inverse Document Frequency) from the field of information retrieval is applied: each node in an abstract syntax tree is assigned a weight by TF-IDF. WASTK is evaluated on different datasets and, as a result, performs much better than other popular methods such as Sim and JPlag.
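
    The TF-IDF-style node weighting can be sketched generically, treating each abstract syntax tree as a document and each node type as a term (plain TF-IDF, not necessarily WASTK's exact weighting):

    ```python
    import math

    def tf_idf_node_weights(trees, i):
        """Weight each AST node type of tree i by TF-IDF over a corpus of trees.
        trees: list of dicts mapping node type -> occurrence count."""
        n = len(trees)
        total = sum(trees[i].values())
        weights = {}
        for node_type, count in trees[i].items():
            tf = count / total
            df = sum(1 for t in trees if node_type in t)   # document frequency
            weights[node_type] = tf * math.log(n / df)
        return weights
    ```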

  7. Grey Language Hesitant Fuzzy Group Decision Making Method Based on Kernel and Grey Scale.

    Science.gov (United States)

    Li, Qingsheng; Diao, Yuzhu; Gong, Zaiwu; Hu, Aqin

    2018-03-02

    Based on grey language multi-attribute group decision making, a kernel and grey scale scoring function is put forward according to the definition of grey language and the meaning of the kernel and grey scale. The function introduces grey scale into the decision-making method to avoid information distortion. This method is applied to grey language hesitant fuzzy group decision making, and the grey correlation degree is used to rank the schemes. The effectiveness and practicability of the decision-making method are further verified by an example evaluating the sustainable development ability of a circular-economy industry chain. Moreover, its simplicity and feasibility are verified by comparing it with the traditional grey language decision-making method and the grey language hesitant fuzzy weighted arithmetic averaging (GLHWAA) operator integration method after determining the index weights based on grey correlation.

  8. Kernel based subspace projection of near infrared hyperspectral images of maize kernels

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Arngren, Morten; Hansen, Per Waaben

    2009-01-01

    In this paper we present an exploratory analysis of hyperspectral 900-1700 nm images of maize kernels. The imaging device is a line-scanning hyperspectral camera using broadband NIR illumination. In order to explore the hyperspectral data we compare a series of subspace projection methods, including principal component analysis and maximum autocorrelation factor analysis. The latter utilizes the fact that interesting phenomena in images exhibit spatial autocorrelation. However, linear projections often fail to grasp the underlying variability in the data; kernel versions of the projections are therefore also considered, and the kernel maximum autocorrelation factor transform outperforms the linear methods as well as kernel principal components in producing interesting projections of the data.

  9. Exploiting graph kernels for high performance biomedical relation extraction.

    Science.gov (United States)

    Panyam, Nagesh C; Verspoor, Karin; Cohn, Trevor; Ramamohanarao, Kotagiri

    2018-01-30

    Relation extraction from biomedical publications is an important task in the area of semantic mining of text. Kernel methods for supervised relation extraction are often preferred over manual feature engineering methods when classifying highly ordered structures such as trees and graphs obtained from syntactic parsing of a sentence. Tree kernels such as the Subset Tree Kernel and the Partial Tree Kernel have been shown to be effective for classifying constituency parse trees and basic dependency parse graphs of a sentence. Graph kernels such as the All Path Graph (APG) kernel and the Approximate Subgraph Matching (ASM) kernel have been shown to be suitable for classifying general graphs with cycles, such as the enhanced dependency parse graph of a sentence. In this work, we present a high-performance Chemical-Induced Disease (CID) relation extraction system. We present a comparative study of kernel methods for the CID task and also extend our study to the Protein-Protein Interaction (PPI) extraction task, an important biomedical relation extraction task. We discuss novel modifications to the ASM kernel to boost its performance and a method to apply graph kernels for extracting relations expressed in multiple sentences. Our system for CID relation extraction attains an F-score of 60% without using external knowledge sources or task-specific heuristics or rules. In comparison, the state-of-the-art chemical-disease relation extraction system achieves an F-score of 56% using an ensemble of multiple machine learning methods, which is then boosted to 61% with a rule-based system employing task-specific post-processing rules. For the CID task, graph kernels outperform tree kernels substantially, and the best performance is obtained with the APG kernel, which attains an F-score of 60%, followed by the ASM kernel at 57%. The performance difference between the ASM and APG kernels for CID sentence-level relation extraction is not significant. In our evaluation of ASM for the PPI task, ASM

  10. Multiple kernel learning using single stage function approximation for binary classification problems

    Science.gov (United States)

    Shiju, S.; Sumitra, S.

    2017-12-01

    In this paper, multiple kernel learning (MKL) is formulated as a supervised classification problem. We deal with binary classification data, and hence the data modelling problem involves the computation of two decision boundaries, one related to kernel learning and the other to the input data. In our approach, they are found with the aid of a single cost function by constructing a global reproducing kernel Hilbert space (RKHS) as the direct sum of the RKHSs corresponding to the decision boundaries of kernel learning and input data, and searching that function from the global RKHS which can be represented as the direct sum of the decision boundaries under consideration. In our experimental analysis, the proposed model showed superior performance in comparison with the existing two-stage function approximation formulation of MKL, where the decision functions of kernel learning and input data are found separately using two different cost functions. This is due to the fact that the single-stage representation enables knowledge transfer between the computation procedures for finding the decision boundaries of kernel learning and input data, which in turn boosts the generalization capacity of the model.

  11. Cervical vertebrae maturation method morphologic criteria: poor reproducibility.

    Science.gov (United States)

    Nestman, Trenton S; Marshall, Steven D; Qian, Fang; Holton, Nathan; Franciscus, Robert G; Southard, Thomas E

    2011-08-01

    The cervical vertebrae maturation (CVM) method has been advocated as a predictor of peak mandibular growth. A careful review of the literature showed potential methodologic errors that might influence the high reported reproducibility of the CVM method, and we recently established that the reproducibility of the CVM method was poor when these potential errors were eliminated. The purpose of this study was to further investigate the reproducibility of the individual vertebral patterns; in other words, to determine which of the individual CVM vertebral patterns can be classified reliably and which cannot. Ten practicing orthodontists, trained in the CVM method, evaluated the morphology of cervical vertebrae C2 through C4 on 30 cephalometric radiographs using questions based on the CVM method. The Fleiss kappa statistic was used to assess interobserver agreement when evaluating each cervical vertebra morphology question for each subject. The Kendall coefficient of concordance was used to assess the level of interobserver agreement when determining a "derived CVM stage" for each subject. Interobserver agreement was high for assessment of the lower borders of C2, C3, and C4 as either flat or curved, but low for assessment of the vertebral bodies of C3 and C4 as trapezoidal, rectangular horizontal, square, or rectangular vertical; this led to the overall poor reproducibility of the CVM method. These findings were reflected in the Fleiss kappa statistic. Furthermore, nearly 30% of the time, the individual morphologic criteria could not be combined to generate a final CVM stage because of incompatible responses to the 5 questions. Intraobserver agreement in this study was only 62%, on average, when the inconclusive stagings were excluded as disagreements, and worse (44%) when they were included as disagreements. For the group of subjects

  12. Consistent Estimation of Pricing Kernels from Noisy Price Data

    OpenAIRE

    Vladislav Kargin

    2003-01-01

    If pricing kernels are assumed non-negative then the inverse problem of finding the pricing kernel is well-posed. The constrained least squares method provides a consistent estimate of the pricing kernel. When the data are limited, a new method is suggested: relaxed maximization of the relative entropy. This estimator is also consistent. Keywords: $\\epsilon$-entropy, non-parametric estimation, pricing kernel, inverse problems.
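
    The constrained least squares estimator with a non-negativity constraint is directly available in SciPy; a generic sketch (the payoff matrix and prices below are hypothetical):

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # X[i, j]: payoff of asset j in state i; price p_j = sum_i m_i * X[i, j]
    X = np.array([[1.0, 2.0, 0.5],
                  [1.0, 0.5, 1.5],
                  [1.0, 1.0, 1.0]])
    p = np.array([0.95, 1.10, 0.98])

    m, residual = nnls(X.T, p)   # least squares for X.T @ m = p subject to m >= 0
    print("non-negative state-price vector:", m)
    ```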

  13. The Kernel Mixture Network: A Nonparametric Method for Conditional Density Estimation of Continuous Random Variables

    OpenAIRE

    Ambrogioni, Luca; Güçlü, Umut; van Gerven, Marcel A. J.; Maris, Eric

    2017-01-01

    This paper introduces the kernel mixture network, a new method for nonparametric estimation of conditional probability densities using neural networks. We model arbitrarily complex conditional densities as linear combinations of a family of kernel functions centered at a subset of training points. The weights are determined by the outer layer of a deep neural network, trained by minimizing the negative log likelihood. This generalizes the popular quantized softmax approach, which can be seen ...

  14. Kernel based methods for accelerated failure time model with ultra-high dimensional data

    Directory of Open Access Journals (Sweden)

    Jiang Feng

    2010-12-01

    Full Text Available Abstract Background Most genomic data have ultra-high dimensions with more than 10,000 genes (probes). Regularization methods with L1 and Lp penalties have been extensively studied in survival analysis with high-dimensional genomic data. However, when the sample size n ≪ m (the number of genes), directly identifying a small subset of genes from ultra-high (m > 10,000) dimensional data is time-consuming and not computationally efficient. In current microarray analysis, what people really do is select a couple of thousands (or hundreds) of genes using univariate analysis or statistical tests, and then apply the LASSO-type penalty to further reduce the number of disease-associated genes. This two-step procedure may introduce bias and inaccuracy and lead us to miss biologically important genes. Results The accelerated failure time (AFT) model is a linear regression model and a useful alternative to the Cox model for survival analysis. In this paper, we propose a nonlinear kernel based AFT model and an efficient variable selection method with adaptive kernel ridge regression. Our proposed variable selection method is based on the kernel matrix and the dual problem with a much smaller n × n matrix. It is very efficient when the number of unknown variables (genes) is much larger than the number of samples. Moreover, the primal variables are explicitly updated and the sparsity in the solution is exploited. Conclusions Our proposed methods can simultaneously identify survival-associated prognostic factors and predict survival outcomes with ultra-high dimensional genomic data. We have demonstrated the performance of our methods with both simulation and real data. The proposed method performs superbly with limited computational studies.

  15. Kernel machine methods for integrative analysis of genome-wide methylation and genotyping studies.

    Science.gov (United States)

    Zhao, Ni; Zhan, Xiang; Huang, Yen-Tsung; Almli, Lynn M; Smith, Alicia; Epstein, Michael P; Conneely, Karen; Wu, Michael C

    2018-03-01

    Many large GWAS consortia are expanding to simultaneously examine the joint role of DNA methylation in addition to genotype in the same subjects. However, integrating information from both data types is challenging. In this paper, we propose a composite kernel machine regression model to test the joint epigenetic and genetic effect. Our approach works at the gene level, which allows for a common unit of analysis across different data types. The model compares the pairwise similarities in the phenotype to the pairwise similarities in the genotype and methylation values; and high correspondence is suggestive of association. A composite kernel is constructed to measure the similarities in the genotype and methylation values between pairs of samples. We demonstrate through simulations and real data applications that the proposed approach can correctly control type I error, and is more robust and powerful than using only the genotype or methylation data in detecting trait-associated genes. We applied our method to investigate the genetic and epigenetic regulation of gene expression in response to stressful life events using data that are collected from the Grady Trauma Project. Within the kernel machine testing framework, our methods allow for heterogeneity in effect sizes, nonlinear, and interactive effects, as well as rapid P-value computation. © 2017 WILEY PERIODICALS, INC.

  16. Dose point kernels for beta-emitting radioisotopes

    International Nuclear Information System (INIS)

    Prestwich, W.V.; Chan, L.B.; Kwok, C.S.; Wilson, B.

    1986-01-01

    Knowledge of the dose point kernel corresponding to a specific radionuclide is required to calculate the spatial dose distribution produced in a homogeneous medium by a distributed source. Dose point kernels for commonly used radionuclides have been calculated previously using as a basis monoenergetic dose point kernels derived by numerical integration of a model transport equation. That treatment neglects fluctuations in energy deposition, an effect which was later incorporated in dose point kernels calculated using Monte Carlo methods. This work describes new calculations of dose point kernels using the Monte Carlo results as a basis. An analytic representation of the monoenergetic dose point kernels has been developed, which provides a convenient method both for calculating the dose point kernel associated with a given beta spectrum and for incorporating the effect of internal conversion. An algebraic expression for allowed beta spectra has been obtained through an extension of the Bethe-Bacher approximation and tested against the exact expression. Simplified expressions for first-forbidden shape factors have also been developed. A comparison of the calculated dose point kernel for 32P with experimental data indicates good agreement, with a significant improvement over the earlier results in this respect. An analytic representation of the dose point kernel associated with the spectrum of a single beta group has been formulated. 9 references, 16 figures, 3 tables

  17. Multi-view Multi-sparsity Kernel Reconstruction for Multi-class Image Classification

    KAUST Repository

    Zhu, Xiaofeng

    2015-05-28

    This paper addresses the problem of multi-class image classification by proposing a novel multi-view multi-sparsity kernel reconstruction (MMKR for short) model. Given images (both test images and training images) represented by multiple visual features, the MMKR first maps them into a high-dimensional space, e.g., a reproducing kernel Hilbert space (RKHS), where test images are then linearly reconstructed by some representative training images, rather than all of them. Furthermore, a classification rule is proposed to classify the test images. Experimental results on real datasets show the effectiveness of the proposed MMKR in comparison with state-of-the-art algorithms.

  18. Kernel methods and flexible inference for complex stochastic dynamics

    Science.gov (United States)

    Capobianco, Enrico

    2008-07-01

    Approximation theory suggests that series expansions and projections represent standard tools for random process applications from both numerical and statistical standpoints. Such instruments emphasize the role of both sparsity and smoothness for compression purposes, the decorrelation power achieved in the expansion coefficients space compared to the signal space, and the reproducing kernel property when some special conditions are met. We consider these three aspects central to the discussion in this paper, and attempt to analyze the characteristics of some known approximation instruments employed in a complex application domain such as financial market time series. Volatility models are often built ad hoc, parametrically and through very sophisticated methodologies. But they can hardly deal with stochastic processes with regard to non-Gaussianity, covariance non-stationarity or complex dependence without paying a big price in terms of either model mis-specification or computational efficiency. It is thus a good idea to look at other more flexible inference tools; hence the strategy of combining greedy approximation and space dimensionality reduction techniques, which are less dependent on distributional assumptions and more targeted to achieve computationally efficient performances. Advantages and limitations of their use will be evaluated by looking at algorithmic and model building strategies, and by reporting statistical diagnostics.

  19. Training Lp norm multiple kernel learning in the primal.

    Science.gov (United States)

    Liang, Zhizheng; Xia, Shixiong; Zhou, Yong; Zhang, Lei

    2013-10-01

    Some multiple kernel learning (MKL) models are usually solved by utilizing an alternating optimization method in which one alternately solves SVMs in the dual and updates the kernel weights. Since dual and primal optimization can achieve the same aim, it is valuable to explore how to perform Lp-norm MKL in the primal. In this paper, we propose an Lp-norm multiple kernel learning algorithm in the primal, where we resort to the alternating optimization method: one cycle solves the SVMs in the primal using the preconditioned conjugate gradient method, and the other cycle learns the kernel weights. It is interesting to note that the kernel weights in our method admit analytical solutions. Most importantly, the proposed method is well suited to the manifold regularization framework in the primal, since solving LapSVMs in the primal is much more effective than solving them in the dual. In addition, we carry out a theoretical analysis of multiple kernel learning in the primal in terms of the empirical Rademacher complexity, and find that optimizing the empirical Rademacher complexity yields a particular type of kernel weights. Experiments on several datasets demonstrate the feasibility and effectiveness of the proposed method. Copyright © 2013 Elsevier Ltd. All rights reserved.
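
    For intuition, a closed-form weight update commonly cited in the Lp-norm MKL literature (cf. Kloft et al.; stated here as background, not necessarily this paper's exact formula) maps per-kernel function norms to kernel weights:

    ```python
    import numpy as np

    def lp_mkl_weights(w_norms, p=2.0):
        """Analytical kernel weights from per-kernel function norms ||w_m||
        (closed form reported in the Lp-norm MKL literature, e.g. Kloft et al.)."""
        w = np.asarray(w_norms, dtype=float)
        theta = w ** (2.0 / (p + 1.0))
        theta /= (w ** (2.0 * p / (p + 1.0))).sum() ** (1.0 / p)
        return theta

    print(lp_mkl_weights([0.5, 1.0, 2.0], p=2.0))
    ```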

  20. A kernel-based multivariate feature selection method for microarray data classification.

    Directory of Open Access Journals (Sweden)

    Shiquan Sun

    Full Text Available High dimensionality and small sample sizes, and their inherent risk of overfitting, pose great challenges for constructing efficient classifiers in microarray data classification. Therefore, a feature selection technique should be conducted prior to data classification to enhance prediction performance. In general, filter methods can be considered as principal or auxiliary selection mechanisms because of their simplicity, scalability, and low computational complexity. However, a series of trivial examples show that filter methods result in less accurate performance because they ignore the dependencies of features. Although a few publications have devoted their attention to revealing the relationships of features by multivariate-based methods, these methods describe the relationships among features only by linear means, and simple linear combination relationships restrict the improvement in performance. In this paper, we use a kernel method to discover inherent nonlinear correlations among features, as well as between features and the target. Moreover, the number of orthogonal components is determined by kernel Fisher's linear discriminant analysis (FLDA) in a self-adaptive manner rather than by manual parameter settings. In order to demonstrate the effectiveness of our method, we performed several experiments and compared the results between our method and other competitive multivariate-based feature selectors. In our comparison, we used two classifiers (support vector machine and k-nearest neighbor) on two groups of datasets, namely two-class and multi-class datasets. Experimental results demonstrate that the performance of our method is better than that of the others, especially on three hard-to-classify datasets, namely Wang's Breast Cancer, Gordon's Lung Adenocarcinoma and Pomeroy's Medulloblastoma.

  1. Accuracy of approximations of solutions to Fredholm equations by kernel methods

    Czech Academy of Sciences Publication Activity Database

    Gnecco, G.; Kůrková, Věra; Sanguineti, M.

    2012-01-01

    Roč. 218, č. 14 (2012), s. 7481-7497 ISSN 0096-3003 R&D Projects: GA ČR GAP202/11/1368; GA MŠk OC10047 Grant - others: CNR-AV ČR (CZ-IT) Project 2010–2012 “Complexity of Neural-Network and Kernel Computational Models” Institutional research plan: CEZ:AV0Z10300504 Keywords: approximate solutions to integral equations * radial and kernel-based networks * Gaussian kernels * model complexity * analysis of algorithms Subject RIV: IN - Informatics, Computer Science Impact factor: 1.349, year: 2012

  2. A Novel Mittag-Leffler Kernel Based Hybrid Fault Diagnosis Method for Wheeled Robot Driving System

    Directory of Open Access Journals (Sweden)

    Xianfeng Yuan

    2015-01-01

    Full Text Available This paper presents a novel hybrid fault diagnosis framework based on a Mittag-Leffler kernel (ML-kernel) support vector machine (SVM) and Dempster-Shafer (D-S) fusion. Using sensor data sampled under different running conditions, the proposed approach initially establishes multiple principal component analysis (PCA) models for fault feature extraction. The fault feature vectors are then applied to train the probabilistic SVM (PSVM) classifiers, which arrive at a preliminary fault diagnosis. To improve the accuracy of the preliminary results, a novel ML-kernel based PSVM classifier is proposed in this paper, and the positive definiteness of the ML-kernel is proved as well. The basic probability assignments (BPAs) are defined based on the preliminary fault diagnosis results and their confidence values. Eventually, the final fault diagnosis result is achieved by the fusion of the BPAs. Experimental results show that the proposed framework is not only capable of detecting and identifying the faults in the robot driving system, but also has better performance in stability and diagnosis accuracy compared with traditional methods.

  3. Unified heat kernel regression for diffusion, kernel smoothing and wavelets on manifolds and its application to mandible growth modeling in CT images.

    Science.gov (United States)

    Chung, Moo K; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K

    2015-05-01

    We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel method is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, the method is applied to characterize the localized growth pattern of mandible surfaces obtained in CT images between ages 0 and 20 by regressing the length of displacement vectors with respect to a surface template. Copyright © 2015 Elsevier B.V. All rights reserved.
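
    A bare-bones discrete analogue of the weighted eigenfunction expansion, with a graph Laplacian standing in for the Laplace-Beltrami operator (illustrative only):

    ```python
    import numpy as np

    def heat_kernel_smooth(L, f, t=1.0):
        """Heat-kernel smoothing of a signal f: expand f in the eigenbasis of the
        (discretized) Laplacian L and damp each mode by exp(-lambda * t)."""
        lam, psi = np.linalg.eigh(L)       # eigenpairs of the symmetric Laplacian
        coeff = psi.T @ f                  # expansion coefficients of f
        return psi @ (np.exp(-lam * t) * coeff)
    ```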

  4. Identification of Fusarium damaged wheat kernels using image analysis

    Directory of Open Access Journals (Sweden)

    Ondřej Jirsa

    2011-01-01

    Full Text Available Visual evaluation of kernels damaged by Fusarium spp. pathogens is labour intensive and, due to its subjective approach, can lead to inconsistencies. Digital imaging technology combined with appropriate statistical methods can provide much faster and more accurate evaluation of the proportion of visually scabby kernels. The aim of the present study was to develop a discrimination model to identify wheat kernels infected by Fusarium spp. using digital image analysis and statistical methods. Winter wheat kernels from field experiments were evaluated visually as healthy or damaged. Deoxynivalenol (DON) content was determined in individual kernels using an ELISA method. Images of individual kernels were produced using a digital camera on a dark background. Colour and shape descriptors were obtained by image analysis from the area representing the kernel. Healthy and damaged kernels differed significantly in DON content and kernel weight. Various combinations of individual shape and colour descriptors were examined during the development of the model using linear discriminant analysis. In addition to the basic descriptors of the RGB colour model (red, green, blue), very good classification was also obtained using hue from the HSL colour model (hue, saturation, luminance). The accuracy of classification using the developed discrimination model based on RGBH descriptors was 85 %. The shape descriptors themselves were not specific enough to distinguish individual kernels.
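
    A minimal sketch of the discrimination step: linear discriminant analysis on colour descriptors, here four hypothetical RGBH features per kernel. The numeric values are invented for illustration, not taken from the study; scikit-learn is assumed available.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Hypothetical descriptors: mean R, G, B and hue (the "RGBH" set) per kernel.
X = np.array([[120, 100, 60, 0.12],   # healthy kernel (illustrative values)
              [150,  90, 55, 0.08],   # Fusarium-damaged kernel
              [115, 105, 65, 0.13],
              [155,  85, 50, 0.07]], dtype=float)
y = np.array([0, 1, 0, 1])            # 0 = healthy, 1 = damaged

lda = LinearDiscriminantAnalysis().fit(X, y)
print(lda.predict([[125, 98, 58, 0.11]]))  # classify a new kernel
```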

  5. Examining Potential Boundary Bias Effects in Kernel Smoothing on Equating: An Introduction for the Adaptive and Epanechnikov Kernels.

    Science.gov (United States)

    Cid, Jaime A; von Davier, Alina A

    2015-05-01

    Test equating is a method of making the test scores from different test forms of the same assessment comparable. In the equating process, an important step involves continuizing the discrete score distributions. In traditional observed-score equating, this step is achieved using linear interpolation (or an unscaled uniform kernel). In the kernel equating (KE) process, this continuization process involves Gaussian kernel smoothing. It has been suggested that the choice of bandwidth in kernel smoothing controls the trade-off between variance and bias. In the literature on estimating density functions using kernels, it has also been suggested that the weight of the kernel depends on the sample size, and therefore, the resulting continuous distribution exhibits bias at the endpoints, where the samples are usually smaller. The purpose of this article is (a) to explore the potential effects of atypical scores (spikes) at the extreme ends (high and low) on the KE method in distributions with different degrees of asymmetry using the randomly equivalent groups equating design (Study I), and (b) to introduce the Epanechnikov and adaptive kernels as potential alternative approaches to reducing boundary bias in smoothing (Study II). The beta-binomial model is used to simulate observed scores reflecting a range of different skewed shapes.
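
    The continuization step can be sketched as replacing each discrete score point by a kernel bump; the Epanechnikov kernel's finite support is what limits leakage past the score boundaries. This toy sketch, with made-up score probabilities, omits the mean- and variance-preserving transformation used in operational kernel equating.

```python
import numpy as np

scores = np.arange(0, 11)                      # discrete score points 0..10
probs = np.random.default_rng(1).dirichlet(np.ones(11))  # score probabilities

def continuize(x, h, kernel="gauss"):
    """Continuized density at points x: kernel-weighted mixture over scores."""
    u = (x[:, None] - scores[None, :]) / h
    if kernel == "gauss":
        k = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)
    else:                                      # Epanechnikov: finite support
        k = np.where(np.abs(u) <= 1, 0.75 * (1 - u**2), 0.0)
    return (k * probs[None, :]).sum(axis=1) / h

x = np.linspace(-2, 12, 200)
f_gauss = continuize(x, h=0.6)
f_epan = continuize(x, h=0.9, kernel="epanechnikov")  # less boundary leakage
```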

  6. Simulating non-Newtonian flows with the moving particle semi-implicit method with an SPH kernel

    International Nuclear Information System (INIS)

    Xiang, Hao; Chen, Bin

    2015-01-01

    The moving particle semi-implicit (MPS) method and smoothed particle hydrodynamics (SPH) are commonly used mesh-free particle methods for free surface flows. The MPS method has superiority in incompressible flow simulation and simple programming. However, the crude kernel function is not accurate enough for discretizing the divergence of the shear stress tensor, owing to particle inconsistency, when the MPS method is extended to non-Newtonian flows. This paper presents an improved MPS method with an SPH kernel to simulate non-Newtonian flows. To improve the consistency of the partial derivative, the SPH cubic spline kernel and the Taylor series expansion are combined with the MPS method. This approach is suitable for all non-Newtonian fluids that can be described by τ = μ(|γ̇|)Δ (where τ is the shear stress tensor, μ is the viscosity, |γ̇| is the shear rate, and Δ is the strain tensor), e.g., the Casson and Cross fluids. Two examples are simulated: Newtonian Poiseuille flow and the container filling process of a Cross fluid. The results for Poiseuille flow are more accurate than those of the traditional MPS method, and the different filling processes agree well with previous results, which verifies the new algorithm. For the Cross fluid, the jet fracture length can be correlated with We^0.28 Fr^0.78 (where We is the Weber number and Fr is the Froude number). (paper)
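
    The SPH cubic spline kernel mentioned here is standard; a sketch with the usual 1-D/2-D/3-D normalization constants and support radius 2h follows.

```python
import numpy as np

def cubic_spline_kernel(r, h, dim=2):
    """Standard SPH cubic spline (M4) kernel W(r, h) with support radius 2h."""
    sigma = {1: 2.0 / (3.0 * h),
             2: 10.0 / (7.0 * np.pi * h**2),
             3: 1.0 / (np.pi * h**3)}[dim]
    q = np.asarray(r, dtype=float) / h
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

# The kernel integrates to ~1 over its support (quick 1-D check):
r = np.linspace(0.0, 2.0, 2001)
dr = r[1] - r[0]
print(2.0 * cubic_spline_kernel(r, h=1.0, dim=1).sum() * dr)  # ~1.0
```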

  7. Evaluating the Application of Tissue-Specific Dose Kernels Instead of Water Dose Kernels in Internal Dosimetry : A Monte Carlo Study

    NARCIS (Netherlands)

    Moghadam, Maryam Khazaee; Asl, Alireza Kamali; Geramifar, Parham; Zaidi, Habib

    2016-01-01

    Purpose: The aim of this work is to evaluate the application of tissue-specific dose kernels instead of water dose kernels to improve the accuracy of patient-specific dosimetry by taking tissue heterogeneities into consideration. Materials and Methods: Tissue-specific dose point kernels (DPKs) and

  8. Kernel Machine SNP-set Testing under Multiple Candidate Kernels

    Science.gov (United States)

    Wu, Michael C.; Maity, Arnab; Lee, Seunggeun; Simmons, Elizabeth M.; Harmon, Quaker E.; Lin, Xinyi; Engel, Stephanie M.; Molldrem, Jeffrey J.; Armistead, Paul M.

    2013-01-01

    Joint testing for the cumulative effect of multiple single nucleotide polymorphisms grouped on the basis of prior biological knowledge has become a popular and powerful strategy for the analysis of large scale genetic association studies. The kernel machine (KM) testing framework is a useful approach that has been proposed for testing associations between multiple genetic variants and many different types of complex traits by comparing pairwise similarity in phenotype between subjects to pairwise similarity in genotype, with similarity in genotype defined via a kernel function. An advantage of the KM framework is its flexibility: choosing different kernel functions allows for different assumptions concerning the underlying model and can allow for improved power. In practice, it is difficult to know which kernel to use a priori since this depends on the unknown underlying trait architecture and selecting the kernel which gives the lowest p-value can lead to inflated type I error. Therefore, we propose practical strategies for KM testing when multiple candidate kernels are present based on constructing composite kernels and based on efficient perturbation procedures. We demonstrate through simulations and real data applications that the procedures protect the type I error rate and can lead to substantially improved power over poor choices of kernels and only modest differences in power versus using the best candidate kernel. PMID:23471868
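
    A composite kernel of the kind described can be sketched as a convex combination of candidate kernels computed on the same genotype matrix; the weights, data, and the IBS variant below are illustrative assumptions, not the authors' exact construction.

```python
import numpy as np

# Hypothetical genotype matrix G: n subjects x p SNPs coded 0/1/2.
rng = np.random.default_rng(2)
G = rng.integers(0, 3, size=(30, 8)).astype(float)

def linear_kernel(G):
    return G @ G.T

def ibs_kernel(G):
    """Identity-by-state similarity: average allele sharing per subject pair."""
    diff = np.abs(G[:, None, :] - G[None, :, :])
    return (2.0 - diff).sum(axis=2) / (2.0 * G.shape[1])

# Composite kernel: a convex combination of candidates, so the test retains
# power even when the best single kernel is unknown a priori.
weights = [0.5, 0.5]
K = weights[0] * linear_kernel(G) + weights[1] * ibs_kernel(G)
```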

  9. Reproducibility Between Brain Uptake Ratio Using Anatomic Standardization and Patlak-Plot Methods.

    Science.gov (United States)

    Shibutani, Takayuki; Onoguchi, Masahisa; Noguchi, Atsushi; Yamada, Tomoki; Tsuchihashi, Hiroko; Nakajima, Tadashi; Kinuya, Seigo

    2015-12-01

    The Patlak-plot and conventional methods of determining brain uptake ratio (BUR) have some problems with reproducibility. We formulated a method of determining BUR using anatomic standardization (BUR-AS) in a statistical parametric mapping algorithm to improve reproducibility. The objective of this study was to demonstrate the inter- and intraoperator reproducibility of mean cerebral blood flow as determined using BUR-AS in comparison to the conventional-BUR (BUR-C) and Patlak-plot methods. The images of 30 patients who underwent brain perfusion SPECT were retrospectively used in this study. The images were reconstructed using ordered-subset expectation maximization and processed using an automatic quantitative analysis for cerebral blood flow of ECD tool. The mean SPECT count was calculated from axial basal ganglia slices of the normal side (slices 31-40) drawn using a 3-dimensional stereotactic region-of-interest template after anatomic standardization. The mean cerebral blood flow was calculated from the mean SPECT count. Reproducibility was evaluated using coefficient of variation and Bland-Altman plotting. For both inter- and intraoperator reproducibility, the BUR-AS method had the lowest coefficient of variation and smallest error range about the Bland-Altman plot. Mean CBF obtained using the BUR-AS method had the highest reproducibility. Compared with the Patlak-plot and BUR-C methods, the BUR-AS method provides greater inter- and intraoperator reproducibility of cerebral blood flow measurement. © 2015 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
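
    The reproducibility statistics named here, the coefficient of variation and Bland-Altman agreement, are easy to compute; the sketch below uses invented operator measurements purely for illustration.

```python
import numpy as np

# Hypothetical CBF values (mL/100 g/min) from two operators on the same scans.
op1 = np.array([42.1, 38.5, 45.0, 40.2, 39.8])
op2 = np.array([41.5, 39.0, 44.2, 40.8, 39.1])

diff = op1 - op2
bias = diff.mean()                                  # Bland-Altman bias
loa = (bias - 1.96 * diff.std(ddof=1),              # 95% limits of agreement
       bias + 1.96 * diff.std(ddof=1))
# Inter-operator coefficient of variation: per-scan SD over the grand mean.
cv = 100 * np.std([op1, op2], axis=0, ddof=1).mean() / np.mean([op1, op2])
print(f"bias={bias:.2f}, LoA=({loa[0]:.2f}, {loa[1]:.2f}), CV={cv:.2f}%")
```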

  10. Putting Priors in Mixture Density Mercer Kernels

    Science.gov (United States)

    Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd

    2004-01-01

    This paper presents a new methodology for automatic knowledge driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn kernel functions directly from data, rather than using predefined kernels. These data adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing for physical information to be encoded in the model. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS). The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code. The results show that the Mixture Density Mercer Kernel described here outperforms tree-based classification in distinguishing high-redshift galaxies from low-redshift galaxies by approximately 16% on test data, bagged trees by approximately 7%, and bagged trees built on a much larger sample of data by approximately 2%.

  11. Relationship between attenuation coefficients and dose-spread kernels

    International Nuclear Information System (INIS)

    Boyer, A.L.

    1988-01-01

    Dose-spread kernels can be used to calculate the dose distribution in a photon beam by convolving the kernel with the primary fluence distribution. The theoretical relationships between various types and components of dose-spread kernels relative to photon attenuation coefficients are explored. These relations can be valuable as checks on the conservation of energy by dose-spread kernels calculated by analytic or Monte Carlo methods
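
    The convolution relation itself is compact; the toy sketch below convolves a pencil-beam fluence with a made-up radially decaying kernel and checks the energy-conservation property the abstract mentions.

```python
import numpy as np
from scipy.signal import fftconvolve

# Dose D = primary fluence convolved with a dose-spread kernel: a toy 2-D
# illustration with a pencil-beam fluence and an invented decaying kernel.
x = np.linspace(-5, 5, 101)
X, Y = np.meshgrid(x, x)
kernel = np.exp(-np.hypot(X, Y))          # toy dose-spread kernel
kernel /= kernel.sum()                    # energy conservation: sums to 1
fluence = np.zeros_like(kernel)
fluence[50, 50] = 1.0                     # primary photon pencil beam

dose = fftconvolve(fluence, kernel, mode="same")
print(dose.sum())                         # ~1: total energy is conserved
```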

  12. SU-E-T-329: Dosimetric Impact of Implementing Metal Artifact Reduction Methods and Metal Energy Deposition Kernels for Photon Dose Calculations

    International Nuclear Information System (INIS)

    Huang, J; Followill, D; Howell, R; Liu, X; Mirkovic, D; Stingo, F; Kry, S

    2015-01-01

    Purpose: To investigate two strategies for reducing dose calculation errors near metal implants: use of CT metal artifact reduction methods and implementation of metal-based energy deposition kernels in the convolution/superposition (C/S) method. Methods: Radiochromic film was used to measure the dose upstream and downstream of titanium and Cerrobend implants. To assess the dosimetric impact of metal artifact reduction methods, dose calculations were performed using baseline, uncorrected images and metal artifact reduction methods: Philips O-MAR, GE's monochromatic gemstone spectral imaging (GSI) using dual-energy CT, and GSI imaging with metal artifact reduction software applied (MARs). To assess the impact of metal kernels, titanium and silver kernels were implemented into a commercial collapsed cone C/S algorithm. Results: The CT artifact reduction methods were more successful for titanium than Cerrobend. Interestingly, for beams traversing the metal implant, we found that errors in the dimensions of the metal in the CT images were more important for dose calculation accuracy than reduction of imaging artifacts. The MARs algorithm caused a distortion in the shape of the titanium implant that substantially worsened the calculation accuracy. In comparison to water kernel dose calculations, metal kernels resulted in better modeling of the increased backscatter dose at the upstream interface but decreased accuracy directly downstream of the metal. We also found that the success of metal kernels was dependent on dose grid size, with smaller calculation voxels giving better accuracy. Conclusion: Our study yielded mixed results, with neither the metal artifact reduction methods nor the metal kernels being globally effective at improving dose calculation accuracy. However, some successes were observed. The MARs algorithm decreased errors downstream of Cerrobend by a factor of two, and metal kernels resulted in more accurate backscatter dose upstream of metals. Thus

  13. Kernel reconstruction methods for Doppler broadening — Temperature interpolation by linear combination of reference cross sections at optimally chosen temperatures

    International Nuclear Information System (INIS)

    Ducru, Pablo; Josey, Colin; Dibert, Karia; Sobes, Vladimir; Forget, Benoit; Smith, Kord

    2017-01-01

    This paper establishes a new family of methods to perform temperature interpolation of nuclear interaction cross sections, reaction rates, or cross sections times the energy. One of these quantities at temperature T is approximated as a linear combination of quantities at reference temperatures (T_j). The problem is formalized in a cross section independent fashion by considering the kernels of the different operators that convert cross section related quantities from a temperature T_0 to a higher temperature T, namely the Doppler broadening operation. Doppler broadening interpolation of nuclear cross sections is thus performed here by reconstructing the kernel of the operation at a given temperature T by means of a linear combination of kernels at reference temperatures (T_j). The choice of the L2 metric yields optimal linear interpolation coefficients in the form of the solutions of a linear algebraic system inversion. The optimization of the choice of reference temperatures (T_j) is then undertaken so as to best reconstruct, in the L∞ sense, the kernels over a given temperature range [T_min, T_max]. The performance of these kernel reconstruction methods is then assessed in light of previous temperature interpolation methods by testing them on isotope 238U. Temperature-optimized free Doppler kernel reconstruction significantly outperforms all previous interpolation-based methods, achieving 0.1% relative error on temperature interpolation of the 238U total cross section over the temperature range [300 K, 3000 K] with only 9 reference temperatures.
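
    The linear-combination idea can be sketched independently of nuclear data: fit coefficients c_j by least squares (the L2 choice in the abstract) so that reference curves at temperatures T_j reconstruct the curve at an intermediate T. The Gaussian stand-in for the Doppler-broadened shape is an assumption for illustration only.

```python
import numpy as np

E = np.linspace(0.0, 10.0, 500)           # energy grid

def f(T):
    """Stand-in temperature-dependent curve (toy Doppler-like broadening)."""
    return np.exp(-(E - 5.0) ** 2 / (2.0 * 0.1 * T)) / np.sqrt(T)

T_refs = [300.0, 1000.0, 3000.0]          # reference temperatures (T_j)
A = np.stack([f(Tj) for Tj in T_refs], axis=1)
target = f(600.0)                         # quantity to reconstruct at T = 600 K
c, *_ = np.linalg.lstsq(A, target, rcond=None)
print(c, np.max(np.abs(A @ c - target)))  # coefficients and L-infinity error
```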

  14. The role of Tre6P and SnRK1 in maize early kernel development and events leading to stress-induced kernel abortion.

    Science.gov (United States)

    Bledsoe, Samuel W; Henry, Clémence; Griffiths, Cara A; Paul, Matthew J; Feil, Regina; Lunn, John E; Stitt, Mark; Lagrimini, L Mark

    2017-04-12

    Drought stress during flowering is a major contributor to yield loss in maize. Genetic and biotechnological improvement in yield sustainability requires an understanding of the mechanisms underpinning yield loss. Sucrose starvation has been proposed as the cause for kernel abortion; however, potential targets for genetic improvement have not been identified. Field and greenhouse drought studies with maize are expensive and it can be difficult to reproduce results; therefore, an in vitro kernel culture method is presented as a proxy for drought stress occurring at the time of flowering in maize (3 days after pollination). This method is used to focus on the effects of drought on kernel metabolism, and the role of trehalose 6-phosphate (Tre6P) and the sucrose non-fermenting-1-related kinase (SnRK1) as potential regulators of this response. A precipitous drop in Tre6P is observed during the first two hours after removing the kernels from the plant, and the resulting changes in transcript abundance are indicative of an activation of SnRK1, and an immediate shift from anabolism to catabolism. Once Tre6P levels are depleted to below 1 nmol·g⁻¹ FW in the kernel, SnRK1 remained active throughout the 96 h experiment, regardless of the presence or absence of sucrose in the medium. Recovery on sucrose-enriched medium results in the restoration of sucrose synthesis and glycolysis. Biosynthetic processes including the citric acid cycle and protein and starch synthesis are inhibited by excision, and do not recover even after the re-addition of sucrose. It is also observed that excision induces the transcription of the sugar transporters SUT1 and SWEET1, the sucrose hydrolyzing enzymes CELL WALL INVERTASE 2 (INCW2) and SUCROSE SYNTHASE 1 (SUSY1), the class II TREHALOSE PHOSPHATE SYNTHASES (TPS), TREHALASE (TRE), and TREHALOSE PHOSPHATE PHOSPHATASE (ZmTPPA.3), previously shown to enhance drought tolerance (Nuccio et al., Nat Biotechnol (October 2014):1-13, 2015). The impact

  15. Numerical study of the ignition behavior of a post-discharge kernel injected into a turbulent stratified cross-flow

    Science.gov (United States)

    Jaravel, Thomas; Labahn, Jeffrey; Ihme, Matthias

    2017-11-01

    The reliable initiation of flame ignition by high-energy spark kernels is critical for the operability of aviation gas turbines. The evolution of a spark kernel ejected by an igniter into a turbulent stratified environment is investigated using detailed numerical simulations with complex chemistry. At early times post ejection, comparisons of simulation results with high-speed Schlieren data show that the initial trajectory of the kernel is well reproduced, with a significant amount of air entrainment from the surrounding flow that is induced by the kernel ejection. After transiting through a non-flammable mixture, the kernel reaches a second stream of flammable methane-air mixture, where the success of kernel ignition was found to depend on the local flow state and operating conditions. By performing parametric studies, the probability of kernel ignition was identified and compared with experimental observations. The ignition behavior is characterized by analyzing the local chemical structure, and its stochastic variability is also investigated.

  16. Analog forecasting with dynamics-adapted kernels

    Science.gov (United States)

    Zhao, Zhizhen; Giannakis, Dimitrios

    2016-09-01

    Analog forecasting is a nonparametric technique introduced by Lorenz in 1969 which predicts the evolution of states of a dynamical system (or observables defined on the states) by following the evolution of the sample in a historical record of observations which most closely resembles the current initial data. Here, we introduce a suite of forecasting methods which improve traditional analog forecasting by combining ideas from kernel methods developed in harmonic analysis and machine learning and state-space reconstruction for dynamical systems. A key ingredient of our approach is to replace single-analog forecasting with weighted ensembles of analogs constructed using local similarity kernels. The kernels used here employ a number of dynamics-dependent features designed to improve forecast skill, including Takens’ delay-coordinate maps (to recover information in the initial data lost through partial observations) and a directional dependence on the dynamical vector field generating the data. Mathematically, our approach is closely related to kernel methods for out-of-sample extension of functions, and we discuss alternative strategies based on the Nyström method and the multiscale Laplacian pyramids technique. We illustrate these techniques in applications to forecasting in a low-order deterministic model for atmospheric dynamics with chaotic metastability, and interannual-scale forecasting in the North Pacific sector of a comprehensive climate model. We find that forecasts based on kernel-weighted ensembles have significantly higher skill than the conventional approach following a single analog.
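
    The core of kernel-weighted analog forecasting fits in a few lines: weight the successors of historical states by a similarity kernel evaluated against the current state. This sketch uses a plain Gaussian kernel on a toy trajectory rather than the dynamics-adapted kernels of the paper.

```python
import numpy as np

def analog_forecast(history, query, lead=1, eps=0.5):
    """Kernel-weighted analog forecast: average the lead-time successors of
    past states, weighted by a Gaussian similarity kernel to the query."""
    past = history[:-lead]                    # candidate analog states
    future = history[lead:]                   # their lead-time successors
    d2 = ((past - query) ** 2).sum(axis=1)
    w = np.exp(-d2 / eps)
    return (w[:, None] * future).sum(axis=0) / w.sum()

# Toy demo on a noisy circular trajectory.
t = np.linspace(0, 20 * np.pi, 2000)
noise = 0.01 * np.random.default_rng(3).normal(size=(2000, 2))
traj = np.c_[np.cos(t), np.sin(t)] + noise
print(analog_forecast(traj, traj[-1], lead=5))
```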

  17. Scuba: scalable kernel-based gene prioritization.

    Science.gov (United States)

    Zampieri, Guido; Tran, Dinh Van; Donini, Michele; Navarin, Nicolò; Aiolli, Fabio; Sperduti, Alessandro; Valle, Giorgio

    2018-01-25

    The uncovering of genes linked to human diseases is a pressing challenge in molecular biology and precision medicine. This task is often hindered by the large number of candidate genes and by the heterogeneity of the available information. Computational methods for the prioritization of candidate genes can help to cope with these problems. In particular, kernel-based methods are a powerful resource for the integration of heterogeneous biological knowledge; however, their practical implementation is often precluded by their limited scalability. We propose Scuba, a scalable kernel-based method for gene prioritization. It implements a novel multiple kernel learning approach, based on a semi-supervised perspective and on the optimization of the margin distribution. Scuba is optimized to cope with strongly unbalanced settings where known disease genes are few and large-scale predictions are required. Importantly, it is able to deal efficiently both with a large number of candidate genes and with an arbitrary number of data sources. As a direct consequence of scalability, Scuba also integrates a new efficient strategy to select optimal kernel parameters for each data source. We performed cross-validation experiments and simulated a realistic usage setting, showing that Scuba outperforms a wide range of state-of-the-art methods. Scuba achieves state-of-the-art performance and has enhanced scalability compared to existing kernel-based approaches for genomic data. This method can be useful to prioritize candidate genes, particularly when their number is large or when input data is highly heterogeneous. The code is freely available at https://github.com/gzampieri/Scuba.

  18. Improving the Bandwidth Selection in Kernel Equating

    Science.gov (United States)

    Andersson, Björn; von Davier, Alina A.

    2014-01-01

    We investigate the current bandwidth selection methods in kernel equating and propose a method based on Silverman's rule of thumb for selecting the bandwidth parameters. In kernel equating, the bandwidth parameters have previously been obtained by minimizing a penalty function. This minimization process has been criticized by practitioners…
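
    Silverman's rule of thumb has a standard closed form, h = 0.9 * min(sd, IQR/1.34) * n^(-1/5); a minimal sketch with simulated test scores standing in for real data:

```python
import numpy as np

def silverman_bandwidth(x):
    """Silverman's rule of thumb for a Gaussian kernel:
    h = 0.9 * min(std, IQR / 1.34) * n ** (-1/5)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    iqr = np.subtract(*np.percentile(x, [75, 25]))
    return 0.9 * min(x.std(ddof=1), iqr / 1.34) * n ** (-0.2)

scores = np.random.default_rng(4).binomial(40, 0.6, size=500)  # toy scores
print(silverman_bandwidth(scores))
```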

  19. CRKSPH: A new meshfree hydrodynamics method with applications to astrophysics

    Science.gov (United States)

    Owen, John Michael; Raskin, Cody; Frontiere, Nicholas

    2018-01-01

    The study of astrophysical phenomena such as supernovae, accretion disks, galaxy formation, and large-scale structure formation requires computational modeling of, at a minimum, hydrodynamics and gravity. Developing numerical methods appropriate for these kinds of problems requires a number of properties: shock-capturing hydrodynamics benefits from rigorous conservation of invariants such as total energy, linear momentum, and mass; lack of obvious symmetries or a simplified spatial geometry to exploit necessitate 3D methods that ideally are Galilean invariant; the dynamic range of mass and spatial scales that need to be resolved can span many orders of magnitude, requiring methods that are highly adaptable in their space and time resolution. We have developed a new Lagrangian meshfree hydrodynamics method called Conservative Reproducing Kernel Smoothed Particle Hydrodynamics, or CRKSPH, in order to meet these goals. CRKSPH is a conservative generalization of the meshfree reproducing kernel method, combining the high-order accuracy of reproducing kernels with the explicit conservation of mass, linear momentum, and energy necessary to study shock-driven hydrodynamics in compressible fluids. CRKSPH's Lagrangian, particle-like nature makes it simple to combine with well-known N-body methods for modeling gravitation, similar to the older Smoothed Particle Hydrodynamics (SPH) method. Indeed, CRKSPH can be substituted for SPH in existing SPH codes due to these similarities. In comparison to SPH, CRKSPH is able to achieve substantially higher accuracy for a given number of points due to the explicitly consistent (and higher-order) interpolation theory of reproducing kernels, while maintaining the same conservation principles (and therefore applicability) as SPH. There are currently two coded implementations of CRKSPH available: one in the open-source research code Spheral, and the other in the high-performance cosmological code HACC. Using these codes we have applied

  20. A survey of kernel-type estimators for copula and their applications

    Science.gov (United States)

    Sumarjaya, I. W.

    2017-10-01

    Copulas have been widely used to model nonlinear dependence structure. The main applications of copulas include areas such as finance, insurance, hydrology, and rainfall, to name but a few. The flexibility of copulas allows researchers to model dependence structure beyond the Gaussian distribution. Basically, a copula is a function that couples a multivariate distribution function to its one-dimensional marginal distribution functions. In general, there are three methods for estimating copulas: parametric, nonparametric, and semiparametric. In this article we survey kernel-type estimators for copulas, such as the mirror reflection kernel, the beta kernel, the transformation method, and the local likelihood transformation method. We then apply these kernel methods to three stock indexes in Asia. The results of our analysis suggest that, albeit with variation in information criterion values, the local likelihood transformation method performs better than the other kernel methods.

  1. Biorefinery methods for separation of protein and oil fractions from rubber seed kernel

    NARCIS (Netherlands)

    Widyarani, R.; Ratnaningsih, E.; Sanders, J.P.M.; Bruins, M.E.

    2014-01-01

    Biorefinery of rubber seeds can generate additional income for farmers, who already grow rubber trees for latex production. The aim of this study was to find the best method for protein and oil production from rubber seed kernel, with focus on protein recovery. Different pre-treatments and oil

  2. Homotopy deform method for reproducing kernel space for ...

    Indian Academy of Sciences (India)

    2016-09-23

    Sep 23, 2016 ... conditions, have gained considerable attention due to their wide applications in ... Generally speaking, the operator A can be divided into a linear operator L and a ... The explicit representation formula for calculating the ...

  3. Improving prediction of heterodimeric protein complexes using combination with pairwise kernel.

    Science.gov (United States)

    Ruan, Peiying; Hayashida, Morihiro; Akutsu, Tatsuya; Vert, Jean-Philippe

    2018-02-19

    Since many proteins become functional only after they interact with their partner proteins and form protein complexes, it is essential to identify the sets of proteins that form complexes. Therefore, several computational methods have been proposed to predict complexes from the topology and structure of experimental protein-protein interaction (PPI) networks. These methods work well for predicting complexes involving at least three proteins, but generally fail at identifying complexes involving only two different proteins, called heterodimeric complexes or heterodimers. There is, however, an urgent need for efficient methods to predict heterodimers, since the majority of known protein complexes are precisely heterodimers. In this paper, we use three promising kernel functions: the Min kernel and two pairwise kernels, the Metric Learning Pairwise Kernel (MLPK) and the Tensor Product Pairwise Kernel (TPPK). We also consider normalized forms of the Min kernel. Then, we combine the Min kernel or its normalized form with one of the pairwise kernels by plugging one into the other. We applied kernels based on PPI, domain, phylogenetic profile, and subcellular localization properties to the prediction of heterodimers. We then evaluate our method by employing C-Support Vector Classification (C-SVC), carrying out 10-fold cross-validation, and calculating the average F-measures. The results suggest that the combination of the normalized Min kernel and MLPK leads to the best F-measure and improves on the performance of our previous work, which had been the best existing method so far. We propose new methods to predict heterodimers using a machine learning-based approach. We train a support vector machine (SVM) to discriminate interacting vs non-interacting protein pairs, based on information extracted from PPI, domain, phylogenetic profiles and subcellular localization. We evaluate in detail new kernel functions to encode these data, and report prediction performance that outperforms the state-of-the-art.
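
    The two kernel ingredients are standard and easy to sketch: the Min (histogram-intersection) kernel on nonnegative feature vectors, and the Tensor Product Pairwise Kernel, which symmetrizes over the order of the proteins in each pair. The feature vectors below are hypothetical.

```python
import numpy as np

def min_kernel(x, y):
    """Min (histogram-intersection) kernel on nonnegative feature vectors."""
    return np.minimum(x, y).sum()

def tppk(k, a, b, c, d):
    """Tensor Product Pairwise Kernel: compares the pair (a, b) with the
    pair (c, d) symmetrically, so protein order does not matter."""
    return k(a, c) * k(b, d) + k(a, d) * k(b, c)

# Hypothetical per-protein feature vectors (e.g., domain or profile counts).
p1, p2 = np.array([1., 0., 2.]), np.array([0., 1., 1.])
q1, q2 = np.array([1., 1., 0.]), np.array([0., 0., 2.])
print(tppk(min_kernel, p1, p2, q1, q2))   # similarity of the two pairs
```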

  4. Multiple kernel boosting framework based on information measure for classification

    International Nuclear Information System (INIS)

    Qi, Chengming; Wang, Yuping; Tian, Wenjie; Wang, Qun

    2016-01-01

    The performance of kernel-based methods, such as the support vector machine (SVM), is greatly affected by the choice of kernel function. Multiple kernel learning (MKL) is a promising family of machine learning algorithms and has attracted much attention in recent years. MKL combines multiple sub-kernels to seek better results than single kernel learning. In order to improve the efficiency of SVM and MKL, in this paper the Kullback–Leibler kernel function is derived to develop SVM. The proposed method employs an improved ensemble learning framework, named KLMKB, which applies Adaboost to learning multiple kernel-based classifiers. In the experiment on hyperspectral remote sensing image classification, we employ features selected through the Optimum Index Factor (OIF) to classify the satellite image. We extensively examine the performance of our approach in comparison to some relevant and state-of-the-art algorithms on a number of benchmark classification data sets and a hyperspectral remote sensing image data set. Experimental results show that our method has stable behavior and noticeable accuracy for different data sets.

  5. Protein fold recognition using geometric kernel data fusion.

    Science.gov (United States)

    Zakeri, Pooya; Jeuris, Ben; Vandebril, Raf; Moreau, Yves

    2014-07-01

    Various approaches based on features extracted from protein sequences and often machine learning methods have been used in the prediction of protein folds. Finding an efficient technique for integrating these different protein features has received increasing attention. In particular, kernel methods are an interesting class of techniques for integrating heterogeneous data. Various methods have been proposed to fuse multiple kernels. Most techniques for multiple kernel learning focus on learning a convex linear combination of base kernels. In addition to the limitation of linear combinations, working with such approaches could cause a loss of potentially useful information. We design several techniques to combine kernel matrices by taking more involved, geometry-inspired means of these matrices instead of convex linear combinations. We consider various sequence-based protein features, including information extracted directly from position-specific scoring matrices and local sequence alignment. We evaluate our methods for classification on the SCOP PDB-40D benchmark dataset for protein fold recognition. The best overall accuracy on the protein fold recognition test set obtained by our methods is ∼86.7%. This is an improvement over the results of the best existing approach. Moreover, our computational model has been developed by incorporating the functional domain composition of proteins through a hybridization model. It is observed that by using our proposed hybridization model, the protein fold recognition accuracy is further improved to 89.30%. Furthermore, we investigate the performance of our approach on the protein remote homology detection problem by fusing multiple string kernels. The MATLAB code used for our proposed geometric kernel fusion frameworks is publicly available at http://people.cs.kuleuven.be/∼raf.vandebril/homepage/software/geomean.php?menu=5/. © The Author 2014. Published by Oxford University Press.
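
    One concrete geometry-inspired mean of symmetric positive definite kernel matrices is the log-Euclidean mean; the sketch below shows that flavor, without claiming it is the specific mean the authors adopt. The toy Gram matrices are invented.

```python
import numpy as np
from scipy.linalg import expm, logm

def log_euclidean_mean(kernels):
    """Log-Euclidean mean expm(average(logm(K_i))) of SPD kernel matrices:
    a nonlinear alternative to a convex linear combination."""
    logs = [logm(K) for K in kernels]
    return expm(np.mean(logs, axis=0)).real

# Two toy SPD Gram matrices standing in for sequence-feature kernels.
K1 = np.array([[2.0, 0.5], [0.5, 1.0]])
K2 = np.array([[1.0, 0.2], [0.2, 2.0]])
K_fused = log_euclidean_mean([K1, K2])
print(K_fused)
```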

  6. An Evaluation of Kernel Equating: Parallel Equating with Classical Methods in the SAT Subject Tests[TM] Program. Research Report. ETS RR-09-06

    Science.gov (United States)

    Grant, Mary C.; Zhang, Lilly; Damiano, Michele

    2009-01-01

    This study investigated kernel equating methods by comparing these methods to operational equatings for two tests in the SAT Subject Tests[TM] program. GENASYS (ETS, 2007) was used for all equating methods and scaled score kernel equating results were compared to Tucker, Levine observed score, chained linear, and chained equipercentile equating…

  7. Hadamard Kernel SVM with applications for breast cancer outcome predictions.

    Science.gov (United States)

    Jiang, Hao; Ching, Wai-Ki; Cheung, Wai-Shun; Hou, Wenpin; Yin, Hong

    2017-12-21

    Breast cancer is one of the leading causes of death for women. It is of great necessity to develop effective methods for breast cancer detection and diagnosis. Recent studies have focused on gene-based signatures for outcome prediction. Kernel SVM, with its discriminative power in dealing with small-sample pattern recognition problems, has attracted a lot of attention. But how to select or construct an appropriate kernel for a specified problem still needs further investigation. Here we propose a novel kernel (the Hadamard kernel) in conjunction with Support Vector Machines (SVMs) to address the problem of breast cancer outcome prediction using gene expression data. The Hadamard kernel outperforms the classical kernels and the correlation kernel in terms of Area Under the ROC Curve (AUC) values, where a number of real-world data sets are adopted to test the performance of the different methods. Hadamard kernel SVM is effective for breast cancer prediction, both for prognosis and for diagnosis. It may benefit patients by guiding therapeutic options. Apart from that, it would be a valuable addition to the current SVM kernel families. We hope it will contribute to the wider biology and related communities.

  8. Online learning control using adaptive critic designs with sparse kernel machines.

    Science.gov (United States)

    Xu, Xin; Hou, Zhongsheng; Lian, Chuanqiang; He, Haibo

    2013-05-01

    In the past decade, adaptive critic designs (ACDs), including heuristic dynamic programming (HDP), dual heuristic programming (DHP), and their action-dependent ones, have been widely studied to realize online learning control of dynamical systems. However, because neural networks with manually designed features are commonly used to deal with continuous state and action spaces, the generalization capability and learning efficiency of previous ACDs still need to be improved. In this paper, a novel framework of ACDs with sparse kernel machines is presented by integrating kernel methods into the critic of ACDs. To improve the generalization capability as well as the computational efficiency of kernel machines, a sparsification method based on the approximately linear dependence analysis is used. Using the sparse kernel machines, two kernel-based ACD algorithms, that is, kernel HDP (KHDP) and kernel DHP (KDHP), are proposed and their performance is analyzed both theoretically and empirically. Because of the representation learning and generalization capability of sparse kernel machines, KHDP and KDHP can obtain much better performance than previous HDP and DHP with manually designed neural networks. Simulation and experimental results of two nonlinear control problems, that is, a continuous-action inverted pendulum problem and a ball and plate control problem, demonstrate the effectiveness of the proposed kernel ACD methods.
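
    The approximately-linear-dependence (ALD) test used for sparsification admits a compact sketch: a candidate joins the dictionary only if its feature-space projection residual delta = k(x, x) - k_t' K^{-1} k_t exceeds a threshold nu. The kernel, data, and threshold below are illustrative.

```python
import numpy as np

def ald_filter(samples, kernel, nu=0.1):
    """ALD sparsification: keep a sample only if it cannot be approximated,
    in feature space, by the current dictionary."""
    dictionary = []
    for x in samples:
        if not dictionary:
            dictionary.append(x); continue
        K = np.array([[kernel(a, b) for b in dictionary] for a in dictionary])
        k_vec = np.array([kernel(a, x) for a in dictionary])
        c = np.linalg.solve(K + 1e-10 * np.eye(len(K)), k_vec)
        delta = kernel(x, x) - k_vec @ c          # residual in feature space
        if delta > nu:
            dictionary.append(x)
    return dictionary

rbf = lambda a, b: np.exp(-np.sum((a - b) ** 2))
data = np.random.default_rng(5).normal(size=(200, 2))
print(len(ald_filter(data, rbf)))                 # size of sparse dictionary
```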

  9. Classification of maize kernels using NIR hyperspectral imaging

    DEFF Research Database (Denmark)

    Williams, Paul; Kucheryavskiy, Sergey V.

    2016-01-01

    NIR hyperspectral imaging was evaluated to classify maize kernels of three hardness categories: hard, medium and soft. Two approaches, pixel-wise and object-wise, were investigated to group kernels according to hardness. The pixel-wise classification assigned a class to every pixel from individual… and specificity of 0.95 and 0.93). Both feature extraction methods can be recommended for classification of maize kernels on production scale.

  10. Unsupervised multiple kernel learning for heterogeneous data integration.

    Science.gov (United States)

    Mariette, Jérôme; Villa-Vialaneix, Nathalie

    2018-03-15

    Recent high-throughput sequencing advances have expanded the breadth of available omics datasets, and the integrated analysis of multiple datasets obtained on the same samples has made it possible to gain important insights in a wide range of applications. However, the integration of various sources of information remains a challenge for systems biology, since the produced datasets are often of heterogeneous types, requiring generic methods that take their different specificities into account. We propose a multiple kernel framework that allows multiple datasets of various types to be integrated into a single exploratory analysis. Several solutions are provided to learn either a consensus meta-kernel or a meta-kernel that preserves the original topology of the datasets. We applied our framework to analyse two public multi-omics datasets. First, the multiple metagenomic datasets collected during the TARA Oceans expedition were explored to demonstrate that our method is able to retrieve previous findings in a single kernel PCA as well as to provide a new image of the sample structures when a larger number of datasets are included in the analysis. To perform this analysis, a generic procedure is also proposed to improve the interpretability of the kernel PCA with respect to the original data. Second, the multi-omics breast cancer datasets provided by The Cancer Genome Atlas are analysed using kernel Self-Organizing Maps with both single- and multi-omics strategies. The comparison of these two approaches demonstrates the benefit of our integration method for improving the representation of the studied biological system. The proposed methods are available in the R package mixKernel, released on CRAN. It is fully compatible with the mixOmics package and a tutorial describing the approach can be found on the mixOmics web site http://mixomics.org/mixkernel/. jerome.mariette@inra.fr or nathalie.villa-vialaneix@inra.fr. Supplementary data are available at Bioinformatics online.

  11. An Ensemble Approach to Building Mercer Kernels with Prior Information

    Science.gov (United States)

    Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd

    2005-01-01

    This paper presents a new methodology for automatic knowledge driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn kernel functions directly from data, rather than using pre-defined kernels. These data adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing for physical information to be encoded in the model. Specifically, we demonstrate the use of the algorithm in situations with extremely small samples of data. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS) and demonstrate the method's superior performance against standard methods. The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code.

  12. A new coupling kernel for the three-dimensional simulation of a boiling water reactor core by the nodal coupling method

    International Nuclear Information System (INIS)

    Gupta, N.K.

    1981-01-01

    A new coupling kernel is developed for the three-dimensional (3-D) simulation of Boiling Water Reactors (BWRs) by the nodal coupling method. The new kernel depends not only on the properties of the node under consideration but also on the properties of its neighbouring nodes. This makes the kernel more useful, in particular, for fuel bundles lying in a surrounding of different nuclear characteristics, e.g. for a controlled bundle in a surrounding of uncontrolled bundles or vice versa. The main parameter in the new kernel is a space-dependent factor obtained from the ratio of thermal-to-fast flux. The average value of this ratio for each node is evaluated analytically. The kernel is incorporated in a 3-D BWR core simulation program, MOGS. As an experimental verification of the model, the cycle-6 operations of the two units of the Tarapur Atomic Power Station (TAPS) are simulated and the results of the simulation are compared with Travelling Incore Probe (TIP) data. (orig.)

  13. Novel applications of the temporal kernel method: Historical and future radiative forcing

    Science.gov (United States)

    Portmann, R. W.; Larson, E.; Solomon, S.; Murphy, D. M.

    2017-12-01

    We present a new estimate of the historical radiative forcing derived from the observed global mean surface temperature and a model derived kernel function. Current estimates of historical radiative forcing are usually derived from climate models. Despite large variability in these models, the multi-model mean tends to do a reasonable job of representing the Earth system and climate. One method of diagnosing the transient radiative forcing in these models requires model output of top of the atmosphere radiative imbalance and global mean temperature anomaly. It is difficult to apply this method to historical observations due to the lack of TOA radiative measurements before CERES. We apply the temporal kernel method (TKM) of calculating radiative forcing to the historical global mean temperature anomaly. This novel approach is compared against the current regression based methods using model outputs and shown to produce consistent forcing estimates giving confidence in the forcing derived from the historical temperature record. The derived TKM radiative forcing provides an estimate of the forcing time series that the average climate model needs to produce the observed temperature record. This forcing time series is found to be in good overall agreement with previous estimates but includes significant differences that will be discussed. The historical anthropogenic aerosol forcing is estimated as a residual from the TKM and found to be consistent with earlier moderate forcing estimates. In addition, this method is applied to future temperature projections to estimate the radiative forcing required to achieve those temperature goals, such as those set in the Paris agreement.

  14. A Comparison of Kernel Equating and Traditional Equipercentile Equating Methods and the Parametric Bootstrap Methods for Estimating Standard Errors in Equipercentile Equating

    Science.gov (United States)

    Choi, Sae Il

    2009-01-01

    This study used simulation (a) to compare the kernel equating method to traditional equipercentile equating methods under the equivalent-groups (EG) design and the nonequivalent-groups with anchor test (NEAT) design and (b) to apply the parametric bootstrap method for estimating standard errors of equating. A two-parameter logistic item response…

  15. The Influence of Reconstruction Kernel on Bone Mineral and Strength Estimates Using Quantitative Computed Tomography and Finite Element Analysis.

    Science.gov (United States)

    Michalski, Andrew S; Edwards, W Brent; Boyd, Steven K

    2017-10-17

    Quantitative computed tomography has been posed as an alternative imaging modality to investigate osteoporosis. We examined the influence of computed tomography convolution back-projection reconstruction kernels on the analysis of bone quantity and estimated mechanical properties in the proximal femur. Eighteen computed tomography scans of the proximal femur were reconstructed using both a standard smoothing reconstruction kernel and a bone-sharpening reconstruction kernel. Following phantom-based density calibration, we calculated typical bone quantity outcomes of integral volumetric bone mineral density, bone volume, and bone mineral content. Additionally, we performed finite element analysis in a standard sideways fall on the hip loading configuration. Significant differences for all outcome measures, except integral bone volume, were observed between the 2 reconstruction kernels. Volumetric bone mineral density measured using images reconstructed by the standard kernel was significantly lower (6.7%) than that measured using the bone-sharpening kernel. Furthermore, the whole-bone stiffness and the failure load measured in images reconstructed by the standard kernel were significantly lower (16.5%) than those measured using the bone-sharpening kernel. These data suggest that for future quantitative computed tomography studies, a standardized reconstruction kernel will maximize reproducibility, independent of the use of a quantitative calibration phantom. Copyright © 2017 The International Society for Clinical Densitometry. Published by Elsevier Inc. All rights reserved.

  16. Multiple Kernel Sparse Representation based Orthogonal Discriminative Projection and Its Cost-Sensitive Extension.

    Science.gov (United States)

    Zhang, Guoqing; Sun, Huaijiang; Xia, Guiyu; Sun, Quansen

    2016-07-07

    Sparse representation based classification (SRC) has been developed and shown great potential for real-world applications. Based on SRC, Yang et al. [10] devised an SRC-steered discriminative projection (SRC-DP) method. However, as a linear algorithm, SRC-DP cannot handle data with a highly nonlinear distribution. The kernel sparse representation-based classifier (KSRC) is a nonlinear extension of SRC and can remedy this drawback, but KSRC requires a predetermined kernel function, and selecting the kernel function and its parameters is difficult. Recently, multiple kernel learning for SRC (MKL-SRC) [22] has been proposed to learn a kernel from a set of base kernels. However, MKL-SRC considers only the within-class reconstruction residual while ignoring the between-class relationship when learning the kernel weights. In this paper, we propose a novel multiple kernel sparse representation-based classifier (MKSRC), and we then use it as a criterion to design a multiple kernel sparse representation based orthogonal discriminative projection method (MK-SR-ODP). The proposed algorithm aims at learning a projection matrix and a corresponding kernel from the given base kernels such that, in the low-dimensional subspace, the between-class reconstruction residual is maximized and the within-class reconstruction residual is minimized. Furthermore, to achieve a minimum overall loss by performing recognition in the learned low-dimensional subspace, we introduce cost information into the dimensionality reduction method. The solutions for the proposed method can be efficiently found based on the trace ratio optimization method [33]. Extensive experimental results demonstrate the superiority of the proposed algorithm when compared with the state-of-the-art methods.

  17. The slab albedo problem for the triplet scattering kernel with modified F_N method

    Energy Technology Data Exchange (ETDEWEB)

    Tuereci, Demet [Ministry of Education, 75th year Anatolia High School, Ankara (Turkey)]

    2016-12-15

    The one-speed, time-independent neutron transport equation for slab geometry with a quadratic anisotropic scattering kernel is considered. The albedo and the transmission factor are calculated by the modified F_N method. The numerical results obtained are listed for different scattering coefficients.

  18. The CRF-method for semiconductors' intravalley collision kernels: I – the 2D case

    Directory of Open Access Journals (Sweden)

    Claudio Barone

    1992-05-01

    Full Text Available If the collisions are redefined as a flux, a kinetic conservation law can be written in divergence form. This can be handled numerically, in the framework of the Finite Particle Approximation, using the CRF-method. In the present paper the relevant quantities needed for computer implementation of the CRF-method are derived in the case of a 2D momentum space for the semiconductors' intravalley collision kernels.

  19. The CRF-method for semiconductors' intravalley collision kernels: II – The 3D case

    Directory of Open Access Journals (Sweden)

    Claudio Barone

    1993-05-01

    Full Text Available If the collisions are redefined as a flux, a kinetic conservation law can be written in divergence form. This can be handled numerically, in the framework of the Finite Particle Approximation, using the CRF-method. In this paper we use the CRF-method for the semiconductors' intravalley collision kernels, extending the results obtained in a previous paper to the case of a 3D momentum space.

  20. Nonparametric Inference of Doubly Stochastic Poisson Process Data via the Kernel Method.

    Science.gov (United States)

    Zhang, Tingting; Kou, S C

    2010-01-01

    Doubly stochastic Poisson processes, also known as the Cox processes, frequently occur in various scientific fields. In this article, motivated primarily by analyzing Cox process data in biophysics, we propose a nonparametric kernel-based inference method. We conduct a detailed study, including an asymptotic analysis, of the proposed method, and provide guidelines for its practical use, introducing a fast and stable regression method for bandwidth selection. We apply our method to real photon arrival data from recent single-molecule biophysical experiments, investigating proteins' conformational dynamics. Our result shows that conformational fluctuation is widely present in protein systems, and that the fluctuation covers a broad range of time scales, highlighting the dynamic and complex nature of proteins' structure.
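
    The basic kernel intensity estimator underlying such an approach can be sketched directly: superpose a kernel bump at each event time. The bandwidth-selection regression the authors introduce is not reproduced here; the arrival times below are simulated.

```python
import numpy as np

def kernel_intensity(event_times, t_grid, h):
    """Kernel estimate of a (doubly stochastic) Poisson intensity:
    lambda(t) = sum_i K_h(t - t_i), with a Gaussian kernel."""
    u = (t_grid[:, None] - event_times[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (h * np.sqrt(2 * np.pi))

# Hypothetical photon arrival times (seconds).
rng = np.random.default_rng(6)
arrivals = np.sort(rng.uniform(0, 10, size=300))
t = np.linspace(0, 10, 200)
lam_hat = kernel_intensity(arrivals, t, h=0.3)   # estimated intensity curve
```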

  1. Kernel polynomial method for a nonorthogonal electronic-structure calculation of amorphous diamond

    International Nuclear Information System (INIS)

    Roeder, H.; Silver, R.N.; Drabold, D.A.; Dong, J.J.

    1997-01-01

    The kernel polynomial method (KPM) has been successfully applied to tight-binding electronic-structure calculations as an O(N) method. Here we extend this method to nonorthogonal basis sets with a sparse overlap matrix S and a sparse Hamiltonian H. Since the KPM utilizes matrix-vector multiplications, it is necessary to apply S⁻¹H to a vector. The multiplication by S⁻¹ is performed using a preconditioned conjugate-gradient method and does not involve the explicit inversion of S. Hence the method scales the same way as the original KPM, i.e., O(N), although there is an overhead due to the additional conjugate-gradient part. We apply this method to a large-scale electronic-structure calculation of amorphous diamond. copyright 1997 The American Physical Society
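
    A minimal sketch of the KPM core, Chebyshev moments with Jackson damping and a stochastic trace estimate, is below for the orthogonal case; as the abstract explains, the nonorthogonal extension replaces each product with H by an application of S⁻¹H realized as a conjugate-gradient solve (noted in a comment). H is assumed pre-scaled so its spectrum lies in (-1, 1).

```python
import numpy as np

def kpm_dos(H, n_moments=64, n_random=10, n_pts=200, seed=7):
    """Density of states via the kernel polynomial method.  For a
    nonorthogonal basis, each H @ v below would become a CG solve of
    S x = H v, i.e. an application of S^-1 H."""
    rng = np.random.default_rng(seed)
    N = H.shape[0]
    mu = np.zeros(n_moments)
    for _ in range(n_random):                  # stochastic trace estimate
        v0 = rng.choice([-1.0, 1.0], size=N)
        v1 = H @ v0
        mu[0] += v0 @ v0
        mu[1] += v0 @ v1
        vm2, vm1 = v0, v1
        for m in range(2, n_moments):
            vm = 2.0 * (H @ vm1) - vm2         # Chebyshev recursion
            mu[m] += v0 @ vm
            vm2, vm1 = vm1, vm
    mu /= n_random * N
    m = np.arange(n_moments)                   # Jackson damping factors
    g = ((n_moments - m + 1) * np.cos(np.pi * m / (n_moments + 1))
         + np.sin(np.pi * m / (n_moments + 1)) / np.tan(np.pi / (n_moments + 1))
         ) / (n_moments + 1)
    x = np.cos(np.pi * (np.arange(n_pts) + 0.5) / n_pts)   # Chebyshev nodes
    T = np.cos(m[:, None] * np.arccos(x)[None, :])
    dos = g[0] * mu[0] + 2.0 * (g[1:, None] * mu[1:, None] * T[1:]).sum(0)
    return x, dos / (np.pi * np.sqrt(1.0 - x ** 2))

A = np.random.default_rng(0).normal(size=(300, 300))
H = (A + A.T) / 2.0
H /= 1.05 * np.linalg.norm(H, 2)               # rescale spectrum into (-1, 1)
energies, dos = kpm_dos(H)                     # should resemble a semicircle
```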

  2. Weighted Feature Gaussian Kernel SVM for Emotion Recognition.

    Science.gov (United States)

    Wei, Wei; Jia, Qingxuan

    2016-01-01

    Emotion recognition with weighted features based on facial expressions is a challenging research topic and has attracted great attention in the past few years. This paper presents a novel method that utilizes subregion recognition rates to weight the kernel function. First, we divide the facial expression image into uniform subregions and calculate the corresponding recognition rates and weights. Then, we obtain a weighted-feature Gaussian kernel function and construct a classifier based on the Support Vector Machine (SVM). Finally, the experimental results suggest that the approach based on the weighted-feature Gaussian kernel function achieves a good recognition rate. The experiments on the extended Cohn-Kanade (CK+) dataset show that our method achieves encouraging recognition results compared to state-of-the-art methods.
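
    A weighted-feature Gaussian kernel has a simple closed form, K(x, y) = exp(-gamma * sum_i w_i * (x_i - y_i)^2), and plugs directly into an SVM; the weights and data below are invented for illustration, and scikit-learn is assumed available.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical per-subregion weights, e.g. derived from each subregion's
# standalone recognition rate as the paper describes.
w = np.array([0.4, 0.1, 0.3, 0.2])

def weighted_gaussian_kernel(X, Y, gamma=0.5):
    """Gaussian kernel with per-feature weights:
    K(x, y) = exp(-gamma * sum_i w_i * (x_i - y_i)**2)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2 * w).sum(axis=2)
    return np.exp(-gamma * d2)

X = np.random.default_rng(8).normal(size=(40, 4))   # 4 subregion features
y = (X[:, 0] + X[:, 2] > 0).astype(int)             # toy emotion labels
clf = SVC(kernel=weighted_gaussian_kernel).fit(X, y)
print(clf.score(X, y))
```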

  3. Kernel Function Tuning for Single-Layer Neural Networks

    Czech Academy of Sciences Publication Activity Database

    Vidnerová, Petra; Neruda, Roman

    -, accepted 28.11. 2017 (2018) ISSN 2278-0149 R&D Projects: GA ČR GA15-18108S Institutional support: RVO:67985807 Keywords : single-layer neural networks * kernel methods * kernel function * optimisation Subject RIV: IN - Informatics, Computer Science http://www.ijmerr.com/

  4. Multineuron spike train analysis with R-convolution linear combination kernel.

    Science.gov (United States)

    Tezuka, Taro

    2018-06-01

    A spike train kernel provides an effective way of decoding information represented by a spike train. Some spike train kernels have been extended to multineuron spike trains, which are simultaneously recorded spike trains obtained from multiple neurons. However, most of these multineuron extensions were carried out in a kernel-specific manner. In this paper, a general framework is proposed for extending any single-neuron spike train kernel to multineuron spike trains, based on the R-convolution kernel. Special subclasses of the proposed R-convolution linear combination kernel are explored. These subclasses have a smaller number of parameters and make optimization tractable when the size of data is limited. The proposed kernel was evaluated using Gaussian process regression for multineuron spike trains recorded from an animal brain. It was compared with the sum kernel and the population Spikernel, which are existing ways of decoding multineuron spike trains using kernels. The results showed that the proposed approach performs better than these kernels and also other commonly used neural decoding methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
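
    The linear-combination construction can be sketched generically: pick any positive definite single-neuron spike train kernel and take a weighted sum over neurons (a nonnegative combination of positive definite kernels is again positive definite). The Gaussian pair-sum kernel below is a stand-in, not one of the specific kernels evaluated in the paper.

```python
import numpy as np

def gaussian_spike_kernel(s, t, tau=0.05):
    """A simple single-neuron spike train kernel: sum of Gaussian
    interactions between all pairs of spike times."""
    if len(s) == 0 or len(t) == 0:
        return 0.0
    d = np.subtract.outer(np.asarray(s), np.asarray(t))
    return np.exp(-d**2 / (2 * tau**2)).sum()

def linear_combination_kernel(trains_a, trains_b, weights):
    """Multineuron extension: a weighted linear combination of the
    single-neuron kernels, one term per recorded neuron."""
    return sum(w * gaussian_spike_kernel(a, b)
               for w, a, b in zip(weights, trains_a, trains_b))

# Two multineuron recordings, each a list of spike-time arrays per neuron.
rec1 = [np.array([0.10, 0.32, 0.55]), np.array([0.20, 0.41])]
rec2 = [np.array([0.12, 0.30]), np.array([0.22, 0.40, 0.80])]
print(linear_combination_kernel(rec1, rec2, weights=[0.5, 0.5]))
```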

  5. An analysis of 1-D smoothed particle hydrodynamics kernels

    International Nuclear Information System (INIS)

    Fulk, D.A.; Quinn, D.W.

    1996-01-01

    In this paper, the smoothed particle hydrodynamics (SPH) kernel is analyzed, resulting in measures of merit for one-dimensional SPH. Various methods of obtaining an objective measure of the quality and accuracy of the SPH kernel are addressed. Since the kernel is the key element in the SPH methodology, this should be of primary concern to any user of SPH. The results of this work are two measures of merit, one for smooth data and one near shocks. The measure of merit for smooth data is shown to be quite accurate and a useful delineator of better and poorer kernels. The measure of merit for non-smooth data is not quite as accurate, but results indicate the kernel is much less important for these types of problems. In addition to the theory, 20 kernels are analyzed using the measures of merit, demonstrating the general usefulness of the measures and of the individual kernels. In general, bell-shaped kernels were found to perform better than other shapes. 12 refs., 16 figs., 7 tabs

  6. Per-Sample Multiple Kernel Approach for Visual Concept Learning

    Directory of Open Access Journals (Sweden)

    Ling-Yu Duan

    2010-01-01

    Full Text Available Learning visual concepts from images is an important yet challenging problem in computer vision and multimedia research areas. Multiple kernel learning (MKL methods have shown great advantages in visual concept learning. As a visual concept often exhibits great appearance variance, a canonical MKL approach may not generate satisfactory results when a uniform kernel combination is applied over the input space. In this paper, we propose a per-sample multiple kernel learning (PS-MKL approach to take into account intraclass diversity for improving discrimination. PS-MKL determines sample-wise kernel weights according to kernel functions and training samples. Kernel weights as well as kernel-based classifiers are jointly learned. For efficient learning, PS-MKL employs a sample selection strategy. Extensive experiments are carried out over three benchmarking datasets of different characteristics including Caltech101, WikipediaMM, and Pascal VOC'07. PS-MKL has achieved encouraging performance, comparable to the state of the art, which has outperformed a canonical MKL.


  8. Subsampling Realised Kernels

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Hansen, Peter Reinhard; Lunde, Asger

    2011-01-01

    In a recent paper we have introduced the class of realised kernel estimators of the increments of quadratic variation in the presence of noise. We showed that this estimator is consistent and derived its limit distribution under various assumptions on the kernel weights. In this paper we extend our...... that subsampling is impotent, in the sense that subsampling has no effect on the asymptotic distribution. Perhaps surprisingly, for the efficient smooth kernels, such as the Parzen kernel, we show that subsampling is harmful as it increases the asymptotic variance. We also study the performance of subsampled...
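
    A minimal sketch of a (non-flat-top) realised kernel with Parzen weights is given below, assuming a vector of noisy high-frequency returns and a bandwidth H; it illustrates the estimator class only, not the paper's subsampling analysis.

      import numpy as np

      def parzen(x):
          """Parzen weight function: smooth and bell-shaped, k(0)=1, k(1)=0."""
          x = abs(x)
          if x <= 0.5:
              return 1.0 - 6.0 * x**2 + 6.0 * x**3
          return 2.0 * (1.0 - x)**3 if x <= 1.0 else 0.0

      def realised_kernel(returns, H):
          """Non-flat-top realised kernel: a Parzen-weighted sum of realised
          autocovariances gamma_h of the high-frequency returns."""
          r = np.asarray(returns)
          gamma = lambda h: float(np.dot(r[abs(h):], r[:len(r) - abs(h)]))
          return sum(parzen(h / (H + 1)) * gamma(h) for h in range(-H, H + 1))

      # hypothetical efficient returns contaminated with microstructure noise
      rng = np.random.default_rng(1)
      r = rng.normal(0.0, 1e-3, 2340) + np.diff(rng.normal(0.0, 5e-4, 2341))
      print(realised_kernel(r, H=30))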

  9. Real time kernel performance monitoring with SystemTap

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    SystemTap is a dynamic method of monitoring and tracing the operation of a running Linux kernel. In this talk I will present a few practical use cases where SystemTap allowed me to turn otherwise complex userland monitoring tasks into simple kernel probes.

  10. Kernel learning at the first level of inference.

    Science.gov (United States)

    Cawley, Gavin C; Talbot, Nicola L C

    2014-05-01

    Kernel learning methods, whether Bayesian or frequentist, typically involve multiple levels of inference, with the coefficients of the kernel expansion being determined at the first level and the kernel and regularisation parameters carefully tuned at the second level, a process known as model selection. Model selection for kernel machines is commonly performed via optimisation of a suitable model selection criterion, often based on cross-validation or theoretical performance bounds. However, if there are a large number of kernel parameters, as for instance in the case of automatic relevance determination (ARD), there is a substantial risk of over-fitting the model selection criterion, resulting in poor generalisation performance. In this paper we investigate the possibility of learning the kernel, for the Least-Squares Support Vector Machine (LS-SVM) classifier, at the first level of inference, i.e. parameter optimisation. The kernel parameters and the coefficients of the kernel expansion are jointly optimised at the first level of inference, minimising a training criterion with an additional regularisation term acting on the kernel parameters. The key advantage of this approach is that the values of only two regularisation parameters need be determined in model selection, substantially alleviating the problem of over-fitting the model selection criterion. The benefits of this approach are demonstrated using a suite of synthetic and real-world binary classification benchmark problems, where kernel learning at the first level of inference is shown to be statistically superior to the conventional approach, improves on our previous work (Cawley and Talbot, 2007) and is competitive with Multiple Kernel Learning approaches, but with reduced computational expense. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. Kernel abortion in maize. II. Distribution of 14C among kernel carboydrates

    International Nuclear Information System (INIS)

    Hanft, J.M.; Jones, R.J.

    1986-01-01

    This study was designed to compare the uptake and distribution of 14C among fructose, glucose, sucrose, and starch in the cob, pedicel, and endosperm tissues of maize (Zea mays L.) kernels induced to abort by high temperature with those that develop normally. Kernels cultured in vitro at 30 and 35°C were transferred to [14C]sucrose media 10 days after pollination. Kernels cultured at 35°C aborted prior to the onset of linear dry matter accumulation. Significant uptake into the cob, pedicel, and endosperm of radioactivity associated with the soluble and starch fractions of the tissues was detected after 24 hours in culture on labeled media. After 8 days in culture on [14C]sucrose media, 48 and 40% of the radioactivity associated with the cob carbohydrates was found in the reducing sugars at 30 and 35°C, respectively. Of the total carbohydrates, a higher percentage of label was associated with sucrose and a lower percentage with fructose and glucose in pedicel tissue of kernels cultured at 35°C compared to kernels cultured at 30°C. These results indicate that sucrose was not cleaved to fructose and glucose as rapidly during the unloading process in the pedicel of kernels induced to abort by high temperature. Kernels cultured at 35°C had a much lower proportion of label associated with endosperm starch (29%) than did kernels cultured at 30°C (89%). Kernels cultured at 35°C had a correspondingly higher proportion of 14C in endosperm fructose, glucose, and sucrose

  12. Metabolite identification through multiple kernel learning on fragmentation trees.

    Science.gov (United States)

    Shen, Huibin; Dührkop, Kai; Böcker, Sebastian; Rousu, Juho

    2014-06-15

    Metabolite identification from tandem mass spectrometric data is a key task in metabolomics. Various computational methods have been proposed for the identification of metabolites from tandem mass spectra. Fragmentation tree methods explore the space of possible ways in which the metabolite can fragment, and base the metabolite identification on scoring of these fragmentation trees. Machine learning methods have been used to map mass spectra to molecular fingerprints; predicted fingerprints, in turn, can be used to score candidate molecular structures. Here, we combine fragmentation tree computations with kernel-based machine learning to predict molecular fingerprints and identify molecular structures. We introduce a family of kernels capturing the similarity of fragmentation trees, and combine these kernels using recently proposed multiple kernel learning approaches. Experiments on two large reference datasets show that the new methods significantly improve molecular fingerprint prediction accuracy. These improvements result in better metabolite identification, doubling the number of metabolites ranked at the top position of the candidates list. © The Author 2014. Published by Oxford University Press.

  13. Feature Selection and Kernel Learning for Local Learning-Based Clustering.

    Science.gov (United States)

    Zeng, Hong; Cheung, Yiu-ming

    2011-08-01

    The performance of most clustering algorithms relies heavily on the representation of the data in the input space or in the Hilbert space of kernel methods. This paper aims to obtain an appropriate data representation through feature selection or kernel learning within the framework of the Local Learning-Based Clustering (LLC) (Wu and Schölkopf 2006) method, which can outperform global learning-based methods when dealing with high-dimensional data lying on a manifold. Specifically, we associate a weight with each feature or kernel and incorporate it into the built-in regularization of the LLC algorithm to take into account the relevance of each feature or kernel for the clustering. Accordingly, the weights are estimated iteratively during the clustering process. We show that the resulting weighted regularization with an additional constraint on the weights is equivalent to a known sparsity-promoting penalty. Hence, the weights of irrelevant features or kernels can be shrunk toward zero. Extensive experiments show the efficacy of the proposed methods on benchmark data sets.

  14. A multi-resolution approach to heat kernels on discrete surfaces

    KAUST Repository

    Vaxman, Amir

    2010-07-26

    Studying the behavior of the heat diffusion process on a manifold is emerging as an important tool for analyzing the geometry of the manifold. Unfortunately, the high complexity of the computation of the heat kernel - the key to the diffusion process - limits this type of analysis to 3D models of modest resolution. We show how to use the unique properties of the heat kernel of a discrete two dimensional manifold to overcome these limitations. Combining a multi-resolution approach with a novel approximation method for the heat kernel at short times results in an efficient and robust algorithm for computing the heat kernels of detailed models. We show experimentally that our method can achieve good approximations in a fraction of the time required by traditional algorithms. Finally, we demonstrate how these heat kernels can be used to improve a diffusion-based feature extraction algorithm. © 2010 ACM.
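
    For a small mesh or graph, the heat kernel can be computed brute-force from the Laplacian eigendecomposition, k_t = sum_i exp(-lambda_i t) phi_i phi_i^T; the sketch below shows exactly the dense reference computation that the paper's multi-resolution approximation is designed to avoid (the toy graph is illustrative).

      import numpy as np

      def heat_kernel(L, t, k=None):
          """Dense reference computation of exp(-tL) from the eigenpairs of a
          symmetric mesh/graph Laplacian L; optionally truncate to k modes."""
          lam, phi = np.linalg.eigh(L)
          if k is not None:
              lam, phi = lam[:k], phi[:, :k]
          return (phi * np.exp(-t * lam)) @ phi.T

      # tiny example: combinatorial Laplacian of a path graph on 5 vertices
      A = np.diag(np.ones(4), 1)
      A = A + A.T
      L = np.diag(A.sum(axis=1)) - A
      print(heat_kernel(L, t=0.5).round(3))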

  15. Probabilistic wind power forecasting based on logarithmic transformation and boundary kernel

    International Nuclear Information System (INIS)

    Zhang, Yao; Wang, Jianxue; Luo, Xu

    2015-01-01

    Highlights: • Quantitative information on the uncertainty of wind power generation. • Kernel density estimator provides non-Gaussian predictive distributions. • Logarithmic transformation reduces the skewness of wind power density. • Boundary kernel method eliminates the density leakage near the boundary. - Abstract: Probabilistic wind power forecasting not only produces the expectation of wind power output, but also gives quantitative information on the associated uncertainty, which is essential for making better decisions about power system and market operations with the increasing penetration of wind power generation. This paper presents a novel kernel density estimator for probabilistic wind power forecasting, addressing two characteristics of wind power which have adverse impacts on forecast accuracy, namely, the heavily skewed and double-bounded nature of wind power density. A logarithmic transformation is used to reduce the skewness of the wind power density, which improves the effectiveness of the kernel density estimator in the transformed scale. The transformation partially relieves the boundary effect problem of the kernel density estimator caused by the double-bounded nature of wind power density. However, the case study shows that serious density leakage remains after the transformation. In order to solve this problem in the transformed scale, a boundary kernel method is employed to eliminate the density leakage at the bounds of the wind power distribution. The improvement of the proposed method over the standard kernel density estimator is demonstrated by short-term probabilistic forecasting results based on the data from an actual wind farm. Then, a detailed comparison is carried out of the proposed method and some existing probabilistic forecasting methods
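
    The transformation idea can be sketched as follows: estimate the density of the log-transformed power and map it back with the Jacobian 1/p. Here simple sample reflection stands in for the paper's boundary kernel, and all data and constants are illustrative.

      import numpy as np
      from scipy.stats import gaussian_kde

      def wind_power_density(samples, grid, eps=1e-6):
          """Estimate the density of log-transformed wind power (much less
          skewed), then map back with the Jacobian 1/p. Reflecting the sample
          at its extremes is a crude stand-in for a boundary kernel; the
          factor 3 compensates for tripling the sample mass by reflection."""
          y = np.log(samples + eps)
          y_ref = np.concatenate([y, 2 * y.min() - y, 2 * y.max() - y])
          kde = gaussian_kde(y_ref)
          g = np.log(grid + eps)
          return 3.0 * kde(g) / (grid + eps)

      # hypothetical normalized wind power output in (0, 1)
      rng = np.random.default_rng(2)
      p = rng.beta(0.7, 3.0, size=5000)
      grid = np.linspace(0.001, 0.999, 200)
      density = wind_power_density(p, grid)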

  16. Review of Palm Kernel Oil Processing And Storage Techniques In South East Nigeria

    Directory of Open Access Journals (Sweden)

    Okeke CG

    2017-06-01

    Full Text Available An assessment of palm kernel processing and storage in South-Eastern Nigeria was carried out by an investigative survey approach. The survey ascertained the extent of mechanization applicable in the area, to enable palm kernel processors and agricultural policy makers to devise modalities for improving palm kernel processing in the area. According to the results obtained from the study, in Abia state, 85% of the respondents use the mechanical method while 15% use the manual method in cracking their kernels. In Imo state, 83% of the processors use the mechanical method while 17% use the manual method. In Enugu and Ebonyi states, 70% and 50% of the processors respectively use the mechanical method. It is only in Anambra state that the greater number of the processors (50%) use the manual method, while 45% use mechanical means. It is observable from the results that palm kernel oil extraction has not received much attention in mechanization. The ANOVA of the palm kernel oil extraction techniques in South-East Nigeria showed a significant difference in both the study area and oil extraction techniques at the 5% level of probability. Results further revealed that in Abia state, 70% of the processors use the complete fractional process in refining the palm kernel oil; 25% and 5% respectively use the incomplete fractional process and the zero refining process. In Anambra, 60% of the processors use the complete fractional process and 40% use the incomplete fractional process; the zero refining method is not used in Anambra state. In Enugu state, 53% use the complete fractional process while 25% and 22% respectively use the zero refining and incomplete fractional processes. Imo state mostly uses the complete fractional process (85%) in refining palm kernel oil; about 10% use the zero refining method while 5% of the processors use the incomplete fractional process. Plastic containers and metal drums are dominantly used in most areas of south-east Nigeria for the storage of palm kernel oil.

  17. HS-SPME-GC-MS/MS Method for the Rapid and Sensitive Quantitation of 2-Acetyl-1-pyrroline in Single Rice Kernels.

    Science.gov (United States)

    Hopfer, Helene; Jodari, Farman; Negre-Zakharov, Florence; Wylie, Phillip L; Ebeler, Susan E

    2016-05-25

    Demand for aromatic rice varieties (e.g., Basmati) is increasing in the US. Aromatic varieties typically have elevated levels of the aroma compound 2-acetyl-1-pyrroline (2AP). Due to its very low aroma threshold, analysis of 2AP provides a useful screening tool for rice breeders. Methods for 2AP analysis in rice should quantitate 2AP at or below sensory threshold level, avoid artifactual 2AP generation, and be able to analyze single rice kernels in cases where only small sample quantities are available (e.g., breeding trials). We combined headspace solid phase microextraction with gas chromatography tandem mass spectrometry (HS-SPME-GC-MS/MS) for analysis of 2AP, using an extraction temperature of 40 °C and a stable isotopologue as internal standard. 2AP calibrations were linear between the concentrations of 53 and 5380 pg/g, with detection limits below the sensory threshold of 2AP. Forty-eight aromatic and nonaromatic, milled rice samples from three harvest years were screened with the method for their 2AP content, and overall reproducibility, observed for all samples, ranged from 5% for experimental aromatic lines to 33% for nonaromatic lines.

  18. Kernel Clustering with a Differential Harmony Search Algorithm for Scheme Classification

    Directory of Open Access Journals (Sweden)

    Yu Feng

    2017-01-01

    Full Text Available This paper presents a kernel fuzzy clustering method with a novel differential harmony search algorithm for diversion scheduling scheme classification. First, we employ a self-adaptive solution generation strategy and a differential evolution-based population update strategy to improve the classical harmony search. Second, we apply the differential harmony search algorithm to kernel fuzzy clustering to help the clustering method obtain better solutions. Finally, the combination of kernel fuzzy clustering and differential harmony search is applied to water diversion scheduling in East Lake. A comparison of the proposed method with other methods has been carried out. The results show that kernel clustering with the differential harmony search algorithm performs well on water diversion scheduling problems.

  19. The global kernel k-means algorithm for clustering in feature space.

    Science.gov (United States)

    Tzortzis, Grigorios F; Likas, Aristidis C

    2009-07-01

    Kernel k-means is an extension of the standard k-means clustering algorithm that identifies nonlinearly separable clusters. In order to overcome the cluster initialization problem associated with this method, we propose the global kernel k-means algorithm, a deterministic and incremental approach to kernel-based clustering. Our method adds one cluster at each stage, through a global search procedure consisting of several executions of kernel k-means from suitable initializations. This algorithm does not depend on cluster initialization, identifies nonlinearly separable clusters, and, due to its incremental nature and search procedure, locates near-optimal solutions avoiding poor local minima. Furthermore, two modifications are developed to reduce the computational cost that do not significantly affect the solution quality. The proposed methods are extended to handle weighted data points, which enables their application to graph partitioning. We experiment with several data sets and the proposed approach compares favorably to kernel k-means with random restarts.
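
    The inner loop of such methods evaluates distances to cluster centres purely through the Gram matrix; a minimal sketch of plain kernel k-means on a precomputed kernel follows (the paper's global variant wraps a routine like this in an incremental, deterministic search; data are illustrative).

      import numpy as np

      def kernel_kmeans(K, n_clusters, n_iter=100, seed=0):
          """Plain kernel k-means on a precomputed Gram matrix K; feature-space
          distances use kernel values only:
          ||phi(x_i) - m_k||^2 = K_ii - 2*mean_j K_ij + mean_jl K_jl, j,l in C_k."""
          n = K.shape[0]
          labels = np.random.default_rng(seed).integers(0, n_clusters, size=n)
          for _ in range(n_iter):
              D = np.full((n, n_clusters), np.inf)
              for k in range(n_clusters):
                  idx = np.where(labels == k)[0]
                  if idx.size:
                      D[:, k] = (np.diag(K) - 2.0 * K[:, idx].mean(axis=1)
                                 + K[np.ix_(idx, idx)].mean())
              new_labels = D.argmin(axis=1)
              if np.array_equal(new_labels, labels):
                  break
              labels = new_labels
          return labels

      # two well-separated Gaussian blobs, RBF Gram matrix
      rng = np.random.default_rng(3)
      X = np.vstack([rng.normal(0, 0.3, (50, 2)), rng.normal(2, 0.3, (50, 2))])
      K = np.exp(-1.0 * ((X[:, None] - X[None]) ** 2).sum(-1))
      print(np.bincount(kernel_kmeans(K, 2)))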

  20. Localized Multiple Kernel Learning Via Sample-Wise Alternating Optimization.

    Science.gov (United States)

    Han, Yina; Yang, Kunde; Ma, Yuanliang; Liu, Guizhong

    2014-01-01

    Our objective is to train support vector machines (SVM)-based localized multiple kernel learning (LMKL), using the alternating optimization between the standard SVM solvers with the local combination of base kernels and the sample-specific kernel weights. The advantage of alternating optimization developed from the state-of-the-art MKL is the SVM-tied overall complexity and the simultaneous optimization on both the kernel weights and the classifier. Unfortunately, in LMKL, the sample-specific character makes the updating of kernel weights a difficult quadratic nonconvex problem. In this paper, starting from a new primal-dual equivalence, the canonical objective on which state-of-the-art methods are based is first decomposed into an ensemble of objectives corresponding to each sample, namely, sample-wise objectives. Then, the associated sample-wise alternating optimization method is conducted, in which the localized kernel weights can be independently obtained by solving their exclusive sample-wise objectives, either linear programming (for l1-norm) or with closed-form solutions (for lp-norm). At test time, the learnt kernel weights for the training data are deployed based on the nearest-neighbor rule. Hence, to guarantee their generality among the test part, we introduce the neighborhood information and incorporate it into the empirical loss when deriving the sample-wise objectives. Extensive experiments on four benchmark machine learning datasets and two real-world computer vision datasets demonstrate the effectiveness and efficiency of the proposed algorithm.
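
    For context, the combined kernel in localized MKL weights each base kernel by sample-specific gating values, k(x_i, x_j) = sum_m eta_m(x_i) k_m(x_i, x_j) eta_m(x_j). The sketch below computes this combined kernel with a softmax gating model in the style of earlier LMKL work; the gating parameterization is an assumption, and the paper's sample-wise alternating optimization is not reproduced.

      import numpy as np

      def softmax(Z):
          E = np.exp(Z - Z.max(axis=1, keepdims=True))
          return E / E.sum(axis=1, keepdims=True)

      def localized_combined_kernel(base_kernels, X, V, b):
          """Combined kernel of localized MKL with softmax gating:
          eta_m(x) = softmax(Vx + b)_m. base_kernels is an (M, n, n) stack
          of precomputed Gram matrices; V, b are gating parameters that the
          alternating scheme would learn."""
          eta = softmax(X @ V.T + b)              # (n, M) sample-wise weights
          return sum(np.outer(eta[:, m], eta[:, m]) * base_kernels[m]
                     for m in range(base_kernels.shape[0]))

      # three RBF base kernels of different widths on hypothetical data
      rng = np.random.default_rng(4)
      X = rng.normal(size=(30, 5))
      D2 = ((X[:, None] - X[None]) ** 2).sum(-1)
      Ks = np.stack([np.exp(-g * D2) for g in (0.1, 1.0, 10.0)])
      K = localized_combined_kernel(Ks, X, V=rng.normal(size=(3, 5)), b=np.zeros(3))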

  1. Kernel method for air quality modelling. II. Comparison with analytic solutions

    Energy Technology Data Exchange (ETDEWEB)

    Lorimer, G S; Ross, D G

    1986-01-01

    The performance of Lorimer's (1986) kernel method for solving the advection-diffusion equation is tested for instantaneous and continuous emissions into a variety of model atmospheres. Analytical solutions are available for comparison in each case. The results indicate that a modest minicomputer is quite adequate for obtaining satisfactory precision even for the most trying test performed here, which involves a diffusivity tensor and wind speed which are nonlinear functions of the height above ground. Simulations of the same cases by the particle-in-cell technique are found to provide substantially lower accuracy even when use is made of greater computer resources.

  2. Collision kernels in the eikonal approximation for Lennard-Jones interaction potential

    International Nuclear Information System (INIS)

    Zielinska, S.

    1985-03-01

    The velocity-changing collisions are conveniently described by collision kernels. These kernels depend on an interaction potential, and there is a need to evaluate them for realistic interatomic potentials. Using the collision kernels, we are able to investigate the redistribution of atomic populations caused by laser light and velocity-changing collisions. In this paper we present a method of evaluating the collision kernels in the eikonal approximation. We discuss the influence of the potential parameters R_0^(i) and epsilon_0^(i) on the kernel width for a given atomic state. It turns out that, unlike the collision kernel for the hard-sphere model of scattering, the Lennard-Jones kernel is not very sensitive to changes of R_0^(i). Contrary to the general tendency of approximating collision kernels by a Gaussian curve, kernels for the Lennard-Jones potential do not exhibit such behaviour. (author)

  3. Deep Restricted Kernel Machines Using Conjugate Feature Duality.

    Science.gov (United States)

    Suykens, Johan A K

    2017-08-01

    The aim of this letter is to propose a theory of deep restricted kernel machines offering new foundations for deep learning with kernel machines. From the viewpoint of deep learning, it is partially related to restricted Boltzmann machines, which are characterized by visible and hidden units in a bipartite graph without hidden-to-hidden connections, and to deep learning extensions such as deep belief networks and deep Boltzmann machines. From the viewpoint of kernel machines, it includes least squares support vector machines for classification and regression, kernel principal component analysis (PCA), matrix singular value decomposition, and Parzen-type models. A key element is to first characterize these kernel machines in terms of so-called conjugate feature duality, yielding a representation with visible and hidden units. It is shown how this is related to the energy form in restricted Boltzmann machines, with continuous variables in a nonprobabilistic setting. In this new framework of so-called restricted kernel machine (RKM) representations, the dual variables correspond to hidden features. Deep RKMs are obtained by coupling the RKMs. The method is illustrated for a deep RKM consisting of three levels: a least squares support vector machine regression level and two kernel PCA levels. In its primal form, deep feedforward neural networks can also be trained within this framework.

  4. Visualization of nonlinear kernel models in neuroimaging by sensitivity maps

    DEFF Research Database (Denmark)

    Rasmussen, Peter Mondrup; Hansen, Lars Kai; Madsen, Kristoffer Hougaard

    There is significant current interest in decoding mental states from neuroimages. In this context kernel methods, e.g., support vector machines (SVM) are frequently adopted to learn statistical relations between patterns of brain activation and experimental conditions. In this paper we focus...... on visualization of such nonlinear kernel models. Specifically, we investigate the sensitivity map as a technique for generation of global summary maps of kernel classification methods. We illustrate the performance of the sensitivity map on functional magnetic resonance (fMRI) data based on visual stimuli. We...

  5. Multiple kernel SVR based on the MRE for remote sensing water depth fusion detection

    Science.gov (United States)

    Wang, Jinjin; Ma, Yi; Zhang, Jingyu

    2018-03-01

    Remote sensing is an important means of water depth detection in coastal shallow waters and reefs. Support vector regression (SVR) is a machine learning method which is widely used in data regression. In this paper, SVR is applied to remote-sensing multispectral bathymetry. Aiming at the problem that single-kernel SVR methods have a large error in shallow water depth inversion, the mean relative error (MRE) at different water depths is used as a decision fusion factor for the single-kernel SVR methods, and a multi-kernel SVR fusion method based on the MRE is put forward. Taking the North Island of the Xisha Islands in China as the experimental area, comparison experiments with the single-kernel SVR methods and the traditional multi-band bathymetric method are carried out. The results show that: 1) in the range of 0 to 25 meters, the mean absolute error (MAE) of the multi-kernel SVR fusion method is 1.5 m and the MRE is 13.2%; 2) compared to the four single-kernel SVR methods, the MRE of the fusion method is reduced by 1.2%, 1.9%, 3.4%, and 1.8%, respectively, and compared to the traditional multi-band method, the MRE is reduced by 1.9%; 3) in the 0-5 m depth section, compared to the single-kernel methods and the multi-band method, the MRE of the fusion method is reduced by 13.5% to 44.4%, and the distribution of points is more concentrated around y=x.

  6. Differential evolution algorithm-based kernel parameter selection for Fukunaga-Koontz Transform subspaces construction

    Science.gov (United States)

    Binol, Hamidullah; Bal, Abdullah; Cukur, Huseyin

    2015-10-01

    The performance of kernel-based techniques depends on the selection of kernel parameters; suitable parameter selection is therefore an important problem for many kernel-based techniques. This article presents a novel technique to learn the kernel parameters of a kernel Fukunaga-Koontz Transform (KFKT) based classifier. The proposed approach determines appropriate values of the kernel parameters by optimizing an objective function constructed from the discrimination ability of the KFKT. For this purpose we utilize the differential evolution algorithm (DEA). The new technique overcomes some disadvantages of the traditional cross-validation method, such as its high time consumption, and it can be applied to any type of data. Experiments on target detection applications with hyperspectral images verify the effectiveness of the proposed method.
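
    The general recipe can be sketched with SciPy's differential evolution optimizer, here tuning an RBF-SVM's parameters by cross-validated accuracy; the CV objective is a stand-in for the paper's KFKT discrimination criterion, and the data are synthetic.

      import numpy as np
      from scipy.optimize import differential_evolution
      from sklearn.datasets import make_classification
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      X, y = make_classification(n_samples=300, n_features=10, random_state=0)

      def objective(params):
          """Negative 5-fold CV accuracy of an RBF-SVM (stand-in for the
          KFKT-based discrimination objective of the paper)."""
          log_gamma, log_C = params
          clf = SVC(kernel="rbf", gamma=10.0**log_gamma, C=10.0**log_C)
          return -cross_val_score(clf, X, y, cv=5).mean()

      result = differential_evolution(objective, bounds=[(-4, 1), (-2, 3)],
                                      maxiter=20, seed=0)
      print("log10(gamma), log10(C):", result.x, "CV accuracy:", -result.fun)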

  7. A framework for optimal kernel-based manifold embedding of medical image data.

    Science.gov (United States)

    Zimmer, Veronika A; Lekadir, Karim; Hoogendoorn, Corné; Frangi, Alejandro F; Piella, Gemma

    2015-04-01

    Kernel-based dimensionality reduction is a widely used technique in medical image analysis. To fully unravel the underlying nonlinear manifold, the selection of an adequate kernel function and of its free parameters is critical. In practice, however, the kernel function is generally chosen as Gaussian or polynomial, and such standard kernels might not always be optimal for a given image dataset or application. In this paper, we present a study on the effect of the kernel functions in nonlinear manifold embedding of medical image data. To this end, we first carry out a literature review on existing advanced kernels developed in the statistics, machine learning, and signal processing communities. In addition, we implement kernel-based formulations of well-known nonlinear dimensionality reduction techniques such as Isomap and Locally Linear Embedding, thus obtaining a unified framework for manifold embedding using kernels. Subsequently, we present a method to automatically choose a kernel function and its associated parameters from a pool of kernel candidates, with the aim of generating optimal manifold embeddings. Furthermore, we show how the calculated selection measures can be extended to take into account the spatial relationships in images, or used to combine several kernels to further improve the embedding results. Experiments are then carried out on various synthetic and phantom datasets for numerical assessment of the methods. Furthermore, the workflow is applied to real data that include brain manifolds and multispectral images to demonstrate the importance of the kernel selection in the analysis of high-dimensional medical images. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. TOWARDS FINDING A NEW KERNELIZED FUZZY C-MEANS CLUSTERING ALGORITHM

    Directory of Open Access Journals (Sweden)

    Samarjit Das

    2014-04-01

    Full Text Available The Kernelized Fuzzy C-Means clustering technique is an attempt to improve the performance of the conventional Fuzzy C-Means clustering technique. Recently this technique, in which a kernel-induced distance function replaces the Euclidean distance used in the conventional Fuzzy C-Means clustering technique, has earned popularity in the research community. Like the conventional Fuzzy C-Means clustering technique, this technique also suffers from inconsistency in its performance, because here too the initial centroids are obtained from the randomly initialized membership values of the objects. Our present work proposes a new method in which we apply the Subtractive clustering technique of Chiu as a preprocessor to the Kernelized Fuzzy C-Means clustering technique. With this new method we try not only to remove the inconsistency of the Kernelized Fuzzy C-Means clustering technique but also to deal with situations where the number of clusters is not predetermined. We also provide a comparison of our method with the Subtractive clustering technique of Chiu and the Kernelized Fuzzy C-Means clustering technique using two validity measures, namely Partition Coefficient and Clustering Entropy.

  9. Supervised Kernel Optimized Locality Preserving Projection with Its Application to Face Recognition and Palm Biometrics

    Directory of Open Access Journals (Sweden)

    Chuang Lin

    2015-01-01

    Full Text Available The Kernel Locality Preserving Projection (KLPP) algorithm can effectively preserve the neighborhood structure of the database using the kernel trick. It is known that supervised KLPP (SKLPP) can preserve within-class geometric structures by using label information. However, the conventional SKLPP algorithm suffers from the kernel selection problem, which has a significant impact on its performance. In order to overcome this limitation, a method named supervised kernel optimized LPP (SKOLPP) is proposed in this paper, which can maximize the class separability in kernel learning. The proposed method maps the data from the original space to a higher dimensional kernel space using a data-dependent kernel. The adaptive parameters of the data-dependent kernel are automatically calculated by optimizing an objective function. Consequently, the nonlinear features extracted by SKOLPP have larger discriminative ability compared with SKLPP and are more adaptive to the input data. Experimental results on the ORL, Yale, AR, and Palmprint databases showed the effectiveness of the proposed method.

  10. A Novel Extreme Learning Machine Classification Model for e-Nose Application Based on the Multiple Kernel Approach.

    Science.gov (United States)

    Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong

    2017-06-19

    A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Being different from the existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of base kernels are regarded as external parameters of single-hidden layer feedforward neural networks (SLFNs). The combination coefficients of base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radical basis function neural network (RBFNN), and probabilistic neural network (PNN). The results have demonstrated that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification.

  11. An extensive analysis of disease-gene associations using network integration and fast kernel-based gene prioritization methods

    Science.gov (United States)

    Valentini, Giorgio; Paccanaro, Alberto; Caniza, Horacio; Romero, Alfonso E.; Re, Matteo

    2014-01-01

    Objective In the context of “network medicine”, gene prioritization methods represent one of the main tools to discover candidate disease genes by exploiting the large amount of data covering different types of functional relationships between genes. Several works proposed to integrate multiple sources of data to improve disease gene prioritization, but to our knowledge no systematic studies focused on the quantitative evaluation of the impact of network integration on gene prioritization. In this paper, we aim at providing an extensive analysis of gene-disease associations not limited to genetic disorders, and a systematic comparison of different network integration methods for gene prioritization. Materials and methods We collected nine different functional networks representing different functional relationships between genes, and we combined them through both unweighted and weighted network integration methods. We then prioritized genes with respect to each of the considered 708 medical subject headings (MeSH) diseases by applying classical guilt-by-association, random walk and random walk with restart algorithms, and the recently proposed kernelized score functions. Results The results obtained with classical random walk algorithms and the best single network achieved an average area under the curve (AUC) across the 708 MeSH diseases of about 0.82, while kernelized score functions and network integration boosted the average AUC to about 0.89. Weighted integration, by exploiting the different “informativeness” embedded in different functional networks, outperforms unweighted integration at 0.01 significance level, according to the Wilcoxon signed rank sum test. For each MeSH disease we provide the top-ranked unannotated candidate genes, available for further bio-medical investigation. Conclusions Network integration is necessary to boost the performances of gene prioritization methods. Moreover the methods based on kernelized score functions can further

  12. Fast Gaussian kernel learning for classification tasks based on specially structured global optimization.

    Science.gov (United States)

    Zhong, Shangping; Chen, Tianshun; He, Fengying; Niu, Yuzhen

    2014-09-01

    For a practical pattern classification task solved by kernel methods, the computing time is mainly spent on kernel learning (or training). However, the current kernel learning approaches are based on local optimization techniques, and hard to have good time performances, especially for large datasets. Thus the existing algorithms cannot be easily extended to large-scale tasks. In this paper, we present a fast Gaussian kernel learning method by solving a specially structured global optimization (SSGO) problem. We optimize the Gaussian kernel function by using the formulated kernel target alignment criterion, which is a difference of increasing (d.i.) functions. Through using a power-transformation based convexification method, the objective criterion can be represented as a difference of convex (d.c.) functions with a fixed power-transformation parameter. And the objective programming problem can then be converted to a SSGO problem: globally minimizing a concave function over a convex set. The SSGO problem is classical and has good solvability. Thus, to find the global optimal solution efficiently, we can adopt the improved Hoffman's outer approximation method, which need not repeat the searching procedure with different starting points to locate the best local minimum. Also, the proposed method can be proven to converge to the global solution for any classification task. We evaluate the proposed method on twenty benchmark datasets, and compare it with four other Gaussian kernel learning methods. Experimental results show that the proposed method stably achieves both good time-efficiency performance and good classification performance. Copyright © 2014 Elsevier Ltd. All rights reserved.
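
    The kernel-target alignment criterion itself is easy to compute; the sketch below evaluates it for a Gaussian kernel over a grid of width parameters (data and grid are illustrative, and the paper's convexification and global search are not reproduced).

      import numpy as np

      def kernel_target_alignment(K, y):
          """Alignment <K, yy^T>_F / (||K||_F * ||yy^T||_F), y in {-1,+1}^n."""
          Y = np.outer(y, y)
          return (K * Y).sum() / (np.linalg.norm(K) * np.linalg.norm(Y))

      # alignment of a Gaussian kernel as a function of its width parameter
      rng = np.random.default_rng(5)
      X = np.vstack([rng.normal(-1, 1, (40, 3)), rng.normal(1, 1, (40, 3))])
      y = np.array([-1] * 40 + [1] * 40)
      D2 = ((X[:, None] - X[None]) ** 2).sum(-1)
      for gamma in (0.01, 0.1, 1.0, 10.0):
          print(gamma, kernel_target_alignment(np.exp(-gamma * D2), y))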

  13. 7 CFR 981.7 - Edible kernel.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Edible kernel. 981.7 Section 981.7 Agriculture... Regulating Handling Definitions § 981.7 Edible kernel. Edible kernel means a kernel, piece, or particle of almond kernel that is not inedible. [41 FR 26852, June 30, 1976] ...

  14. Optimal Bandwidth Selection in Observed-Score Kernel Equating

    Science.gov (United States)

    Häggström, Jenny; Wiberg, Marie

    2014-01-01

    The selection of bandwidth in kernel equating is important because it has a direct impact on the equated test scores. The aim of this article is to examine the use of double smoothing when selecting bandwidths in kernel equating and to compare double smoothing with the commonly used penalty method. This comparison was made using both an equivalent…

  15. A trace ratio maximization approach to multiple kernel-based dimensionality reduction.

    Science.gov (United States)

    Jiang, Wenhao; Chung, Fu-lai

    2014-01-01

    Most dimensionality reduction techniques are based on one metric or one kernel, hence it is necessary to select an appropriate kernel for kernel-based dimensionality reduction. Multiple kernel learning for dimensionality reduction (MKL-DR) has been recently proposed to learn a kernel from a set of base kernels which are seen as different descriptions of data. As MKL-DR does not involve regularization, it might be ill-posed under some conditions and consequently its applications are hindered. This paper proposes a multiple kernel learning framework for dimensionality reduction based on regularized trace ratio, termed as MKL-TR. Our method aims at learning a transformation into a space of lower dimension and a corresponding kernel from the given base kernels among which some may not be suitable for the given data. The solutions for the proposed framework can be found based on trace ratio maximization. The experimental results demonstrate its effectiveness in benchmark datasets, which include text, image and sound datasets, for supervised, unsupervised as well as semi-supervised settings. Copyright © 2013 Elsevier Ltd. All rights reserved.

  16. Kernel versions of some orthogonal transformations

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    Kernel versions of orthogonal transformations such as principal components are based on a dual formulation also termed Q-mode analysis in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products of the original data are replaced...... by inner products between nonlinear mappings into higher dimensional feature space. Via kernel substitution also known as the kernel trick these inner products between the mappings are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of this kernel...... function. This means that we need not know the nonlinear mappings explicitly. Kernel principal component analysis (PCA) and kernel minimum noise fraction (MNF) analyses handle nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function...

  17. Model Selection in Kernel Ridge Regression

    DEFF Research Database (Denmark)

    Exterkate, Peter

    Kernel ridge regression is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts. This paper investigates the influence of the choice of kernel and the setting of tuning parameters on forecast accuracy. We review several popular kernels......, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. We interpret the latter two kernels in terms of their smoothing properties, and we relate the tuning parameters associated to all these kernels to smoothness measures of the prediction function and to the signal-to-noise ratio. Based...... on these interpretations, we provide guidelines for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study confirms the practical usefulness of these rules of thumb. Finally, the flexible and smooth functional forms provided by the Gaussian and Sinc kernels makes them widely...
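
    A minimal illustration of such tuning-parameter selection, using scikit-learn's kernel ridge regression with small cross-validated grids over the ridge penalty and the Gaussian kernel width, is sketched below; the grids and data are illustrative, not the paper's experimental setup.

      import numpy as np
      from sklearn.kernel_ridge import KernelRidge
      from sklearn.model_selection import GridSearchCV

      # noisy 1-D regression problem
      rng = np.random.default_rng(6)
      X = rng.uniform(-3, 3, size=(200, 1))
      y = np.sinc(X).ravel() + rng.normal(0, 0.1, size=200)

      # small cross-validated grids over the ridge penalty (signal-to-noise
      # related) and the Gaussian kernel width (smoothness related)
      grid = {"alpha": np.logspace(-4, 0, 5), "gamma": np.logspace(-2, 1, 4)}
      search = GridSearchCV(KernelRidge(kernel="rbf"), grid, cv=5).fit(X, y)
      print(search.best_params_)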

  18. Searching remote homology with spectral clustering with symmetry in neighborhood cluster kernels.

    Directory of Open Access Journals (Sweden)

    Ujjwal Maulik

    Full Text Available Remote homology detection among proteins utilizing only the unlabelled sequences is a central problem in comparative genomics. The existing cluster kernel methods based on neighborhoods and profiles and the Markov clustering algorithms are currently the most popular methods for protein family recognition. The deviation from random walks with inflation, or the dependency on a hard threshold in the similarity measure, in those methods requires an enhancement for homology detection among multi-domain proteins. We propose to combine spectral clustering with neighborhood kernels in Markov similarity for enhancing sensitivity in detecting homology independent of "recent" paralogs. The spectral clustering approach with new combined local alignment kernels more effectively exploits the unsupervised protein sequences globally, reducing inter-cluster walks. When combined with corrections based on a modified symmetry-based proximity norm deemphasizing outliers, the technique proposed in this article outperforms other state-of-the-art cluster kernels among all twelve implemented kernels. The comparison with the state-of-the-art string and mismatch kernels also shows the superior performance scores provided by the proposed kernels. Similar performance improvement is also found over an existing large dataset. Therefore the proposed spectral clustering framework over combined local alignment kernels with modified symmetry-based correction achieves superior performance for unsupervised remote homolog detection even in multi-domain and promiscuous domain proteins from Genolevures database families, with better biological relevance. Source code available upon request (sarkar@labri.fr).

  19. Parameter optimization in the regularized kernel minimum noise fraction transformation

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Vestergaard, Jacob Schack

    2012-01-01

    Based on the original, linear minimum noise fraction (MNF) transformation and kernel principal component analysis, a kernel version of the MNF transformation was recently introduced. Inspired by this, we here give a simple method for finding optimal parameters in a regularized version of kernel MNF analysis. We consider the model signal-to-noise ratio (SNR) as a function of the kernel parameters and the regularization parameter. In 2-4 steps of increasingly refined grid searches we find the parameters that maximize the model SNR. An example based on data from the DLR 3K camera system is given.

  20. Performance Evaluation of Computation and Communication Kernels of the Fast Multipole Method on Intel Manycore Architecture

    KAUST Repository

    AbdulJabbar, Mustafa Abdulmajeed; Al Farhan, Mohammed; Yokota, Rio; Keyes, David E.

    2017-01-01

    Manycore optimizations are essential for achieving performance worthy of anticipated exascale systems. Utilization of manycore chips is inevitable to attain the desired floating point performance of these energy-austere systems. In this work, we revisit ExaFMM, the open source Fast Multiple Method (FMM) library, in light of highly tuned shared-memory parallelization and detailed performance analysis on the new highly parallel Intel manycore architecture, Knights Landing (KNL). We assess scalability and performance gain using task-based parallelism of the FMM tree traversal. We also provide an in-depth analysis of the most computationally intensive part of the traversal kernel (i.e., the particle-to-particle (P2P) kernel), by comparing its performance across KNL and Broadwell architectures. We quantify different configurations that exploit the on-chip 512-bit vector units within different task-based threading paradigms. MPI communication-reducing and NUMA-aware approaches for the FMM’s global tree data exchange are examined with different cluster modes of KNL. By applying several algorithm- and architecture-aware optimizations for FMM, we show that the N-Body kernel on 256 threads of KNL achieves on average 2.8× speedup compared to the non-vectorized version, whereas on 56 threads of Broadwell, it achieves on average 2.9× speedup. In addition, the tree traversal kernel on KNL scales monotonically up to 256 threads with task-based programming models. The MPI-based communication-reducing algorithms show expected improvements of the data locality across the KNL on-chip network.


  2. 7 CFR 981.8 - Inedible kernel.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.8 Section 981.8 Agriculture... Regulating Handling Definitions § 981.8 Inedible kernel. Inedible kernel means a kernel, piece, or particle of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or...

  3. Covariant Spectator Theory of heavy–light and heavy mesons and the predictive power of covariant interaction kernels

    Energy Technology Data Exchange (ETDEWEB)

    Leitão, Sofia, E-mail: sofia.leitao@tecnico.ulisboa.pt [CFTP, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Stadler, Alfred, E-mail: stadler@uevora.pt [Departamento de Física, Universidade de Évora, 7000-671 Évora (Portugal); CFTP, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Peña, M.T., E-mail: teresa.pena@tecnico.ulisboa.pt [Departamento de Física, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); CFTP, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Biernat, Elmar P., E-mail: elmar.biernat@tecnico.ulisboa.pt [CFTP, Instituto Superior Técnico, Universidade de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal)

    2017-01-10

    The Covariant Spectator Theory (CST) is used to calculate the mass spectrum and vertex functions of heavy–light and heavy mesons in Minkowski space. The covariant kernel contains Lorentz scalar, pseudoscalar, and vector contributions. The numerical calculations are performed in momentum space, where special care is taken to treat the strong singularities present in the confining kernel. The observed meson spectrum is very well reproduced after fitting a small number of model parameters. Remarkably, a fit to a few pseudoscalar meson states only, which are insensitive to spin–orbit and tensor forces and do not allow to separate the spin–spin from the central interaction, leads to essentially the same model parameters as a more general fit. This demonstrates that the covariance of the chosen interaction kernel is responsible for the very accurate prediction of the spin-dependent quark–antiquark interactions.

  4. Pencil kernel correction and residual error estimation for quality-index-based dose calculations

    International Nuclear Information System (INIS)

    Nyholm, Tufve; Olofsson, Joergen; Ahnesjoe, Anders; Georg, Dietmar; Karlsson, Mikael

    2006-01-01

    Experimental data from 593 photon beams were used to quantify the errors in dose calculations using a previously published pencil kernel model. A correction of the kernel was derived in order to remove the observed systematic errors. The remaining residual error for individual beams was modelled through uncertainty associated with the kernel model. The methods were tested against an independent set of measurements. No significant systematic error was observed in the calculations using the derived correction of the kernel and the remaining random errors were found to be adequately predicted by the proposed method

  5. 7 CFR 981.408 - Inedible kernel.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.408 Section 981.408 Agriculture... Administrative Rules and Regulations § 981.408 Inedible kernel. Pursuant to § 981.8, the definition of inedible kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as...

  6. Graphical analyses of connected-kernel scattering equations

    International Nuclear Information System (INIS)

    Picklesimer, A.

    1982-10-01

    Simple graphical techniques are employed to obtain a new (simultaneous) derivation of a large class of connected-kernel scattering equations. This class includes the Rosenberg, Bencze-Redish-Sloan, and connected-kernel multiple scattering equations as well as a host of generalizations of these and other equations. The graphical method also leads to a new, simplified form for some members of the class and elucidates the general structural features of the entire class

  7. Kernel-based adaptive learning improves accuracy of glucose predictive modelling in type 1 diabetes: A proof-of-concept study.

    Science.gov (United States)

    Georga, Eleni I; Principe, Jose C; Rizos, Evangelos C; Fotiadis, Dimitrios I

    2017-07-01

    This study aims at demonstrating the need for nonlinear recursive models in the identification and prediction of the dynamic glucose system in type 1 diabetes. Nonlinear regression is performed in a reproducing kernel Hilbert space, by the Approximate Linear Dependency Kernel Recursive Least Squares (KRLS-ALD) algorithm, such that a sparse model structure is accomplished. The method is evaluated on seven people with type 1 diabetes in free-living conditions, where a change in glycaemic dynamics is forced by increasing the level of physical activity in the middle of the observational period. The univariate input allows for short-term (≤30 min) predictions, with KRLS-ALD reaching an average root mean square error of 15.22±5.95 mg dL-1 and an average time lag of 17.14±2.67 min for a horizon of 30 min. Its performance is considerably better than that of time-invariant (regularized) linear regression models.
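
    A compact sketch of kernel recursive least squares with an approximate-linear-dependency dictionary test, after the algorithm of Engel et al. on which KRLS-ALD builds, is given below. It is a simplified illustration with an illustrative kernel, threshold, and toy signal, not the study's code.

      import numpy as np

      class KRLSALD:
          """Minimal KRLS with an approximate linear dependency (ALD) test."""
          def __init__(self, kernel, nu=1e-3):
              self.kernel, self.nu = kernel, nu
              self.dict, self.Kinv, self.alpha, self.P = None, None, None, None

          def update(self, x, y):
              if self.dict is None:                 # initialize with first sample
                  k0 = self.kernel(x, x)
                  self.dict, self.Kinv = [x], np.array([[1.0 / k0]])
                  self.alpha, self.P = np.array([y / k0]), np.array([[1.0]])
                  return
              k = np.array([self.kernel(x, d) for d in self.dict])
              a = self.Kinv @ k
              delta = self.kernel(x, x) - k @ a     # ALD residual
              err = y - k @ self.alpha              # prediction error
              if delta > self.nu:                   # grow the dictionary
                  self.dict.append(x)
                  self.Kinv = np.block(
                      [[delta * self.Kinv + np.outer(a, a), -a[:, None]],
                       [-a[None, :], np.ones((1, 1))]]) / delta
                  self.P = np.block(
                      [[self.P, np.zeros((len(a), 1))],
                       [np.zeros((1, len(a))), np.ones((1, 1))]])
                  self.alpha = np.concatenate(
                      [self.alpha - a * err / delta, [err / delta]])
              else:                                 # dictionary unchanged
                  q = self.P @ a / (1.0 + a @ self.P @ a)
                  self.P = self.P - np.outer(q, a @ self.P)
                  self.alpha = self.alpha + self.Kinv @ q * err

          def predict(self, x):
              return np.array([self.kernel(x, d) for d in self.dict]) @ self.alpha

      # toy usage: learn a sine wave online with a sparse RBF dictionary
      rbf = lambda a, b: float(np.exp(-np.sum((a - b) ** 2) / 0.5))
      model = KRLSALD(rbf, nu=1e-2)
      for t in np.linspace(0, 6, 300):
          model.update(np.array([t]), np.sin(t))
      print(len(model.dict), model.predict(np.array([2.0])))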

  8. Support vector machines for nonlinear kernel ARMA system identification.

    Science.gov (United States)

    Martínez-Ramón, Manel; Rojo-Alvarez, José Luis; Camps-Valls, Gustavo; Muñoz-Marí, Jordi; Navia-Vázquez, Angel; Soria-Olivas, Emilio; Figueiras-Vidal, Aníbal R

    2006-11-01

    Nonlinear system identification based on support vector machines (SVM) has been usually addressed by means of the standard SVM regression (SVR), which can be seen as an implicit nonlinear autoregressive and moving average (ARMA) model in some reproducing kernel Hilbert space (RKHS). The proposal of this letter is twofold. First, the explicit consideration of an ARMA model in an RKHS (SVM-ARMA2K) is proposed. We show that stating the ARMA equations in an RKHS leads to solving the regularized normal equations in that RKHS, in terms of the autocorrelation and cross correlation of the (nonlinearly) transformed input and output discrete time processes. Second, a general class of SVM-based system identification nonlinear models is presented, based on the use of composite Mercer's kernels. This general class can improve model flexibility by emphasizing the input-output cross information (SVM-ARMA4K), which leads to straightforward and natural combinations of implicit and explicit ARMA models (SVR-ARMA2K and SVR-ARMA4K). Capabilities of these different SVM-based system identification schemes are illustrated with two benchmark problems.

  9. Generalized synthetic kernel approximation for elastic moderation of fast neutrons

    International Nuclear Information System (INIS)

    Yamamoto, Koji; Sekiya, Tamotsu; Yamamura, Yasunori.

    1975-01-01

    A method of synthetic kernel approximation is examined in some detail with a view to simplifying the treatment of the elastic moderation of fast neutrons. A sequence of unified kernels (f_N) is introduced, which is then divided into two subsequences (W_n) and (G_n) according to whether N is odd (W_n = f_(2n-1), n=1,2, ...) or even (G_n = f_(2n), n=0,1, ...). The W_1 and G_1 kernels correspond to the usual Wigner and GG kernels, respectively, and the W_n and G_n kernels for n >= 2 represent generalizations thereof. It is shown that the W_n kernel solution with a relatively small n (>= 2) is superior on the whole to the G_n kernel solution for the same index n, while both converge to the exact values with increasing n. To evaluate the collision density numerically and rapidly, a simple recurrence formula is derived. In the asymptotic region (except near resonances), this recurrence formula allows calculation with a relatively coarse mesh width whenever h_a <= 0.05 at least. For calculations in the transient lethargy region, a mesh width of order epsilon/10 is small enough to evaluate the approximate collision density psi_N with an accuracy comparable to that obtained analytically. It is shown that, with the present method, an order of approximation of about n=7 should yield a practically correct solution deviating by not more than 1% in collision density. (auth.)

  10. MR-guided dynamic PET reconstruction with the kernel method and spectral temporal basis functions

    Science.gov (United States)

    Novosad, Philip; Reader, Andrew J.

    2016-06-01

    Recent advances in dynamic positron emission tomography (PET) reconstruction have demonstrated that it is possible to achieve markedly improved end-point kinetic parameter maps by incorporating a temporal model of the radiotracer directly into the reconstruction algorithm. In this work we have developed a highly constrained, fully dynamic PET reconstruction algorithm incorporating both spectral analysis temporal basis functions and spatial basis functions derived from the kernel method applied to a co-registered T1-weighted magnetic resonance (MR) image. The dynamic PET image is modelled as a linear combination of spatial and temporal basis functions, and a maximum likelihood estimate for the coefficients can be found using the expectation-maximization (EM) algorithm. Following reconstruction, kinetic fitting using any temporal model of interest can be applied. Based on a BrainWeb T1-weighted MR phantom, we performed a realistic dynamic [18F]FDG simulation study with two noise levels, and investigated the quantitative performance of the proposed reconstruction algorithm, comparing it with reconstructions incorporating either spectral analysis temporal basis functions alone or kernel spatial basis functions alone, as well as with conventional frame-independent reconstruction. Compared to the other reconstruction algorithms, the proposed algorithm achieved superior performance, offering a decrease in spatially averaged pixel-level root-mean-square-error on post-reconstruction kinetic parametric maps in the grey/white matter, as well as in the tumours when they were present on the co-registered MR image. When the tumours were not visible in the MR image, reconstruction with the proposed algorithm performed similarly to reconstruction with spectral temporal basis functions and was superior to both conventional frame-independent reconstruction and frame-independent reconstruction with kernel spatial basis functions. Furthermore, we demonstrate that a joint spectral/kernel

  11. CLASS-PAIR-GUIDED MULTIPLE KERNEL LEARNING OF INTEGRATING HETEROGENEOUS FEATURES FOR CLASSIFICATION

    Directory of Open Access Journals (Sweden)

    Q. Wang

    2017-10-01

    Full Text Available In recent years, many studies on remote sensing image classification have shown that using multiple features from different data sources can effectively improve classification accuracy. As a very powerful learning method, multiple kernel learning (MKL) can conveniently embed a variety of features. The conventional combined kernel learned by MKL can be regarded as a compromise of all basic kernels over all classes: it is the best on the whole, but not optimal for each specific class. To address this problem, this paper proposes a class-pair-guided MKL method to integrate heterogeneous features (HFs) from multispectral imagery (MSI) and light detection and ranging (LiDAR) data. In particular, the one-against-one strategy is adopted, which converts the multiclass classification problem into a set of two-class problems. Then, we select the best kernel from a pre-constructed set of basic kernels for each class pair by kernel alignment (KA) in the process of classification. The advantage of the proposed method is that only the best kernel for classifying any two classes is retained, which greatly enhances discriminability. Experiments are conducted on two real data sets, and the results show that the proposed method achieves the best classification accuracy when integrating the HFs, compared with several state-of-the-art algorithms.
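
    A minimal sketch of the kernel-alignment selection step under the one-against-one decomposition described above. The alignment A(K, yyᵀ) = ⟨K, yyᵀ⟩_F / (‖K‖_F ‖yyᵀ‖_F) follows the standard literature; the feature extraction from MSI/LiDAR data is assumed to have happened elsewhere, and the kernel set and toy data are invented.

    ```python
    import numpy as np
    from sklearn.metrics.pairwise import linear_kernel, polynomial_kernel, rbf_kernel

    def alignment(K, y):
        """Kernel-target alignment <K, yy^T>_F / (||K||_F ||yy^T||_F), y in {-1,+1}."""
        Y = np.outer(y, y)
        return np.sum(K * Y) / (np.linalg.norm(K) * np.linalg.norm(Y))

    def best_kernel_per_pair(X, labels, kernels):
        """One-vs-one: for every class pair keep the basic kernel with top alignment."""
        classes = np.unique(labels)
        choice = {}
        for i, a in enumerate(classes):
            for b in classes[i + 1:]:
                mask = np.isin(labels, [a, b])
                Xp, yp = X[mask], np.where(labels[mask] == a, 1.0, -1.0)
                scores = {name: alignment(k(Xp, Xp), yp)
                          for name, k in kernels.items()}
                choice[(a, b)] = max(scores, key=scores.get)
        return choice

    kernels = {"linear": linear_kernel, "poly": polynomial_kernel, "rbf": rbf_kernel}
    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 4))                    # stand-in for MSI/LiDAR features
    labels = np.repeat(np.array(["water", "grass", "roof"]), 20)
    print(best_kernel_per_pair(X, labels, kernels))
    ```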

  12. Kernel Method Based Human Model for Enhancing Interactive Evolutionary Optimization

    Science.gov (United States)

    Zhao, Qiangfu; Liu, Yong

    2015-01-01

    A fitness landscape describes the relationship between individuals and their reproductive success in evolutionary computation (EC). However, a discrete and approximate landscape in the original search space may not supply sufficient and accurate information for EC search, especially in interactive EC (IEC). The fitness landscape of human subjective evaluation in IEC is very difficult, if not impossible, to model, even under a hypothesis of what its definition might be. In this paper, we propose a method to establish a human model in a projected high-dimensional search space by kernel classification in order to enhance IEC search. Because bivalent logic is the simplest perceptual paradigm, the human model is established according to this principle. In the feature space, we design a linear classifier as a human model to capture user preference knowledge that cannot be represented linearly in the original discrete search space. The human model established in this way predicts potential perceptual knowledge of the human user. With the human model, we design an evolution control method to enhance IEC search. Experimental evaluations with a pseudo-IEC user show that our proposed model and method can enhance IEC search significantly. PMID:25879050

  13. Viscosity kernel of molecular fluids

    DEFF Research Database (Denmark)

    Puscasu, Ruslan; Todd, Billy; Daivis, Peter

    2010-01-01

    The density, temperature, and chain length dependencies of the reciprocal and real-space viscosity kernels are presented. We find that the density has a major effect on the shape of the kernel; the temperature range and chain lengths considered here have, by contrast, less impact on the overall normalized shape. Functional forms that fit the wave-vector-dependent kernel data over a large density and wave-vector range have also been tested. Finally, a structural normalization of the kernels in physical space is considered. Overall, the real-space viscosity kernel has a width of roughly 3–6 atomic diameters, which means…

  14. Slab albedo for linearly and quadratically anisotropic scattering kernel with modified F_N method

    Energy Technology Data Exchange (ETDEWEB)

    Tuereci, R. Goekhan [Kirikkale Univ. (Turkey). Kirikkale Vocational School; Tuereci, D. [Ministry of Education, Ankara (Turkey). 75th year Anatolia High School

    2017-11-15

    The one-speed, time-independent neutron transport equation for a homogeneous medium is solved with anisotropic scattering that includes both the linearly and the quadratically anisotropic scattering kernels. Having written Case's eigenfunctions and the orthogonality relations among these eigenfunctions, the slab albedo problem is investigated numerically by using the modified F_N method. Selected numerical results are presented in tables.

  15. Spatial Modeling Of Infant Mortality Rate In South Central Timor Regency Using GWLR Method With Adaptive Bisquare Kernel And Gaussian Kernel

    Directory of Open Access Journals (Sweden)

    Teguh Prawono Sabat

    2017-08-01

    Full Text Available Geographically Weighted Logistic Regression (GWLR) is a regression model that takes spatial factors into account and can be used to analyse the infant mortality rate (IMR). There were 100 cases of infant mortality in South Central Timor Regency in 2015, or 12 per 1000 live births. The aim of this study was to determine the best GWLR model with a fixed weighting function and an adaptive Gaussian kernel weighting function for the cases of infant mortality in South Central Timor Regency in 2015. The response variable (Y) was a case of infant mortality, while the predictor variables were the percentage of first neonatal visits (KN1) (X1), the percentage of three neonatal visits (complete KN) (X2), the percentage of pregnant women receiving Fe tablets (X3), and the percentage of poor pre-prosperous families (X4). This was a non-reactive study, a measurement in which the surveyed individuals do not realize that they are part of a study, with the 32 sub-districts of South Central Timor Regency as the units of analysis. Data were analysed with open-source software: Excel, R, Quantum GIS and GWR4. The best GWLR spatial model used the adaptive Gaussian kernel weighting function; its global model parameters were obtained as g(x) = 0.941086 − 0.892506X4. The GWLR local models with the adaptive bisquare kernel weighting function in 13 sub-districts were obtained as g(x) = 0 − 0X4. The factor affecting the cases of infant mortality in 13 sub-districts of South Central Timor Regency in 2015 was the percentage of poor pre-prosperous families.

  16. kernel oil by lipolytic organisms

    African Journals Online (AJOL)

    USER

    2010-08-02

    Aug 2, 2010 … Rancidity of extracted cashew oil was observed with cashew kernel stored at 70, 80 and 90% … method of the American Oil Chemists' Society (AOCS, 1978) using glacial … changes occur and volatile products are formed that are …

  17. System identification via sparse multiple kernel-based regularization using sequential convex optimization techniques

    DEFF Research Database (Denmark)

    Chen, Tianshi; Andersen, Martin Skovgaard; Ljung, Lennart

    2014-01-01

    Model estimation and structure detection with short data records are two issues that receive increasing interest in system identification. In this paper, a multiple kernel-based regularization method is proposed to handle these issues. Multiple kernels are conic combinations of fixed kernels…

  18. A new method by steering kernel-based Richardson–Lucy algorithm for neutron imaging restoration

    International Nuclear Information System (INIS)

    Qiao, Shuang; Wang, Qiao; Sun, Jia-ning; Huang, Ji-peng

    2014-01-01

    Motivated by industrial applications, neutron radiography has become a powerful tool for non-destructive investigation. However, as a combined effect of neutron flux, beam collimation, the limited spatial resolution of the detector, scattering, etc., images made with neutrons are severely degraded by blur and noise. To deal with this, we present a novel restoration method that integrates steering kernel regression into the Richardson–Lucy approach and is capable of suppressing noise while efficiently restoring details of the blurred imaging result. Experimental results show that, compared with other methods, the proposed method improves restoration quality both visually and quantitatively.
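
    For reference, a minimal plain Richardson–Lucy iteration; the paper's steering-kernel regularisation step is omitted in this sketch, and the PSF is assumed non-negative and normalised.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    def richardson_lucy(image, psf, n_iter=30, eps=1e-12):
        """Plain Richardson-Lucy deconvolution for a 2-D float image.

        `psf` is assumed non-negative and normalised to sum to 1; the
        steering-kernel regularisation of the paper is not included here.
        """
        estimate = np.full_like(image, image.mean())
        psf_mirror = psf[::-1, ::-1]
        for _ in range(n_iter):
            blurred = fftconvolve(estimate, psf, mode="same")
            ratio = image / (blurred + eps)          # data / current model
            estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
        return estimate
    ```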

  19. Measurement System Analyses - Gauge Repeatability and Reproducibility Methods

    Science.gov (United States)

    Cepova, Lenka; Kovacikova, Andrea; Cep, Robert; Klaput, Pavel; Mizera, Ondrej

    2018-02-01

    The submitted article focuses on a detailed explanation of the average and range method (the Automotive Industry Action Group (AIAG) Measurement System Analysis approach) and of the honest Gauge Repeatability and Reproducibility method (the Evaluating the Measurement Process approach). The measured data (thickness of plastic parts) were evaluated by both methods, and their results were compared on the basis of numerical evaluation. The two methods were also compared directly, and their advantages and disadvantages were discussed. One difference between the methods is the calculation of variation components: the AIAG method calculates the variation components based on standard deviation (so the sum of the variation components does not give 100%), whereas the honest GRR study calculates the variation components based on variance, where the sum of all variation components (part-to-part variation, EV and AV) gives the total variation of 100%. Acceptance of both methods by the professional community, their future use, and their acceptance by the manufacturing industry are also discussed. Nowadays, the AIAG method is the leading method in industry.

  20. Structured Kernel Subspace Learning for Autonomous Robot Navigation.

    Science.gov (United States)

    Kim, Eunwoo; Choi, Sungjoon; Oh, Songhwai

    2018-02-14

    This paper considers two important problems for autonomous robot navigation in a dynamic environment, where the goal is to predict pedestrian motion and to control a robot with the prediction for safe navigation. While there are several methods for predicting the motion of a pedestrian and controlling a robot to avoid incoming pedestrians, it is still difficult to navigate safely in a dynamic environment due to challenges such as the varying quality and complexity of training data with unwanted noise. This paper addresses these challenges simultaneously by proposing a robust kernel subspace learning algorithm based on recent advances in nuclear-norm and ℓ1-norm minimization. We model the motion of a pedestrian and the robot controller using Gaussian processes. The proposed method efficiently approximates a kernel matrix used in Gaussian process regression by learning a low-rank structured matrix (with symmetric positive semi-definiteness) to find an orthogonal basis, which eliminates the effects of erroneous and inconsistent data. Based on structured kernel subspace learning, we propose a robust motion model and motion controller for safe navigation in dynamic environments. We evaluate the proposed robust kernel learning in various tasks, including regression, motion prediction, and motion control problems, and demonstrate that the proposed learning-based systems are robust against outliers and outperform existing regression and navigation methods.

  1. Learning a peptide-protein binding affinity predictor with kernel ridge regression

    Science.gov (United States)

    2013-01-01

    Background: The cellular function of a vast majority of proteins is performed through physical interactions with other biomolecules, which, most of the time, are other proteins. Peptides represent templates of choice for mimicking a secondary structure in order to modulate protein-protein interactions. They are thus an interesting class of therapeutics since they also display strong activity, high selectivity, low toxicity and few drug-drug interactions. Furthermore, predicting peptides that would bind to specific MHC alleles would be of tremendous benefit for improving vaccine-based therapy and possibly for generating antibodies with greater affinity. Modern computational methods have the potential to accelerate and lower the cost of drug and vaccine discovery by selecting potential compounds for testing in silico prior to biological validation. Results: We propose a specialized string kernel for small biomolecules, peptides and pseudo-sequences of binding interfaces. The kernel incorporates physico-chemical properties of amino acids and elegantly generalizes eight kernels, including the Oligo, the Weighted Degree, the Blended Spectrum, and the Radial Basis Function. We provide a low-complexity dynamic programming algorithm for the exact computation of the kernel and a linear-time algorithm for its approximation. Combined with kernel ridge regression and SupCK, a novel binding pocket kernel, the proposed kernel yields biologically relevant and good prediction accuracy on the PepX database. For the first time, a machine learning predictor is capable of predicting the binding affinity of any peptide to any protein with reasonable accuracy. The method was also applied to both single-target and pan-specific Major Histocompatibility Complex class II benchmark datasets and three Quantitative Structure Affinity Model benchmark datasets. Conclusion: On all benchmarks, our method significantly (p-value ≤ 0.057) outperforms the current state-of-the-art methods at predicting…
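
    As a toy illustration of the general recipe (string kernel plus kernel ridge regression), here is a plain k-spectrum kernel fed to scikit-learn's KernelRidge. The paper's generalized kernel, physico-chemical encoding and binding-pocket kernel are not reproduced, and the peptides and affinities below are invented.

    ```python
    import numpy as np
    from itertools import product
    from sklearn.kernel_ridge import KernelRidge

    AA = "ACDEFGHIKLMNPQRSTVWY"

    def spectrum_features(seqs, k=2):
        """Counts of all k-mers over the amino-acid alphabet."""
        kmers = ["".join(p) for p in product(AA, repeat=k)]
        index = {km: i for i, km in enumerate(kmers)}
        X = np.zeros((len(seqs), len(kmers)))
        for r, s in enumerate(seqs):
            for i in range(len(s) - k + 1):
                X[r, index[s[i:i + k]]] += 1
        return X

    peptides = ["ACDK", "ACDR", "WYYK", "WYYR"]    # toy peptides
    affinity = np.array([1.2, 1.1, 0.3, 0.2])      # toy binding affinities

    X = spectrum_features(peptides, k=2)
    K = X @ X.T                                     # spectrum kernel Gram matrix
    model = KernelRidge(alpha=1.0, kernel="precomputed").fit(K, affinity)
    Ktest = spectrum_features(["ACDN"], k=2) @ X.T
    print(model.predict(Ktest))
    ```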

  2. On the logarithmic-singularity correction in the kernel function method of subsonic lifting-surface theory

    Science.gov (United States)

    Lan, C. E.; Lamar, J. E.

    1977-01-01

    A logarithmic-singularity correction factor is derived for use in kernel function methods associated with Multhopp's subsonic lifting-surface theory. Because of the form of the factor, a relation was formulated between the numbers of chordwise and spanwise control points needed for good accuracy. This formulation is developed and discussed. Numerical results are given to show the improvement of the computation with the new correction factor.

  3. 7 CFR 981.9 - Kernel weight.

    Science.gov (United States)

    2010-01-01

    § 981.9 Kernel weight. (7 CFR, Agriculture Regulations of the Department of Agriculture (Continued), Agricultural Marketing Service, Marketing Agreements, Regulating Handling, Definitions.) Kernel weight means the weight of kernels, including…

  4. Characterisation and final disposal behaviour of thoria-based fuel kernels in aqueous phases

    International Nuclear Information System (INIS)

    Titov, M.

    2005-08-01

    Two high-temperature reactors (AVR and THTR) operated in Germany have produced about 1 million spent fuel elements. The nuclear fuel in these reactors consists mainly of thorium-uranium mixed oxides, but pure uranium dioxide and carbide fuels were also tested. One of the possible solutions for utilising spent HTR fuel is direct disposal in deep geological formations. Under such circumstances, the properties of the fuel kernels, and especially their leaching behaviour in aqueous phases, have to be investigated for safety assessments of the final repository. In the present work, unirradiated ThO2, (Th0.906,U0.094)O2, (Th0.834,U0.166)O2 and UO2 fuel kernels were investigated. The composition, crystal structure and surface of the kernels were examined by traditional methods. Furthermore, a new method was developed for testing the mechanical properties of ceramic kernels. The method was successfully used for the examination of the mechanical properties of oxide kernels and for monitoring their evolution during contact with aqueous phases. The leaching behaviour of thoria-based oxide kernels and powders was investigated in repository-relevant salt solutions, as well as in artificial leachates. The influence of different experimental parameters on the leaching stability of the kernels was investigated. It was shown that thoria-based fuel kernels possess high chemical stability and are insensitive to the presence of oxidative and radiolytic species in solution. The dissolution rate of thoria-based materials is typically several orders of magnitude lower than that of conventional UO2 fuel kernels. The lifetime of a single intact (Th,U)O2 kernel under the aggressive conditions of a salt repository was estimated at about one hundred thousand years. The importance of grain boundary quality for leaching stability was demonstrated. Numerical Monte Carlo simulations were performed in order to explain the results of the leaching experiments. (orig.)

  5. Veto-Consensus Multiple Kernel Learning

    NARCIS (Netherlands)

    Zhou, Y.; Hu, N.; Spanos, C.J.

    2016-01-01

    We propose Veto-Consensus Multiple Kernel Learning (VCMKL), a novel way of combining multiple kernels such that one class of samples is described by the logical intersection (consensus) of base kernelized decision rules, whereas the other classes by the union (veto) of their complements. The

  6. 7 CFR 51.2295 - Half kernel.

    Science.gov (United States)

    2010-01-01

    § 51.2295 Half kernel. (7 CFR, United States Standards for Shelled English Walnuts (Juglans Regia), Definitions.) Half kernel means the separated half of a kernel with not more than one-eighth broken off.

  7. Kernel methods for interpretable machine learning of order parameters

    Science.gov (United States)

    Ponte, Pedro; Melko, Roger G.

    2017-11-01

    Machine learning is capable of discriminating phases of matter, and finding associated phase transitions, directly from large data sets of raw state configurations. In the context of condensed matter physics, most progress in the field of supervised learning has come from employing neural networks as classifiers. Although very powerful, such algorithms suffer from a lack of interpretability, which is usually desired in scientific applications in order to associate learned features with physical phenomena. In this paper, we explore support vector machines (SVMs), which are a class of supervised kernel methods that provide interpretable decision functions. We find that SVMs can learn the mathematical form of physical discriminators, such as order parameters and Hamiltonian constraints, for a set of two-dimensional spin models: the ferromagnetic Ising model, a conserved-order-parameter Ising model, and the Ising gauge theory. The ability of SVMs to provide interpretable classification highlights their potential for automating feature detection in both synthetic and experimental data sets for condensed matter and other many-body systems.

  8. Stochastic subset selection for learning with kernel machines.

    Science.gov (United States)

    Rhinelander, Jason; Liu, Xiaoping P

    2012-06-01

    Kernel machines have gained much popularity in applications of machine learning. Support vector machines (SVMs) are a subset of kernel machines and generalize well for classification, regression, and anomaly detection tasks. The training procedure for traditional SVMs involves solving a quadratic programming (QP) problem. The QP problem scales superlinearly in computational effort with the number of training samples and is often used for the offline batch processing of data. Kernel machines operate by retaining a subset of observed data during training. The data vectors contained within this subset are referred to as support vectors (SVs). The work presented in this paper introduces a subset selection method for the use of kernel machines in online, changing environments. Our algorithm works by using a stochastic indexing technique when selecting a subset of SVs when computing the kernel expansion. The work described here is novel because it separates the selection of kernel basis functions from the training algorithm used. The subset selection algorithm presented here can be used in conjunction with any online training technique. It is important for online kernel machines to be computationally efficient due to the real-time requirements of online environments. Our algorithm is an important contribution because it scales linearly with the number of training samples and is compatible with current training techniques. Our algorithm outperforms standard techniques in terms of computational efficiency and provides increased recognition accuracy in our experiments. We provide results from experiments using both simulated and real-world data sets to verify our algorithm.

  9. Soft Sensing of Key State Variables in Fermentation Process Based on Relevance Vector Machine with Hybrid Kernel Function

    Directory of Open Access Journals (Sweden)

    Xianglin ZHU

    2014-06-01

    Full Text Available To resolve the difficulty of online detection of some important state variables in fermentation processes with traditional instruments, a soft sensing modeling method based on the relevance vector machine (RVM) with a hybrid kernel function is presented. Based on a characteristic analysis of two commonly-used kernel functions, the local Gaussian kernel function and the global polynomial kernel function, a hybrid kernel function combining the merits of both is constructed. To design the optimal parameters of this kernel function, the particle swarm optimization (PSO) algorithm is applied. The proposed modeling method is used to predict the cell concentration in the lysine fermentation process. Simulation results show that the presented hybrid-kernel RVM model has better accuracy and performance than the single-kernel RVM model.
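
    A hedged sketch of the hybrid-kernel idea: a convex combination of a local Gaussian kernel and a global polynomial kernel remains a valid kernel, since the class of kernels is closed under non-negative sums. scikit-learn ships no RVM, so SVR stands in for the sparse kernel machine here, and the mixing weight w (tuned by PSO in the paper) is fixed by hand.

    ```python
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

    def hybrid_kernel(X, Y, w=0.7, gamma=0.5, degree=2):
        """Convex combination of a local RBF kernel and a global polynomial kernel."""
        return w * rbf_kernel(X, Y, gamma=gamma) + \
               (1 - w) * polynomial_kernel(X, Y, degree=degree)

    rng = np.random.default_rng(1)
    X = rng.uniform(0, 10, (200, 1))
    y = np.sin(X[:, 0]) + 0.05 * X[:, 0] ** 2 + rng.normal(0, 0.1, 200)

    # SVR accepts a callable kernel that returns the Gram matrix
    model = SVR(kernel=hybrid_kernel, C=10.0).fit(X, y)
    print(model.predict(X[:5]))
    ```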

  10. An Approximate Approach to Automatic Kernel Selection.

    Science.gov (United States)

    Ding, Lizhong; Liao, Shizhong

    2016-02-02

    Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound that measures the effect of approximating kernel matrices by multilevel circulant matrices on the hypothesis, and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.

  11. DNA sequence+shape kernel enables alignment-free modeling of transcription factor binding.

    Science.gov (United States)

    Ma, Wenxiu; Yang, Lin; Rohs, Remo; Noble, William Stafford

    2017-10-01

    Transcription factors (TFs) bind to specific DNA sequence motifs. Several lines of evidence suggest that TF-DNA binding is mediated in part by properties of the local DNA shape: the width of the minor groove, the relative orientations of adjacent base pairs, etc. Several methods have been developed to jointly account for DNA sequence and shape properties in predicting TF binding affinity. However, a limitation of these methods is that they typically require a training set of aligned TF binding sites. We describe a sequence + shape kernel that leverages DNA sequence and shape information to better understand protein-DNA binding preference and affinity. This kernel extends an existing class of k-mer based sequence kernels, based on the recently described di-mismatch kernel. Using three in vitro benchmark datasets, derived from universal protein binding microarrays (uPBMs), genomic context PBMs (gcPBMs) and SELEX-seq data, we demonstrate that incorporating DNA shape information improves our ability to predict protein-DNA binding affinity. In particular, we observe that (i) the k-spectrum + shape model performs better than the classical k-spectrum kernel, particularly for small k values; (ii) the di-mismatch kernel performs better than the k-mer kernel, for larger k; and (iii) the di-mismatch + shape kernel performs better than the di-mismatch kernel for intermediate k values. The software is available at https://bitbucket.org/wenxiu/sequence-shape.git. rohs@usc.edu or william-noble@uw.edu. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.

  12. Iterative software kernels

    Energy Technology Data Exchange (ETDEWEB)

    Duff, I.

    1994-12-31

    This workshop focuses on kernels for iterative software packages. Specifically, the three speakers discuss various aspects of sparse BLAS kernels. Their topics are: 'Current status of user-level sparse BLAS'; 'Current status of the sparse BLAS toolkit'; and 'Adding matrix-matrix and matrix-matrix-matrix multiply to the sparse BLAS toolkit'.

  13. Viscozyme L pretreatment on palm kernels improved the aroma of palm kernel oil after kernel roasting.

    Science.gov (United States)

    Zhang, Wencan; Leong, Siew Mun; Zhao, Feifei; Zhao, Fangju; Yang, Tiankui; Liu, Shaoquan

    2018-05-01

    With the aim of enhancing the aroma of palm kernel oil (PKO), Viscozyme L, an enzyme complex containing a wide range of carbohydrases, was applied to alter the carbohydrates in palm kernels (PK) and thereby modulate the formation of volatiles upon kernel roasting. After Viscozyme treatment, the content of simple sugars and free amino acids in PK increased by 4.4-fold and 4.5-fold, respectively. After kernel roasting and oil extraction, significantly more 2,5-dimethylfuran, 2-[(methylthio)methyl]-furan, 1-(2-furanyl)-ethanone, 1-(2-furyl)-2-propanone, 5-methyl-2-furancarboxaldehyde and 2-acetyl-5-methylfuran but less 2-furanmethanol and 2-furanmethanol acetate were found in treated PKO; the correlation between their formation and the simple sugar profile was estimated by using partial least squares regression (PLS1). Obvious differences in pyrroles and Strecker aldehydes were also found between the control and treated PKOs. Principal component analysis (PCA) clearly discriminated the treated PKOs from the control PKOs on the basis of all volatile compounds. Such changes in volatiles translated into distinct sensory attributes, whereby treated PKO was more caramelic and burnt after aqueous extraction and more nutty, roasty, caramelic and smoky after solvent extraction. Copyright © 2018 Elsevier Ltd. All rights reserved.

  14. Pramana – Journal of Physics | News

    Indian Academy of Sciences (India)

    In this paper, the combination of homotopy deform method (HDM) and simplified reproducing kernel method (SRKM) is introduced for solving the boundary value problems (BVPs) of nonlinear differential equations. The solution methodology is based on Adomian decomposition and reproducing kernel method (RKM).

  15. 7 CFR 51.1441 - Half-kernel.

    Science.gov (United States)

    2010-01-01

    § 51.1441 Half-kernel. (7 CFR, United States Standards for Grades of Shelled Pecans, Definitions.) Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume missing…

  16. MULTITASKER, Multitasking Kernel for C and FORTRAN Under UNIX

    International Nuclear Information System (INIS)

    Brooks, E.D. III

    1988-01-01

    1 - Description of program or function: MULTITASKER implements a multitasking kernel for the C and FORTRAN programming languages that runs under UNIX. The kernel provides a multitasking environment which serves two purposes. The first is to provide an efficient portable environment for the development, debugging, and execution of production multiprocessor programs. The second is to provide a means of evaluating the performance of a multitasking program on model multiprocessor hardware. The performance evaluation features require no changes in the application program source and are implemented as a set of compile- and run-time options in the kernel. 2 - Method of solution: The FORTRAN interface to the kernel is identical in function to the CRI multitasking package provided for the Cray XMP. This provides a migration path to high speed (but small N) multiprocessors once the application has been coded and debugged. With use of the UNIX m4 macro preprocessor, source compatibility can be achieved between the UNIX code development system and the target Cray multiprocessor. The kernel also provides a means of evaluating a program's performance on model multiprocessors. Execution traces may be obtained which allow the user to determine kernel overhead, memory conflicts between various tasks, and the average concurrency being exploited. The kernel may also be made to switch tasks every cpu instruction with a random execution ordering. This allows the user to look for unprotected critical regions in the program. These features, implemented as a set of compile- and run-time options, cause extra execution overhead which is not present in the standard production version of the kernel

  17. A particle swarm optimized kernel-based clustering method for crop mapping from multi-temporal polarimetric L-band SAR observations

    Science.gov (United States)

    Tamiminia, Haifa; Homayouni, Saeid; McNairn, Heather; Safari, Abdoreza

    2017-06-01

    Polarimetric Synthetic Aperture Radar (PolSAR) data, thanks to their specific characteristics such as high resolution and weather and daylight independence, have become a valuable source of information for environmental monitoring and management. The discrimination capability of observations acquired by these sensors can be used for land cover classification and mapping. The aim of this paper is to propose an optimized kernel-based C-means clustering algorithm for agricultural crop mapping from multi-temporal PolSAR data. First, several polarimetric features are extracted from the preprocessed data. These features are the linear polarization intensities and several statistical and physical decompositions such as the Cloude-Pottier, Freeman-Durden and Yamaguchi techniques. Then, kernelized versions of the hard and fuzzy C-means clustering algorithms are applied to these polarimetric features in order to identify crop types. The kernel function, unlike conventional partitioning clustering algorithms, handles non-spherical and non-linearly separable patterns in the data structure, so that they can be clustered easily. In addition, in order to enhance the results, the Particle Swarm Optimization (PSO) algorithm is used to tune the kernel parameters and cluster centers and to optimize feature selection. The efficiency of this method was evaluated by using multi-temporal UAVSAR L-band images acquired over an agricultural area near Winnipeg, Manitoba, Canada, during June and July 2012. The results demonstrate more accurate crop maps using the proposed method when compared to classical approaches (e.g., a 12% improvement in general). In addition, when the optimization technique is used, greater improvement is observed in crop classification (e.g., 5% overall). Furthermore, a strong relationship is observed between the Freeman-Durden volume scattering component, which is related to canopy structure, and phenological growth stages.
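
    For intuition, a minimal hard kernel C-means in which feature-space distances are expanded into Gram-matrix terms, so no explicit mapping is needed. The PSO tuning of the kernel parameter and the PolSAR feature extraction are not included, and all parameter values are illustrative.

    ```python
    import numpy as np
    from sklearn.metrics.pairwise import rbf_kernel

    def kernel_kmeans(X, n_clusters=3, gamma=0.1, n_iter=50, seed=0):
        """Hard kernel C-means via the Gram-matrix distance expansion."""
        K = rbf_kernel(X, gamma=gamma)
        n = len(X)
        labels = np.random.default_rng(seed).integers(0, n_clusters, n)
        for _ in range(n_iter):
            dist = np.zeros((n, n_clusters))
            for c in range(n_clusters):
                idx = np.where(labels == c)[0]
                if len(idx) == 0:
                    dist[:, c] = np.inf
                    continue
                # ||phi(x)-mu_c||^2 = K(x,x) - 2 mean_j K(x,j) + mean_jl K(j,l)
                dist[:, c] = (np.diag(K)
                              - 2 * K[:, idx].mean(axis=1)
                              + K[np.ix_(idx, idx)].mean())
            new_labels = dist.argmin(axis=1)
            if np.array_equal(new_labels, labels):
                break
            labels = new_labels
        return labels

    # toy data: three well-separated blobs in 2-D
    rng = np.random.default_rng(1)
    X = np.vstack([rng.normal(c, 0.3, (50, 2)) for c in (0, 3, 6)])
    print(np.bincount(kernel_kmeans(X)))
    ```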

  18. Graphical analyses of connected-kernel scattering equations

    International Nuclear Information System (INIS)

    Picklesimer, A.

    1983-01-01

    Simple graphical techniques are employed to obtain a new (simultaneous) derivation of a large class of connected-kernel scattering equations. This class includes the Rosenberg, Bencze-Redish-Sloan, and connected-kernel multiple scattering equations as well as a host of generalizations of these and other equations. The basic result is the application of graphical methods to the derivation of interaction-set equations. This yields a new, simplified form for some members of the class and elucidates the general structural features of the entire class

  19. Kernel principal component analysis residual diagnosis (KPCARD): An automated method for cosmic ray artifact removal in Raman spectra

    International Nuclear Information System (INIS)

    Li, Boyan; Calvet, Amandine; Casamayou-Boucau, Yannick; Ryder, Alan G.

    2016-01-01

    A new, fully automated, rapid method, referred to as kernel principal component analysis residual diagnosis (KPCARD), is proposed for removing cosmic ray artifacts (CRAs) in Raman spectra, and in particular in large Raman imaging datasets. KPCARD identifies CRAs via a statistical analysis of the residuals obtained at each wavenumber in the spectra. The method exploits the stochastic nature of CRAs: the most significant components in a principal component analysis (PCA) of large numbers of Raman spectra should not contain any CRAs. The process works by first applying kernel PCA (kPCA) to all the Raman mapping data and then accurately estimating the inter- and intra-spectrum noise to generate two threshold values. CRA identification is then achieved by using the threshold values to evaluate the residuals for each spectrum and assess whether a CRA is present. CRA correction is achieved by spectral replacement: the nearest-neighbor (NN) spectrum, the one most spectroscopically similar to the CRA-contaminated spectrum, and the principal components (PCs) obtained by kPCA are both used to generate a robust best-fit curve to the CRA-contaminated spectrum. This best-fit spectrum then replaces the CRA-contaminated spectrum in the dataset. KPCARD efficacy was demonstrated by using simulated data and real Raman spectra collected from solid-state materials. The results showed that KPCARD is fast and suitable for very large (~1 million spectra) Raman datasets. - Highlights: • New rapid, automatable method for cosmic ray artifact correction of Raman spectra. • Uses combination of kernel PCA and noise estimation for artifact identification. • Implements a best fit spectrum replacement correction approach.
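
    A much-simplified sketch of the residual-diagnosis idea: linear PCA stands in for the paper's kernel PCA, and a single global threshold replaces its inter-/intra-spectrum noise estimates. The function name, data and threshold are invented.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA

    def find_spikes(spectra, n_components=5, z=6.0):
        """Flag cosmic-ray-like spikes as large positive residuals from a
        low-rank PCA model of a (n_spectra, n_wavenumbers) array."""
        pca = PCA(n_components=n_components).fit(spectra)
        recon = pca.inverse_transform(pca.transform(spectra))
        resid = spectra - recon
        thresh = resid.mean() + z * resid.std()
        return np.argwhere(resid > thresh)     # (spectrum, wavenumber) pairs

    rng = np.random.default_rng(0)
    spectra = rng.normal(0, 0.01, (200, 500)) + np.sin(np.linspace(0, 6, 500))
    spectra[17, 240] += 5.0                    # synthetic cosmic ray spike
    print(find_spikes(spectra))
    ```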

  20. Multi-class Mode of Action Classification of Toxic Compounds Using Logic Based Kernel Methods.

    Science.gov (United States)

    Lodhi, Huma; Muggleton, Stephen; Sternberg, Mike J E

    2010-09-17

    Toxicity prediction is essential for drug design and development of effective therapeutics. In this paper we present an in silico strategy, to identify the mode of action of toxic compounds, that is based on the use of a novel logic based kernel method. The technique uses support vector machines in conjunction with the kernels constructed from first order rules induced by an Inductive Logic Programming system. It constructs multi-class models by using a divide and conquer reduction strategy that splits multi-classes into binary groups and solves each individual problem recursively hence generating an underlying decision list structure. In order to evaluate the effectiveness of the approach for chemoinformatics problems like predictive toxicology, we apply it to toxicity classification in aquatic systems. The method is used to identify and classify 442 compounds with respect to the mode of action. The experimental results show that the technique successfully classifies toxic compounds and can be useful in assessing environmental risks. Experimental comparison of the performance of the proposed multi-class scheme with the standard multi-class Inductive Logic Programming algorithm and multi-class Support Vector Machine yields statistically significant results and demonstrates the potential power and benefits of the approach in identifying compounds of various toxic mechanisms. Copyright © 2010 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  1. Optimizing Multiple Kernel Learning for the Classification of UAV Data

    Directory of Open Access Journals (Sweden)

    Caroline M. Gevaert

    2016-12-01

    Full Text Available Unmanned Aerial Vehicles (UAVs are capable of providing high-quality orthoimagery and 3D information in the form of point clouds at a relatively low cost. Their increasing popularity stresses the necessity of understanding which algorithms are especially suited for processing the data obtained from UAVs. The features that are extracted from the point cloud and imagery have different statistical characteristics and can be considered as heterogeneous, which motivates the use of Multiple Kernel Learning (MKL for classification problems. In this paper, we illustrate the utility of applying MKL for the classification of heterogeneous features obtained from UAV data through a case study of an informal settlement in Kigali, Rwanda. Results indicate that MKL can achieve a classification accuracy of 90.6%, a 5.2% increase over a standard single-kernel Support Vector Machine (SVM. A comparison of seven MKL methods indicates that linearly-weighted kernel combinations based on simple heuristics are competitive with respect to computationally-complex, non-linear kernel combination methods. We further underline the importance of utilizing appropriate feature grouping strategies for MKL, which has not been directly addressed in the literature, and we propose a novel, automated feature grouping method that achieves a high classification accuracy for various MKL methods.

  2. Optimal Bandwidth Selection for Kernel Density Functionals Estimation

    Directory of Open Access Journals (Sweden)

    Su Chen

    2015-01-01

    Full Text Available The choice of bandwidth is crucial to kernel density estimation (KDE) and kernel-based regression. Various bandwidth selection methods for KDE and local least squares regression have been developed in the past decade. It is known that scale and location parameters are proportional to density functionals ∫γ(x)f²(x)dx with an appropriate choice of γ(x), and furthermore equality of scale and location tests can be transformed into comparisons of the density functionals among populations. ∫γ(x)f²(x)dx can be estimated nonparametrically via kernel density functionals estimation (KDFE). However, the optimal bandwidth selection for the KDFE of ∫γ(x)f²(x)dx has not been examined. We propose a method to select the optimal bandwidth for the KDFE. The idea underlying this method is to search for the optimal bandwidth by minimizing the mean square error (MSE) of the KDFE. Two main practical bandwidth selection techniques for the KDFE of ∫γ(x)f²(x)dx are provided: normal-scale bandwidth selection (namely, the "rule of thumb") and direct plug-in bandwidth selection. Simulation studies show that our proposed bandwidth selection methods are superior to existing density estimation bandwidth selection methods in estimating density functionals.
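
    To illustrate the two flavours of bandwidth selection for ordinary KDE (the paper targets density functionals, which this sketch does not reproduce), one can compare the normal-scale rule with a cross-validated grid search; the data and grid are invented.

    ```python
    import numpy as np
    from sklearn.model_selection import GridSearchCV
    from sklearn.neighbors import KernelDensity

    rng = np.random.default_rng(0)
    x = np.r_[rng.normal(-2, 0.5, 300), rng.normal(1, 1.0, 700)][:, None]

    # normal-scale ("rule of thumb") bandwidth for a Gaussian kernel
    h_rot = 1.06 * x.std() * len(x) ** (-1 / 5)

    # data-driven alternative: cross-validated log-likelihood over a grid
    grid = GridSearchCV(KernelDensity(kernel="gaussian"),
                        {"bandwidth": np.linspace(0.05, 1.0, 20)}, cv=5).fit(x)
    print(h_rot, grid.best_params_["bandwidth"])
    ```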

  3. The Use of Nonparametric Kernel Regression Methods in Econometric Production Analysis

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard

    This PhD thesis addresses one of the fundamental problems in applied econometric analysis, namely the econometric estimation of regression functions. The conventional approach to regression analysis is the parametric approach, which requires the researcher to specify the form of the regression… …and nonparametric estimations of production functions in order to evaluate the optimal firm size. The second paper discusses the use of parametric and nonparametric regression methods to estimate panel data regression models. The third paper analyses production risk, price uncertainty, and farmers' risk preferences within a nonparametric panel data regression framework. The fourth paper analyses the technical efficiency of dairy farms with environmental output using nonparametric kernel regression in a semiparametric stochastic frontier analysis. The results provided in this PhD thesis show that nonparametric…

  4. An extensive analysis of disease-gene associations using network integration and fast kernel-based gene prioritization methods.

    Science.gov (United States)

    Valentini, Giorgio; Paccanaro, Alberto; Caniza, Horacio; Romero, Alfonso E; Re, Matteo

    2014-06-01

    In the context of "network medicine", gene prioritization methods represent one of the main tools to discover candidate disease genes by exploiting the large amount of data covering different types of functional relationships between genes. Several works proposed to integrate multiple sources of data to improve disease gene prioritization, but to our knowledge no systematic studies focused on the quantitative evaluation of the impact of network integration on gene prioritization. In this paper, we aim at providing an extensive analysis of gene-disease associations not limited to genetic disorders, and a systematic comparison of different network integration methods for gene prioritization. We collected nine different functional networks representing different functional relationships between genes, and we combined them through both unweighted and weighted network integration methods. We then prioritized genes with respect to each of the considered 708 medical subject headings (MeSH) diseases by applying classical guilt-by-association, random walk and random walk with restart algorithms, and the recently proposed kernelized score functions. The results obtained with classical random walk algorithms and the best single network achieved an average area under the curve (AUC) across the 708 MeSH diseases of about 0.82, while kernelized score functions and network integration boosted the average AUC to about 0.89. Weighted integration, by exploiting the different "informativeness" embedded in different functional networks, outperforms unweighted integration at 0.01 significance level, according to the Wilcoxon signed rank sum test. For each MeSH disease we provide the top-ranked unannotated candidate genes, available for further bio-medical investigation. Network integration is necessary to boost the performances of gene prioritization methods. Moreover the methods based on kernelized score functions can further enhance disease gene ranking results, by adopting both

  5. Kernel-based whole-genome prediction of complex traits: a review.

    Science.gov (United States)

    Morota, Gota; Gianola, Daniel

    2014-01-01

    Prediction of genetic values has been a focus of applied quantitative genetics since the beginning of the 20th century, with renewed interest following the advent of the era of whole genome-enabled prediction. Opportunities offered by the emergence of high-dimensional genomic data fueled by post-Sanger sequencing technologies, especially molecular markers, have driven researchers to extend Ronald Fisher and Sewall Wright's models to confront new challenges. In particular, kernel methods are gaining consideration as a regression method of choice for genome-enabled prediction. Complex traits are presumably influenced by many genomic regions working in concert with others (clearly so when considering pathways), thus generating interactions. Motivated by this view, a growing number of statistical approaches based on kernels attempt to capture non-additive effects, either parametrically or non-parametrically. This review centers on whole-genome regression using kernel methods applied to a wide range of quantitative traits of agricultural importance in animals and plants. We discuss various kernel-based approaches tailored to capturing total genetic variation, with the aim of arriving at an enhanced predictive performance in the light of available genome annotation information. Connections between prediction machines born in animal breeding, statistics, and machine learning are revisited, and their empirical prediction performance is discussed. Overall, while some encouraging results have been obtained with non-parametric kernels, recovering non-additive genetic variation in a validation dataset remains a challenge in quantitative genetics.
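
    To make the kernel-regression recipe concrete, a minimal sketch of RKHS prediction from a Gaussian kernel over marker genotypes; the marker matrix, trait, bandwidth and regularization value are toy choices, not values from the review.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    M = rng.integers(0, 3, (100, 500)).astype(float)   # individuals x SNP genotypes
    y = M[:, :10].sum(axis=1) + rng.normal(0, 1, 100)  # toy additive trait

    # Gaussian kernel on marker profiles; bandwidth set from the mean distance
    D = ((M[:, None, :] - M[None, :, :]) ** 2).sum(-1)
    K = np.exp(-D / D.mean())

    # RKHS / kernel ridge prediction: solve (K + lam I) alpha = y
    alpha = np.linalg.solve(K + 1.0 * np.eye(len(y)), y)
    yhat = K @ alpha
    print(np.corrcoef(y, yhat)[0, 1])
    ```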

  6. Kernel-based whole-genome prediction of complex traits: a review

    Directory of Open Access Journals (Sweden)

    Gota eMorota

    2014-10-01

    Full Text Available Prediction of genetic values has been a focus of applied quantitative genetics since the beginning of the 20th century, with renewed interest following the advent of the era of whole genome-enabled prediction. Opportunities offered by the emergence of high-dimensional genomic data fueled by post-Sanger sequencing technologies, especially molecular markers, have driven researchers to extend Ronald Fisher and Sewall Wright's models to confront new challenges. In particular, kernel methods are gaining consideration as a regression method of choice for genome-enabled prediction. Complex traits are presumably influenced by many genomic regions working in concert with others (clearly so when considering pathways), thus generating interactions. Motivated by this view, a growing number of statistical approaches based on kernels attempt to capture non-additive effects, either parametrically or non-parametrically. This review centers on whole-genome regression using kernel methods applied to a wide range of quantitative traits of agricultural importance in animals and plants. We discuss various kernel-based approaches tailored to capturing total genetic variation, with the aim of arriving at an enhanced predictive performance in the light of available genome annotation information. Connections between prediction machines born in animal breeding, statistics, and machine learning are revisited, and their empirical prediction performance is discussed. Overall, while some encouraging results have been obtained with non-parametric kernels, recovering non-additive genetic variation in a validation dataset remains a challenge in quantitative genetics.

  7. Semi-supervised learning for ordinal Kernel Discriminant Analysis.

    Science.gov (United States)

    Pérez-Ortiz, M; Gutiérrez, P A; Carbonero-Ruz, M; Hervás-Martínez, C

    2016-12-01

    Ordinal classification considers those classification problems where the labels of the variable to predict follow a given order. Naturally, labelled data is scarce or difficult to obtain in this type of problems because, in many cases, ordinal labels are given by a user or expert (e.g. in recommendation systems). Firstly, this paper develops a new strategy for ordinal classification where both labelled and unlabelled data are used in the model construction step (a scheme which is referred to as semi-supervised learning). More specifically, the ordinal version of kernel discriminant learning is extended for this setting considering the neighbourhood information of unlabelled data, which is proposed to be computed in the feature space induced by the kernel function. Secondly, a new method for semi-supervised kernel learning is devised in the context of ordinal classification, which is combined with our developed classification strategy to optimise the kernel parameters. The experiments conducted compare 6 different approaches for semi-supervised learning in the context of ordinal classification in a battery of 30 datasets, showing (1) the good synergy of the ordinal version of discriminant analysis and the use of unlabelled data and (2) the advantage of computing distances in the feature space induced by the kernel function. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Gaussian interaction profile kernels for predicting drug-target interaction.

    Science.gov (United States)

    van Laarhoven, Twan; Nabuurs, Sander B; Marchiori, Elena

    2011-11-01

    The in silico prediction of potential interactions between drugs and target proteins is of core importance for the identification of new drugs or novel targets for existing drugs. However, only a tiny portion of all drug-target pairs in current datasets are experimentally validated interactions. This motivates the need for developing computational methods that predict true interaction pairs with high accuracy. We show that a simple machine learning method that uses the drug-target network as the only source of information is capable of predicting true interaction pairs with high accuracy. Specifically, we introduce interaction profiles of drugs (and of targets) in a network, which are binary vectors specifying the presence or absence of interaction with every target (drug) in that network. We define a kernel on these profiles, called the Gaussian Interaction Profile (GIP) kernel, and use a simple classifier, (kernel) Regularized Least Squares (RLS), for predicting drug-target interactions. We test comparatively the effectiveness of RLS with the GIP kernel on four drug-target interaction networks used in previous studies. The proposed algorithm achieves an area under the precision-recall curve (AUPR) of up to 92.7, significantly improving over the results of state-of-the-art methods. Moreover, we show that also using kernels based on chemical and genomic information further increases accuracy, with a clear improvement on small datasets. These results substantiate the relevance of the network topology (in the form of interaction profiles) as a source of information for predicting drug-target interactions. Software and Supplementary Material are available at http://cs.ru.nl/~tvanlaarhoven/drugtarget2011/. tvanlaarhoven@cs.ru.nl; elenam@cs.ru.nl. Supplementary data are available at Bioinformatics online.
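
    A minimal sketch of the GIP kernel and kernel RLS scoring on a toy interaction matrix. The bandwidth normalisation follows the common convention of scaling by the mean squared profile norm, and the data are invented.

    ```python
    import numpy as np

    def gip_kernel(Y, scale=1.0):
        """Gaussian Interaction Profile kernel over the rows of a binary
        interaction matrix Y (e.g. drugs x targets)."""
        sq = np.sum(Y ** 2, axis=1)
        d2 = np.maximum(sq[:, None] + sq[None, :] - 2 * Y @ Y.T, 0.0)
        gamma = scale / np.mean(sq)            # normalise by mean profile norm
        return np.exp(-gamma * d2)

    def rls_scores(K, Y, lam=1.0):
        """Kernel Regularized Least Squares: scores = K (K + lam I)^-1 Y."""
        return K @ np.linalg.solve(K + lam * np.eye(len(K)), Y)

    # toy drug-target adjacency: rows are drugs, columns are targets
    Y = np.array([[1, 0, 0],
                  [1, 1, 0],
                  [0, 0, 1],
                  [0, 1, 1]], dtype=float)
    print(rls_scores(gip_kernel(Y), Y))
    ```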

  9. Sparse kernel orthonormalized PLS for feature extraction in large datasets

    DEFF Research Database (Denmark)

    Arenas-García, Jerónimo; Petersen, Kaare Brandt; Hansen, Lars Kai

    2006-01-01

    In this paper we present a novel multivariate analysis method for large-scale problems. Our scheme is based on a novel kernel orthonormalized partial least squares (PLS) variant for feature extraction, imposing sparsity constraints on the solution to improve scalability. The algorithm is tested on a benchmark of UCI data sets and on the analysis of integrated short-time music features for genre prediction. The upshot is that the method has strong expressive power even with rather few features, clearly outperforms the ordinary kernel PLS, and is therefore an appealing method…

  10. Improved Expectation Maximization Algorithm for Gaussian Mixed Model Using the Kernel Method

    Directory of Open Access Journals (Sweden)

    Mohd Izhan Mohd Yusoff

    2013-01-01

    Full Text Available Fraud activities have contributed to heavy losses suffered by telecommunication companies. In this paper, we attempt to use the Gaussian mixed model, a probabilistic model normally used in speech recognition, to identify fraud calls in the telecommunication industry. We look at several issues encountered when calculating the maximum likelihood estimates of the Gaussian mixed model using an Expectation Maximization algorithm. Firstly, we look at a mechanism for determining the initial number of Gaussian components and the choice of the initial values of the algorithm using the kernel method. We show via simulation that the technique improves the performance of the algorithm. Secondly, we develop a procedure for determining the order of the Gaussian mixed model using the log-likelihood function and the Akaike information criterion. Finally, for illustration, we apply the improved algorithm to real telecommunication data. The modified method will pave the way for a comprehensive method for detecting fraud calls in future work.
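
    A hedged sketch of kernel-based initialisation for EM: local maxima of a kernel density estimate suggest the number of components and the initial means passed to scikit-learn's GaussianMixture. The bandwidth and data are invented, and this illustrates the general idea rather than the authors' exact procedure.

    ```python
    import numpy as np
    from scipy.signal import argrelmax
    from sklearn.neighbors import KernelDensity
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    x = np.r_[rng.normal(-3, 0.6, 400), rng.normal(2, 1.0, 600)][:, None]

    # kernel density estimate on a grid; its local maxima suggest the
    # number of components and their starting means
    grid = np.linspace(x.min(), x.max(), 512)[:, None]
    log_dens = KernelDensity(bandwidth=0.4).fit(x).score_samples(grid)
    peaks = argrelmax(log_dens)[0]

    gmm = GaussianMixture(n_components=len(peaks),
                          means_init=grid[peaks]).fit(x)
    print(gmm.means_.ravel())
    ```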

  11. Analytical Plug-In Method for Kernel Density Estimator Applied to Genetic Neutrality Study

    Directory of Open Access Journals (Sweden)

    Samir Saoudi

    2008-07-01

    Full Text Available The plug-in method enables optimization of the bandwidth of the kernel density estimator in order to estimate probability density functions (pdfs). Here, a faster procedure than that of the common plug-in method is proposed. The mean integrated square error (MISE) depends directly upon J(f), which is linked to the second-order derivative of the pdf. As we introduce an analytical approximation of J(f), the pdf is estimated only once, at the end of the iterations. These two kinds of algorithm are tested on different random variables having distributions known for their difficult estimation. Finally, they are applied to genetic data in order to provide a better characterisation, on average, of the neutrality of Tunisian Berber populations.

  12. Omnibus risk assessment via accelerated failure time kernel machine modeling.

    Science.gov (United States)

    Sinnott, Jennifer A; Cai, Tianxi

    2013-12-01

    Integrating genomic information with traditional clinical risk factors to improve the prediction of disease outcomes could profoundly change the practice of medicine. However, the large number of potential markers and possible complexity of the relationship between markers and disease make it difficult to construct accurate risk prediction models. Standard approaches for identifying important markers often rely on marginal associations or linearity assumptions and may not capture non-linear or interactive effects. In recent years, much work has been done to group genes into pathways and networks. Integrating such biological knowledge into statistical learning could potentially improve model interpretability and reliability. One effective approach is to employ a kernel machine (KM) framework, which can capture nonlinear effects if nonlinear kernels are used (Scholkopf and Smola, 2002; Liu et al., 2007, 2008). For survival outcomes, KM regression modeling and testing procedures have been derived under a proportional hazards (PH) assumption (Li and Luan, 2003; Cai, Tonini, and Lin, 2011). In this article, we derive testing and prediction methods for KM regression under the accelerated failure time (AFT) model, a useful alternative to the PH model. We approximate the null distribution of our test statistic using resampling procedures. When multiple kernels are of potential interest, it may be unclear in advance which kernel to use for testing and estimation. We propose a robust Omnibus Test that combines information across kernels, and an approach for selecting the best kernel for estimation. The methods are illustrated with an application in breast cancer. © 2013, The International Biometric Society.

  13. Music recommendation according to human motion based on kernel CCA-based relationship

    Science.gov (United States)

    Ohkushi, Hiroyuki; Ogawa, Takahiro; Haseyama, Miki

    2011-12-01

    In this article, a method for recommendation of music pieces according to human motions based on their kernel canonical correlation analysis (CCA)-based relationship is proposed. In order to perform the recommendation between different types of multimedia data, i.e., recommendation of music pieces from human motions, the proposed method tries to estimate their relationship. Specifically, the correlation based on kernel CCA is calculated as the relationship in our method. Since human motions and music pieces have various time lengths, it is necessary to calculate the correlation between time series having different lengths. Therefore, new kernel functions for human motions and music pieces, which can provide similarities between data that have different time lengths, are introduced into the calculation of the kernel CCA-based correlation. This approach effectively provides a solution to the conventional problem of not being able to calculate the correlation from multimedia data that have various time lengths. Therefore, the proposed method can perform accurate recommendation of best matched music pieces according to a target human motion from the obtained correlation. Experimental results are shown to verify the performance of the proposed method.
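
    The kernel CCA computation at the heart of the method can be sketched with a standard regularized formulation (after Hardoon et al.); the motion and music kernels of the paper, which handle unequal time lengths, are replaced here by plain Gaussian kernels on fixed-length toy features:

        import numpy as np
        from scipy.linalg import eig

        def center(K):
            J = np.eye(len(K)) - 1.0 / len(K)       # centering matrix I - 11'/n
            return J @ K @ J

        def gauss_kernel(X, gamma=0.5):
            return np.exp(-gamma * ((X[:, None] - X[None, :]) ** 2).sum(-1))

        def kcca_first_correlation(Kx, Ky, reg=0.1):
            n = len(Kx)
            Kx, Ky = center(Kx), center(Ky)
            A = np.linalg.solve(Kx + reg * np.eye(n), Ky)
            B = np.linalg.solve(Ky + reg * np.eye(n), Kx)
            rho2 = np.max(eig(A @ B, right=False).real.clip(0, 1))
            return np.sqrt(rho2)                     # first canonical correlation

        rng = np.random.default_rng(0)
        latent = rng.normal(size=(60, 1))                  # shared structure
        X = np.hstack([latent, rng.normal(size=(60, 2))])  # "motion" features
        Y = np.hstack([latent, rng.normal(size=(60, 2))])  # "music" features
        print(kcca_first_correlation(gauss_kernel(X), gauss_kernel(Y)))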

  14. Reproducibility of measurement of the environmental carbon-14 samples prepared by the gel suspension method

    International Nuclear Information System (INIS)

    Ohura, Hirotaka; Wakabayashi, Genichiro; Nakamura, Kouji; Okai, Tomio; Matoba, Masaru; Kakiuchi, Hideki; Momoshima, Noriyuki; Kawamura, Hidehisa.

    1997-01-01

    A simple liquid scintillation counting technique for the assay of 14C in the environment was developed, based on the gel suspension method, in which sample preparation is very simple and requires no special equipment. The reproducibility of this technique was examined, and it was shown that the gel suspension method has sufficient reproducibility for monitoring environmental 14C. (author)

  15. Construction of phylogenetic trees by kernel-based comparative analysis of metabolic networks.

    Science.gov (United States)

    Oh, S June; Joung, Je-Gun; Chang, Jeong-Ho; Zhang, Byoung-Tak

    2006-06-06

    To infer the tree of life requires knowledge of the common characteristics of each species descended from a common ancestor as the measuring criteria and a method to calculate the distance between the resulting values of each measure. Conventional phylogenetic analysis based on genomic sequences provides information about the genetic relationships between different organisms. In contrast, comparative analysis of metabolic pathways in different organisms can yield insights into their functional relationships under different physiological conditions. However, evaluating the similarities or differences between metabolic networks is a computationally challenging problem, and systematic methods of doing this are desirable. Here we introduce a graph-kernel method for computing the similarity between metabolic networks in polynomial time, and use it to profile metabolic pathways and to construct phylogenetic trees. To compare the structures of metabolic networks in organisms, we adopted the exponential graph kernel, which is a kernel-based approach with a labeled graph that includes a label matrix and an adjacency matrix. To construct the phylogenetic trees, we used an unweighted pair-group method with arithmetic mean, i.e., a hierarchical clustering algorithm. We applied the kernel-based network profiling method in a comparative analysis of nine carbohydrate metabolic networks from 81 biological species encompassing Archaea, Eukaryota, and Eubacteria. The resulting phylogenetic hierarchies generally support the tripartite scheme of three domains rather than the two domains of prokaryotes and eukaryotes. By combining the kernel machines with metabolic information, the method infers the context of biosphere development that covers physiological events required for adaptation by genetic reconstruction. The results show that one may obtain a global view of the tree of life by comparing the metabolic pathway structures using meta-level information rather than sequence information.
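
    A toy version of the pipeline (exponential graph kernel on labeled graphs, then UPGMA) can be written down directly; the graphs, label counts, and diffusion parameter below are illustrative choices rather than the paper's data:

        import numpy as np
        from scipy.linalg import expm
        from scipy.cluster.hierarchy import linkage
        from scipy.spatial.distance import squareform

        def exp_graph_feature(A, L, beta=0.3):
            # label-to-label diffusion summary L' exp(beta*A) L
            return L.T @ expm(beta * A) @ L

        def graph_kernel(g1, g2, beta=0.3):
            return np.sum(exp_graph_feature(*g1, beta) * exp_graph_feature(*g2, beta))

        rng = np.random.default_rng(0)
        def random_graph(n=12, n_labels=3):        # stand-in "metabolic network"
            A = np.triu((rng.random((n, n)) < 0.25).astype(float), 1)
            A = A + A.T
            L = np.eye(n_labels)[rng.integers(0, n_labels, n)]   # one-hot node labels
            return A, L

        graphs = [random_graph() for _ in range(5)]
        K = np.array([[graph_kernel(g, h) for h in graphs] for g in graphs])
        d = K.diagonal()
        D = np.sqrt(np.maximum(d[:, None] + d[None, :] - 2 * K, 0))   # kernel distance
        tree = linkage(squareform(D, checks=False), method="average")  # UPGMA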

  16. Construction of phylogenetic trees by kernel-based comparative analysis of metabolic networks

    Directory of Open Access Journals (Sweden)

    Chang Jeong-Ho

    2006-06-01

    Full Text Available Abstract Background To infer the tree of life requires knowledge of the common characteristics of each species descended from a common ancestor as the measuring criteria and a method to calculate the distance between the resulting values of each measure. Conventional phylogenetic analysis based on genomic sequences provides information about the genetic relationships between different organisms. In contrast, comparative analysis of metabolic pathways in different organisms can yield insights into their functional relationships under different physiological conditions. However, evaluating the similarities or differences between metabolic networks is a computationally challenging problem, and systematic methods of doing this are desirable. Here we introduce a graph-kernel method for computing the similarity between metabolic networks in polynomial time, and use it to profile metabolic pathways and to construct phylogenetic trees. Results To compare the structures of metabolic networks in organisms, we adopted the exponential graph kernel, which is a kernel-based approach with a labeled graph that includes a label matrix and an adjacency matrix. To construct the phylogenetic trees, we used an unweighted pair-group method with arithmetic mean, i.e., a hierarchical clustering algorithm. We applied the kernel-based network profiling method in a comparative analysis of nine carbohydrate metabolic networks from 81 biological species encompassing Archaea, Eukaryota, and Eubacteria. The resulting phylogenetic hierarchies generally support the tripartite scheme of three domains rather than the two domains of prokaryotes and eukaryotes. Conclusion By combining the kernel machines with metabolic information, the method infers the context of biosphere development that covers physiological events required for adaptation by genetic reconstruction. The results show that one may obtain a global view of the tree of life by comparing the metabolic pathway

  17. Multiple Kernel Learning with Data Augmentation

    Science.gov (United States)

    2016-11-22

    JMLR: Workshop and Conference Proceedings 63:49–64, 2016, ACML 2016. Multiple Kernel Learning with Data Augmentation. Khanh Nguyen nkhanh@deakin.edu.au...University, Australia. Editors: Robert J. Durrant and Kee-Eung Kim. Abstract: The motivations of the multiple kernel learning (MKL) approach are to increase kernel expressiveness capacity and to avoid the expensive grid search over a wide spectrum of kernels. A large amount of work has been proposed to

  18. OS X and iOS Kernel Programming

    CERN Document Server

    Halvorsen, Ole Henry

    2011-01-01

    OS X and iOS Kernel Programming combines essential operating system and kernel architecture knowledge with a highly practical approach that will help you write effective kernel-level code. You'll learn fundamental concepts such as memory management and thread synchronization, as well as the I/O Kit framework. You'll also learn how to write your own kernel-level extensions, such as device drivers for USB and Thunderbolt devices, including networking, storage and audio drivers. OS X and iOS Kernel Programming provides an incisive and complete introduction to the XNU kernel, which runs iPhones, i

  19. Model selection for Gaussian kernel PCA denoising

    DEFF Research Database (Denmark)

    Jørgensen, Kasper Winther; Hansen, Lars Kai

    2012-01-01

    We propose kernel Parallel Analysis (kPA) for automatic kernel scale and model order selection in Gaussian kernel PCA. Parallel Analysis [1] is based on a permutation test for covariance and has previously been applied for model order selection in linear PCA; we here augment the procedure to also tune the Gaussian kernel scale of radial basis function based kernel PCA. We evaluate kPA for denoising of simulated data and the US Postal data set of handwritten digits. We find that kPA outperforms other heuristics to choose the model order and kernel scale in terms of signal-to-noise ratio (SNR)...
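
    One plausible reading of the procedure, sketched below under assumed details: compare the centered Gram-matrix eigenvalues of the data with those of column-permuted (decoupled) data, retain components above the permutation quantile, and repeat over a grid of kernel scales. This is an illustration of the Parallel Analysis idea, not the authors' exact recipe:

        import numpy as np

        def centered_gram_eigs(X, scale):
            K = np.exp(-((X[:, None] - X[None, :]) ** 2).sum(-1) / (2 * scale**2))
            J = np.eye(len(K)) - 1.0 / len(K)
            return np.sort(np.linalg.eigvalsh(J @ K @ J))[::-1]

        def kpa_order(X, scale, n_perm=20, q=95, seed=0):
            rng = np.random.default_rng(seed)
            obs = centered_gram_eigs(X, scale)
            null = [centered_gram_eigs(
                        np.column_stack([rng.permutation(c) for c in X.T]), scale)
                    for _ in range(n_perm)]
            return int(np.sum(obs > np.percentile(null, q, axis=0)))

        X = np.random.default_rng(1).normal(size=(100, 5))
        X[:, 0] += 3 * np.sin(np.linspace(0, 6, 100))    # inject structure
        print(kpa_order(X, scale=2.0))                   # retained model order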

  20. Ensemble-based forecasting at Horns Rev: Ensemble conversion and kernel dressing

    DEFF Research Database (Denmark)

    Pinson, Pierre; Madsen, Henrik

    The obtained ensemble forecasts of wind power are then converted into predictive distributions with an original adaptive kernel dressing method. The shape of the kernels is driven by a mean-variance model, the parameters of which are recursively estimated in order to maximize the overall skill of obtained...
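
    The dressing step itself is compact: each ensemble member becomes the center of a Gaussian kernel whose variance follows a mean-variance model. In the sketch below the model sigma^2 = a + b * var(ensemble) uses fixed coefficients for illustration, whereas the paper estimates them recursively:

        import numpy as np
        from scipy.stats import norm

        def dressed_density(ensemble, x, a=0.5, b=0.8):
            sigma = np.sqrt(a + b * ensemble.var(ddof=1))    # mean-variance model
            return norm.pdf(x[:, None], loc=ensemble[None, :], scale=sigma).mean(axis=1)

        members = np.array([210.0, 220.0, 228.0, 231.0, 245.0])  # toy power ensemble (MW)
        grid = np.linspace(180, 280, 400)
        pdf = dressed_density(members, grid)     # predictive density of wind power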

  1. Research on offense and defense technology for iOS kernel security mechanism

    Science.gov (United States)

    Chu, Sijun; Wu, Hao

    2018-04-01

    iOS is a strong and widely used mobile device system. Its annual profits make up about 90% of the total profits of all mobile phone brands. Though it is famous for its security, there have been many attacks on the iOS operating system, such as the Trident APT attack in 2016. It is therefore important to research the iOS security mechanism, understand its weaknesses, and put forward a targeted protection and security check framework. By studying these attacks and previous jailbreak tools, we can see that an attacker can only run ROP code and gain kernel read and write permissions after exploiting kernel- and user-layer vulnerabilities. However, the iOS operating system is still protected by the code signing mechanism, the sandbox mechanism, and the not-writable mechanism of the system's disk area. This is far from the steady, long-lasting control that attackers expect. Before iOS 9, breaking these security mechanisms was usually done by modifying the kernel's important data structures and security mechanism code logic. However, after iOS 9, the kernel integrity protection (KPP) mechanism was added to the 64-bit operating system and none of the previous methods work on the new versions of iOS [1]. But this does not mean that attackers cannot break through. Therefore, based on an analysis of the weaknesses of the KPP security mechanism, this paper implements two possible breakthrough methods for the kernel security mechanism of iOS 9 and iOS 10. Meanwhile, we propose a defense method based on kernel integrity detection and sensitive API call detection to defend against the breakthrough methods mentioned above. We conduct experiments to show that this method can prevent and detect attack attempts or invaders effectively and in a timely manner.

  2. Paramecium: An Extensible Object-Based Kernel

    NARCIS (Netherlands)

    van Doorn, L.; Homburg, P.; Tanenbaum, A.S.

    1995-01-01

    In this paper we describe the design of an extensible kernel, called Paramecium. This kernel uses an object-based software architecture which together with instance naming, late binding and explicit overrides enables easy reconfiguration. Determining which components reside in the kernel protection

  3. Resummed memory kernels in generalized system-bath master equations

    International Nuclear Information System (INIS)

    Mavros, Michael G.; Van Voorhis, Troy

    2014-01-01

    Generalized master equations provide a concise formalism for studying reduced population dynamics. Usually, these master equations require a perturbative expansion of the memory kernels governing the dynamics; in order to prevent divergences, these expansions must be resummed. Resummation techniques of perturbation series are ubiquitous in physics, but they have not been readily studied for the time-dependent memory kernels used in generalized master equations. In this paper, we present a comparison of different resummation techniques for such memory kernels up to fourth order. We study specifically the spin-boson Hamiltonian as a model system-bath Hamiltonian, treating the diabatic coupling between the two states as a perturbation. A novel derivation of the fourth-order memory kernel for the spin-boson problem is presented; then, the second- and fourth-order kernels are evaluated numerically for a variety of spin-boson parameter regimes. We find that resumming the kernels through fourth order using a Padé approximant results in divergent populations in the strong electronic coupling regime due to a singularity introduced by the nature of the resummation, and thus recommend a non-divergent exponential resummation (the “Landau-Zener resummation” of previous work). The inclusion of fourth-order effects in a Landau-Zener-resummed kernel is shown to improve both the dephasing rate and the obedience of detailed balance over simpler prescriptions like the non-interacting blip approximation, showing a relatively quick convergence on the exact answer. The results suggest that including higher-order contributions to the memory kernel of a generalized master equation and performing an appropriate resummation can provide a numerically exact solution to system-bath dynamics for a general spectral density, opening the way to a new class of methods for treating system-bath dynamics.

  4. Feature selection and multi-kernel learning for adaptive graph regularized nonnegative matrix factorization

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-09-20

    Nonnegative matrix factorization (NMF), a popular part-based representation technique, does not capture the intrinsic local geometric structure of the data space. Graph regularized NMF (GNMF) was recently proposed to avoid this limitation by regularizing NMF with a nearest neighbor graph constructed from the input data set. However, GNMF has two main bottlenecks. First, using the original feature space directly to construct the graph is not necessarily optimal because of the noisy and irrelevant features and nonlinear distributions of data samples. Second, one possible way to handle the nonlinear distribution of data samples is by kernel embedding. However, it is often difficult to choose the most suitable kernel. To solve these bottlenecks, we propose two novel graph-regularized NMF methods, AGNMF-FS and AGNMF-MK, by introducing feature selection and multiple-kernel learning to the graph regularized NMF, respectively. Instead of using a fixed graph as in GNMF, the two proposed methods learn the nearest neighbor graph that is adaptive to the selected features and learned multiple kernels, respectively. For each method, we propose a unified objective function to conduct feature selection/multi-kernel learning, NMF and adaptive graph regularization simultaneously. We further develop two iterative algorithms to solve the two optimization problems. Experimental results on two challenging pattern classification tasks demonstrate that the proposed methods significantly outperform state-of-the-art data representation methods.
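
    The GNMF building block that both proposed methods extend can be sketched with the usual multiplicative updates; the k-NN similarity graph here is fixed, whereas AGNMF-FS and AGNMF-MK would adapt it to the selected features or learned kernels:

        import numpy as np

        def knn_graph(X, k=5):                       # samples are columns of X
            d2 = ((X[:, :, None] - X[:, None, :]) ** 2).sum(axis=0)
            S = np.zeros_like(d2)
            for i, nbrs in enumerate(np.argsort(d2, axis=1)[:, 1:k + 1]):
                S[i, nbrs] = 1.0
            return np.maximum(S, S.T)                # symmetrize

        def gnmf(X, rank=5, lam=1.0, iters=200, seed=0):
            rng = np.random.default_rng(seed)
            m, n = X.shape
            W, H = rng.random((m, rank)), rng.random((rank, n))
            S = knn_graph(X)
            D = np.diag(S.sum(axis=1))               # degree matrix of the graph
            for _ in range(iters):                   # multiplicative updates
                W *= (X @ H.T) / (W @ H @ H.T + 1e-9)
                H *= (W.T @ X + lam * H @ S) / (W.T @ W @ H + lam * H @ D + 1e-9)
            return W, H

        X = np.abs(np.random.default_rng(1).normal(size=(40, 100)))
        W, H = gnmf(X)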

  5. GPU-accelerated Kernel Regression Reconstruction for Freehand 3D Ultrasound Imaging.

    Science.gov (United States)

    Wen, Tiexiang; Li, Ling; Zhu, Qingsong; Qin, Wenjian; Gu, Jia; Yang, Feng; Xie, Yaoqin

    2017-07-01

    Volume reconstruction plays an important role in improving reconstructed volumetric image quality for freehand three-dimensional (3D) ultrasound imaging. By utilizing the capability of the programmable graphics processing unit (GPU), we can achieve real-time incremental volume reconstruction at a speed of 25-50 frames per second (fps). After incremental reconstruction and visualization, hole-filling is performed on the GPU to fill the remaining empty voxels. However, traditional pixel-nearest-neighbor-based hole-filling fails to reconstruct volumes with high image quality. In contrast, kernel regression provides an accurate volume reconstruction method for 3D ultrasound imaging, but at the cost of heavy computational complexity. In this paper, a GPU-based fast kernel regression method is proposed for high-quality volume reconstruction after the incremental reconstruction of freehand ultrasound. The experimental results show that improved image quality in terms of speckle reduction and detail preservation can be obtained with the parameter setting of a kernel window size of [Formula: see text] and a kernel bandwidth of 1.0. The computational performance of the proposed GPU-based method can be over 200 times faster than that on a central processing unit (CPU), and a volume with 50 million voxels in our experiment can be reconstructed within 10 seconds.
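
    A CPU sketch of the kernel-regression hole-filling step conveys the idea (the paper runs it on the GPU): each empty voxel becomes a Gaussian-weighted average of the filled voxels inside a small window. Window size and bandwidth below are illustrative, echoing the abstract's small window and bandwidth 1.0:

        import numpy as np

        def fill_holes(volume, filled, half_win=2, bandwidth=1.0):
            out = volume.copy()
            offs = np.stack(np.meshgrid(*[np.arange(-half_win, half_win + 1)] * 3,
                                        indexing="ij"), -1).reshape(-1, 3)
            w = np.exp(-(offs ** 2).sum(1) / (2 * bandwidth ** 2))   # Gaussian weights
            for z, y, x in zip(*np.where(~filled)):                  # each empty voxel
                num = den = 0.0
                for (dz, dy, dx), wi in zip(offs, w):
                    zz, yy, xx = z + dz, y + dy, x + dx
                    if (0 <= zz < volume.shape[0] and 0 <= yy < volume.shape[1]
                            and 0 <= xx < volume.shape[2] and filled[zz, yy, xx]):
                        num += wi * volume[zz, yy, xx]
                        den += wi
                if den > 0:
                    out[z, y, x] = num / den
            return out

        vol = np.random.default_rng(0).random((20, 20, 20))
        mask = np.random.default_rng(1).random((20, 20, 20)) > 0.1   # ~10% holes
        filled_vol = fill_holes(vol, mask)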

  6. 7 CFR 981.401 - Adjusted kernel weight.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Adjusted kernel weight. 981.401 Section 981.401... Administrative Rules and Regulations § 981.401 Adjusted kernel weight. (a) Definition. Adjusted kernel weight... kernels in excess of five percent; less shells, if applicable; less processing loss of one percent for...

  7. Finite frequency traveltime sensitivity kernels for acoustic anisotropic media: Angle dependent bananas

    KAUST Repository

    Djebbi, Ramzi

    2013-08-19

    Anisotropy is an inherent character of the Earth subsurface. It should be considered for modeling and inversion. The acoustic VTI wave equation approximates the wave behavior in anisotropic media, and especially its kinematic characteristics. To analyze which parts of the model would affect the traveltime for anisotropic traveltime inversion methods, especially for wave equation tomography (WET), we derive the sensitivity kernels for anisotropic media using the VTI acoustic wave equation. A Born scattering approximation is first derived using the Fourier domain acoustic wave equation as a function of perturbations in three anisotropy parameters. Using the instantaneous traveltime, which unwraps the phase, we compute the kernels. These kernels resemble those for isotropic media, with the η kernel directionally dependent. They also have a maximum sensitivity along the geometrical ray, which is more realistic compared to the cross-correlation based kernels. Focusing on diving waves, which are used more often, especially recently in waveform inversion, we show sensitivity kernels in anisotropic media for this case.

  8. Finite frequency traveltime sensitivity kernels for acoustic anisotropic media: Angle dependent bananas

    KAUST Repository

    Djebbi, Ramzi; Alkhalifah, Tariq Ali

    2013-01-01

    Anisotropy is an inherent character of the Earth subsurface. It should be considered for modeling and inversion. The acoustic VTI wave equation approximates the wave behavior in anisotropic media, and especially its kinematic characteristics. To analyze which parts of the model would affect the traveltime for anisotropic traveltime inversion methods, especially for wave equation tomography (WET), we derive the sensitivity kernels for anisotropic media using the VTI acoustic wave equation. A Born scattering approximation is first derived using the Fourier domain acoustic wave equation as a function of perturbations in three anisotropy parameters. Using the instantaneous traveltime, which unwraps the phase, we compute the kernels. These kernels resemble those for isotropic media, with the η kernel directionally dependent. They also have a maximum sensitivity along the geometrical ray, which is more realistic compared to the cross-correlation based kernels. Focusing on diving waves, which are used more often, especially recently in waveform inversion, we show sensitivity kernels in anisotropic media for this case.

  9. SU-E-T-154: Calculation of Tissue Dose Point Kernels Using GATE Monte Carlo Simulation Toolkit to Compare with Water Dose Point Kernel

    Energy Technology Data Exchange (ETDEWEB)

    Khazaee, M [shahid beheshti university, Tehran, Tehran (Iran, Islamic Republic of); Asl, A Kamali [Shahid Beheshti University, Tehran, Iran., Tehran, Tehran (Iran, Islamic Republic of); Geramifar, P [Shariati Hospital, Tehran, Iran., Tehran, Tehran (Iran, Islamic Republic of)

    2015-06-15

    Purpose: The objective of this study was to assess utilizing water dose point kernels (DPKs) instead of tissue dose point kernels in convolution algorithms. To the best of our knowledge, in providing the 3D distribution of absorbed dose from a 3D distribution of activity, the human body is considered equivalent to water; as a result, tissue variations are not considered in patient-specific dosimetry. Methods: In this study GATE v7.0 was used to calculate tissue dose point kernels. The beta-emitter radionuclides taken into consideration in this simulation include Y-90, Lu-177 and P-32, which are commonly used in nuclear medicine. The comparison has been performed for dose point kernels of adipose, bone, breast, heart, intestine, kidney, liver, lung and spleen versus the water dose point kernel. Results: In order to validate the simulation, the results of the 90Y DPK in water were compared with the published results of Papadimitroulas et al (Med. Phys., 2012). The results showed that the mean differences between the water DPK and the other soft-tissue DPKs range between 0.6% and 1.96% for 90Y, except for lung and bone, where the observed discrepancies are 6.3% and 12.19%, respectively. The range of DPK differences for 32P is between 1.74% for breast and 18.85% for bone. For 177Lu, the highest difference belongs to bone, at 16.91%. For the other soft tissues, the smallest discrepancy is observed in kidney, at 1.68%. Conclusion: In all tissues except for lung and bone, the results of GATE for the dose point kernel were comparable to the water dose point kernel, which demonstrates the appropriateness of applying the water dose point kernel instead of soft tissues in the field of nuclear medicine.

  10. SU-E-T-154: Calculation of Tissue Dose Point Kernels Using GATE Monte Carlo Simulation Toolkit to Compare with Water Dose Point Kernel

    International Nuclear Information System (INIS)

    Khazaee, M; Asl, A Kamali; Geramifar, P

    2015-01-01

    Purpose: The objective of this study was to assess utilizing water dose point kernels (DPKs) instead of tissue dose point kernels in convolution algorithms. To the best of our knowledge, in providing the 3D distribution of absorbed dose from a 3D distribution of activity, the human body is considered equivalent to water; as a result, tissue variations are not considered in patient-specific dosimetry. Methods: In this study GATE v7.0 was used to calculate tissue dose point kernels. The beta-emitter radionuclides taken into consideration in this simulation include Y-90, Lu-177 and P-32, which are commonly used in nuclear medicine. The comparison has been performed for dose point kernels of adipose, bone, breast, heart, intestine, kidney, liver, lung and spleen versus the water dose point kernel. Results: In order to validate the simulation, the results of the 90Y DPK in water were compared with the published results of Papadimitroulas et al (Med. Phys., 2012). The results showed that the mean differences between the water DPK and the other soft-tissue DPKs range between 0.6% and 1.96% for 90Y, except for lung and bone, where the observed discrepancies are 6.3% and 12.19%, respectively. The range of DPK differences for 32P is between 1.74% for breast and 18.85% for bone. For 177Lu, the highest difference belongs to bone, at 16.91%. For the other soft tissues, the smallest discrepancy is observed in kidney, at 1.68%. Conclusion: In all tissues except for lung and bone, the results of GATE for the dose point kernel were comparable to the water dose point kernel, which demonstrates the appropriateness of applying the water dose point kernel instead of soft tissues in the field of nuclear medicine.

  11. Computed tomography-based lung nodule volumetry - do optimized reconstructions of routine protocols achieve similar accuracy, reproducibility and interobserver variability to that of special volumetry protocols?

    International Nuclear Information System (INIS)

    Bolte, H.; Riedel, C.; Knoess, N.; Hoffmann, B.; Heller, M.; Biederer, J.; Freitag, S.

    2007-01-01

    Purpose: The aim of this in vitro and ex vivo CT study was to investigate whether the use of a routine thorax protocol (RTP) with optimized reconstruction parameters can provide accuracy, reproducibility and interobserver variability of volumetric analyses comparable to those of a special volumetry protocol (SVP). Materials and Methods: To assess accuracy, 3 polyurethane (PU) spheres (35 HU; diameters: 4, 6 and 10 mm) were examined with a recommended SVP using a multislice CT (collimation 16 x 0.75 mm, pitch 1.25, 20 mAs, slice thickness 1 mm, increment 0.7 mm, medium kernel) and an optimized RTP (collimation 16 x 1.5 mm, pitch 1.25, 100 mAs, reconstructed slice thickness 2 mm, increment 0.4 mm, sharp kernel). For the assessment of intrascan and interscan reproducibility and interobserver variability, 20 artificial small pulmonary nodules were placed in a dedicated ex vivo chest phantom and examined with identical scan protocols. The artificial lesions consisted of a fat-wax-Lipiodol® mixture. Phantoms and ex vivo lesions were examined afterwards using commercial volumetry software. To describe accuracy, the relative deviations from the true volumes of the PU phantoms were calculated. For intrascan and interscan reproducibility and interobserver variability, the 95% normal range (95% NR) of relative deviations between two measurements was calculated. Results: For the SVP the relative deviations for the 4, 6 and 10 mm PU phantoms were -14.3%, -12.7% and -6.8%, respectively, versus 4.5%, -0.6% and -2.6% for the optimized RTP. The SVP showed a 95% NR of 0 to 1.5% for intrascan and a 95% NR of -10.8 to 2.9% for interscan reproducibility. The 95% NR for interobserver variability was -4.3 to 3.3%. The optimized RTP achieved a 95% NR of -3.1 to 4.3% for intrascan reproducibility and a 95% NR of -7.0 to 3.5% for interscan reproducibility. The 95% NR for interobserver variability was -0.4 to 6.8%. (orig.)

  12. A reproducible accelerated in vitro release testing method for PLGA microspheres.

    Science.gov (United States)

    Shen, Jie; Lee, Kyulim; Choi, Stephanie; Qu, Wen; Wang, Yan; Burgess, Diane J

    2016-02-10

    The objective of the present study was to develop a discriminatory and reproducible accelerated in vitro release method for long-acting PLGA microspheres with inner structure/porosity differences. Risperidone was chosen as a model drug. Qualitatively and quantitatively equivalent PLGA microspheres with different inner structure/porosity were obtained using different manufacturing processes. Physicochemical properties as well as degradation profiles of the prepared microspheres were investigated. Furthermore, in vitro release testing of the prepared risperidone microspheres was performed using the most common in vitro release methods (i.e., sample-and-separate and flow through) for this type of product. The obtained compositionally equivalent risperidone microspheres had similar drug loading but different inner structure/porosity. When microsphere particle size appeared similar, porous risperidone microspheres showed faster microsphere degradation and drug release compared with less porous microspheres. Both in vitro release methods investigated were able to differentiate risperidone microsphere formulations with differences in porosity under real-time (37 °C) and accelerated (45 °C) testing conditions. Notably, only the accelerated USP apparatus 4 method showed good reproducibility for highly porous risperidone microspheres. These results indicated that the accelerated USP apparatus 4 method is an appropriate fast quality control tool for long-acting PLGA microspheres (even with porous structures). Copyright © 2015 Elsevier B.V. All rights reserved.

  13. Mixed kernel function support vector regression for global sensitivity analysis

    Science.gov (United States)

    Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng

    2017-11-01

    Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Among the many sensitivity analysis methods in the literature, the Sobol indices have attracted much attention since they can provide accurate information for most models. In this paper, a mixed kernel function (MKF) based support vector regression (SVR) model is employed to evaluate the Sobol indices at low computational cost. With the proposed derivation, the estimates of the Sobol indices can be obtained by post-processing the coefficients of the SVR meta-model. The MKF is constituted by the orthogonal polynomial kernel function and the Gaussian radial basis kernel function; thus the MKF possesses both the global-characteristic advantage of the polynomial kernel function and the local-characteristic advantage of the Gaussian radial basis kernel function. The proposed approach is suitable for high-dimensional and non-linear problems. Performance of the proposed approach is validated on various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.
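
    A mixed kernel of this kind is easy to assemble as a convex combination of a polynomial Gram matrix (global behaviour) and a Gaussian RBF Gram matrix (local behaviour), passed to an SVR as a precomputed kernel. The sketch below shows the kernel construction only; the Sobol-index post-processing of the SVR coefficients is not reproduced:

        import numpy as np
        from sklearn.svm import SVR

        def mixed_gram(A, B, w=0.5, degree=3, gamma=0.5):
            poly = (A @ B.T + 1.0) ** degree
            rbf = np.exp(-gamma * ((A[:, None] - B[None, :]) ** 2).sum(-1))
            return w * poly + (1 - w) * rbf      # convex combination stays a kernel

        rng = np.random.default_rng(0)
        X = rng.uniform(-1, 1, size=(200, 3))
        y = np.sin(np.pi * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.normal(size=200)

        model = SVR(kernel="precomputed").fit(mixed_gram(X, X), y)
        X_new = rng.uniform(-1, 1, size=(10, 3))
        pred = model.predict(mixed_gram(X_new, X))   # Gram between new and training points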

  14. Testing Infrastructure for Operating System Kernel Development

    DEFF Research Database (Denmark)

    Walter, Maxwell; Karlsson, Sven

    2014-01-01

    Testing is an important part of system development, and to test effectively we require knowledge of the internal state of the system under test. Testing an operating system kernel is a challenge as it is the operating system that typically provides access to this internal state information. Multi-core kernels pose an even greater challenge due to concurrency and their shared kernel state. In this paper, we present a testing framework that addresses these challenges by running the operating system in a virtual machine, and using virtual machine introspection to both communicate with the kernel and obtain information about the system. We have also developed an in-kernel testing API that we can use to develop a suite of unit tests in the kernel. We are using our framework for the development of our own multi-core research kernel.

  15. The Kernel Estimation in Biosystems Engineering

    Directory of Open Access Journals (Sweden)

    Esperanza Ayuga Téllez

    2008-04-01

    Full Text Available In many fields of biosystems engineering, it is common to find works in which statistical information is analysed that violates the basic hypotheses necessary for the conventional forecasting methods. For those situations, it is necessary to find alternative methods that allow the statistical analysis considering those infringements. Non-parametric function estimation includes methods that fit a target function locally, using data from a small neighbourhood of the point. Weak assumptions, such as continuity and differentiability of the target function, are used rather than an "a priori" assumption about the global shape of the target function (e.g., linear or quadratic). In this paper a few basic decision rules are enunciated for the application of non-parametric estimation methods. These statistical rules set up the first step in building a user-method interface for the consistent application of kernel estimation by non-expert users. To reach this aim, univariate and multivariate density function estimation methods were analysed, as well as regression estimators. In some cases the models to be applied in different situations, based on simulations, were defined. Different biosystems engineering applications of kernel estimation are also analysed in this review.

  16. Fast metabolite identification with Input Output Kernel Regression

    Science.gov (United States)

    Brouard, Céline; Shen, Huibin; Dührkop, Kai; d'Alché-Buc, Florence; Böcker, Sebastian; Rousu, Juho

    2016-01-01

    Motivation: An important problem in metabolomics is to identify metabolites using tandem mass spectrometry data. Machine learning methods have been proposed recently to solve this problem by predicting molecular fingerprint vectors and matching these fingerprints against existing molecular structure databases. In this work we propose to address the metabolite identification problem using a structured output prediction approach. This type of approach is not limited to vector output spaces and can handle structured output spaces such as the molecule space. Results: We use the Input Output Kernel Regression method to learn the mapping between tandem mass spectra and molecular structures. The principle of this method is to encode the similarities in the input (spectra) space and the similarities in the output (molecule) space using two kernel functions. This method approximates the spectra-molecule mapping in two phases. The first phase corresponds to a regression problem from the input space to the feature space associated with the output kernel. The second phase is a preimage problem, consisting of mapping back the predicted output feature vectors to the molecule space. We show that our approach achieves state-of-the-art accuracy in metabolite identification. Moreover, our method has the advantage of decreasing the running times for the training step and the test step by several orders of magnitude over the preceding methods. Availability and implementation: Contact: celine.brouard@aalto.fi Supplementary information: Supplementary data are available at Bioinformatics online. PMID:27307628
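
    The two phases can be condensed into a bare-bones sketch: kernel ridge regression from the input kernel into the output-kernel feature space, then a preimage step that scores a finite candidate set. Toy Gaussian kernels and random vectors stand in for spectra and molecules here:

        import numpy as np

        def gauss(A, B, gamma=0.5):
            return np.exp(-gamma * ((A[:, None] - B[None, :]) ** 2).sum(-1))

        rng = np.random.default_rng(0)
        X_train = rng.normal(size=(50, 4))          # stand-in spectra features
        Y_train = rng.normal(size=(50, 6))          # stand-in molecule features
        candidates = np.vstack([Y_train, rng.normal(size=(20, 6))])

        lam = 0.1
        C = np.linalg.solve(gauss(X_train, X_train) + lam * np.eye(50), np.eye(50))

        def identify(x_new):
            c = (gauss(x_new[None, :], X_train) @ C).ravel()   # phase 1: regression
            scores = gauss(candidates, Y_train) @ c            # phase 2: preimage scoring
            return candidates[np.argmax(scores)]

        y_hat = identify(X_train[0] + 0.05 * rng.normal(size=4))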

  17. Kernel and divergence techniques in high energy physics separations

    Science.gov (United States)

    Bouř, Petr; Kůs, Václav; Franc, Jiří

    2017-10-01

    Binary decision trees under the Bayesian decision technique are used for supervised classification of high-dimensional data. We present the great potential of adaptive kernel density estimation as the nested separation method of a supervised binary divergence decision tree. We also provide a proof of an alternative computing approach for kernel estimates utilizing the Fourier transform. Further, we apply our method to a Monte Carlo data set from the particle accelerator Tevatron at the DØ experiment at Fermilab and provide final top-antitop signal separation results. We achieved up to 82% AUC using the restricted feature selection entering the signal separation procedure.
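
    The Fourier-transform shortcut for kernel estimates mentioned in the abstract amounts to binning the data on a regular grid and convolving with the kernel in Fourier space; a minimal one-dimensional sketch (not the authors' implementation) is:

        import numpy as np

        def fft_kde(x, h, n_grid=1024):
            lo, hi = x.min() - 4 * h, x.max() + 4 * h     # padding limits wraparound
            counts, edges = np.histogram(x, bins=n_grid, range=(lo, hi))
            dx = edges[1] - edges[0]
            grid = edges[:-1] + dx / 2
            # Gaussian kernel sampled on the periodic grid, centered at index 0
            t = np.minimum(np.arange(n_grid), n_grid - np.arange(n_grid)) * dx
            kern = np.exp(-0.5 * (t / h) ** 2) / (h * np.sqrt(2 * np.pi))
            dens = np.fft.irfft(np.fft.rfft(counts) * np.fft.rfft(kern), n_grid)
            return grid, dens / x.size

        x = np.random.default_rng(2).standard_normal(5000)
        grid, dens = fft_kde(x, h=0.25)     # one FFT instead of n kernel sums per point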

  18. 7 CFR 51.1403 - Kernel color classification.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Kernel color classification. 51.1403 Section 51.1403... STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Kernel Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the color...

  19. Analytical Plug-In Method for Kernel Density Estimator Applied to Genetic Neutrality Study

    Science.gov (United States)

    Troudi, Molka; Alimi, Adel M.; Saoudi, Samir

    2008-12-01

    The plug-in method enables optimization of the bandwidth of the kernel density estimator in order to estimate probability density functions (pdfs). Here, a faster procedure than that of the common plug-in method is proposed. The mean integrated square error (MISE) depends directly upon J(f), which is linked to the second-order derivative of the pdf. As we intend to introduce an analytical approximation of J(f), the pdf is estimated only once, at the end of iterations. These two kinds of algorithm are tested on different random variables having distributions known for their difficult estimation. Finally, they are applied to genetic data in order to provide a better characterisation in the mean of neutrality of Tunisian Berber populations.

  20. Pathway-Based Kernel Boosting for the Analysis of Genome-Wide Association Studies

    Science.gov (United States)

    Manitz, Juliane; Burger, Patricia; Amos, Christopher I.; Chang-Claude, Jenny; Wichmann, Heinz-Erich; Kneib, Thomas; Bickeböller, Heike

    2017-01-01

    The analysis of genome-wide association studies (GWAS) benefits from the investigation of biologically meaningful gene sets, such as gene-interaction networks (pathways). We propose an extension to a successful kernel-based pathway analysis approach by integrating kernel functions into a powerful algorithmic framework for variable selection, to enable investigation of multiple pathways simultaneously. We employ genetic similarity kernels from the logistic kernel machine test (LKMT) as base-learners in a boosting algorithm. A model to explain case-control status is created iteratively by selecting pathways that improve its prediction ability. We evaluated our method in simulation studies adopting 50 pathways for different sample sizes and genetic effect strengths. Additionally, we included an exemplary application of kernel boosting to a rheumatoid arthritis and a lung cancer dataset. Simulations indicate that kernel boosting outperforms the LKMT in certain genetic scenarios. Applications to GWAS data on rheumatoid arthritis and lung cancer resulted in sparse models which were based on pathways interpretable in a clinical sense. Kernel boosting is highly flexible in terms of considered variables and overcomes the problem of multiple testing. Additionally, it enables the prediction of clinical outcomes. Thus, kernel boosting constitutes a new, powerful tool in the analysis of GWAS data and towards the understanding of biological processes involved in disease susceptibility. PMID:28785300

  1. Pathway-Based Kernel Boosting for the Analysis of Genome-Wide Association Studies.

    Science.gov (United States)

    Friedrichs, Stefanie; Manitz, Juliane; Burger, Patricia; Amos, Christopher I; Risch, Angela; Chang-Claude, Jenny; Wichmann, Heinz-Erich; Kneib, Thomas; Bickeböller, Heike; Hofner, Benjamin

    2017-01-01

    The analysis of genome-wide association studies (GWAS) benefits from the investigation of biologically meaningful gene sets, such as gene-interaction networks (pathways). We propose an extension to a successful kernel-based pathway analysis approach by integrating kernel functions into a powerful algorithmic framework for variable selection, to enable investigation of multiple pathways simultaneously. We employ genetic similarity kernels from the logistic kernel machine test (LKMT) as base-learners in a boosting algorithm. A model to explain case-control status is created iteratively by selecting pathways that improve its prediction ability. We evaluated our method in simulation studies adopting 50 pathways for different sample sizes and genetic effect strengths. Additionally, we included an exemplary application of kernel boosting to a rheumatoid arthritis and a lung cancer dataset. Simulations indicate that kernel boosting outperforms the LKMT in certain genetic scenarios. Applications to GWAS data on rheumatoid arthritis and lung cancer resulted in sparse models which were based on pathways interpretable in a clinical sense. Kernel boosting is highly flexible in terms of considered variables and overcomes the problem of multiple testing. Additionally, it enables the prediction of clinical outcomes. Thus, kernel boosting constitutes a new, powerful tool in the analysis of GWAS data and towards the understanding of biological processes involved in disease susceptibility.
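
    The boosting loop can be illustrated in miniature: at each step, fit a kernel ridge base-learner on every candidate pathway kernel, keep the one that most reduces the loss, and add it with a small step size. Plain squared-error regression below stands in for the LKMT base-learners and case-control likelihood of the paper:

        import numpy as np

        def kernel_boost(kernels, y, n_steps=20, nu=0.3, lam=1.0):
            f, selected = np.zeros_like(y, dtype=float), []
            I = np.eye(len(y))
            for _ in range(n_steps):
                r = y - f                                    # current residuals
                fits = [K @ np.linalg.solve(K + lam * I, r) for K in kernels]
                j = int(np.argmin([np.sum((r - g) ** 2) for g in fits]))
                f += nu * fits[j]                            # gradient-boosting step
                selected.append(j)
            return f, selected

        rng = np.random.default_rng(0)
        G = [rng.normal(size=(100, 8)) for _ in range(5)]    # 5 toy "pathways"
        kernels = [g @ g.T for g in G]                       # linear pathway kernels
        y = G[2] @ rng.normal(size=8) + 0.3 * rng.normal(size=100)
        f, selected = kernel_boost(kernels, y)
        print(set(selected))      # selection concentrates on the causal pathway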

  2. Convergence Analysis of Generalized Jacobi-Galerkin Methods for Second Kind Volterra Integral Equations with Weakly Singular Kernels

    Directory of Open Access Journals (Sweden)

    Haotao Cai

    2017-01-01

    Full Text Available We develop a generalized Jacobi-Galerkin method for second kind Volterra integral equations with weakly singular kernels. In this method, we first introduce some known singular nonpolynomial functions into the approximation space of the conventional Jacobi-Galerkin method. Secondly, we use the Gauss-Jacobi quadrature rules to approximate the integral term in the resulting equation so as to obtain high-order accuracy for the approximation. Then, we establish that the approximate equation has a unique solution and that the approximate solution attains an optimal convergence order. One numerical example is presented to demonstrate the effectiveness of the proposed method.
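
    The Gauss-Jacobi quadrature step is worth seeing in isolation: nodes and weights for the weight function (1-x)^alpha (1+x)^beta absorb the singular factor exactly, so only the smooth part of the integrand is evaluated. A minimal sketch with an assumed endpoint singularity:

        import numpy as np
        from scipy.special import roots_jacobi

        alpha, beta = -0.5, 0.0                   # weight (1-x)^(-1/2), singular at x = 1
        nodes, weights = roots_jacobi(12, alpha, beta)

        f = np.cos                                # smooth part of the integrand
        approx = np.sum(weights * f(nodes))       # ~ integral of (1-x)^(-1/2) cos(x) on [-1, 1]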

  3. Identification and reproducibility of dietary patterns in a Danish cohort: the Inter99 study.

    Science.gov (United States)

    Lau, Cathrine; Glümer, Charlotte; Toft, Ulla; Tetens, Inge; Carstensen, Bendix; Jørgensen, Torben; Borch-Johnsen, Knut

    2008-05-01

    We aimed to identify dietary patterns in a Danish adult population and assess the reproducibility of the dietary patterns identified. Baseline data of 3,372 women and 3,191 men (30-60 years old) from the population-based survey Inter99 were used. Food intake, assessed by an FFQ, was aggregated into thirty-four separate food groups. Dietary patterns were identified by principal component analysis. Confirmatory factor analysis and Bland-Altman plots were used to assess the reproducibility of the dietary patterns identified; the Bland-Altman plots were used as an alternative and new method. Two factors were retained for both women and men, which accounted for 15.1-17.4% of the total variation. The 'Traditional' pattern was characterised by high loadings (≥0.40) on pâté or high-fat meat for sandwiches, mayonnaise salads, red meat, potatoes, butter and lard, low-fat fish, low-fat meat for sandwiches, and sauces. The 'Modern' pattern was characterised by high loadings on vegetables, fruit, mixed vegetable dishes, vegetable oil and vinegar dressing, poultry, and pasta, rice and wheat kernels. Small differences were observed between the patterns identified for women and men. The root mean square error of approximation from the confirmatory factor analysis was 0.08. The variation observed from the Bland-Altman plots of factors from explorative v. confirmative analyses and explorative analyses of two sub-samples was between 18.8 and 47.7%. Pearson's correlation was >0.89 (P < 0.0001). The reproducibility was better for women than for men. We conclude that the 'Traditional' and 'Modern' dietary patterns identified are reproducible.

  4. The definition of kernel Oz

    OpenAIRE

    Smolka, Gert

    1994-01-01

    Oz is a concurrent language providing for functional, object-oriented, and constraint programming. This paper defines Kernel Oz, a semantically complete sublanguage of Oz. It was an important design requirement that Oz be definable by reduction to a lean kernel language. The definition of Kernel Oz introduces three essential abstractions: the Oz universe, the Oz calculus, and the actor model. The Oz universe is a first-order structure defining the values and constraints Oz computes with. The ...

  5. Fabrication of Uranium Oxycarbide Kernels for HTR Fuel

    International Nuclear Information System (INIS)

    Barnes, Charles; Richardson, Clay; Nagley, Scott; Hunn, John; Shaber, Eric

    2010-01-01

    Babcock and Wilcox (B&W) has been producing high quality uranium oxycarbide (UCO) kernels for Advanced Gas Reactor (AGR) fuel tests at the Idaho National Laboratory. In 2005, 350-µm, 19.7% 235U-enriched UCO kernels were produced for the AGR-1 test fuel. Following coating of these kernels and forming the coated-particles into compacts, this fuel was irradiated in the Advanced Test Reactor (ATR) from December 2006 until November 2009. B&W produced 425-µm, 14% enriched UCO kernels in 2008, and these kernels were used to produce fuel for the AGR-2 experiment that was inserted in ATR in 2010. B&W also produced 500-µm, 9.6% enriched UO2 kernels for the AGR-2 experiments. Kernels of the same size and enrichment as AGR-1 were also produced for the AGR-3/4 experiment. In addition to fabricating enriched UCO and UO2 kernels, B&W has produced more than 100 kg of natural uranium UCO kernels which are being used in coating development tests. Successive lots of kernels have demonstrated consistent high quality and also allowed for fabrication process improvements. Improvements in kernel forming were made subsequent to AGR-1 kernel production. Following fabrication of AGR-2 kernels, incremental increases in sintering furnace charge size have been demonstrated. Recently small scale sintering tests using a small development furnace equipped with a residual gas analyzer (RGA) have increased understanding of how kernel sintering parameters affect sintered kernel properties. The steps taken to increase throughput and process knowledge have reduced kernel production costs. Studies have been performed of additional modifications toward the goal of increasing capacity of the current fabrication line to use for production of first core fuel for the Next Generation Nuclear Plant (NGNP) and providing a basis for the design of a full scale fuel fabrication facility.

  6. Direct Patlak Reconstruction From Dynamic PET Data Using the Kernel Method With MRI Information Based on Structural Similarity.

    Science.gov (United States)

    Gong, Kuang; Cheng-Liao, Jinxiu; Wang, Guobao; Chen, Kevin T; Catana, Ciprian; Qi, Jinyi

    2018-04-01

    Positron emission tomography (PET) is a functional imaging modality widely used in oncology, cardiology, and neuroscience. It is highly sensitive, but suffers from relatively poor spatial resolution, as compared with anatomical imaging modalities, such as magnetic resonance imaging (MRI). With the recent development of combined PET/MR systems, we can improve the PET image quality by incorporating MR information into image reconstruction. Previously, kernel learning has been successfully embedded into static and dynamic PET image reconstruction using either PET temporal or MRI information. Here, we combine both PET temporal and MRI information adaptively to improve the quality of direct Patlak reconstruction. We examined different approaches to combine the PET and MRI information in kernel learning to address the issue of potential mismatches between MRI and PET signals. Computer simulations and hybrid real-patient data acquired on a simultaneous PET/MR scanner were used to evaluate the proposed methods. Results show that the method that combines PET temporal information and MRI spatial information adaptively based on the structure similarity index has the best performance in terms of noise reduction and resolution improvement.
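
    The kernel trick underlying such methods is compact: build a sparse kernel matrix K from k-nearest-neighbour MRI patch features and represent the image as x = K a, so the reconstruction updates the coefficients a instead of the voxels. The sketch below shows only this construction, with toy patch features; the adaptive PET/MRI combination and the Patlak model of the paper are not reproduced:

        import numpy as np

        def mri_kernel_matrix(feats, k=8, sigma=1.0):
            """feats: (n_voxels, n_features) patch features extracted from the MRI."""
            d2 = ((feats[:, None] - feats[None, :]) ** 2).sum(-1)
            K = np.zeros_like(d2)
            nbrs = np.argsort(d2, axis=1)[:, :k]          # k nearest neighbours
            for i in range(len(K)):
                K[i, nbrs[i]] = np.exp(-d2[i, nbrs[i]] / (2 * sigma**2))
            return K / K.sum(axis=1, keepdims=True)       # row-normalized

        feats = np.random.default_rng(0).normal(size=(500, 9))  # toy 3x3 patches
        K = mri_kernel_matrix(feats)
        a = np.random.default_rng(1).random(500)
        x = K @ a        # kernelized image: smoothing guided by MRI similarity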

  7. Clustering via Kernel Decomposition

    DEFF Research Database (Denmark)

    Have, Anna Szynkowiak; Girolami, Mark A.; Larsen, Jan

    2006-01-01

    Methods for spectral clustering have been proposed recently which rely on the eigenvalue decomposition of an affinity matrix. In this work it is proposed that the affinity matrix is created based on the elements of a non-parametric density estimator. This matrix is then decomposed to obtain posterior probabilities of class membership using an appropriate form of nonnegative matrix factorization. The troublesome selection of hyperparameters such as kernel width and number of clusters can be obtained using standard cross-validation methods as is demonstrated on a number of diverse data sets.

  8. Gradient descent for robust kernel-based regression

    Science.gov (United States)

    Guo, Zheng-Chu; Hu, Ting; Shi, Lei

    2018-06-01

    In this paper, we study the gradient descent algorithm generated by a robust loss function over a reproducing kernel Hilbert space (RKHS). The loss function is defined by a windowing function G and a scale parameter σ, and can include a wide range of commonly used robust losses for regression. There is still a gap between the theoretical analysis and the optimization process of empirical risk minimization based on this loss: the estimator needs to be globally optimal in the theoretical analysis while the optimization method cannot ensure the global optimality of its solutions. In this paper, we aim to fill this gap by developing a novel theoretical analysis of the performance of estimators generated by the gradient descent algorithm. We demonstrate that with an appropriately chosen scale parameter σ, the gradient update with early stopping rules can approximate the regression function. Our error analysis leads to convergence in the standard L2 norm and the strong RKHS norm, both of which are optimal in the minimax sense. We show that the scale parameter σ plays an important role in providing robustness as well as fast convergence. Numerical experiments on synthetic examples and a real data set also support our theoretical results.
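
    A concrete instance: with the Welsch-type window G(t) = 1 - exp(-t), the loss derivative is u * exp(-u^2 / sigma^2), and gradient descent on the kernel expansion coefficients with early stopping gives a robust fit. The snippet is a sketch under these assumed choices, not the paper's analysis:

        import numpy as np

        def gram(X, Y, gamma=2.0):
            return np.exp(-gamma * ((X[:, None] - Y[None, :]) ** 2).sum(-1))

        def robust_kernel_gd(X, y, sigma=1.0, eta=0.5, n_iter=50):
            K, a = gram(X, X), np.zeros(len(y))
            for _ in range(n_iter):                 # early stopping regularizes
                u = K @ a - y                       # residuals f(x_i) - y_i
                a -= eta * u * np.exp(-u**2 / sigma**2) / len(y)   # Welsch gradient
            return a

        rng = np.random.default_rng(0)
        X = rng.uniform(-2, 2, size=(120, 1))
        y = np.sin(2 * X[:, 0]) + 0.1 * rng.normal(size=120)
        y[::15] += 5.0                              # heavy outliers
        a = robust_kernel_gd(X, y)
        X_test = np.linspace(-2, 2, 200)[:, None]
        f_test = gram(X_test, X) @ a                # barely perturbed by the outliers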

  9. Anisotropic hydrodynamics with a scalar collisional kernel

    Science.gov (United States)

    Almaalol, Dekrayat; Strickland, Michael

    2018-04-01

    Prior studies of nonequilibrium dynamics using anisotropic hydrodynamics have used the relativistic Anderson-Witting scattering kernel or some variant thereof. In this paper, we make the first study of the impact of using a more realistic scattering kernel. For this purpose, we consider a conformal system undergoing transversally homogeneous and boost-invariant Bjorken expansion and take the collisional kernel to be given by the leading-order 2 ↔ 2 scattering kernel in scalar λϕ⁴ theory. We consider both classical and quantum statistics to assess the impact of Bose enhancement on the dynamics. We also determine the anisotropic nonequilibrium attractor of a system subject to this collisional kernel. We find that, when the near-equilibrium relaxation-times in the Anderson-Witting and scalar collisional kernels are matched, the scalar kernel results in a higher degree of momentum-space anisotropy during the system's evolution, given the same initial conditions. Additionally, we find that taking into account Bose enhancement further increases the dynamically generated momentum-space anisotropy.

  10. Object classification and detection with context kernel descriptors

    DEFF Research Database (Denmark)

    Pan, Hong; Olsen, Søren Ingvor; Zhu, Yaping

    2014-01-01

    Context information is important in object representation. By embedding context cues of image attributes into kernel descriptors, we propose a set of novel kernel descriptors called Context Kernel Descriptors (CKD) for object classification and detection. The motivation of CKD is to use the spatial consistency of image attributes or features defined within a neighboring region to improve the robustness of descriptor matching in kernel space. For feature selection, Kernel Entropy Component Analysis (KECA) is exploited to learn a subset of discriminative CKD. Different from Kernel Principal Component

  11. On Improving Convergence Rates for Nonnegative Kernel Density Estimators

    OpenAIRE

    Terrell, George R.; Scott, David W.

    1980-01-01

    To improve the rate of decrease of the integrated mean square error for nonparametric kernel density estimators beyond $O(n^{-4/5})$, we must relax the constraint that the density estimate be a bona fide density function, that is, be nonnegative and integrate to one. All current methods for kernel (and orthogonal series) estimators relax the nonnegativity constraint. In this paper we show how to achieve similar improvement by relaxing the integral constraint only. This is important in appl...

  12. Quantized kernel least mean square algorithm.

    Science.gov (United States)

    Chen, Badong; Zhao, Songlin; Zhu, Pingping; Príncipe, José C

    2012-01-01

    In this paper, we propose a quantization approach, as an alternative to sparsification, to curb the growth of the radial basis function structure in kernel adaptive filtering. The basic idea behind this method is to quantize and hence compress the input (or feature) space. Different from sparsification, the new approach uses the "redundant" data to update the coefficient of the closest center. In particular, a quantized kernel least mean square (QKLMS) algorithm is developed, which is based on a simple online vector quantization method. The analytical study of the mean square convergence has been carried out. The energy conservation relation for QKLMS is established, and on this basis we arrive at a sufficient condition for mean square convergence, and a lower and upper bound on the theoretical value of the steady-state excess mean square error. Static function estimation and short-term chaotic time-series prediction examples are presented to demonstrate the excellent performance.
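
    The algorithm is short enough to sketch in full: each incoming sample either spawns a new center or, if it lies within quantization size eps of an existing center, simply updates that center's coefficient. Parameters and data below are illustrative:

        import numpy as np

        def qklms(stream, eta=0.5, eps=0.3, gamma=2.0):
            centers, coeffs, preds = [], [], []
            for x, d in stream:
                if centers:
                    C = np.array(centers)
                    d2 = ((C - x) ** 2).sum(axis=1)
                    y = float(np.dot(coeffs, np.exp(-gamma * d2)))
                    j = int(np.argmin(d2))
                else:
                    y, j, d2 = 0.0, -1, None
                e = d - y                                   # a-priori error
                if d2 is not None and np.sqrt(d2[j]) <= eps:
                    coeffs[j] += eta * e                    # quantize: reuse nearest center
                else:
                    centers.append(np.array(x))             # grow the network
                    coeffs.append(eta * e)
                preds.append(y)
            return centers, coeffs, preds

        rng = np.random.default_rng(0)
        xs = rng.uniform(-3, 3, size=(500, 1))
        ds = np.sin(xs[:, 0]) + 0.05 * rng.normal(size=500)
        centers, _, _ = qklms(zip(xs, ds))
        print(len(centers), "centers for 500 samples")      # growth is curbed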

  13. Phylodynamic Inference with Kernel ABC and Its Application to HIV Epidemiology.

    Science.gov (United States)

    Poon, Art F Y

    2015-09-01

    The shapes of phylogenetic trees relating virus populations are determined by the adaptation of viruses within each host, and by the transmission of viruses among hosts. Phylodynamic inference attempts to reverse this flow of information, estimating parameters of these processes from the shape of a virus phylogeny reconstructed from a sample of genetic sequences from the epidemic. A key challenge to phylodynamic inference is quantifying the similarity between two trees in an efficient and comprehensive way. In this study, I demonstrate that a new distance measure, based on a subset tree kernel function from computational linguistics, confers a significant improvement over previous measures of tree shape for classifying trees generated under different epidemiological scenarios. Next, I incorporate this kernel-based distance measure into an approximate Bayesian computation (ABC) framework for phylodynamic inference. ABC bypasses the need for an analytical solution of model likelihood, as it only requires the ability to simulate data from the model. I validate this "kernel-ABC" method for phylodynamic inference by estimating parameters from data simulated under a simple epidemiological model. Results indicate that kernel-ABC attained greater accuracy for parameters associated with virus transmission than leading software on the same data sets. Finally, I apply the kernel-ABC framework to study a recent outbreak of a recombinant HIV subtype in China. Kernel-ABC provides a versatile framework for phylodynamic inference because it can fit a broader range of models than methods that rely on the computation of exact likelihoods. © The Author 2015. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.

  14. Wigner functions defined with Laplace transform kernels.

    Science.gov (United States)

    Oh, Se Baek; Petruccelli, Jonathan C; Tian, Lei; Barbastathis, George

    2011-10-24

    We propose a new Wigner-type phase-space function using Laplace transform kernels: the Laplace kernel Wigner function. Whereas the momentum variables are real in the traditional Wigner function, the Laplace kernel Wigner function may have complex momentum variables. Due to the properties of the Laplace transform, a broader range of signals can be represented in complex phase space. We show that the Laplace kernel Wigner function exhibits properties in the marginals similar to those of the traditional Wigner function. As an example, we use the Laplace kernel Wigner function to analyze evanescent waves supported by a surface plasmon polariton. © 2011 Optical Society of America

  15. A new randomized Kaczmarz based kernel canonical correlation analysis algorithm with applications to information retrieval.

    Science.gov (United States)

    Cai, Jia; Tang, Yi

    2018-02-01

    Canonical correlation analysis (CCA) is a powerful statistical tool for detecting the linear relationship between two sets of multivariate variables. A kernel generalization of it, namely kernel CCA, has been proposed to describe nonlinear relationships between two variables. Although kernel CCA can achieve dimensionality reduction for high-dimensional data feature selection problems, it also yields the so-called over-fitting phenomenon. In this paper, we consider a new kernel CCA algorithm via the randomized Kaczmarz method. The main contributions of the paper are: (1) a new kernel CCA algorithm is developed; (2) theoretical convergence of the proposed algorithm is addressed by means of the scaled condition number; (3) a lower bound on the minimum number of iterations is presented. We test on both a synthetic dataset and several real-world datasets in cross-language document retrieval and content-based image retrieval to demonstrate the effectiveness of the proposed algorithm. Numerical results imply the performance and efficiency of the new algorithm, which is competitive with several state-of-the-art kernel CCA methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
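
    The randomized Kaczmarz building block the paper relies on is a one-row-at-a-time projection method for A x = b, with rows sampled proportionally to their squared norms; a generic sketch (not the paper's kernel CCA formulation) is:

        import numpy as np

        def randomized_kaczmarz(A, b, n_iter=5000, seed=0):
            rng = np.random.default_rng(seed)
            x = np.zeros(A.shape[1])
            row_norms = (A ** 2).sum(axis=1)
            probs = row_norms / row_norms.sum()
            for _ in range(n_iter):
                i = rng.choice(A.shape[0], p=probs)           # pick a row
                x += (b[i] - A[i] @ x) / row_norms[i] * A[i]  # project onto it
            return x

        A = np.random.default_rng(1).normal(size=(200, 50))
        x_true = np.random.default_rng(2).normal(size=50)
        x_hat = randomized_kaczmarz(A, A @ x_true)
        print(np.linalg.norm(x_hat - x_true))                 # near zero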

  16. The Linux kernel as flexible product-line architecture

    NARCIS (Netherlands)

    M. de Jonge (Merijn)

    2002-01-01

    The Linux kernel source tree is huge (>125 MB) and inflexible (because it is difficult to add new kernel components). We propose to make this architecture more flexible by assembling kernel source trees dynamically from individual kernel components. Users can then select what

  17. Study of the convergence behavior of the complex kernel least mean square algorithm.

    Science.gov (United States)

    Paul, Thomas K; Ogunfunmi, Tokunbo

    2013-09-01

    The complex kernel least mean square (CKLMS) algorithm was recently derived to enable online kernel adaptive learning for complex data. Kernel adaptive methods can be used to find solutions in neural network and machine learning applications. The derivation of CKLMS involved the development of a modified Wirtinger calculus for Hilbert spaces to obtain the cost function gradient. We analyze the convergence of CKLMS with different kernel forms for complex data. The expressions obtained enable us to generate theory-predicted mean-square error curves that account for the circularity of the complex input signals and its effect on nonlinear learning. Simulations are used to verify the analysis results.
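
    For orientation, a bare-bones kernel LMS loop is sketched below. It applies a plain Gaussian kernel to complex samples, which conveys the flavor of the recursion; the CKLMS itself is derived with complexified kernels and Wirtinger calculus, which this sketch does not reproduce.

        import numpy as np

        def klms(inputs, desired, step=0.5, gamma=1.0):
            """Kernel LMS with a Gaussian kernel, written so that complex
            desired signals yield complex expansion coefficients.
            Returns the dictionary of centers and their coefficients."""
            def k(a, b):
                return np.exp(-gamma * np.abs(a - b) ** 2)
            centers, coeffs = [], []
            for u, d in zip(inputs, desired):
                y = sum(c * k(cen, u) for cen, c in zip(centers, coeffs))
                e = d - y                      # a-priori error on the new sample
                centers.append(u)              # grow the kernel expansion
                coeffs.append(step * e)
            return centers, coeffs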

  18. SU-F-SPS-09: Parallel MC Kernel Calculations for VMAT Plan Improvement

    International Nuclear Information System (INIS)

    Chamberlain, S; French, S; Nazareth, D

    2016-01-01

    Purpose: Adding kernels (small perturbations in leaf positions) to the existing apertures of VMAT control points may improve plan quality. We investigate the calculation of kernel doses using a parallelized Monte Carlo (MC) method. Methods: A clinical prostate VMAT DICOM plan was exported from Eclipse. An arbitrary control point and leaf were chosen, and a modified MLC file was created, corresponding to the leaf position offset by 0.5 cm. The additional dose produced by this 0.5 cm × 0.5 cm kernel was calculated using the DOSXYZnrc component module of BEAMnrc. A range of particle history counts were run (varying from 3 × 10⁶ to 3 × 10⁷); each job was split among 1, 10, or 100 parallel processes. A particle count of 3 × 10⁶ was established as the lower bound because it provided the minimal accuracy level. Results: As expected, an increase in particle counts linearly increases run time. For the lowest particle count, the time varied from 30 hours for the single-processor run to 0.30 hours for the 100-processor run. Conclusion: Parallel processing of MC calculations in the EGS framework significantly decreases the time necessary for each kernel dose calculation. Particle counts lower than 1 × 10⁶ have too large an error to output accurate dose for a Monte Carlo kernel calculation. Future work will investigate increasing the number of parallel processes and optimizing run times for multiple kernel calculations.

  19. Semi-supervised weighted kernel clustering based on gravitational search for fault diagnosis.

    Science.gov (United States)

    Li, Chaoshun; Zhou, Jianzhong

    2014-09-01

    Supervised learning methods, like the support vector machine (SVM), have been widely applied to diagnosing known faults; however, such methods fail when a new or unknown fault occurs. Traditional unsupervised kernel clustering can be used for unknown fault diagnosis, but it cannot exploit historical classification information to improve diagnosis accuracy. In this paper, a semi-supervised kernel clustering model is designed to diagnose known and unknown faults. First, a novel semi-supervised weighted kernel clustering algorithm based on gravitational search (SWKC-GS) is proposed for clustering datasets composed of labeled and unlabeled fault samples. The clustering model of SWKC-GS is defined in terms of the misclassification rate on labeled samples and a fuzzy clustering index on the whole dataset. The gravitational search algorithm (GSA) is used to solve the clustering model, with cluster centers, feature weights, and the kernel function parameter selected as optimization variables. New fault samples are then identified and diagnosed by calculating the weighted kernel distance between them and the fault cluster centers. If the fault samples are unknown, they are added to the historical dataset, and SWKC-GS is used to partition the mixed dataset and update the clustering results for diagnosing the new fault. In experiments, the proposed method was applied to fault diagnosis of rotary bearings, and SWKC-GS was compared not only with traditional clustering methods but also with SVM and neural networks for known fault diagnosis. In addition, the proposed method was also applied to unknown fault diagnosis. The results show the effectiveness of the proposed method in achieving the expected diagnosis accuracy for both known and unknown faults of rotary bearings. Copyright © 2014 ISA. Published by Elsevier Ltd. All rights reserved.

  20. Noise reduction by support vector regression with a Ricker wavelet kernel

    International Nuclear Information System (INIS)

    Deng, Xiaoying; Yang, Dinghui; Xie, Jing

    2009-01-01

    We propose a noise filtering technology based on the least-squares support vector regression (LS-SVR), to improve the signal-to-noise ratio (SNR) of seismic data. We modified it by using an admissible support vector (SV) kernel, namely the Ricker wavelet kernel, to replace the conventional radial basis function (RBF) kernel in seismic data processing. We investigated the selection of the regularization parameter for the LS-SVR and derived a concise selecting formula directly from the noisy data. We used the proposed method for choosing the regularization parameter which not only had the advantage of high speed but could also obtain almost the same effectiveness as an optimal parameter method. We conducted experiments using synthetic data corrupted by the random noise of different types and levels, and found that our method was superior to the wavelet transform-based approach and the Wiener filtering. We also applied the method to two field seismic data sets and concluded that it was able to effectively suppress the random noise and improve the data quality in terms of SNR.
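
    A minimal sketch of the approach, assuming a 1-D Ricker (Mexican-hat) wavelet kernel and hand-picked hyperparameters; the paper's data-driven formula for the regularization parameter is not reproduced here:

        import numpy as np

        def ricker_kernel(x1, x2, sigma=1.0):
            """Ricker (Mexican-hat) wavelet kernel for 1-D sample positions."""
            d2 = (x1[:, None] - x2[None, :]) ** 2
            return (1.0 - d2 / sigma**2) * np.exp(-d2 / (2.0 * sigma**2))

        def lssvr_fit(x, y, gamma=100.0, sigma=1.0):
            """Least-squares SVR: one linear system instead of a QP."""
            n = len(x)
            K = ricker_kernel(x, x, sigma)
            A = np.zeros((n + 1, n + 1))
            A[0, 1:], A[1:, 0] = 1.0, 1.0
            A[1:, 1:] = K + np.eye(n) / gamma
            rhs = np.concatenate(([0.0], y))
            sol = np.linalg.solve(A, rhs)
            b, alpha = sol[0], sol[1:]
            return lambda xq: ricker_kernel(xq, x, sigma) @ alpha + b

    Evaluating the fitted model at the original sample positions yields the denoised trace.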

  2. Accurate convolution/superposition for multi-resolution dose calculation using cumulative tabulated kernels

    International Nuclear Information System (INIS)

    Lu Weiguo; Olivera, Gustavo H; Chen Mingli; Reckwerdt, Paul J; Mackie, Thomas R

    2005-01-01

    Convolution/superposition (C/S) is regarded as the standard dose calculation method in most modern radiotherapy treatment planning systems. Different implementations of C/S could result in significantly different dose distributions. This paper addresses two major implementation issues associated with collapsed cone C/S: one is how to utilize tabulated kernels instead of analytical parametrizations, and the other is how to deal with voxel size effects. Three methods that utilize the tabulated kernels are presented in this paper. These methods differ in the effective kernels used: the differential kernel (DK), the cumulative kernel (CK) or the cumulative-cumulative kernel (CCK). They result in slightly different computation times but significantly different voxel size effects. Both simulated and real multi-resolution dose calculations are presented. For simulation tests, we use arbitrary kernels and various voxel sizes with a homogeneous phantom, and assume forward energy transportation only. Simulations with voxel sizes up to 1 cm show that the CCK algorithm has errors within 0.1% of the maximum gold standard dose. Real dose calculations use a heterogeneous slab phantom with both the 'broad' (5 × 5 cm²) and the 'narrow' (1.2 × 1.2 cm²) tomotherapy beams. Various voxel sizes (0.5 mm, 1 mm, 2 mm, 4 mm and 8 mm) are used for dose calculations. The results show that all three algorithms have negligible differences (0.1%) for dose calculation at fine resolution (0.5 mm voxels), but the differences become significant when the voxel size increases. For the DK or CK algorithm in the broad (narrow) beam dose calculation, the dose differences between the 0.5 mm voxels and voxels up to 8 mm (4 mm) are around 10% (7%) of the maximum dose. For the broad (narrow) beam dose calculation using the CCK algorithm, the dose differences between the 0.5 mm voxels and voxels up to 8 mm (4 mm) are around 1% of the maximum dose. Among all three methods, the CCK algorithm is thus the least sensitive to voxel size effects.
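
    In one dimension along a cone ray, the relation between the three effective kernels described above can be written compactly (a sketch from the abstract's description, with DK the tabulated differential kernel):

        CK(x) = \int_0^{x} DK(t)\, dt, \qquad CCK(x) = \int_0^{x} CK(t)\, dt

        \frac{1}{x_2 - x_1} \int_{x_1}^{x_2} CK(t)\, dt \;=\; \frac{CCK(x_2) - CCK(x_1)}{x_2 - x_1}

    Because differences of CCK average the cumulative kernel exactly over a voxel's extent, the CCK algorithm remains accurate as the voxel size grows, consistent with the reported ~1% versus ~10% differences.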

  3. GRIM : Leveraging GPUs for Kernel integrity monitoring

    NARCIS (Netherlands)

    Koromilas, Lazaros; Vasiliadis, Giorgos; Athanasopoulos, Ilias; Ioannidis, Sotiris

    2016-01-01

    Kernel rootkits can exploit an operating system and enable future accessibility and control, despite all recent advances in software protection. A promising defense mechanism against rootkits is Kernel Integrity Monitor (KIM) systems, which inspect the kernel text and data to discover any malicious modifications.

  4. 7 CFR 51.2296 - Three-fourths half kernel.

    Science.gov (United States)

    2010-01-01

    Section 51.2296 of the Agriculture Regulations (7 CFR, Department of Agriculture, Agricultural Marketing Service standards) defines the term: Three-fourths half kernel means a portion of a half of a kernel which has more than...

  5. Nonparametric evaluation of dynamic disease risk: a spatio-temporal kernel approach.

    Directory of Open Access Journals (Sweden)

    Zhijie Zhang

    Quantifying the distributions of disease risk in space and time jointly is a key element for understanding spatio-temporal phenomena, and it has the potential to enhance our understanding of epidemiologic trajectories. However, most studies to date have neglected the time dimension and focus instead on the "average" spatial pattern of disease risk, thereby masking time trajectories of disease risk. In this study we propose a new approach, spatio-temporal kernel density estimation (stKDE), that employs hybrid kernel (i.e., weight) functions to evaluate spatio-temporal disease risks. This approach not only makes full use of the sample data but also "borrows" information from neighboring points in both space and time via an appropriate choice of kernel functions. Monte Carlo simulations show that the proposed method performs substantially better than the traditional (i.e., frequency-based) kernel density estimation (trKDE), which has been used in applied settings, and two illustrative examples demonstrate that the proposed approach can yield superior results compared to the popular trKDE approach. In addition, there are various possibilities for improving and extending this method.
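
    A minimal product-kernel version of the idea can be sketched as follows; the hybrid weight functions of the paper are more elaborate, so treat the Gaussian choices and bandwidths here as placeholder assumptions:

        import numpy as np

        def st_kde(events, query_xy, query_t, hs=1.0, ht=30.0):
            """Spatio-temporal kernel density estimate at one query point.
            events: array of shape (n, 3) holding (x, y, t) per case.
            A Gaussian kernel in space is multiplied by a Gaussian kernel
            in time, so cases nearby in both space and time dominate."""
            d2 = np.sum((events[:, :2] - query_xy) ** 2, axis=1)
            dt2 = (events[:, 2] - query_t) ** 2
            w = np.exp(-d2 / (2 * hs**2)) * np.exp(-dt2 / (2 * ht**2))
            norm = 2 * np.pi * hs**2 * np.sqrt(2 * np.pi) * ht * len(events)
            return w.sum() / norm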

  6. Adaptive Kernel in Meshsize Boosting Algorithm in KDE ...

    African Journals Online (AJOL)

    This paper proposes the use of an adaptive kernel in a meshsize boosting algorithm for kernel density estimation. The algorithm is a bias reduction scheme, like other existing schemes, but uses an adaptive kernel instead of the regular fixed kernels. An empirical study of this scheme is conducted and the findings are comparatively ...

  7. Multiple Kernel Learning with Random Effects for Predicting Longitudinal Outcomes and Data Integration

    Science.gov (United States)

    Chen, Tianle; Zeng, Donglin

    2015-01-01

    Predicting disease risk and progression is one of the main goals in many clinical research studies. Cohort studies on the natural history and etiology of chronic diseases span years and data are collected at multiple visits. Although kernel-based statistical learning methods are proven to be powerful for a wide range of disease prediction problems, these methods are only well studied for independent data but not for longitudinal data. It is thus important to develop time-sensitive prediction rules that make use of the longitudinal nature of the data. In this paper, we develop a novel statistical learning method for longitudinal data by introducing subject-specific short-term and long-term latent effects through a designed kernel to account for within-subject correlation of longitudinal measurements. Since the presence of multiple sources of data is increasingly common, we embed our method in a multiple kernel learning framework and propose a regularized multiple kernel statistical learning with random effects to construct effective nonparametric prediction rules. Our method allows easy integration of various heterogeneous data sources and takes advantage of correlation among longitudinal measures to increase prediction power. We use different kernels for each data source taking advantage of the distinctive feature of each data modality, and then optimally combine data across modalities. We apply the developed methods to two large epidemiological studies, one on Huntington's disease and the other on Alzheimer's Disease (Alzheimer's Disease Neuroimaging Initiative, ADNI) where we explore a unique opportunity to combine imaging and genetic data to study prediction of mild cognitive impairment, and show a substantial gain in performance while accounting for the longitudinal aspect of the data. PMID:26177419

  8. The development of the production process for the thorium/uranium dicarbide fuel kernels for the first charge of the Dragon Reactor

    International Nuclear Information System (INIS)

    Burnett, R.C.; Hankart, L.J.; Horsley, G.W.

    1965-05-01

    The development of methods of producing spheroidal sintered porous kernels of hyperstoichiometric thorium/uranium dicarbide solid solution from thorium/uranium monocarbide/carbon and thoria/urania/carbon powder mixes is described. The work has involved study of (i) methods of preparing green kernels from UC/Th/C powder mixes using the rotary sieve technique; (ii) methods of producing green kernels from UO2/ThO2/C powder mixes using the planetary mill technique; (iii) the conversion by appropriate heat treatment of green kernels produced by both routes to sintered porous kernels of thorium/uranium carbide; (iv) the efficiency of the processes. (author)

  9. Antidiarrhoeal efficacy of Mangifera indica seed kernel on Swiss albino mice.

    Science.gov (United States)

    Rajan, S; Suganya, H; Thirunalasundari, T; Jeeva, S

    2012-08-01

    The objective was to examine the antidiarrhoeal activity of alcoholic and aqueous seed kernel extracts of Mangifera indica (M. indica) against castor oil-induced diarrhoea in Swiss albino mice. Mango seed kernels were processed and extracted using alcohol and water. Antidiarrhoeal activity of the extracts was assessed using intestinal motility and faecal score methods. Aqueous and alcoholic extracts of M. indica significantly reduced intestinal motility and faecal scores in Swiss albino mice. The present study supports the traditional claim on the use of M. indica seed kernel for treating diarrhoea in southern parts of India. Copyright © 2012 Hainan Medical College. Published by Elsevier B.V. All rights reserved.

  10. Kernel Learning of Histogram of Local Gabor Phase Patterns for Face Recognition

    Directory of Open Access Journals (Sweden)

    Bineng Zhong

    2008-06-01

    This paper proposes a new face recognition method, named kernel learning of histogram of local Gabor phase patterns (K-HLGPP), which is based on Daugman's method for iris recognition and the local XOR pattern (LXP) operator. Unlike traditional Gabor usage exploiting the magnitude part in face recognition, we encode the Gabor phase information for face classification by the quadrant bit coding (QBC) method. Two schemes are proposed for face recognition. One is based on the nearest-neighbor classifier with chi-square as the similarity measure, and the other applies kernel discriminant analysis to HLGPP (K-HLGPP) using histogram intersection and Gaussian-weighted chi-square kernels. The comparative experiments show that K-HLGPP achieves a higher recognition rate than other well-known face recognition systems on the large-scale standard FERET, FERET200, and CAS-PEAL-R1 databases.

  11. Method for pulse to pulse dose reproducibility applied to electron linear accelerators

    International Nuclear Information System (INIS)

    Ighigeanu, D.; Martin, D.; Oproiu, C.; Cirstea, E.; Craciun, G.

    2002-01-01

    An original method for obtaining programmed beam single shots and pulse trains with programmed pulse number, pulse repetition frequency, pulse duration and pulse dose is presented. It is particularly useful for automatic control of the absorbed dose rate level and of the irradiation process, as well as in pulse radiolysis studies, single pulse dose measurement, or research experiments where pulse-to-pulse dose reproducibility is required. This method is applied to the electron linear accelerators ALIN-10 (6.23 MeV, 82 W) and ALID-7 (5.5 MeV, 670 W), built in NILPRP. In order to implement this method, the accelerator triggering system (ATS) consists of two branches: the gun branch and the magnetron branch. The ATS, which synchronizes all the system units, delivers trigger pulses at a programmed repetition rate (up to 250 pulses/s) to the gun (80 kV, 10 A, 4 ms) and magnetron (45 kV, 100 A, 4 ms). The accelerated electron beam exists only when the electron gun and magnetron pulses overlap. The method consists in controlling the overlapping of the pulses in order to deliver the beam in the desired sequence. This control is implemented by a discrete pulse position modulation of the gun and/or magnetron pulses. The instabilities of the gun and magnetron transient regimes are avoided by operating the accelerator with no accelerated beam for a certain time. At the operator's 'beam start' command, the ATS controls the electron gun and magnetron pulse overlap and the linac beam is generated. The pulse-to-pulse absorbed dose variation is thus considerably reduced. Programmed absorbed dose, irradiation time, beam pulse number or other external events may interrupt the coincidence between the gun and magnetron pulses. Slow absorbed dose variation is compensated by control of the pulse duration and repetition frequency. Two methods are reported in the electron linear accelerators' development for obtaining pulse-to-pulse dose reproducibility: the method

  12. Uranium kernel formation via internal gelation

    International Nuclear Information System (INIS)

    Hunt, R.D.; Collins, J.L.

    2004-01-01

    In the 1970s and 1980s, the U.S. Department of Energy (DOE) conducted numerous studies on the fabrication of nuclear fuel particles using the internal gelation process. These amorphous kernels were prone to flaking or breaking when gases tried to escape from the kernels during calcination and sintering. These earlier kernels would not meet today's proposed specifications for reactor fuel. In the interim, the internal gelation process has been used to create hydrous metal oxide microspheres for the treatment of nuclear waste. With the renewed interest in advanced nuclear fuel by the DOE, the lessons learned from the nuclear waste studies were recently applied to the fabrication of uranium kernels, which will become tri-isotropic (TRISO) fuel particles. These process improvements included equipment modifications, small changes to the feed formulations, and a new temperature profile for the calcination and sintering. The modifications to the laboratory-scale equipment and its operation, as well as small changes to the feed composition, increased the product yield from 60% to 80%-99%. The new kernels were substantially less glassy, and no evidence of flaking was found. Finally, key process parameters were identified, and their effects on the uranium microspheres and kernels are discussed. (orig.)

  13. Depth-time interpolation of feature trends extracted from mobile microelectrode data with kernel functions.

    Science.gov (United States)

    Wong, Stephen; Hargreaves, Eric L; Baltuch, Gordon H; Jaggi, Jurg L; Danish, Shabbar F

    2012-01-01

    Microelectrode recording (MER) is necessary for precision localization of target structures such as the subthalamic nucleus during deep brain stimulation (DBS) surgery. Attempts to automate this process have produced quantitative temporal trends (feature activity vs. time) extracted from mobile MER data. Our goal was to evaluate computational methods of generating spatial profiles (feature activity vs. depth) from temporal trends that would decouple automated MER localization from the clinical procedure and enhance functional localization in DBS surgery. We evaluated two methods of interpolation (standard vs. kernel) that generated spatial profiles from temporal trends. We compared interpolated spatial profiles to true spatial profiles that were calculated with depth windows, using correlation coefficient analysis. Excellent approximation of true spatial profiles is achieved by interpolation. Kernel-interpolated spatial profiles produced superior correlation coefficient values at optimal kernel widths (r = 0.932-0.940) compared to standard interpolation (r = 0.891). The choice of kernel function and kernel width resulted in trade-offs in smoothing and resolution. Interpolation of feature activity to create spatial profiles from temporal trends is accurate and can standardize and facilitate MER functional localization of subcortical structures. The methods are computationally efficient, enhancing localization without imposing additional constraints on the MER clinical procedure during DBS surgery. Copyright © 2012 S. Karger AG, Basel.
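
    The kernel interpolation step can be illustrated with a Nadaraya-Watson style smoother; this is a generic sketch, not the authors' code, and the Gaussian kernel and bandwidth are placeholder choices:

        import numpy as np

        def kernel_profile(depths, feature, query_depths, h=0.25):
            """Gaussian-kernel (Nadaraya-Watson) interpolation of a feature
            trend onto a depth grid. Larger h smooths more but lowers depth
            resolution: the smoothing/resolution trade-off noted above."""
            d = query_depths[:, None] - depths[None, :]
            w = np.exp(-0.5 * (d / h) ** 2)
            return (w @ feature) / w.sum(axis=1)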

  14. Adaptive Shape Kernel-Based Mean Shift Tracker in Robot Vision System

    Directory of Open Access Journals (Sweden)

    Chunmei Liu

    2016-01-01

    This paper proposes an adaptive shape kernel-based mean shift tracker using a single static camera for a robot vision system. The question that we address is how to construct a kernel shape that is adaptive to the object shape. We apply a nonlinear manifold learning technique to obtain a low-dimensional shape space trained on data with the same view as the tracking video. The proposed kernel searches the shape in the low-dimensional shape space and constructs the adaptive kernel shape in the high-dimensional shape space. This improves mean shift tracker performance in tracking object position and object contour and avoids background clutter. In the experimental part, we take a walking human as an example to validate that our method is accurate and robust in tracking human position and describing the human contour.

  16. Association test based on SNP set: logistic kernel machine based test vs. principal component analysis.

    Directory of Open Access Journals (Sweden)

    Yang Zhao

    GWAS has greatly facilitated the discovery of risk SNPs associated with complex diseases. Traditional methods analyze SNPs individually and are limited by low power and reproducibility, since correction for multiple comparisons is necessary. Several methods have been proposed based on grouping SNPs into SNP sets using biological knowledge and/or genomic features. In this article, we compare the linear kernel machine based test (LKM) and the principal components analysis based approach (PCA) using simulated datasets under scenarios of 0 to 3 causal SNPs, as well as simple and complex linkage disequilibrium (LD) structures of the simulated regions. Our simulation study demonstrates that both LKM and PCA can control the type I error at the significance level of 0.05. If the causal SNP is in strong LD with the genotyped SNPs, both the PCA with a small number of principal components (PCs) and the LKM with a linear or identity-by-state (IBS) kernel are valid tests. However, if the LD structure is complex, with several LD blocks in the SNP set, or when the causal SNP is not in the LD block in which most of the genotyped SNPs reside, more PCs should be included to capture the information of the causal SNP. Simulation studies also demonstrate the ability of LKM and PCA to combine information from multiple causal SNPs and to provide increased power over individual SNP analysis. We also apply LKM and PCA to analyze two SNP sets extracted from an actual GWAS dataset on non-small cell lung cancer.
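
    The two kernels compared above are simple to construct from an n × p genotype matrix G coded as 0/1/2 minor allele counts. A minimal generic sketch, not tied to any particular package:

        import numpy as np

        def linear_kernel(G):
            """Linear kernel from an n x p genotype matrix (0/1/2 coding)."""
            return G @ G.T

        def ibs_kernel(G):
            """Identity-by-state kernel: average allele sharing per pair of
            individuals; entries lie in [0, 1]."""
            n, p = G.shape
            K = np.zeros((n, n))
            for i in range(n):
                # IBS count per SNP is 2 - |g_i - g_j|, summed over p SNPs
                K[i] = np.sum(2 - np.abs(G[i] - G), axis=1) / (2.0 * p)
            return K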

  17. An Adaptive Genetic Association Test Using Double Kernel Machines.

    Science.gov (United States)

    Zhan, Xiang; Epstein, Michael P; Ghosh, Debashis

    2015-10-01

    Recently, gene set-based approaches have become very popular in gene expression profiling studies for assessing how genetic variants are related to disease outcomes. Since most genes are not differentially expressed, existing pathway tests considering all genes within a pathway suffer from considerable noise and power loss. Moreover, for a differentially expressed pathway, it is of interest to select important genes that drive the effect of the pathway. In this article, we propose an adaptive association test using double kernel machines (DKM), which can both select important genes within the pathway as well as test for the overall genetic pathway effect. This DKM procedure first uses the garrote kernel machines (GKM) test for the purposes of subset selection and then the least squares kernel machine (LSKM) test for testing the effect of the subset of genes. An appealing feature of the kernel machine framework is that it can provide a flexible and unified method for multi-dimensional modeling of the genetic pathway effect allowing for both parametric and nonparametric components. This DKM approach is illustrated with application to simulated data as well as to data from a neuroimaging genetics study.

  18. Quantum tomography, phase-space observables and generalized Markov kernels

    International Nuclear Information System (INIS)

    Pellonpää, Juha-Pekka

    2009-01-01

    We construct a generalized Markov kernel which transforms the observable associated with the homodyne tomography into a covariant phase-space observable with a regular kernel state. Illustrative examples are given in the cases of a 'Schroedinger cat' kernel state and the Cahill-Glauber s-parametrized distributions. Also we consider an example of a kernel state when the generalized Markov kernel cannot be constructed.

  19. Calculation of dose point kernels for five radionuclides used in radio-immunotherapy

    International Nuclear Information System (INIS)

    Okigaki, S.; Ito, A.; Uchida, I.; Tomaru, T.

    1994-01-01

    With the recent interest in radioimmunotherapy, attention has been given to the calculation of dose distributions from beta rays and monoenergetic electrons in tissue. The dose distribution around a point source of a beta-ray-emitting radioisotope is referred to as a beta dose point kernel. Beta dose point kernels for five radionuclides appropriate for radioimmunotherapy, ¹³¹I, ¹⁸⁶Re, ³²P, ¹⁸⁸Re, and ⁹⁰Y, are calculated by the Monte Carlo method using the EGS4 code system. The present results are compared with published experimental data and other calculations. The accuracy and precision of the beta dose point kernels are discussed. (author)

  20. Introducing etch kernels for efficient pattern sampling and etch bias prediction

    Science.gov (United States)

    Weisbuch, François; Lutich, Andrey; Schatz, Jirka

    2018-01-01

    Successful patterning requires good control of the photolithography and etch processes. While compact litho models, mainly based on rigorous physics, can predict very well the contours printed in photoresist, purely empirical etch models are less accurate and more unstable. Compact etch models are based on geometrical kernels to compute the litho-etch biases that measure the distance between litho and etch contours. The definition of the kernels, as well as the choice of calibration patterns, is critical to obtaining a robust etch model. This work proposes a set of independent and anisotropic etch kernels ("internal", "external", "curvature", "Gaussian", "z_profile") designed to represent the finest details of the resist geometry and to characterize precisely the etch bias at any point along a resist contour. By evaluating the etch kernels on various structures, it is possible to map their etch signatures in a multidimensional space and analyze them to find an optimal sampling of structures. The etch kernels evaluated on these structures were combined with experimental etch bias derived from scanning electron microscope contours to train artificial neural networks to predict etch bias. The method, applied to contact and line/space layers, shows an improvement in prediction accuracy over a standard etch model. This work emphasizes the importance of the etch kernel definition for characterizing and predicting complex etch effects.

  1. Delimiting areas of endemism through kernel interpolation.

    Science.gov (United States)

    Oliveira, Ubirajara; Brescovit, Antonio D; Santos, Adalberto J

    2015-01-01

    We propose a new approach for the identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. The approach estimates the overlap between species distributions through a kernel interpolation of the centroids of species distributions, with areas of influence defined from the distance between the centroid and the farthest point of occurrence of each species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified by each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil. GIE proved effective in identifying areas of endemism at multiple scales, with fuzzy edges and supported by more synendemic species than the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes can be responsible for the origin and maintenance of these biogeographic units.

  3. Fault Localization for Synchrophasor Data using Kernel Principal Component Analysis

    Directory of Open Access Journals (Sweden)

    CHEN, R.

    2017-11-01

    In this paper, a nonlinear method for fault location in complex power systems is proposed, based on Kernel Principal Component Analysis (KPCA) of Phasor Measurement Unit (PMU) data. Resorting to the scaling factor, the derivative for a polynomial kernel is obtained. Then, the contribution of each variable to the T² statistic is derived to determine whether a bus is the faulty component. Compared to previous Principal Component Analysis (PCA) based methods, the new version can cope with strong nonlinearity and provide precise identification of the fault location. Computer simulations are conducted to demonstrate the improved performance of the proposed method in recognizing the faulty component and evaluating its propagation across the system.
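
    The KPCA core of such a method (kernel matrix, double centering, eigendecomposition) is shown below as a generic sketch; the paper's scaled-derivative and T² contribution analysis are not reproduced:

        import numpy as np

        def kernel_pca(X, degree=2, n_components=2):
            """Kernel PCA with a polynomial kernel: build the kernel
            matrix, double-center it in feature space, eigendecompose."""
            K = (1.0 + X @ X.T) ** degree
            n = K.shape[0]
            J = np.ones((n, n)) / n
            Kc = K - J @ K - K @ J + J @ K @ J      # double centering
            vals, vecs = np.linalg.eigh(Kc)
            idx = np.argsort(vals)[::-1][:n_components]
            # Scores of the training samples on the leading components
            return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))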

  4. Integral equations with contrasting kernels

    Directory of Open Access Journals (Sweden)

    Theodore Burton

    2008-01-01

    In this paper we study integral equations of the form $x(t)=a(t)-\int^t_0 C(t,s)x(s)\,ds$ with sharply contrasting kernels typified by $C^*(t,s)=\ln(e+(t-s))$ and $D^*(t,s)=[1+(t-s)]^{-1}$. The kernel assigns a weight to $x(s)$ and these kernels have exactly opposite effects of weighting. Each type is well represented in the literature. Our first project is to show that for $a\in L^2[0,\infty)$, solutions are largely indistinguishable regardless of which kernel is used. This is a surprise and it leads us to study the essential differences. In fact, those differences become large as the magnitude of $a(t)$ increases. The form of the kernel alone projects necessary conditions concerning the magnitude of $a(t)$ which could result in bounded solutions. Thus, the next project is to determine how close we can come to proving that the necessary conditions are also sufficient. The third project is to show that solutions will be bounded for given conditions on $C$ regardless of whether $a$ is chosen large or small; this is important in real-world problems since we would like to have $a(t)$ as the sum of a bounded but badly behaved function and a large well-behaved function.
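
    Solutions under the two contrasting kernels can be compared numerically with a standard trapezoidal discretization of the Volterra equation; a minimal sketch:

        import numpy as np

        def solve_volterra(a, C, T=50.0, n=2000):
            """Trapezoidal-rule solution of x(t) = a(t) - int_0^t C(t,s) x(s) ds."""
            h = T / n
            t = np.linspace(0.0, T, n + 1)
            x = np.zeros(n + 1)
            x[0] = a(t[0])
            for i in range(1, n + 1):
                # trapezoid weights: 1/2 at s = 0, 1 in the interior;
                # the implicit x[i] term is moved to the denominator
                s = t[:i]
                w = np.ones(i); w[0] = 0.5
                integral = h * np.sum(w * C(t[i], s) * x[:i])
                x[i] = (a(t[i]) - integral) / (1.0 + 0.5 * h * C(t[i], t[i]))
            return t, x

        # The paper's two contrasting kernels:
        C_star = lambda t, s: np.log(np.e + (t - s))
        D_star = lambda t, s: 1.0 / (1.0 + (t - s))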

  5. COMPARISON OF PARTIAL LEAST SQUARES REGRESSION METHOD ALGORITHMS: NIPALS AND PLS-KERNEL AND AN APPLICATION

    Directory of Open Access Journals (Sweden)

    ELİF BULUT

    2013-06-01

    Partial Least Squares Regression (PLSR) is a multivariate statistical method that combines partial least squares and multiple linear regression analysis. Explanatory variables X exhibiting multicollinearity are reduced to components that explain a large amount of the covariance between the explanatory and response variables. These components are few in number and free of the multicollinearity problem. Multiple linear regression analysis is then applied to those components to model the response variable Y. There are various PLSR algorithms. In this study, the NIPALS and PLS-Kernel algorithms are studied and illustrated on a real data set.
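
    For reference, the NIPALS iteration for PLS1 (a single response) reduces to one pass per component; a minimal sketch on centered data:

        import numpy as np

        def nipals_pls1(X, y, n_components=2):
            """NIPALS for PLS1 (single response). X (n x p) and y (n,)
            are assumed centered. Returns scores T, X-loadings P,
            weights W and y-loadings q, one row per component."""
            X, y = X.astype(float).copy(), y.astype(float).copy()
            T, P, W, q = [], [], [], []
            for _ in range(n_components):
                w = X.T @ y
                w /= np.linalg.norm(w)       # weight vector (PLS1 needs
                t = X @ w                    # no inner iteration)
                p = X.T @ t / (t @ t)        # X loadings
                qk = (y @ t) / (t @ t)       # y loading
                X -= np.outer(t, p)          # deflate X
                y -= qk * t                  # deflate y
                T.append(t); P.append(p); W.append(w); q.append(qk)
            return [np.array(v) for v in (T, P, W, q)]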

  6. The Classification of Diabetes Mellitus Using Kernel k-means

    Science.gov (United States)

    Alamsyah, M.; Nafisah, Z.; Prayitno, E.; Afida, A. M.; Imah, E. M.

    2018-01-01

    Diabetes mellitus is a metabolic disorder characterized by chronically elevated blood glucose. Automatic detection of diabetes mellitus is still challenging. This study detected diabetes mellitus using the kernel k-means algorithm, which was developed from the k-means algorithm. Kernel k-means uses kernel learning and is therefore able to handle data that are not linearly separable, which distinguishes it from common k-means. The performance of kernel k-means in detecting diabetes mellitus is also compared with the SOM algorithm. The experimental results show that kernel k-means performs well, and considerably better than SOM.
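
    Kernel k-means needs only a precomputed kernel matrix, since squared distances to cluster means in feature space unfold entirely in terms of kernel entries. A minimal sketch with random initialization:

        import numpy as np

        def kernel_kmeans(K, k, n_iter=100, seed=0):
            """Lloyd-style kernel k-means on a precomputed kernel matrix K.
            Distances to cluster means are computed purely from K, so the
            means never need to exist in input space."""
            rng = np.random.default_rng(seed)
            n = K.shape[0]
            labels = rng.integers(0, k, size=n)
            for _ in range(n_iter):
                dist = np.zeros((n, k))
                for c in range(k):
                    idx = np.where(labels == c)[0]
                    if len(idx) == 0:
                        dist[:, c] = np.inf
                        continue
                    # ||phi(x_i) - m_c||^2 = K_ii - 2*mean_j K_ij + mean_jl K_jl
                    dist[:, c] = (np.diag(K)
                                  - 2.0 * K[:, idx].mean(axis=1)
                                  + K[np.ix_(idx, idx)].mean())
                new = dist.argmin(axis=1)
                if np.array_equal(new, labels):
                    break
                labels = new
            return labels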

  7. Reduction of Aflatoxins in Apricot Kernels by Electronic and Manual Color Sorting

    Directory of Open Access Journals (Sweden)

    Rosanna Zivoli

    2016-01-01

    The efficacy of color sorting in reducing aflatoxin levels in shelled apricot kernels was assessed. Naturally-contaminated kernels were submitted to an electronic optical sorter or blanched, peeled, and manually sorted to visually identify and sort discolored kernels (dark and spotted) from healthy ones. The samples obtained from the two sorting approaches were ground, homogenized, and analysed by HPLC-FLD for their aflatoxin content. A mass balance approach was used to measure the distribution of aflatoxins in the collected fractions. Aflatoxins B1 and B2 were identified and quantitated in all collected fractions at levels ranging from 1.7 to 22,451.5 µg/kg of AFB1 + AFB2, whereas AFG1 and AFG2 were not detected. Excellent results were obtained by manual sorting of peeled kernels, since the removal of discolored kernels (2.6%-19.9% of total peeled kernels) removed 97.3%-99.5% of total aflatoxins. The combination of peeling and visual/manual separation of discolored kernels is a feasible strategy to remove 97%-99% of the aflatoxins accumulated in naturally-contaminated samples. The electronic optical sorter gave highly variable results, since the amount of AFB1 + AFB2 measured in rejected fractions (15%-18% of total kernels) ranged from 13% to 59% of total aflatoxins. An improved immunoaffinity-based HPLC-FLD method having low limits of detection for the four aflatoxins (0.01-0.05 µg/kg) was developed and used to monitor the occurrence of aflatoxins in 47 commercial products containing apricot kernels and/or almonds commercialized in Italy. Low aflatoxin levels were found in 38% of the tested samples, ranging from 0.06 to 1.50 μg/kg for AFB1 and from 0.06 to 1.79 μg/kg for total aflatoxins.

  8. Parsimonious Wavelet Kernel Extreme Learning Machine

    Directory of Open Access Journals (Sweden)

    Wang Qin

    2015-11-01

    In this study, a parsimonious scheme for the wavelet kernel extreme learning machine (PWKELM) is introduced by combining wavelet theory and a parsimonious algorithm into the kernel extreme learning machine (KELM). Wavelet analysis uses bases that are localized in time and frequency to represent various signals effectively, so the wavelet kernel extreme learning machine (WKELM) maximizes the capability to capture the essential features in frequency-rich signals. The proposed parsimonious algorithm incorporates significant wavelet kernel functions iteratively by means of the Householder matrix, producing a sparse solution that eases the computational burden and improves numerical stability. The experimental results on a synthetic dataset and a gas furnace instance demonstrate that the proposed PWKELM is efficient and feasible in terms of improving generalization accuracy and real-time performance.

  9. Multiple Kernel Learning for adaptive graph regularized nonnegative matrix factorization

    KAUST Repository

    Wang, Jim Jing-Yan; AbdulJabbar, Mustafa Abdulmajeed

    2012-01-01

    Nonnegative Matrix Factorization (NMF) has been continuously evolving in areas like pattern recognition and information retrieval. It factorizes a matrix into a product of two low-rank non-negative matrices that define a parts-based, linear representation of non-negative data. Recently, Graph regularized NMF (GrNMF) was proposed to find a compact representation that uncovers the hidden semantics and simultaneously respects the intrinsic geometric structure. In GrNMF, an affinity graph is constructed from the original data space to encode the geometrical information. In this paper, we propose a novel idea that engages a Multiple Kernel Learning approach to refine the graph structure reflecting the factorization of the matrix and the new data space. GrNMF is improved by utilizing the graph refined by kernel learning, and a novel kernel learning method is then introduced under the GrNMF framework. Our approach shows encouraging results in comparison to state-of-the-art clustering algorithms such as NMF, GrNMF, and SVD.

  10. Anelastic sensitivity kernels with parsimonious storage for adjoint tomography and full waveform inversion

    KAUST Repository

    Komatitsch, Dimitri; Xie, Zhinan; Bozdağ, Ebru; de Andrade, Elliott Sales; Peter, Daniel; Liu, Qinya; Tromp, Jeroen

    2016-01-01

    We introduce a technique to compute exact anelastic sensitivity kernels in the time domain using parsimonious disk storage. The method is based on a reordering of the time loop of time-domain forward/adjoint wave propagation solvers combined with the use of a memory buffer. It avoids instabilities that occur when time-reversing dissipative wave propagation simulations. The total number of required time steps is unchanged compared to usual acoustic or elastic approaches. The cost is reduced by a factor of 4/3 compared to the case in which anelasticity is partially accounted for by accommodating the effects of physical dispersion. We validate our technique by performing a test in which we compare the Kα sensitivity kernel to the exact kernel obtained by saving the entire forward calculation. This benchmark confirms that our approach is also exact. We illustrate the importance of including full attenuation in the calculation of sensitivity kernels by showing significant differences with physical-dispersion-only kernels.

  13. Multiscale Support Vector Learning With Projection Operator Wavelet Kernel for Nonlinear Dynamical System Identification.

    Science.gov (United States)

    Lu, Zhao; Sun, Jing; Butts, Kenneth

    2016-02-03

    A giant leap has been made in the past couple of decades with the introduction of kernel-based learning as a mainstay for designing effective nonlinear computational learning algorithms. In view of the geometric interpretation of conditional expectation and the ubiquity of multiscale characteristics in highly complex nonlinear dynamic systems [1]-[3], this paper presents a new orthogonal projection operator wavelet kernel, aiming at developing an efficient computational learning approach for nonlinear dynamical system identification. In the framework of multiresolution analysis, the proposed projection operator wavelet kernel can fulfill the multiscale, multidimensional learning to estimate complex dependencies. The special advantage of the projection operator wavelet kernel developed in this paper lies in the fact that it has a closed-form expression, which greatly facilitates its application in kernel learning. To the best of our knowledge, it is the first closed-form orthogonal projection wavelet kernel reported in the literature. It provides a link between grid-based wavelets and mesh-free kernel-based methods. Simulation studies for identifying the parallel models of two benchmark nonlinear dynamical systems confirm its superiority in model accuracy and sparsity.

  14. Difference between standard and quasi-conformal BFKL kernels

    International Nuclear Information System (INIS)

    Fadin, V.S.; Fiore, R.; Papa, A.

    2012-01-01

    As was recently shown, the colour singlet BFKL kernel, taken in the Möbius representation in the space of impact parameters, can be written in a quasi-conformal shape, which is remarkably simple compared with the conventional form of the BFKL kernel in momentum space. It was also proved that the total kernel is completely defined by its Möbius representation. In this paper we calculate the difference between the standard and quasi-conformal BFKL kernels in momentum space and discover that it is rather simple. We therefore conclude that the simplicity of the quasi-conformal kernel is caused mainly by the use of the impact parameter space.

  15. Phenolic compounds and antioxidant activity of kernels and shells of Mexican pecan (Carya illinoinensis).

    Science.gov (United States)

    de la Rosa, Laura A; Alvarez-Parrilla, Emilio; Shahidi, Fereidoon

    2011-01-12

    The phenolic composition and antioxidant activity of pecan kernels and shells cultivated in three regions of the state of Chihuahua, Mexico, were analyzed. High concentrations of total extractable phenolics, flavonoids, and proanthocyanidins were found in kernels, and 5-20-fold higher concentrations were found in shells. Their concentrations were significantly affected by the growing region. Antioxidant activity was evaluated by ORAC, DPPH•, HO•, and ABTS•⁻ scavenging (TAC) methods. Antioxidant activity was strongly correlated with the concentrations of phenolic compounds. A strong correlation existed among the results obtained using these four methods. Five individual phenolic compounds were positively identified and quantified in kernels: ellagic, gallic, protocatechuic, and p-hydroxybenzoic acids and catechin. Only ellagic and gallic acids could be identified in shells. Seven phenolic compounds were tentatively identified in kernels by means of MS and UV spectral comparison, namely, protocatechuic aldehyde, (epi)gallocatechin, one gallic acid-glucose conjugate, three ellagic acid derivatives, and valoneic acid dilactone.

  16. Kernel maximum autocorrelation factor and minimum noise fraction transformations

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2010-01-01

    in hyperspectral HyMap scanner data covering a small agricultural area, and 3) maize kernel inspection. In the cases shown, the kernel MAF/MNF transformation performs better than its linear counterpart as well as linear and kernel PCA. The leading kernel MAF/MNF variates seem to possess the ability to adapt...

  17. Higher-Order Hybrid Gaussian Kernel in Meshsize Boosting Algorithm

    African Journals Online (AJOL)

    In this paper, we use a higher-order hybrid Gaussian kernel in a meshsize boosting algorithm for kernel density estimation. Bias reduction is guaranteed in this scheme, as in other existing schemes, but the higher-order hybrid Gaussian kernel is used instead of the regular fixed kernels. A numerical verification of this scheme ...

  18. Adaptive Kernel In The Bootstrap Boosting Algorithm In KDE ...

    African Journals Online (AJOL)

    This paper proposes the use of an adaptive kernel in a bootstrap boosting algorithm for kernel density estimation. The algorithm is a bias reduction scheme, like other existing schemes, but uses an adaptive kernel instead of the regular fixed kernels. An empirical study of this scheme is conducted and the findings are comparatively ...

  19. Windows Vista Kernel-Mode: Functions, Security Enhancements and Flaws

    Directory of Open Access Journals (Sweden)

    Mohammed D. ABDULMALIK

    2008-06-01

    Microsoft has made substantial enhancements to the kernel of the Microsoft Windows Vista operating system. Kernel improvements are significant because the kernel provides low-level operating system functions, including thread scheduling, interrupt and exception dispatching, multiprocessor synchronization, and a set of routines and basic objects. This paper describes some of the kernel security enhancements in the 64-bit edition of Windows Vista. We also point out some weak areas (flaws) that can be exploited by malicious software, leading to compromise of the kernel.

  20. Auto-associative Kernel Regression Model with Weighted Distance Metric for Instrument Drift Monitoring

    International Nuclear Information System (INIS)

    Shin, Ho Cheol; Park, Moon Ghu; You, Skin

    2006-01-01

    Recently, many on-line approaches to instrument channel surveillance (drift monitoring and fault detection) have been reported worldwide. An on-line monitoring (OLM) method evaluates instrument channel performance by assessing its consistency with other plant indications through parametric or non-parametric models. The heart of an OLM system is the model giving an estimate of the true process parameter value from individual measurements. This model provides a process parameter estimate calculated as a function of other plant measurements, which can be used to identify small sensor drifts that would require the sensor to be manually calibrated or replaced. This paper describes an improvement of auto-associative kernel regression (AAKR) obtained by introducing a correlation coefficient weighting on the kernel distances. The prediction performance of the developed method is compared with conventional auto-associative kernel regression.
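
    The AAKR estimate itself is a kernel-weighted average of fault-free memory vectors; the correlation-coefficient weighting proposed in the paper enters through the distance metric. A minimal sketch with a placeholder Gaussian bandwidth:

        import numpy as np

        def aakr(memory, query, h=1.0, signal_weights=None):
            """Auto-associative kernel regression: the estimate of the
            'true' signal vector is a kernel-weighted average of memory
            (fault-free training) vectors. signal_weights implements a
            weighted distance metric, e.g. correlation-based weights."""
            if signal_weights is None:
                signal_weights = np.ones(memory.shape[1])
            d2 = np.sum(signal_weights * (memory - query) ** 2, axis=1)
            w = np.exp(-d2 / (2.0 * h**2))        # Gaussian kernel weights
            return (w @ memory) / w.sum()         # corrected signal estimate

    The residual between a query observation and its AAKR estimate is what gets monitored for drift.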

  1. Analyzing kernel matrices for the identification of differentially expressed genes.

    Directory of Open Access Journals (Sweden)

    Xiao-Lei Xia

    One of the most important applications of microarray data is the class prediction of biological samples. For this purpose, statistical tests have often been applied to identify the differentially expressed genes (DEGs), followed by the employment of state-of-the-art learning machines, including the Support Vector Machine (SVM) in particular. The SVM is a typical sample-based classifier whose performance comes down to how discriminant the samples are. However, DEGs identified by statistical tests are not guaranteed to result in a training dataset composed of discriminant samples. To tackle this problem, a novel gene ranking method, the Kernel Matrix Gene Selection (KMGS), is proposed. The rationale of the method, which is rooted in the fundamental ideas of the SVM algorithm, is described. The notion of "the separability of a sample", which is estimated by performing [Formula: see text]-like statistics on each column of the kernel matrix, is first introduced. The separability of a classification problem is then measured, from which the significance of a specific gene is deduced. Also described is a method of Kernel Matrix Sequential Forward Selection (KMSFS), which shares the KMGS method's essential ideas but proceeds in a greedy manner. On three public microarray datasets, our proposed algorithms achieved noticeably competitive performance in terms of the B.632+ error rate.

  2. Predictive analysis and mapping of indoor radon concentrations in a complex environment using kernel estimation: An application to Switzerland

    Energy Technology Data Exchange (ETDEWEB)

    Kropat, Georg, E-mail: georg.kropat@chuv.ch [Institute of Radiation Physics, Lausanne University Hospital, Rue du Grand-Pré 1, 1007 Lausanne (Switzerland); Bochud, Francois [Institute of Radiation Physics, Lausanne University Hospital, Rue du Grand-Pré 1, 1007 Lausanne (Switzerland); Jaboyedoff, Michel [Faculty of Geosciences and Environment, University of Lausanne, GEOPOLIS — 3793, 1015 Lausanne (Switzerland); Laedermann, Jean-Pascal [Institute of Radiation Physics, Lausanne University Hospital, Rue du Grand-Pré 1, 1007 Lausanne (Switzerland); Murith, Christophe; Palacios, Martha [Swiss Federal Office of Public Health, Schwarzenburgstrasse 165, 3003 Berne (Switzerland); Baechler, Sébastien [Institute of Radiation Physics, Lausanne University Hospital, Rue du Grand-Pré 1, 1007 Lausanne (Switzerland); Swiss Federal Office of Public Health, Schwarzenburgstrasse 165, 3003 Berne (Switzerland)

    2015-02-01

    Purpose: The aim of this study was to develop models based on kernel regression and probability estimation in order to predict and map IRC in Switzerland by taking into account all of the following: architectural factors, spatial relationships between the measurements, as well as geological information. Methods: We looked at about 240 000 IRC measurements carried out in about 150 000 houses. As predictor variables we included building type, foundation type, year of construction, detector type, geographical coordinates, altitude, temperature and lithology in the kernel estimation models. We developed predictive maps as well as a map of the local probability to exceed 300 Bq/m³. Additionally, we developed a map of a confidence index in order to estimate the reliability of the probability map. Results: Our models were able to explain 28% of the variations of IRC data. All variables added information to the model. The model estimation revealed a bandwidth for each variable, making it possible to characterize the influence of each variable on the IRC estimation. Furthermore, we assessed the mapping characteristics of kernel estimation overall as well as by municipality. Overall, our model reproduces spatial IRC patterns which were already obtained earlier. On the municipal level, we could show that our model accounts well for IRC trends within municipal boundaries. Finally, we found that different building characteristics result in different IRC maps. Maps corresponding to detached houses with concrete foundations indicate systematically smaller IRC than maps corresponding to farms with earth foundations. Conclusions: IRC mapping based on kernel estimation is a powerful tool to predict and analyze IRC on a large scale as well as on a local level. This approach makes it possible to develop tailor-made maps for different architectural elements and measurement conditions, while accounting for geological information and spatial relations between IRC measurements.

  3. A multiparametric automatic method to monitor long-term reproducibility in digital mammography: results from a regional screening programme.

    Science.gov (United States)

    Gennaro, G; Ballaminut, A; Contento, G

    2017-09-01

    This study aims to illustrate a multiparametric automatic method for monitoring the long-term reproducibility of digital mammography systems, and its application on a large scale. Twenty-five digital mammography systems employed within a regional screening programme were controlled weekly using the same type of phantom, whose images were analysed by an automatic software tool. To assess system reproducibility levels, 15 image quality indices (IQIs) were extracted and compared with the corresponding indices previously determined by a baseline procedure. The coefficients of variation (COVs) of the IQIs were used to assess the overall variability. A total of 2553 phantom images were collected from the 25 digital mammography systems from March 2013 to December 2014. Most of the systems showed excellent image quality reproducibility over the surveillance interval, with mean variability below 5%. The variability of each IQI was below 5%, with the exception of one index, associated with the smallest phantom objects (0.25 mm), which was below 10%. The method applied for reproducibility tests (multi-detail phantoms, an automatic cloud software tool measuring multiple image quality indices, and statistical process control) was proven to be effective, applicable on a large scale and suited to any type of digital mammography system. • Reproducibility of mammography image quality should be monitored by appropriate quality controls. • Use of automatic software tools allows image quality evaluation by multiple indices. • System reproducibility can be assessed by comparing current index values with baseline data. • Overall system reproducibility of modern digital mammography systems is excellent. • The method proposed and applied is cost-effective and easily scalable.

  4. Systems-based biological concordance and predictive reproducibility of gene set discovery methods in cardiovascular disease.

    Science.gov (United States)

    Azuaje, Francisco; Zheng, Huiru; Camargo, Anyela; Wang, Haiying

    2011-08-01

    The discovery of novel disease biomarkers is a crucial challenge for translational bioinformatics. Demonstrating both their classification power and their reproducibility across independent datasets is an essential requirement for assessing their potential clinical relevance. Small datasets and the multiplicity of putative biomarker sets may explain the lack of predictive reproducibility. Studies based on pathway-driven discovery approaches have suggested that, despite such discrepancies, the resulting putative biomarkers tend to be implicated in common biological processes. Investigations of this problem have mainly focused on datasets derived from cancer research. We investigated the predictive and functional concordance of five methods for discovering putative biomarkers in four independently-generated datasets from the cardiovascular disease domain. A diversity of biosignatures was identified by the different methods. However, we found strong biological process concordance between them, especially in the case of methods based on gene set analysis. With a few exceptions, we observed a lack of classification reproducibility when using independent datasets. Partial overlaps exist between our putative sets of biomarkers and those of the primary studies. Despite the observed limitations, pathway-driven or gene set analysis can predict potentially novel biomarkers and can jointly point to biomedically-relevant underlying molecular mechanisms. Copyright © 2011 Elsevier Inc. All rights reserved.

  5. NLO corrections to the Kernel of the BKP-equations

    Energy Technology Data Exchange (ETDEWEB)

    Bartels, J. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Fadin, V.S. [Budker Institute of Nuclear Physics, Novosibirsk (Russian Federation); Novosibirskij Gosudarstvennyj Univ., Novosibirsk (Russian Federation); Lipatov, L.N. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Petersburg Nuclear Physics Institute, Gatchina, St. Petersburg (Russian Federation); Vacca, G.P. [INFN, Sezione di Bologna (Italy)

    2012-10-02

    We present results for the NLO kernel of the BKP equations for composite states of three reggeized gluons in the Odderon channel, both in QCD and in N=4 SYM. The NLO kernel consists of the NLO BFKL kernel in the color octet representation and the connected 3→3 kernel, computed in the tree approximation.

  6. A Fast and Simple Graph Kernel for RDF

    NARCIS (Netherlands)

    de Vries, G.K.D.; de Rooij, S.

    2013-01-01

    In this paper we study a graph kernel for RDF based on constructing a tree for each instance and counting the number of paths in that tree. In our experiments this kernel shows comparable classification performance to the previously introduced intersection subtree kernel, but is significantly faster.

  7. An SVM model with hybrid kernels for hydrological time series

    Science.gov (United States)

    Wang, C.; Wang, H.; Zhao, X.; Xie, Q.

    2017-12-01

    Support Vector Machine (SVM) models have been widely applied to the forecast of climate/weather and its impact on other environmental variables such as hydrologic response to climate/weather. When using SVM, the choice of the kernel function plays the key role. Conventional SVM models mostly use one single type of kernel function, e.g., radial basis kernel function. Provided that there are several featured kernel functions available, each having its own advantages and drawbacks, a combination of these kernel functions may give more flexibility and robustness to SVM approach, making it suitable for a wide range of application scenarios. This paper presents such a linear combination of radial basis kernel and polynomial kernel for the forecast of monthly flowrate in two gaging stations using SVM approach. The results indicate significant improvement in the accuracy of predicted series compared to the approach with either individual kernel function, thus demonstrating the feasibility and advantages of such hybrid kernel approach for SVM applications.
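
    A hybrid kernel of this kind can be handed to an SVM implementation as a callable; the sketch below, with an assumed convex-combination weight and synthetic data standing in for lagged flow features, uses scikit-learn's SVR:

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

        def hybrid_kernel(X, Y, alpha=0.7, gamma=0.1, degree=2):
            # Convex combination of RBF and polynomial Gram matrices
            return (alpha * rbf_kernel(X, Y, gamma=gamma)
                    + (1 - alpha) * polynomial_kernel(X, Y, degree=degree))

        rng = np.random.default_rng(2)
        X = rng.uniform(size=(120, 3))     # stand-in for lagged flow/precipitation
        y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.05 * rng.normal(size=120)
        model = SVR(kernel=hybrid_kernel)  # scikit-learn accepts a callable kernel
        model.fit(X[:100], y[:100])
        print(model.score(X[100:], y[100:]))   # R^2 on held-out points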

  8. Reduced multiple empirical kernel learning machine.

    Science.gov (United States)

    Wang, Zhe; Lu, MingZhe; Gao, Daqi

    2015-02-01

    Multiple kernel learning (MKL) has been demonstrated to be flexible and effective in depicting heterogeneous data sources, since MKL can introduce multiple kernels rather than a single fixed kernel into applications. However, MKL incurs high time and space complexity in contrast to single kernel learning, which is not desirable in real-world applications. Meanwhile, the kernel mappings of MKL generally take two forms, implicit kernel mapping and empirical kernel mapping (EKM), of which the latter has attracted less attention. In this paper, we focus on MKL with the EKM, and propose a reduced multiple empirical kernel learning machine, named RMEKLM for short. To the best of our knowledge, this is the first work to reduce both the time and space complexity of MKL with EKM. Different from existing MKL, the proposed RMEKLM adopts the Gauss elimination technique to extract a set of feature vectors; it is validated that doing so does not lose much information of the original feature space. RMEKLM then uses the extracted feature vectors to span a reduced orthonormal subspace of the feature space, which is visualized in terms of its geometric structure. It can be demonstrated that the spanned subspace is isomorphic to the original feature space, which means that the dot product of two vectors in the original feature space is equal to that of the two corresponding vectors in the generated orthonormal subspace. More importantly, the proposed RMEKLM brings simpler computation and needs less storage space, especially in testing. Finally, the experimental results show that RMEKLM achieves efficient and effective performance in terms of both complexity and classification. The contributions of this paper can be given as follows: (1) by mapping the input space into an orthonormal subspace, the geometry of the generated subspace is visualized; (2) this paper first reduces both the time and space complexity of the EKM-based MKL; (3

  9. Kernel principal component analysis for change detection

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Morton, J.C.

    2008-01-01

    region acquired at two different time points. If change over time does not dominate the scene, the projection of the original two bands onto the second eigenvector will show change over time. In this paper a kernel version of PCA is used to carry out the analysis. Unlike ordinary PCA, kernel PCA...... with a Gaussian kernel successfully finds the change observations in a case where nonlinearities are introduced artificially....
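
    As a rough illustration of the idea (synthetic data, assumed kernel width; not the record's actual experiment), a Gaussian-kernel PCA on a two-band stack can flag change pixels through the second component:

        import numpy as np
        from sklearn.decomposition import KernelPCA

        rng = np.random.default_rng(3)
        band_t1 = rng.normal(size=2000)
        band_t2 = 0.9 * band_t1 + 0.1 * rng.normal(size=2000)  # mostly unchanged
        band_t2[:50] += 3.0                                    # injected changes
        X = np.column_stack([band_t1, band_t2])

        kpca = KernelPCA(n_components=2, kernel="rbf", gamma=0.5)
        scores = kpca.fit_transform(X)
        # Large magnitude on the second component flags candidate change pixels
        flagged = np.argsort(np.abs(scores[:, 1]))[::-1][:50]
        print(np.sum(flagged < 50), "of 50 injected changes flagged")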

  10. Intermediate Compound Preparation Using Modified External Gelation Method and Thermal Treatment Equipment Development for UCO Kernel

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Kyung Chai; Eom, Sung Ho; Kim, Yeon Ku; Yeo, Seoung Hwan; Kim, Young Min; Cho, Moon Sung [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2016-10-15

    VHTR (Very High Temperature Gas Reactor) fuel technology is being actively developed in the US, China, Japan, and Korea for a Next Generation Nuclear Plant (NGNP). The fuel concept of a VHTR is based on a spherical kernel of UO₂ or UCO, with multiple coating layers to create a gas-tight particle. The VHTR fuel particle in the US is based on microspheres containing UCO, a mixture of UO₂ and UC₂, coated with multiple carbon layers and a SiC layer. This was first prepared through an internal gelation method at ORNL in the late 1970s. This study presents the following: (1) C-ADU gel particles were prepared using a modified sol-gel process. The particles fabricated with the KAERI-established gelation and AWD processes showed good sphericity and no cracks were found on the surfaces. (2) A high temperature rotating furnace was designed and fabricated in our laboratory, with a maximum operation temperature of about 2000 °C. The furnace is equipped with a Mo crucible and a graphite heating system, and is now in operation. (3) Well-prepared C-ADU gel particles were converted into UCO compounds using the high temperature rotating furnace, and the physical properties of the UCO kernels will be analyzed.

  11. Enhanced gluten properties in soft kernel durum wheat

    Science.gov (United States)

    Soft kernel durum wheat is a relatively recent development (Morris et al. 2011 Crop Sci. 51:114). The soft kernel trait exerts profound effects on kernel texture, flour milling including break flour yield, milling energy, and starch damage, and dough water absorption (DWA). With the caveat of reduce...

  12. 7 CFR 981.61 - Redetermination of kernel weight.

    Science.gov (United States)

    2010-01-01

    7 CFR Part 981 (Almonds Grown in California), Order Regulating Handling, Volume Regulation, § 981.61 Redetermination of kernel weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds...

  13. Adaptive metric kernel regression

    DEFF Research Database (Denmark)

    Goutte, Cyril; Larsen, Jan

    2000-01-01

    Kernel smoothing is a widely used non-parametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this contribution, we propose an algorithm that adapts the input metric used in multivariate regression by minimising a cross-validation estimate of the generalisation error. This allows one to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms...

  14. a Comparison Study of Different Kernel Functions for Svm-Based Classification of Multi-Temporal Polarimetry SAR Data

    Science.gov (United States)

    Yekkehkhany, B.; Safari, A.; Homayouni, S.; Hasanlou, M.

    2014-10-01

    In this paper, a framework is developed based on Support Vector Machines (SVM) for crop classification using polarimetric features extracted from multi-temporal Synthetic Aperture Radar (SAR) imagery. The multi-temporal integration of data not only improves the overall retrieval accuracy but also provides more reliable estimates with respect to single-date data. Several kernel functions are employed and compared in this study for mapping the input space to a higher-dimensional Hilbert space. These kernel functions include linear, polynomial and Radial Basis Function (RBF) kernels. The method is applied to several UAVSAR L-band SAR images acquired over an agricultural area near Winnipeg, Manitoba, Canada. In this research, the temporal alpha features of the H/A/α decomposition method are used in classification. The experimental tests show that an SVM classifier with an RBF kernel for three dates of data increases the Overall Accuracy (OA) by up to 3% in comparison to using a linear kernel function, and by up to 1% in comparison to a 3rd degree polynomial kernel function.

  15. The Visualization and Analysis of POI Features under Network Space Supported by Kernel Density Estimation

    Directory of Open Access Journals (Sweden)

    YU Wenhao

    2015-01-01

    Full Text Available The distribution pattern and density of urban facility POIs are of great significance in the fields of infrastructure planning and urban spatial analysis. Kernel density estimation, which has usually been utilized for expressing these spatial characteristics, is superior to other density estimation methods (such as quadrat analysis or Voronoi-based methods), in that it considers regional impact based on the first law of geography. However, traditional kernel density estimation is mainly based on Euclidean space, ignoring the fact that the service function and interrelation of urban facilities operate over network path distance rather than conventional Euclidean distance. Hence, this research proposes a computational model for network kernel density estimation, together with an extension of the model to the case of added constraints. This work also discusses the impacts of the distance-attenuation threshold and the height extremum on the representation of kernel density. A large-scale experiment on real data, analyzing different POI distribution patterns (random, sparse, regional-intensive, and linear-intensive), examines the spatial distribution characteristics, influence factors, and service functions of POI infrastructure in the city.

  16. Stable Kernel Representations as Nonlinear Left Coprime Factorizations

    NARCIS (Netherlands)

    Paice, A.D.B.; Schaft, A.J. van der

    1994-01-01

    A representation of nonlinear systems based on the idea of representing the input-output pairs of the system as elements of the kernel of a stable operator has been recently introduced. This has been denoted the kernel representation of the system. In this paper it is demonstrated that the kernel

  17. 7 CFR 981.60 - Determination of kernel weight.

    Science.gov (United States)

    2010-01-01

    7 CFR Part 981 (Almonds Grown in California), Order Regulating Handling, Volume Regulation, § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which settlement...

  18. End-use quality of soft kernel durum wheat

    Science.gov (United States)

    Kernel texture is a major determinant of end-use quality of wheat. Durum wheat has very hard kernels. We developed soft kernel durum wheat via Ph1b-mediated homoeologous recombination. The Hardness locus was transferred from Chinese Spring to Svevo durum wheat via back-crossing. ‘Soft Svevo’ had SKC...

  19. Straight-chain halocarbon forming fluids for TRISO fuel kernel production – Tests with yttria-stabilized zirconia microspheres

    Energy Technology Data Exchange (ETDEWEB)

    Baker, M.P. [Nuclear Science and Engineering Program, Metallurgical and Materials Engineering Department, Colorado School of Mines, 1500 Illinois St., Golden, CO 80401 (United States); King, J.C., E-mail: kingjc@mines.edu [Nuclear Science and Engineering Program, Metallurgical and Materials Engineering Department, Colorado School of Mines, 1500 Illinois St., Golden, CO 80401 (United States); Gorman, B.P. [Metallurgical and Materials Engineering Department, Colorado Center for Advanced Ceramics, Colorado School of Mines, 1500 Illinois St., Golden, CO 80401 (United States); Braley, J.C. [Nuclear Science and Engineering Program, Chemistry and Geochemistry Department, Colorado School of Mines, 1500 Illinois St., Golden, CO 80401 (United States)

    2015-03-15

    Highlights: • YSZ TRISO kernels formed in three alternative, non-hazardous forming fluids. • Kernels characterized for size, shape, pore/grain size, density, and composition. • Bromotetradecane is suitable for further investigation with uranium-based precursor. - Abstract: Current methods of TRISO fuel kernel production in the United States use a sol–gel process with trichloroethylene (TCE) as the forming fluid. After contact with radioactive materials, the spent TCE becomes a mixed hazardous waste, and high costs are associated with its recycling or disposal. Reducing or eliminating this mixed waste stream would not only benefit the environment, but would also enhance the economics of kernel production. Previous research yielded three candidates for testing as alternatives to TCE: 1-bromotetradecane, 1-chlorooctadecane, and 1-iodododecane. This study considers the production of yttria-stabilized zirconia (YSZ) kernels in silicone oil and the three chosen alternative formation fluids, with subsequent characterization of the produced kernels and used forming fluid. Kernels formed in silicone oil and bromotetradecane were comparable to those produced by previous kernel production efforts, while those produced in chlorooctadecane and iodododecane experienced gelation issues leading to poor kernel formation and geometry.

  20. Discrete non-parametric kernel estimation for global sensitivity analysis

    International Nuclear Information System (INIS)

    Senga Kiessé, Tristan; Ventura, Anne

    2016-01-01

    This work investigates the discrete kernel approach for evaluating the contribution of the variance of discrete input variables to the variance of model output, via analysis of variance (ANOVA) decomposition. Until recently, only the continuous kernel approach has been applied as a metamodeling approach within the sensitivity analysis framework, for both discrete and continuous input variables. The discrete kernel estimation is now known to be suitable for smoothing discrete functions. We present a discrete non-parametric kernel estimator of the ANOVA decomposition of a given model. An estimator of the sensitivity indices is also presented, with its asymptotic convergence rate. Simulations on a test function and a real case study from agriculture have shown that the discrete kernel approach outperforms the continuous kernel one for evaluating the contribution of moderate or most influential discrete parameters to the model output. - Highlights: • We study a discrete kernel estimation for sensitivity analysis of a model. • A discrete kernel estimator of ANOVA decomposition of the model is presented. • Sensitivity indices are calculated for discrete input parameters. • An estimator of sensitivity indices is also presented with its convergence rate. • An application is realized for improving the reliability of environmental models.

  1. A Kernel for Protein Secondary Structure Prediction

    OpenAIRE

    Guermeur , Yann; Lifchitz , Alain; Vert , Régis

    2004-01-01

    http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=10338&mode=toc; International audience; Multi-class support vector machines have already proved efficient in protein secondary structure prediction as ensemble methods, to combine the outputs of sets of classifiers based on different principles. In this chapter, their implementation as basic prediction methods, processing the primary structure or the profile of multiple alignments, is investigated. A kernel devoted to the task is in...

  2. Kernel based orthogonalization for change detection in hyperspectral images

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    function and all quantities needed in the analysis are expressed in terms of this kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel PCA and MNF analyses handle nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via...... analysis all 126 spectral bands of the HyMap are included. Changes on the ground are most likely due to harvest having taken place between the two acquisitions and solar effects (both solar elevation and azimuth have changed). Both types of kernel analysis emphasize change and unlike kernel PCA, kernel MNF...

  3. Mitigation of artifacts in rtm with migration kernel decomposition

    KAUST Repository

    Zhan, Ge; Schuster, Gerard T.

    2012-01-01

    The migration kernel for reverse-time migration (RTM) can be decomposed into four component kernels using Born scattering and migration theory. Each component kernel has a unique physical interpretation and can be interpreted differently

  4. Semi-Supervised Kernel PCA

    DEFF Research Database (Denmark)

    Walder, Christian; Henao, Ricardo; Mørup, Morten

    We present three generalisations of Kernel Principal Components Analysis (KPCA) which incorporate knowledge of the class labels of a subset of the data points. The first, MV-KPCA, penalises within-class variances similar to Fisher discriminant analysis. The second, LSKPCA, is a hybrid of least-squares regression and kernel PCA. The final, LR-KPCA, is an iteratively reweighted version of the previous which achieves a sigmoid loss function on the labeled points. We provide a theoretical risk bound as well as illustrative experiments on real and toy data sets.

  5. Adaptive Metric Kernel Regression

    DEFF Research Database (Denmark)

    Goutte, Cyril; Larsen, Jan

    1998-01-01

    Kernel smoothing is a widely used nonparametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this paper, we propose an algorithm that adapts the input metric used in multivariate regression by minimising a cross-validation estimate of the generalisation error. This allows one to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms the standard...

  6. Single corn kernel wide-line NMR oil analysis for breeding purpose

    Energy Technology Data Exchange (ETDEWEB)

    Wilmers, M C.C.; Rettori, C; Vargas, H; Barberis, G E [Universidade Estadual de Campinas (Brazil). Inst. de Fisica; da Silva, W J [Universidade Estadual de Campinas (Brazil). Inst. de Biologia

    1978-12-01

    The wide-line NMR technique was used to determine the oil content in single corn seeds. Using distinct radio frequency (RF) power, systematic work was done on kernels with about 10% moisture, and also on artificially dried seeds with approximately 5% moisture. For non-dried seeds, the NMR spectra clearly showed the presence of three resonances with different RF saturation factors. For dried seeds, the oil concentration determined by NMR was highly correlated (r = 0.997) with that determined by a gravimetric method. The highest discrepancy between the two methods was found to be about 1.3%. When relative measurements are required, as in the case of single kernels for a recurrent selection program, precision in the individual selected kernel will be about 2.5%. Applying this technique, a first cycle of recurrent selection using S₁ lines for low and high oil content was performed in an open pollinated variety. Gain from selection was 12.0 and 14.1% in the populations for high and low oil contents, respectively.

  7. 21 CFR 176.350 - Tamarind seed kernel powder.

    Science.gov (United States)

    2010-04-01

    21 CFR Part 176 (Indirect Food Additives: Paper and Paperboard Components), Substances for Use Only as Components of Paper and Paperboard, § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in producing...

  8. Relationship between processing score and kernel-fraction particle size in whole-plant corn silage.

    Science.gov (United States)

    Dias Junior, G S; Ferraretto, L F; Salvati, G G S; de Resende, L C; Hoffman, P C; Pereira, M N; Shaver, R D

    2016-04-01

    Kernel processing increases starch digestibility in whole-plant corn silage (WPCS). The corn silage processing score (CSPS), the percentage of starch passing through a 4.75-mm sieve, is widely used to assess the degree of kernel breakage in WPCS. However, the geometric mean particle size (GMPS) of the kernel fraction that passes through the 4.75-mm sieve has not been well described. Therefore, the objectives of this study were (1) to evaluate the particle size distribution and digestibility of kernels cut into varied particle sizes; (2) to propose a method to measure GMPS in WPCS kernels; and (3) to evaluate the relationship between CSPS and GMPS of the kernel fraction in WPCS. Composite samples of unfermented, dried kernels from 110 corn hybrids commonly used for silage production were kept whole (WH) or manually cut into 2, 4, 8, 16, 32 or 64 pieces (2P, 4P, 8P, 16P, 32P, and 64P, respectively). Dry sieving to determine GMPS, surface area, and particle size distribution, using 9 sieves with nominal square apertures of 9.50, 6.70, 4.75, 3.35, 2.36, 1.70, 1.18, and 0.59 mm and a pan, as well as ruminal in situ dry matter (DM) digestibility measurements, were performed for each kernel particle number treatment. Incubation times were 0, 3, 6, 12, and 24 h. The ruminal in situ DM disappearance of unfermented kernels increased with the reduction in particle size of corn kernels. Kernels kept whole had the lowest ruminal DM disappearance for all time points, with a maximum DM disappearance of 6.9% at 24 h; the greatest disappearance was observed for 64P, followed by 32P and 16P. Samples of WPCS (n=80) from 3 studies representing varied theoretical length of cut settings and processor types and settings were also evaluated. Each WPCS sample was divided in two and then dried at 60 °C for 48 h. The CSPS was determined in duplicate on 1 of the split samples, whereas on the other split sample the kernel and stover fractions were separated using a hydrodynamic separation procedure. After separation, the
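
    For reference, a geometric mean particle size can be computed from dry-sieving data along the following lines; this is a generic log-based formulation, not necessarily the paper's exact method, and the top-opening and pan-size conventions as well as the example masses are assumptions:

        import numpy as np

        def gmps(apertures_mm, retained_g):
            # Each fraction is assigned the geometric mean of the opening it
            # passed and the opening it was retained on; the opening above the
            # top sieve and the nominal pan size are assumed conventions.
            a = np.asarray(apertures_mm, float)      # descending, pan excluded
            m = np.asarray(retained_g, float)        # one entry per sieve + pan
            passed = np.r_[a[0] * np.sqrt(2.0), a]
            seated = np.r_[a, a[-1] / 2.0]
            d = np.sqrt(passed * seated)
            return np.exp(np.sum(m * np.log(d)) / np.sum(m))

        apertures = [9.50, 6.70, 4.75, 3.35, 2.36, 1.70, 1.18, 0.59]   # mm
        retained = [0.0, 1.2, 5.4, 12.8, 20.1, 18.3, 9.6, 4.1, 1.5]    # g, pan last
        print(f"GMPS = {gmps(apertures, retained):.2f} mm")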

  9. Integral equations with difference kernels on finite intervals

    CERN Document Server

    Sakhnovich, Lev A

    2015-01-01

    This book focuses on solving integral equations with difference kernels on finite intervals. The corresponding problem on the semiaxis was previously solved by N. Wiener–E. Hopf and by M.G. Krein. The problem on finite intervals, though significantly more difficult, may be solved using our method of operator identities. This method is also actively employed in inverse spectral problems, operator factorization and nonlinear integral equations. Applications of the obtained results to optimal synthesis, light scattering, diffraction, and hydrodynamics problems are discussed in this book, which also describes how the theory of operators with difference kernels is applied to stable processes and used to solve the famous M. Kac problems on stable processes. In this second edition these results are extensively generalized and include the case of all Levy processes. We present the convolution expression for the well-known Ito formula of the generator operator, a convolution expression that has proven to be fruitful...

  10. A method to reproduce alpha-particle spectra measured with semiconductor detectors.

    Science.gov (United States)

    Timón, A Fernández; Vargas, M Jurado; Sánchez, A Martín

    2010-01-01

    A method is proposed to reproduce alpha-particle spectra measured with silicon detectors, combining analytical and computer simulation techniques. The procedure includes the use of the Monte Carlo method to simulate the tracks of alpha-particles within the source and in the detector entrance window. The alpha-particle spectrum is finally obtained by the convolution of this simulated distribution and the theoretical distributions representing the contributions of the alpha-particle spectrometer to the spectrum. Experimental spectra from (233)U and (241)Am sources were compared with the predictions given by the proposed procedure, showing good agreement. The proposed method can be an important aid for the analysis and deconvolution of complex alpha-particle spectra. Copyright 2009 Elsevier Ltd. All rights reserved.
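
    A toy sketch of the hybrid Monte Carlo/analytical idea (not the authors' code): a simulated energy-loss distribution for the source and entrance window is convolved with an assumed Gaussian electronic response; every physical parameter below is a placeholder:

        import numpy as np

        rng = np.random.default_rng(4)
        E0 = 5485.6                            # keV, e.g. the main 241Am line
        # Monte Carlo stage: energy lost in the source layer and dead window
        loss = rng.exponential(scale=8.0, size=200_000)   # assumed mean loss, keV
        bins = np.arange(5300.0, 5520.0, 1.0)
        hist, _ = np.histogram(E0 - loss, bins=bins)

        # Analytical stage: convolve with a Gaussian electronic-noise response
        sigma = 6.0                            # keV, assumed resolution
        x = np.arange(-30, 31)
        gauss = np.exp(-0.5 * (x / sigma) ** 2)
        gauss /= gauss.sum()
        spectrum = np.convolve(hist, gauss, mode="same")
        print(bins[np.argmax(spectrum)], "keV peak with a low-energy tail")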

  11. Multivariate and semiparametric kernel regression

    OpenAIRE

    Härdle, Wolfgang; Müller, Marlene

    1997-01-01

    The paper gives an introduction to theory and application of multivariate and semiparametric kernel smoothing. Multivariate nonparametric density estimation is an often used pilot tool for examining the structure of data. Regression smoothing helps in investigating the association between covariates and responses. We concentrate on kernel smoothing using local polynomial fitting which includes the Nadaraya-Watson estimator. Some theory on the asymptotic behavior and bandwidth selection is pro...
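
    As a concrete reference for the Nadaraya-Watson estimator mentioned above, a minimal Gaussian-kernel version on synthetic data (the bandwidth h is an arbitrary assumption) might look like:

        import numpy as np

        def nadaraya_watson(x_train, y_train, x_eval, h=0.3):
            # Kernel-weighted local average with a Gaussian kernel
            w = np.exp(-0.5 * ((x_eval[:, None] - x_train[None, :]) / h) ** 2)
            return (w @ y_train) / w.sum(axis=1)

        rng = np.random.default_rng(5)
        x = np.sort(rng.uniform(0, 2 * np.pi, 200))
        y = np.sin(x) + 0.2 * rng.normal(size=200)
        grid = np.linspace(0, 2 * np.pi, 9)
        print(np.round(nadaraya_watson(x, y, grid), 2))   # roughly sin(grid)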

  12. Notes on the gamma kernel

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole E.

    The density function of the gamma distribution is used as shift kernel in Brownian semistationary processes modelling the timewise behaviour of the velocity in turbulent regimes. This report presents exact and asymptotic properties of the second order structure function under such a model......, and relates these to results of von Kármán and Howarth. But first it is shown that the gamma kernel is interpretable as a Green's function.

  13. Retrieval of collision kernels from the change of droplet size distributions with linear inversion

    Energy Technology Data Exchange (ETDEWEB)

    Onishi, Ryo; Takahashi, Keiko [Earth Simulator Center, Japan Agency for Marine-Earth Science and Technology, 3173-25 Showa-machi, Kanazawa-ku, Yokohama Kanagawa 236-0001 (Japan); Matsuda, Keigo; Kurose, Ryoichi; Komori, Satoru [Department of Mechanical Engineering and Science, Kyoto University, Yoshida-honmachi, Sakyo-ku, Kyoto 606-8501 (Japan)], E-mail: onishi.ryo@jamstec.go.jp, E-mail: matsuda.keigo@t03.mbox.media.kyoto-u.ac.jp, E-mail: takahasi@jamstec.go.jp, E-mail: kurose@mech.kyoto-u.ac.jp, E-mail: komori@mech.kyoto-u.ac.jp

    2008-12-15

    We have developed a new simple inversion scheme for retrieving collision kernels from the change of droplet size distribution due to collision growth. Three-dimensional direct numerical simulations (DNS) of steady isotropic turbulence with colliding droplets are carried out in order to investigate the validity of the developed inversion scheme. In the DNS, air turbulence is calculated using a quasi-spectral method; droplet motions are tracked in a Lagrangian manner. The initial droplet size distribution is set to be equivalent to that obtained in a wind tunnel experiment. Collision kernels retrieved by the developed inversion scheme are compared to those obtained by the DNS. The comparison shows that the collision kernels can be retrieved within 15% error. This verifies the feasibility of retrieving collision kernels using the present inversion scheme.
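
    The linearity that such an inversion exploits can be seen on a small discrete (Smoluchowski-type) toy system: the change of the size distribution is linear in the unknown kernel values, so stacking a few observed states gives a least-squares problem. Integer mass bins and noise-free observations are assumed here, and this is only a schematic of the linear-inversion idea, not the paper's scheme:

        import numpy as np

        N = 4                                   # integer mass bins; i + j -> bin i+j
        pairs = [(i, j) for i in range(1, N + 1) for j in range(i, N + 1)]

        def rate_matrix(n):
            # Row k holds dn_k/dt; column c multiplies the unknown K_ij of pair c.
            # Coagulation products heavier than N simply leave the system here.
            A = np.zeros((N, len(pairs)))
            for c, (i, j) in enumerate(pairs):
                flux = n[i - 1] * n[j - 1] * (0.5 if i == j else 1.0)
                if i + j <= N:
                    A[i + j - 1, c] += flux     # gain in the merged bin
                A[i - 1, c] -= flux             # loss of both collision partners
                A[j - 1, c] -= flux
            return A

        rng = np.random.default_rng(6)
        K_true = rng.uniform(0.5, 2.0, len(pairs))
        states = [np.array([1.0, 0.8, 0.5, 0.3]),    # three observed distributions
                  np.array([0.7, 0.9, 0.4, 0.5]),
                  np.array([1.2, 0.3, 0.6, 0.2])]
        A = np.vstack([rate_matrix(n) for n in states])
        b = A @ K_true                          # stands in for measured dn/dt
        K_est, *_ = np.linalg.lstsq(A, b, rcond=None)
        print(np.max(np.abs(K_est - K_true)))   # ~0: kernel values recovered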

  14. Calculation of the thermal neutron scattering kernel using the synthetic model. Pt. 2. Zero-order energy transfer kernel

    International Nuclear Information System (INIS)

    Drozdowicz, K.

    1995-01-01

    A comprehensive unified description of the application of Granada's Synthetic Model to slow-neutron scattering by molecular systems is continued. Detailed formulae for the zero-order energy transfer kernel are presented, based on the general formalism of the model. An explicit analytical formula for the total scattering cross section as a function of the incident neutron energy is also obtained. Expressions of the free gas model for the zero-order scattering kernel and for the total scattering kernel are considered as a sub-case of the Synthetic Model. (author). 10 refs

  15. Efficient searching in meshfree methods

    Science.gov (United States)

    Olliff, James; Alford, Brad; Simkins, Daniel C.

    2018-04-01

    Meshfree methods such as the Reproducing Kernel Particle Method and the Element Free Galerkin method have proven to be excellent choices for problems involving complex geometry, evolving topology, and large deformation, owing to their ability to model the problem domain without the constraints imposed on Finite Element Method (FEM) meshes. However, meshfree methods have an added computational cost over FEM that comes from at least two sources: increased cost of shape function evaluation and the determination of adjacency or connectivity. The focus of this paper is to formally address the types of adjacency information that arise in various uses of meshfree methods; to discuss available techniques for computing the various adjacency graphs; to propose a new search algorithm and data structure; and finally to compare the memory and run time performance of the methods.
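
    One standard building block for such adjacency queries is a space-partitioning tree. The sketch below uses SciPy's cKDTree for fixed-radius neighbor searches; the point sets and support radius are arbitrary, and this only illustrates the kind of query involved, not the paper's proposed algorithm or data structure:

        import numpy as np
        from scipy.spatial import cKDTree

        rng = np.random.default_rng(7)
        nodes = rng.uniform(size=(5000, 3))     # meshfree particle positions
        support_radius = 0.08                   # assumed kernel support size

        tree = cKDTree(nodes)
        # Node-to-node adjacency: which particles lie within each other's support
        neighbor_lists = tree.query_ball_point(nodes, r=support_radius)

        # Point-to-node adjacency: which particles cover given quadrature points
        quad_pts = rng.uniform(size=(100, 3))
        covering = tree.query_ball_point(quad_pts, r=support_radius)
        print(len(neighbor_lists[0]), len(covering[0]))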

  16. Convergence of barycentric coordinates to barycentric kernels

    KAUST Repository

    Kosinka, Jiří

    2016-02-12

    We investigate the close correspondence between barycentric coordinates and barycentric kernels from the point of view of the limit process when finer and finer polygons converge to a smooth convex domain. We show that any barycentric kernel is the limit of a set of barycentric coordinates and prove that the convergence rate is quadratic. Our convergence analysis extends naturally to barycentric interpolants and mappings induced by barycentric coordinates and kernels. We verify our theoretical convergence results numerically on several examples.

  18. Moisture Adsorption Isotherm and Storability of Hazelnut Inshells and Kernels Produced in Oregon, USA.

    Science.gov (United States)

    Jung, Jooyeoun; Wang, Wenjie; McGorrin, Robert J; Zhao, Yanyun

    2018-02-01

    Moisture adsorption isotherms and storability of dried hazelnut inshells and kernels produced in Oregon were evaluated and compared among cultivars, including Barcelona, Yamhill, and Jefferson. Experimental moisture adsorption data were fitted to the Guggenheim-Anderson-de Boer (GAB) model, showing less hygroscopic behavior in Yamhill than in the other cultivars of inshells and kernels, due to its lower content of carbohydrate and protein but higher content of fat. The safe levels of moisture content (MC, dry basis) of dried inshells and kernels for keeping kernel water activity (a_w) ≤0.65 were estimated using the GAB model as 11.3% and 5.0% for Barcelona, 9.4% and 4.2% for Yamhill, and 10.7% and 4.9% for Jefferson, respectively. Storage conditions (2 °C at 85% to 95% relative humidity [RH], 10 °C at 65% to 75% RH, and 27 °C at 35% to 45% RH), times (0, 4, 8, or 12 mo), and packaging methods (atmosphere vs. vacuum) affected the MC, a_w, bioactive compounds, lipid oxidation, and enzyme activity of dried hazelnut inshells or kernels. For inshells packaged in woven polypropylene bags, the MC and a_w of inshells and kernels (inside shells) increased at 2 and 10 °C, but decreased at 27 °C during storage. For kernels, lipid oxidation and polyphenol oxidase activity also increased with extended storage time (P < 0.05). Vacuum packaging helped reduce moisture adsorption and retain physicochemical and enzymatic stability during storage. The moisture adsorption isotherm of hazelnut inshells and kernels is useful for predicting the storability of nuts. This study found that water adsorption and storability varied among the different cultivars of nuts, in which Yamhill was less hygroscopic than Barcelona and Jefferson, and thus more stable during storage. For ensuring food safety and quality of nuts during storage, each cultivar of kernels should be dried to a certain level of MC. Lipid oxidation and enzyme activity of kernels could increase with extended storage time. Vacuum packaging was recommended for kernels to reduce moisture adsorption
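
    A minimal GAB-model fit for reference; the isotherm points below are illustrative placeholders, not the study's measurements:

        import numpy as np
        from scipy.optimize import curve_fit

        def gab(aw, m0, c, k):
            # Guggenheim-Anderson-de Boer isotherm: moisture content vs. a_w
            return m0 * c * k * aw / ((1 - k * aw) * (1 - k * aw + c * k * aw))

        aw = np.array([0.11, 0.23, 0.33, 0.44, 0.53, 0.65, 0.75])
        mc = np.array([2.1, 2.9, 3.5, 4.2, 5.0, 6.4, 8.3])   # % dry basis

        (m0, c, k), _ = curve_fit(gab, aw, mc, p0=[3.0, 10.0, 0.8])
        print(f"monolayer M0 = {m0:.2f}%, C = {c:.1f}, K = {k:.2f}")
        print("MC at a_w = 0.65:", round(gab(0.65, m0, c, k), 2), "%")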

  19. Aflatoxin contamination of developing corn kernels.

    Science.gov (United States)

    Amer, M A

    2005-01-01

    Preharvest contamination of corn with aflatoxin is a serious problem. Some environmental and cultural factors responsible for infection and subsequent aflatoxin production were investigated in this study. Stage of growth and location of kernels on corn ears were found to be among the important factors in the process of kernel infection with A. flavus & A. parasiticus. The results showed a positive correlation between the stage of growth and kernel infection. Treatment of corn with aflatoxin reduced germination, protein and total nitrogen contents. Total and reducing soluble sugars increased in corn kernels in response to infection. Sucrose and protein content were reduced in the case of both pathogens. Shoot system length, seedling fresh weight and seedling dry weight were also affected. Both pathogens induced a reduction of starch content. Healthy corn seedlings treated with aflatoxin solution were badly affected. Their leaves became yellow and then turned brown with further incubation. Moreover, their total chlorophyll and protein contents showed a pronounced decrease. On the other hand, total phenolic compounds increased. Histopathological studies indicated that A. flavus & A. parasiticus could colonize corn silks and invade developing kernels. Germination of A. flavus spores occurred and hyphae spread rapidly across the silk, producing extensive growth and lateral branching. Conidiophores and conidia formed in and on the corn silk. Temperature and relative humidity greatly influenced the growth of A. flavus & A. parasiticus and aflatoxin production.

  20. Theoretical developments for interpreting kernel spectral clustering from alternative viewpoints

    Directory of Open Access Journals (Sweden)

    Diego Peluffo-Ordóñez

    2017-08-01

    Full Text Available To perform an exploration process over complex structured data within unsupervised settings, the so-called kernel spectral clustering (KSC) is one of the most recommended and appealing approaches, given its versatility and elegant formulation. In this work, we explore the relationship between KSC and other well-known approaches, namely normalized cut clustering and kernel k-means. To do so, we first deduce a generic KSC model from a primal-dual formulation based on least-squares support-vector machines (LS-SVM). For experiments, KSC as well as the other considered methods are assessed on image segmentation tasks to prove their usability.
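
    To make the kernel k-means side of such a comparison concrete, a minimal Gram-matrix-only implementation (a generic sketch, not the paper's KSC formulation) can be written as:

        import numpy as np

        def kernel_kmeans(K, n_clusters, n_iter=50, seed=0):
            # Lloyd iterations with feature-space distances computed from the
            # Gram matrix: ||phi(x)-mu_c||^2 = K_xx - 2*mean K_xc + mean K_cc
            rng = np.random.default_rng(seed)
            labels = rng.integers(n_clusters, size=K.shape[0])
            for _ in range(n_iter):
                D = np.full((K.shape[0], n_clusters), np.inf)
                for c in range(n_clusters):
                    idx = np.where(labels == c)[0]
                    if idx.size == 0:
                        continue
                    D[:, c] = (np.diag(K) - 2 * K[:, idx].mean(axis=1)
                               + K[np.ix_(idx, idx)].mean())
                new = D.argmin(axis=1)
                if np.array_equal(new, labels):
                    break
                labels = new
            return labels

        rng = np.random.default_rng(8)
        X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(2, 0.3, (30, 2))])
        sq = (X ** 2).sum(axis=1)
        K = np.exp(-0.5 * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
        print(np.bincount(kernel_kmeans(K, 2)))   # two balanced clusters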

  1. Determining the number of clusters for kernelized fuzzy C-means algorithms for automatic medical image segmentation

    Directory of Open Access Journals (Sweden)

    E.A. Zanaty

    2012-03-01

    Full Text Available In this paper, we determine suitable validity criteria for kernelized fuzzy C-means and kernelized fuzzy C-means with spatial constraints for automatic segmentation of magnetic resonance imaging (MRI). To that end, the original Euclidean distance in the FCM is replaced by a Gaussian radial basis function (GRBF) classifier, and the corresponding algorithms of the FCM methods are derived. The derived algorithms are called kernelized fuzzy C-means (KFCM) and kernelized fuzzy C-means with spatial constraints (SKFCM). These methods are tested against eighteen validity indexes to determine which indexes are capable of identifying the optimal number of clusters. The performance of segmentation is estimated by applying these methods independently on several datasets, to establish which method gives good results and with which indexes. Our tests span various indexes, covering both the classical and the more recent indexes that have enjoyed noticeable success in the field. These indexes are evaluated and compared by applying them to various test images, including synthetic images corrupted with noise of varying levels, and simulated volumetric MRI datasets. A comparative analysis is also presented to show whether each validity index indicates the optimal clustering for our datasets.

  2. A shortest-path graph kernel for estimating gene product semantic similarity

    Directory of Open Access Journals (Sweden)

    Alvarez Marco A

    2011-07-01

    Full Text Available Abstract Background Existing methods for calculating semantic similarity between gene products using the Gene Ontology (GO often rely on external resources, which are not part of the ontology. Consequently, changes in these external resources like biased term distribution caused by shifting of hot research topics, will affect the calculation of semantic similarity. One way to avoid this problem is to use semantic methods that are "intrinsic" to the ontology, i.e. independent of external knowledge. Results We present a shortest-path graph kernel (spgk method that relies exclusively on the GO and its structure. In spgk, a gene product is represented by an induced subgraph of the GO, which consists of all the GO terms annotating it. Then a shortest-path graph kernel is used to compute the similarity between two graphs. In a comprehensive evaluation using a benchmark dataset, spgk compares favorably with other methods that depend on external resources. Compared with simUI, a method that is also intrinsic to GO, spgk achieves slightly better results on the benchmark dataset. Statistical tests show that the improvement is significant when the resolution and EC similarity correlation coefficient are used to measure the performance, but is insignificant when the Pfam similarity correlation coefficient is used. Conclusions Spgk uses a graph kernel method in polynomial time to exploit the structure of the GO to calculate semantic similarity between gene products. It provides an alternative to both methods that use external resources and "intrinsic" methods with comparable performance.
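
    The following toy sketch shows a shortest-path kernel of this general family, comparing the multisets of shortest-path lengths of two small graphs; the actual spgk works on induced GO subgraphs and its exact feature definition may differ:

        import networkx as nx
        from collections import Counter

        def sp_kernel(g1, g2):
            # Count pairs of node pairs whose shortest-path lengths match
            def sp_counts(g):
                lengths = dict(nx.all_pairs_shortest_path_length(g))
                return Counter(d for _, dst in lengths.items()
                               for d in dst.values() if d > 0)
            c1, c2 = sp_counts(g1), sp_counts(g2)
            return sum(c1[d] * c2[d] for d in c1)

        # Toy stand-ins for two induced GO subgraphs
        g1 = nx.path_graph(4)       # terms annotating gene product A
        g2 = nx.star_graph(3)       # terms annotating gene product B
        print(sp_kernel(g1, g2), sp_kernel(g1, g1))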

  3. An Exact Solution of the Binary Singular Problem

    Directory of Open Access Journals (Sweden)

    Baiqing Sun

    2014-01-01

    Full Text Available Singularity problems exist in various branches of applied mathematics, arising as ordinary differential equations with singular coefficients. In this paper, using the properties of the reproducing kernel, exact solution expressions for the dual singular problem are given in the reproducing kernel space and studied, also for a class of singular problems. The binary equation with singular points is first recast as a singular problem; the reproducing-kernel properties are then reused to derive the exact solution expression of the binary singular integral equation in the reproducing kernel space, and its approximate solution is obtained through evaluation of the exact solution. Numerical examples show the effectiveness of this method.

  4. Kernel Korner : The Linux keyboard driver

    NARCIS (Netherlands)

    Brouwer, A.E.

    1995-01-01

    Our Kernel Korner series continues with an article describing the Linux keyboard driver. This article is not for "Kernel Hackers" only--in fact, it will be most useful to those who wish to use their own keyboard to its fullest potential, and those who want to write programs to take advantage of the

  5. The effect of heating UO₂ kernels in an argon gas medium on the physical properties of sintered UO₂ kernels

    International Nuclear Information System (INIS)

    Damunir; Sri Rinanti Susilowati; Ariyani Kusuma Dewi

    2015-01-01

    The effect of heating UO₂ kernels in an argon gas medium on the physical properties of sintered UO₂ kernels was studied. The heating of the UO₂ kernels was conducted in a bed-type sinter reactor. The sample used was UO₂ kernels resulting from reduction at 800 °C for 3 hours, which had a density of 8.13 g/cm³, porosity of 0.26, O/U ratio of 2.05, diameter of 1146 μm and sphericity of 1.05. The sample was put into the sinter reactor, which was then evacuated by flowing argon gas at 180 mmHg pressure to drain the air from the reactor. After that, the cooling water and argon gas were continuously flowed at a pressure of 5 mPa with a velocity of 1.5 liters/minute. The reactor temperature was increased and varied between 1200 and 1500 °C, for 1-4 hours. The sintered UO₂ kernels resulting from the study were analyzed in terms of their physical properties, including density, porosity, diameter, sphericity, and specific surface area. The density was analyzed using a pycnometer with CCl₄ solution. The porosity was determined using the Haynes equation. The diameters and sphericity were measured using a Dino-lite microscope. The specific surface area was determined using a Nova-1000 surface area meter. The results showed that heating UO₂ kernels in an argon gas medium influenced the physical properties of the sintered UO₂ kernels. The best results were obtained when heating was conducted at 1400 °C for 2 hours, producing sintered UO₂ kernels with a density of 10.14 g/ml, porosity of 7%, diameter of 893 μm, sphericity of 1.07 and specific surface area of 4.68 m²/g, with a solidification shrinkage of 22%. (author)

  6. Interpolation of Missing Precipitation Data Using Kernel Estimations for Hydrologic Modeling

    Directory of Open Access Journals (Sweden)

    Hyojin Lee

    2015-01-01

    Full Text Available Precipitation is the main factor that drives hydrologic modeling; therefore, missing precipitation data can cause malfunctions in hydrologic modeling. Although the interpolation of missing precipitation data is recognized as an important research topic, only a few methods follow a regression approach. In this study, daily precipitation data were interpolated using five different kernel functions, namely Epanechnikov, Quartic, Triweight, Tricube, and Cosine, to estimate missing precipitation data. This study also presents an assessment that compares the estimation of missing precipitation data through Kth nearest neighborhood (KNN) regression to the five different kernel estimations, and their performance in simulating streamflow using the Soil Water Assessment Tool (SWAT) hydrologic model. The results show that the kernel approaches provide higher-quality interpolation of precipitation data compared with the KNN regression approach, in terms of both statistical data assessment and hydrologic modeling performance.

  7. Mitigation of artifacts in rtm with migration kernel decomposition

    KAUST Repository

    Zhan, Ge

    2012-01-01

    The migration kernel for reverse-time migration (RTM) can be decomposed into four component kernels using Born scattering and migration theory. Each component kernel has a unique physical interpretation and can be interpreted differently. In this paper, we present a generalized diffraction-stack migration approach for reducing RTM artifacts via decomposition of migration kernel. The decomposition leads to an improved understanding of migration artifacts and, therefore, presents us with opportunities for improving the quality of RTM images.

  8. Realized kernels in practice

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Hansen, P. Reinhard; Lunde, Asger

    2009-01-01

    and find a remarkable level of agreement. We identify some features of the high-frequency data, which are challenging for realized kernels. They are when there are local trends in the data, over periods of around 10 minutes, where the prices and quotes are driven up or down. These can be associated......Realized kernels use high-frequency data to estimate daily volatility of individual stock prices. They can be applied to either trade or quote data. Here we provide the details of how we suggest implementing them in practice. We compare the estimates based on trade and quote data for the same stock...

  9. Embedded real-time operating system micro kernel design

    Science.gov (United States)

    Cheng, Xiao-hui; Li, Ming-qiang; Wang, Xin-zheng

    2005-12-01

    Embedded systems usually require a real-time character. Based on an 8051 microcontroller, an embedded real-time operating system micro kernel is proposed, consisting of six parts: critical section processing, task scheduling, interrupt handling, semaphore and message mailbox communication, clock management and memory management. CPU time and other resources are distributed rationally among tasks according to their importance and urgency. The design proposed here provides the position, definition, function and principle of the micro kernel. The kernel runs on the platform of an ATMEL AT89C51 microcontroller. Simulation results prove that the designed micro kernel is stable and reliable and has quick response while operating in an application system.

  10. Application of Image Texture Analysis for Evaluation of X-Ray Images of Fungal-Infected Maize Kernels

    DEFF Research Database (Denmark)

    Orina, Irene; Manley, Marena; Kucheryavskiy, Sergey V.

    2018-01-01

    The feasibility of image texture analysis to evaluate X-ray images of fungal-infected maize kernels was investigated. X-ray images of maize kernels infected with Fusarium verticillioides and control kernels were acquired using high-resolution X-ray micro-computed tomography. After image acquisition...... developed using partial least squares discriminant analysis (PLS-DA), and accuracies of 67 and 73% were achieved using first-order statistical features and GLCM extracted features, respectively. This work provides information on the possible application of image texture as method for analysing X-ray images......., homogeneity and contrast) were extracted from the side, front and top views of each kernel and used as inputs for principal component analysis (PCA). The first-order statistical image features gave a better separation of the control from infected kernels on day 8 post-inoculation. Classification models were...

  11. A Wavelet Kernel-Based Primal Twin Support Vector Machine for Economic Development Prediction

    Directory of Open Access Journals (Sweden)

    Fang Su

    2013-01-01

    Full Text Available Economic development forecasting allows planners to choose the right strategies for the future. This study proposes an economic development prediction method based on the wavelet kernel-based primal twin support vector machine algorithm. As gross domestic product (GDP) is an important indicator of economic development, economic development prediction means GDP prediction in this study. The wavelet kernel-based primal twin support vector machine algorithm solves two smaller-sized quadratic programming problems instead of one large one, as in the traditional support vector machine algorithm. Economic development data of Anhui province from 1992 to 2009 are used to study the prediction performance of the algorithm. A comparison of the mean prediction error between the wavelet kernel-based primal twin support vector machine and traditional support vector machine models, trained on samples with 3- to 5-dimensional input vectors, respectively, is given in this paper. The testing results show that the economic development prediction accuracy of the wavelet kernel-based primal twin support vector machine model is better than that of the traditional support vector machine.

  12. Kernel Temporal Differences for Neural Decoding

    Science.gov (United States)

    Bae, Jihye; Sanchez Giraldo, Luis G.; Pohlmeyer, Eric A.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.

    2015-01-01

    We study the feasibility and capability of the kernel temporal difference (KTD)(λ) algorithm for neural decoding. KTD(λ) is an online, kernel-based learning algorithm, which has been introduced to estimate value functions in reinforcement learning. This algorithm combines kernel-based representations with the temporal difference approach to learning. One of our key observations is that by using strictly positive definite kernels, the algorithm's convergence can be guaranteed for policy evaluation. The algorithm's nonlinear functional approximation capabilities are shown in both simulations of policy evaluation and neural decoding problems (policy improvement). KTD can handle high-dimensional neural states containing spatial-temporal information at a reasonable computational complexity, allowing real-time applications. When the algorithm seeks a proper mapping between a monkey's neural states and desired positions of a computer cursor or a robot arm, in both open-loop and closed-loop experiments, it can effectively learn the neural state to action mapping. Finally, a visualization of the coadaptation process between the decoder and the subject shows the algorithm's capabilities in reinforcement learning brain machine interfaces. PMID:25866504
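
    A heavily simplified KTD(0)-style value learner, for orientation only: the step size, kernel width and toy chain task are assumptions, and the actual KTD(λ) adds eligibility traces, sparsification and policy improvement:

        import numpy as np

        class KernelTD:
            # Value function as a growing Gaussian-kernel expansion
            def __init__(self, gamma=0.9, eta=0.3, width=0.5):
                self.gamma, self.eta, self.width = gamma, eta, width
                self.centers, self.alphas = [], []

            def k(self, a, b):
                return np.exp(-np.sum((a - b) ** 2) / (2 * self.width ** 2))

            def value(self, x):
                return sum(a * self.k(c, x)
                           for c, a in zip(self.centers, self.alphas))

            def update(self, x, r, x_next):
                # TD error drives the weight of a newly added kernel center
                delta = r + self.gamma * self.value(x_next) - self.value(x)
                self.centers.append(np.asarray(x, float))
                self.alphas.append(self.eta * delta)

        td = KernelTD()
        rng = np.random.default_rng(9)
        for _ in range(300):                   # chain 0..4, reward at the end
            s = int(rng.integers(0, 4))
            td.update(np.array([s]), float(s + 1 == 4), np.array([s + 1]))
        print([round(td.value(np.array([s])), 2) for s in range(5)])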

  13. Comparisons of geoid models over Alaska computed with different Stokes' kernel modifications

    Science.gov (United States)

    Li, X.; Wang, Y.

    2011-01-01

    Various Stokes kernel modification methods have been developed over the years. The goal of this paper is to test the most commonly used Stokes kernel modifications numerically, using Alaska as a test area and EGM08 as a reference model. The tests show that some methods are more sensitive than others to the integration cap size. For instance, using the methods of Vaníček and Kleusberg or Featherstone et al. with kernel modification at degree 60, the geoid decreases by 30 cm (on average) when the cap size increases from 1° to 25°. The corresponding changes for the methods of Wong and Gore and Heck and Grüninger are only at the 1 cm level. At high modification degrees, above 360, the methods of Vaníček and Kleusberg and Featherstone et al. become unstable because of numerical problems in the modification coefficients; similar conclusions have been reported by Featherstone (2003). In contrast, the methods of Wong and Gore, Heck and Grüninger and the least-squares spectral combination are stable at any modification degree, though they do not provide as good a fit as the best case of the Molodenskii-type methods at the GPS/Leveling benchmarks. However, certain tests for choosing the cap size and modification degree have to be performed in advance to avoid abrupt mean geoid changes if the latter methods are applied.

  14. Microscopic description of the collisions between nuclei. [Generator coordinate kernels

    Energy Technology Data Exchange (ETDEWEB)

    Canto, L F; Brink, D M [Oxford Univ. (UK). Dept. of Theoretical Physics

    1977-03-21

    The equivalence of the generator coordinate method and the resonating group method is used in the derivation of two new methods to describe the scattering of spin-zero fragments. Both these methods use generator coordinate kernels, but avoid the problem of calculating the generator coordinate weight function in the asymptotic region. The scattering of two α-particles is studied as an illustration.

  15. Influence of wheat kernel physical properties on the pulverizing process.

    Science.gov (United States)

    Dziki, Dariusz; Cacak-Pietrzak, Grażyna; Miś, Antoni; Jończyk, Krzysztof; Gawlik-Dziki, Urszula

    2014-10-01

    The physical properties of wheat kernels were determined and related to pulverizing performance by correlation analysis. Nineteen samples of wheat cultivars with a similar level of protein content (11.2-12.8 % w.b.), obtained from an organic farming system, were used for the analysis. The kernels (moisture content 10 % w.b.) were pulverized using a laboratory hammer mill equipped with a 1.0 mm round-hole screen. The specific grinding energy ranged from 120 kJ kg⁻¹ to 159 kJ kg⁻¹. On the basis of the data obtained, many significant correlations (p < 0.05) were found between kernel physical properties and the pulverizing process; in particular, the wheat kernel hardness index (obtained on the basis of the Single Kernel Characterization System) and vitreousness correlated significantly and positively with the grinding energy indices and the mass fraction of coarse particles (> 0.5 mm). Among the kernel mechanical properties determined on the basis of the uniaxial compression test, only the rupture force was correlated with the impact grinding results. The results also showed positive and significant relationships between kernel ash content and grinding energy requirements. On the basis of the wheat physical properties, a multiple linear regression was proposed for predicting the average particle size of the pulverized kernel.

  16. A class of kernel based real-time elastography algorithms.

    Science.gov (United States)

    Kibria, Md Golam; Hasan, Md Kamrul

    2015-08-01

    In this paper, a novel real-time kernel-based and gradient-based Phase Root Seeking (PRS) algorithm for ultrasound elastography is proposed. The signal-to-noise ratio of the strain image resulting from this method is improved by minimizing the cross-correlation discrepancy between the pre- and post-compression radio frequency signals with an adaptive temporal stretching method and employing built-in smoothing through an exponentially weighted neighborhood kernel in the displacement calculation. Unlike conventional PRS algorithms, displacement due to tissue compression is estimated from the root of the weighted average of the zero-lag cross-correlation phases of the pair of corresponding analytic pre- and post-compression windows in the neighborhood kernel. In addition to the proposed one, the other time- and frequency-domain elastography algorithms (Ara et al., 2013; Hussain et al., 2012; Hasan et al., 2012) proposed by our group are also implemented in real-time using Java, where the computations are executed serially or in parallel on multiple processors with efficient memory management. Simulation results using a finite element modeling phantom show that the proposed method significantly improves the strain image quality in terms of elastographic signal-to-noise ratio (SNRe), elastographic contrast-to-noise ratio (CNRe) and mean structural similarity (MSSIM) for strains as high as 4% as compared to other reported techniques in the literature. Strain images obtained for the experimental phantom as well as in vivo breast data of malignant or benign masses also show the efficacy of our proposed method over the other reported techniques in the literature. Copyright © 2015 Elsevier B.V. All rights reserved.
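
    The phase-to-displacement step that PRS-style estimators build on can be written in a few lines. A bare-bones sketch (no adaptive temporal stretching and no exponentially weighted neighborhood kernel, both of which the paper adds on top):

```python
import numpy as np
from scipy.signal import hilbert

def phase_displacement(pre, post, f0, fs):
    """Displacement (in samples) from the phase of the zero-lag complex
    cross-correlation of the analytic pre-/post-compression windows."""
    a_pre, a_post = hilbert(pre), hilbert(post)
    phi = np.angle(np.vdot(a_post, a_pre))   # sum(conj(a_post) * a_pre)
    return phi * fs / (2 * np.pi * f0)       # phase -> shift via f0

# check on a synthetic RF window delayed by 0.3 samples
fs, f0 = 40e6, 5e6
n = np.arange(256)
pre = np.cos(2 * np.pi * f0 / fs * n)
post = np.cos(2 * np.pi * f0 / fs * (n - 0.3))
print(phase_displacement(pre, post, f0, fs))  # ~0.3
```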

  17. Reproducible methods for experimental infection with Flavobacterium psychrophilum in rainbow trout Oncorhynchus mykiss

    DEFF Research Database (Denmark)

    Madsen, Lone; Dalsgaard, Inger

    1999-01-01

    , and this method was tested using isolates with different elastin- degrading profiles and representing different serotypes. Injecting trout, average weight 1 g, with 10(4) CFU (colony- forming units) per fish caused cumulative mortalities around 60 to 70%. The virulent strains belonged to certain serotypes...... and degraded elastin. The intraperitoneal injection challenge method could be used on larger fish, but the infection dose was 10(7) CFU per fish before mortalities occurred. Bath infection and bath infection in combination with formalin treatment (stress) seemed to be reproducible methods that could be used...

  18. Evolution kernel for the Dirac field

    International Nuclear Information System (INIS)

    Baaquie, B.E.

    1982-06-01

    The evolution kernel for the free Dirac field is calculated using Wilson lattice fermions. We discuss the difficulties that have prevented this calculation from being performed previously in the continuum theory. The continuum limit is taken, and the complete energy eigenfunctions as well as the propagator are then evaluated in a new manner using the kernel. (author)

  19. Gradient-based adaptation of general gaussian kernels.

    Science.gov (United States)

    Glasmachers, Tobias; Igel, Christian

    2005-10-01

    Gradient-based optimization of Gaussian kernel functions is considered. The gradient for the adaptation of scaling and rotation of the input space is computed to achieve invariance against linear transformations. This is done by using the exponential map as a parameterization of the kernel parameter manifold. By restricting the optimization to a constant trace subspace, the kernel size can be controlled. This is, for example, useful to prevent overfitting when minimizing radius-margin generalization performance measures. The concepts are demonstrated by training hard margin support vector machines on toy data.
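
    A reduced illustration of the same principle, adapting only per-dimension scaling weights of a Gaussian kernel by gradient ascent on a simple kernel-target alignment objective; the paper itself parameterizes full scaling and rotation matrices through the exponential map and restricts to a constant-trace subspace, which this sketch does not attempt:

```python
import numpy as np

def ard_kernel(X, w):
    """Gaussian kernel with per-dimension scaling weights w_d >= 0."""
    d2 = (((X[:, None, :] - X[None, :, :]) ** 2) * w).sum(-1)
    return np.exp(-d2)

def alignment_step(X, y, w, lr=0.01):
    """One gradient-ascent step on J(w) = sum_ij y_i y_j K_ij(w),
    an unnormalised kernel-target alignment with labels y in {-1, +1}."""
    K = ard_kernel(X, w)
    D2 = (X[:, None, :] - X[None, :, :]) ** 2      # pairwise squared diffs
    # dJ/dw_d = -sum_ij y_i y_j K_ij (x_id - x_jd)^2
    grad = -(np.outer(y, y) * K)[:, :, None] * D2
    return np.maximum(w + lr * grad.sum((0, 1)), 0.0)  # keep scales >= 0
```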

  20. Analysis of total hydrogen content in palm oil and palm kernel oil using thermal neutron moderation method

    International Nuclear Information System (INIS)

    Akaho, E.H.K.; Dagadu, C.P.K.; Maaku, B.T.; Anim-Sampong, S.; Kyere, A.W.K.; Jonah, S.A.

    2001-01-01

    A fast and non-destructive technique based on thermal neutron moderation has been used for determining the total hydrogen content in two types of red palm oil (dzomi and amidze) and palm kernel oil produced by traditional methods in Ghana. The equipment, consisting of a ²⁴¹Am-Be neutron source and a ³He neutron detector, was originally designed for detection of liquid levels in the petrochemical and other process industries. Standards in the form of liquid hydrocarbons were used to obtain calibration lines for the thermal neutron reflection parameter as a function of hydrogen content. The measured reflection parameters and the corresponding hydrogen contents of the three edible palm oils available on the market, with and without heat treatment, were compared with those of a brand cooking oil (frytol). The average total hydrogen content in the local oil samples prior to heating was measured to be 11.62 w%, which compares well with the accepted value of 12 w% for palm oils in the sub-region. After heat treatment, the frytol oil (produced through a bleaching process) had the smallest loss of hydrogen content, 0.26%, compared with 0.44% for palm kernel oil, 1.96% for dzomi and 3.22% for amidze. (author)

  1. Linked-cluster formulation of electron-hole interaction kernel in real-space representation without using unoccupied states.

    Science.gov (United States)

    Bayne, Michael G; Scher, Jeremy A; Ellis, Benjamin H; Chakraborty, Arindam

    2018-05-21

    were found to be in good agreement with the EOM-CCSD and GW+BSE methods. The numerical results highlight the effectiveness of the developed method for overcoming the computational barrier of accurately determining the electron-hole interaction kernel to applications of large finite systems such as quantum dots and nanorods.

  2. Genetic dissection of the maize kernel development process via conditional QTL mapping for three developing kernel-related traits in an immortalized F2 population.

    Science.gov (United States)

    Zhang, Zhanhui; Wu, Xiangyuan; Shi, Chaonan; Wang, Rongna; Li, Shengfei; Wang, Zhaohui; Liu, Zonghua; Xue, Yadong; Tang, Guiliang; Tang, Jihua

    2016-02-01

    Kernel development is an important dynamic trait that determines the final grain yield in maize. To dissect the genetic basis of the maize kernel development process, a conditional quantitative trait locus (QTL) analysis was conducted using an immortalized F2 (IF2) population comprising 243 single crosses at two locations over 2 years. Volume (KV) and density (KD) of dried developing kernels, together with kernel weight (KW) at different developmental stages, were used to describe dynamic changes during kernel development. Phenotypic analysis revealed that final KW and KD were determined at 22 days after pollination (DAP22) and KV at DAP29. Unconditional QTL mapping for KW, KV and KD uncovered 97 QTLs at different kernel development stages, of which qKW6b, qKW7a, qKW7b, qKW10b, qKW10c, qKV10a, qKV10b and qKV7 were identified under multiple kernel developmental stages and environments. Among the 26 QTLs detected by conditional QTL mapping, conqKW7a, conqKV7a, conqKV10a, conqKD2, conqKD7 and conqKD8a were conserved between the two mapping methodologies. Furthermore, most of these QTLs were consistent with QTLs and genes for kernel development/grain filling reported in previous studies. These QTLs probably contain major genes associated with the kernel development process, and can be used to improve grain yield and quality through marker-assisted selection.

  3. Kernel-based noise filtering of neutron detector signals

    International Nuclear Information System (INIS)

    Park, Moon Ghu; Shin, Ho Cheol; Lee, Eun Ki

    2007-01-01

    This paper describes recently developed techniques for effective filtering of neutron detector signal noise. Three kinds of noise filters are proposed and their performance is demonstrated for the estimation of reactivity. The tested filters are based on the unilateral kernel filter, the unilateral kernel filter with adaptive bandwidth, and the bilateral filter, the last chosen for its effectiveness in edge preservation. Filtering performance is compared with conventional low-pass and wavelet filters. The bilateral filter shows a remarkable improvement compared with the unilateral kernel and wavelet filters. The effectiveness and simplicity of the unilateral kernel filter with adaptive bandwidth are also demonstrated by applying it to the reactivity measurement performed during reactor start-up physics tests.
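
    The difference between the two kernel types is easy to see in code. In this fixed-bandwidth sketch (the adaptive-bandwidth variant would tune bw per sample), the unilateral filter weights neighbors by temporal distance only, while the bilateral filter also weights by amplitude distance, which is what preserves sharp edges such as reactivity steps:

```python
import numpy as np

def unilateral_kernel_filter(x, bw=5.0):
    """Gaussian smoothing over the time axis only."""
    t = np.arange(len(x))
    W = np.exp(-0.5 * ((t[:, None] - t[None, :]) / bw) ** 2)
    return (W * x).sum(1) / W.sum(1)

def bilateral_filter(x, bw_t=5.0, bw_r=1.0):
    """Weights combine closeness in time and closeness in amplitude."""
    t = np.arange(len(x))
    Wt = np.exp(-0.5 * ((t[:, None] - t[None, :]) / bw_t) ** 2)
    Wr = np.exp(-0.5 * ((x[:, None] - x[None, :]) / bw_r) ** 2)
    W = Wt * Wr
    return (W * x).sum(1) / W.sum(1)
```

    Both versions here are O(n²) in the signal length for clarity; practical implementations restrict the kernels to a finite window.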

  4. Non-coding RNA detection methods combined to improve usability, reproducibility and precision

    Directory of Open Access Journals (Sweden)

    Kreikemeyer Bernd

    2010-09-01

    Full Text Available Abstract Background Non-coding RNAs gain more attention as their diverse roles in many cellular processes are discovered. At the same time, the need for efficient computational prediction of ncRNAs increases with the pace of sequencing technology. Existing tools are based on various approaches and techniques, but none of them provides a reliable ncRNA detector yet. Consequently, a natural approach is to combine existing tools. Due to a lack of standard input and output formats, combination and comparison of existing tools is difficult. Also, for genomic scans they often need to be incorporated in detection workflows using custom scripts, which decreases transparency and reproducibility. Results We developed a Java-based framework to integrate existing tools and methods for ncRNA detection. This framework enables users to construct transparent detection workflows and to combine and compare different methods efficiently. We demonstrate the effectiveness of combining detection methods in case studies with the small genomes of Escherichia coli, Listeria monocytogenes and Streptococcus pyogenes. With the combined method, we gained 10% to 20% precision for sensitivities from 30% to 80%. Further, we investigated Streptococcus pyogenes for novel ncRNAs. Using multiple methods--integrated by our framework--we determined four highly probable candidates. We verified all four candidates experimentally using RT-PCR. Conclusions We have created an extensible framework for practical, transparent and reproducible combination and comparison of ncRNA detection methods. We have proven the effectiveness of this approach in tests and by guiding experiments to find new ncRNAs. The software is freely available under the GNU General Public License (GPL, version 3) at http://www.sbi.uni-rostock.de/moses along with source code, screen shots, examples and tutorial material.

  5. Predictive Model Equations for Palm Kernel (Elaeis guneensis J ...

    African Journals Online (AJOL)

    Estimated errors of ±0.18 and ±0.2 are envisaged when applying the models to predict palm kernel and sesame oil colours, respectively. Keywords: Palm kernel, Sesame, Oil Colour, Process Parameters, Model. Journal of Applied Science, Engineering and Technology Vol. 6 (1) 2006 pp. 34-38

  6. Heat kernel analysis for Bessel operators on symmetric cones

    DEFF Research Database (Denmark)

    Möllers, Jan

    2014-01-01

    . The heat kernel is explicitly given in terms of a multivariable $I$-Bessel function on $Ω$. Its corresponding heat kernel transform defines a continuous linear operator between $L^p$-spaces. The unitary image of the $L^2$-space under the heat kernel transform is characterized as a weighted Bergmann space...

  7. A multi-scale kernel bundle for LDDMM

    DEFF Research Database (Denmark)

    Sommer, Stefan Horst; Nielsen, Mads; Lauze, Francois Bernard

    2011-01-01

    The Large Deformation Diffeomorphic Metric Mapping framework constitutes a widely used and mathematically well-founded setup for registration in medical imaging. At its heart lies the notion of the regularization kernel, and the choice of kernel greatly affects the results of registrations...

  8. Pareto-path multitask multiple kernel learning.

    Science.gov (United States)

    Li, Cong; Georgiopoulos, Michael; Anagnostopoulos, Georgios C

    2015-01-01

    A traditional and intuitively appealing Multitask Multiple Kernel Learning (MT-MKL) method is to optimize the sum (thus, the average) of objective functions with (partially) shared kernel function, which allows information sharing among the tasks. We point out that the obtained solution corresponds to a single point on the Pareto Front (PF) of a multiobjective optimization problem, which considers the concurrent optimization of all task objectives involved in the Multitask Learning (MTL) problem. Motivated by this last observation and arguing that the former approach is heuristic, we propose a novel support vector machine MT-MKL framework that considers an implicitly defined set of conic combinations of task objectives. We show that solving our framework produces solutions along a path on the aforementioned PF and that it subsumes the optimization of the average of objective functions as a special case. Using the algorithms we derived, we demonstrate through a series of experimental results that the framework is capable of achieving a better classification performance, when compared with other similar MTL approaches.

  9. Formal truncations of connected kernel equations

    International Nuclear Information System (INIS)

    Dixon, R.M.

    1977-01-01

    The Connected Kernel Equations (CKE) of Alt, Grassberger and Sandhas (AGS); Kouri, Levin and Tobocman (KLT); and Bencze, Redish and Sloan (BRS) are compared against reaction theory criteria after formal channel space and/or operator truncations have been introduced. The Channel Coupling Class concept is used to study the structure of these CKEs. The related wave function formalisms of Sandhas; of L'Huillier, Redish and Tandy; and of Kouri, Krueger and Levin are also presented. New N-body connected kernel equations which are generalizations of the Lovelace three-body equations are derived. A method for systematically constructing fewer-body models from the N-body BRS and generalized Lovelace (GL) equations is developed. The formally truncated AGS, BRS, KLT and GL equations are analyzed by employing the criteria of reciprocity and two-cluster unitarity. Reciprocity considerations suggest that formal truncations of the BRS, KLT and GL equations can lead to reciprocity-violating results. This study suggests that atomic problems should employ three-cluster connected truncations and that two-cluster connected truncations should be a useful starting point for nuclear systems

  10. Evaluating Equating Results: Percent Relative Error for Chained Kernel Equating

    Science.gov (United States)

    Jiang, Yanlin; von Davier, Alina A.; Chen, Haiwen

    2012-01-01

    This article presents a method for evaluating equating results. Within the kernel equating framework, the percent relative error (PRE) for chained equipercentile equating was computed under the nonequivalent groups with anchor test (NEAT) design. The method was applied to two data sets to obtain the PRE, which can be used to measure equating…

  11. Kernel Based Nonlinear Dimensionality Reduction and Classification for Genomic Microarray

    Directory of Open Access Journals (Sweden)

    Lan Shu

    2008-07-01

    Full Text Available Genomic microarrays are powerful research tools in bioinformatics and modern medicinal research because they enable massively-parallel assays and simultaneous monitoring of thousands of gene expressions in biological samples. However, a simple microarray experiment often leads to very high-dimensional data and a huge amount of information, and this vast amount of data challenges researchers to extract the important features and reduce the high dimensionality. In this paper, a nonlinear dimensionality reduction method based on kernel locally linear embedding (LLE) is proposed, and a fuzzy K-nearest neighbors algorithm which denoises datasets is introduced as a replacement for the classical LLE KNN algorithm. In addition, a kernel-based support vector machine (SVM) is used to classify genomic microarray data sets. We demonstrate the application of the techniques to two published DNA microarray data sets. The experimental results confirm the superiority and high success rates of the presented method.
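
    A plain-vanilla baseline for this kind of pipeline, using standard LLE and an RBF-kernel SVM from scikit-learn on synthetic microarray-shaped data; the paper's fuzzy-KNN denoising and kernel LLE variant are not reproduced here:

```python
from sklearn.datasets import make_classification
from sklearn.manifold import LocallyLinearEmbedding
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# stand-in for a microarray matrix: few samples, many genes
X, y = make_classification(n_samples=80, n_features=2000,
                           n_informative=40, random_state=0)

clf = make_pipeline(
    LocallyLinearEmbedding(n_neighbors=12, n_components=10, random_state=0),
    SVC(kernel="rbf", C=10.0, gamma="scale"),
)
print(cross_val_score(clf, X, y, cv=5).mean())
```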

  12. RTOS kernel in portable electrocardiograph

    Science.gov (United States)

    Centeno, C. A.; Voos, J. A.; Riva, G. G.; Zerbini, C.; Gonzalez, E. A.

    2011-12-01

    This paper presents the use of a Real Time Operating System (RTOS) on a portable electrocardiograph based on a microcontroller platform. All medical device digital functions are performed by the microcontroller. The electrocardiograph CPU is based on the 18F4550 microcontroller, in which a uCOS-II RTOS can be embedded. The decision to use the kernel is based on its benefits: the license for educational use and its intrinsic time control and peripherals management. The feasibility of its use on the electrocardiograph is evaluated based on the minimum memory requirements imposed by the kernel structure. The kernel's own tools were used for time estimation and evaluation of the resources used by each process. After this feasibility analysis, the code was migrated from a cyclic structure to one based on separate processes or tasks able to synchronize events, resulting in an electrocardiograph running on one Central Processing Unit (CPU) based on an RTOS.

  13. RTOS kernel in portable electrocardiograph

    International Nuclear Information System (INIS)

    Centeno, C A; Voos, J A; Riva, G G; Zerbini, C; Gonzalez, E A

    2011-01-01

    This paper presents the use of a Real Time Operating System (RTOS) on a portable electrocardiograph based on a microcontroller platform. All medical device digital functions are performed by the microcontroller. The electrocardiograph CPU is based on the 18F4550 microcontroller, in which a uCOS-II RTOS can be embedded. The decision to use the kernel is based on its benefits: the license for educational use and its intrinsic time control and peripherals management. The feasibility of its use on the electrocardiograph is evaluated based on the minimum memory requirements imposed by the kernel structure. The kernel's own tools were used for time estimation and evaluation of the resources used by each process. After this feasibility analysis, the code was migrated from a cyclic structure to one based on separate processes or tasks able to synchronize events, resulting in an electrocardiograph running on one Central Processing Unit (CPU) based on an RTOS.

  14. RKRD: Runtime Kernel Rootkit Detection

    Science.gov (United States)

    Grover, Satyajit; Khosravi, Hormuzd; Kolar, Divya; Moffat, Samuel; Kounavis, Michael E.

    In this paper we address the problem of protecting computer systems against stealth malware. The problem is important because the number of known types of stealth malware increases exponentially. Existing approaches have some advantages for ensuring system integrity, but sophisticated techniques utilized by stealthy malware can thwart them. We propose Runtime Kernel Rootkit Detection (RKRD), a hardware-based, event-driven, secure and inclusionary approach to kernel integrity that addresses some of the limitations of the state of the art. Our solution is based on the principles of using virtualization hardware for isolation, verifying signatures coming from trusted code as opposed to malware for scalability, and performing system checks driven by events. Our RKRD implementation is guided by our goals of strong isolation, no modifications to target guest OS kernels, easy deployment, minimal infrastructure impact, and minimal performance overhead. We developed a system prototype and conducted a number of experiments which show that the performance impact of our solution is negligible.

  15. A network-based method to evaluate quality of reproducibility of differential expression in cancer genomics studies.

    Science.gov (United States)

    Li, Robin; Lin, Xiao; Geng, Haijiang; Li, Zhihui; Li, Jiabing; Lu, Tao; Yan, Fangrong

    2015-12-29

    Personalized cancer treatments depend on the determination of a patient's genetic status according to known genetic profiles for which targeted treatments exist. Such genetic profiles must be scientifically validated before they are applied to the general patient population, and reproducibility of the findings that support them is a fundamental challenge in validation studies. The percentage of overlapping genes (POG) criterion and derivative methods produce unstable and misleading results. Furthermore, in a complex disease, comparisons between different tumor subtypes can produce high POG scores that do not capture the consistencies in the functions. We focused on the quality rather than the quantity of the overlapping genes. We defined the rank value of each gene according to importance or quality by PageRank, on the basis of a particular topological structure. Then, we used the p-value of the rank-sum of the overlapping genes (PRSOG) to evaluate the quality of reproducibility. Though the POG scores were low in different studies of the same disease, the PRSOG was statistically significant, which suggests that sets of differentially expressed genes might be highly reproducible. Evaluations of eight datasets from breast cancer, lung cancer and four other disorders indicate that the quality-based PRSOG method performs better than a quantity-based method. Our analysis of the components of the sets of overlapping genes supports the utility of the PRSOG method.
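
    The PRSOG statistic as described can be sketched as follows; the network construction, the ranking direction and the exact rank-sum test used in the paper are assumptions of this sketch:

```python
import networkx as nx
from scipy.stats import mannwhitneyu

def prsog_pvalue(graph, genes_a, genes_b):
    """p-value of the rank-sum of overlapping genes (PRSOG sketch).
    Genes are ranked by PageRank on a gene network; the overlap of two
    differential-expression lists is tested against the other genes."""
    pr = nx.pagerank(graph)
    ranked = sorted(pr, key=pr.get)              # ascending importance
    rank = {g: i for i, g in enumerate(ranked)}
    overlap = set(genes_a) & set(genes_b)
    if not overlap:
        return 1.0
    rest = [rank[g] for g in graph if g not in overlap]
    # one-sided: do overlapping genes sit at higher ranks than chance?
    return mannwhitneyu([rank[g] for g in overlap], rest,
                        alternative="greater").pvalue
```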

  16. Denoising by semi-supervised kernel PCA preimaging

    DEFF Research Database (Denmark)

    Hansen, Toke Jansen; Abrahamsen, Trine Julie; Hansen, Lars Kai

    2014-01-01

    Kernel Principal Component Analysis (PCA) has proven a powerful tool for nonlinear feature extraction, and is often applied as a pre-processing step for classification algorithms. In denoising applications Kernel PCA provides the basis for dimensionality reduction, prior to the so-called pre-imag...

  17. Sentiment classification with interpolated information diffusion kernels

    NARCIS (Netherlands)

    Raaijmakers, S.

    2007-01-01

    Information diffusion kernels - similarity metrics in non-Euclidean information spaces - have been found to produce state of the art results for document classification. In this paper, we present a novel approach to global sentiment classification using these kernels. We carry out a large array of

  18. A Generalized Pyramid Matching Kernel for Human Action Recognition in Realistic Videos

    Directory of Open Access Journals (Sweden)

    Wenjun Zhang

    2013-10-01

    Full Text Available Human action recognition is an increasingly important research topic in the fields of video sensing, analysis and understanding. Caused by unconstrained sensing conditions, there exist large intra-class variations and inter-class ambiguities in realistic videos, which hinder the improvement of recognition performance for recent vision-based action recognition systems. In this paper, we propose a generalized pyramid matching kernel (GPMK) for recognizing human actions in realistic videos, based on a multi-channel “bag of words” representation constructed from local spatial-temporal features of video clips. As an extension to the spatial-temporal pyramid matching (STPM) kernel, the GPMK leverages heterogeneous visual cues in multiple feature descriptor types and spatial-temporal grid granularity levels, to build a valid similarity metric between two video clips for kernel-based classification. Instead of the predefined and fixed weights used in STPM, we present a simple, yet effective, method to compute adaptive channel weights of GPMK based on the kernel target alignment from training data. It incorporates prior knowledge and the data-driven information of different channels in a principled way. The experimental results on three challenging video datasets (i.e., Hollywood2, Youtube and HMDB51) validate the superiority of our GPMK w.r.t. the traditional STPM kernel for realistic human action recognition and outperform the state-of-the-art results in the literature.

  19. The finite section method and problems in frame theory

    DEFF Research Database (Denmark)

    Christensen, Ole; Strohmer, T.

    2005-01-01

    The finite section method is a convenient tool for approximation of the inverse of certain operators using finite-dimensional matrix techniques. In this paper we demonstrate that the method is very useful in frame theory: it leads to an efficient approximation of the inverse frame operator and also solves related computational problems in frame theory. In the case of a frame which is localized w.r.t. an orthonormal basis we are able to estimate the rate of approximation. The results are applied to the reproducing kernel frame appearing in the theory for shift-invariant spaces generated by a Riesz...

  20. Panel data specifications in nonparametric kernel regression

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard; Henningsen, Arne

    parametric panel data estimators to analyse the production technology of Polish crop farms. The results of our nonparametric kernel regressions generally differ from the estimates of the parametric models but they only slightly depend on the choice of the kernel functions. Based on economic reasoning, we...

  1. Hippocampal volume change measurement: quantitative assessment of the reproducibility of expert manual outlining and the automated methods FreeSurfer and FIRST.

    Science.gov (United States)

    Mulder, Emma R; de Jong, Remko A; Knol, Dirk L; van Schijndel, Ronald A; Cover, Keith S; Visser, Pieter J; Barkhof, Frederik; Vrenken, Hugo

    2014-05-15

    To measure hippocampal volume change in Alzheimer's disease (AD) or mild cognitive impairment (MCI), expert manual delineation is often used because of its supposed accuracy. It has been suggested that expert outlining yields poorer reproducibility as compared to automated methods, but this has not been investigated. To determine the reproducibilities of expert manual outlining and two common automated methods for measuring hippocampal atrophy rates in healthy aging, MCI and AD. From the Alzheimer's Disease Neuroimaging Initiative (ADNI), 80 subjects were selected: 20 patients with AD, 40 patients with mild cognitive impairment (MCI) and 20 healthy controls (HCs). Left and right hippocampal volume change between baseline and month-12 visit was assessed by using expert manual delineation, and by the automated software packages FreeSurfer (longitudinal processing stream) and FIRST. To assess reproducibility of the measured hippocampal volume change, both back-to-back (BTB) MPRAGE scans available for each visit were analyzed. Hippocampal volume change was expressed in μL, and as a percentage of baseline volume. Reproducibility of the 1-year hippocampal volume change was estimated from the BTB measurements by using linear mixed model to calculate the limits of agreement (LoA) of each method, reflecting its measurement uncertainty. Using the delta method, approximate p-values were calculated for the pairwise comparisons between methods. Statistical analyses were performed both with inclusion and exclusion of visibly incorrect segmentations. Visibly incorrect automated segmentation in either one or both scans of a longitudinal scan pair occurred in 7.5% of the hippocampi for FreeSurfer and in 6.9% of the hippocampi for FIRST. After excluding these failed cases, reproducibility analysis for 1-year percentage volume change yielded LoA of ±7.2% for FreeSurfer, ±9.7% for expert manual delineation, and ±10.0% for FIRST. Methods ranked the same for reproducibility of 1

  2. Sample selection based on kernel-subclustering for the signal reconstruction of multifunctional sensors

    International Nuclear Information System (INIS)

    Wang, Xin; Wei, Guo; Sun, Jinwei

    2013-01-01

    Signal reconstruction methods based on inverse modeling for multifunctional sensors have been widely studied in recent years. To improve the accuracy, the reconstruction methods have become more and more complicated because of the increase in the number of model parameters and sample points. However, there is another factor that affects the reconstruction accuracy, the position of the sample points, which has not been studied. A reasonable selection of the sample points could improve the signal reconstruction quality in at least two ways: improved accuracy with the same number of sample points, or the same accuracy obtained with a smaller number of sample points. Both ways are valuable for improving the accuracy and decreasing the workload, especially for large batches of multifunctional sensors. In this paper, we propose a sample selection method based on kernel-subclustering to distill groupings of the sample data and produce the representation of the data set for inverse modeling. The method calculates the distance between two data points based on the kernel-induced distance instead of the conventional distance. The kernel function is a generalization of the distance metric by mapping the data that are non-separable in the original space into homogeneous groups in the high-dimensional space. The method obtained the best results compared with the other three methods in the simulation. (paper)
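
    The kernel-induced distance mentioned above is simply the feature-space distance implied by the kernel. A small sketch, with a greedy farthest-point rule standing in for the paper's subclustering step:

```python
import numpy as np

def rbf(x, y, gamma=0.5):
    return np.exp(-gamma * np.sum((x - y) ** 2))

def kernel_distance(x, y, k=rbf):
    """Euclidean distance between the images of x and y in the
    feature space induced by kernel k."""
    return np.sqrt(k(x, x) - 2 * k(x, y) + k(y, y))

def select_samples(X, m, k=rbf):
    """Greedily pick m representative rows of X that are far apart
    under the kernel-induced distance."""
    chosen = [0]
    while len(chosen) < m:
        d = [min(kernel_distance(x, X[j], k) for j in chosen) for x in X]
        chosen.append(int(np.argmax(d)))
    return X[np.array(chosen)]
```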

  3. The kernel G1(x,x') and the quantum equivalence principle

    International Nuclear Information System (INIS)

    Ceccatto, H.; Foussats, A.; Giacomini, H.; Zandron, O.

    1981-01-01

    In this paper, the formulation of the quantum equivalence principle (QEP) is re-examined, and its compatibility with the conditions which must be fulfilled by the kernel G1(x,x') is discussed. The basis of solutions which gives the particle model in a curved space-time is also determined in terms of Cauchy data for such a kernel. Finally, the creation of particles in this model is analyzed by studying the time evolution of creation and annihilation operators. This method is an alternative to one that uses Bogoliubov transformations as the mechanism of creation. (author)

  4. Reaction kinetics aspect of U3O8 kernel with gas H2 on the characteristics of activation energy, reaction rate constant and O/U ratio of UO2 kernel

    International Nuclear Information System (INIS)

    Damunir

    2007-01-01

    The reaction kinetics of the U3O8 kernel with H2 gas, in terms of the activation energy, reaction rate constant and O/U ratio of the resulting UO2 kernel, were studied. The U3O8 kernel was reacted with H2 gas in a reduction furnace at varied reaction times and temperatures. The reaction temperature was varied at 600, 700, 750 and 850 °C at a pressure of 50 mmHg for 3 hours in an N2 atmosphere. The reaction time was varied at 1, 2, 3 and 4 hours at a temperature of 750 °C under similar conditions. The reaction product was the UO2 kernel. The kinetic aspects of the reaction between U3O8 and H2 comprised the minimum activation energy (ΔE), the reaction rate constant and the O/U ratio of the UO2 kernel. The minimum activation energy was determined from the slope of the straight line ln[D_b·R_o·{1 − (1 − X_b)^(1/3)}/(b·t·C_g)] = −3.9406 × 10³/T + 4.044. Multiplying the slope −3.9406 × 10³ by the ideal gas constant (R = 1.985 cal/mol) and the molarity difference of the reaction coefficients, 2, gave a minimum activation energy of 15.644 kcal/mol. The reaction rate constant was determined from first-order chemical reaction control and the Arrhenius equation. The O/U ratio of the UO2 kernel was obtained using a gravimetric method. Analysis with the chemical-reaction-control equation yielded reaction rate constants of 0.745 - 1.671 s⁻¹, and the Arrhenius equation at temperatures of 650 - 850 °C yielded reaction rate constants of 0.637 - 2.914 s⁻¹. The O/U ratios of the UO2 kernel at the respective reaction rate constants were 2.013 - 2.014, and the O/U ratios at reaction times of 1 - 4 hours were 2.04 - 2.011. The experimental results indicated that the minimum activation energy influenced the first-order reaction rate constant and the O/U ratio of the UO2 kernel. The optimum condition was obtained at a reaction rate constant of 1.43 s⁻¹ and an O/U ratio of 2.01, at a temperature of 750 °C and a reaction time of 3 hours.

  5. Biasing anisotropic scattering kernels for deep-penetration Monte Carlo calculations

    International Nuclear Information System (INIS)

    Carter, L.L.; Hendricks, J.S.

    1983-01-01

    The exponential transform is often used to improve the efficiency of deep-penetration Monte Carlo calculations. This technique is usually implemented by biasing the distance-to-collision kernel of the transport equation, but leaving the scattering kernel unchanged. Dwivedi obtained significant improvements in efficiency by biasing an isotropic scattering kernel as well as the distance-to-collision kernel. This idea is extended to anisotropic scattering, particularly the highly forward Klein-Nishina scattering of gamma rays
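
    The baseline the abstract starts from, biasing only the distance-to-collision kernel with the exponential transform, can be shown on the simplest deep-penetration problem: uncollided transmission through a thick slab. This is a toy sketch, not the paper's anisotropic-scattering extension:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, depth, n = 1.0, 10.0, 100_000     # a 10-mean-free-path slab

# analogue sampling: exp(-10) ~ 4.5e-5, so almost no history scores
s_true = rng.exponential(1.0 / sigma, n)
print("analogue :", (s_true > depth).mean())

# exponential transform: sample flight lengths from a stretched pdf
# sigma_b * exp(-sigma_b * s) and correct each score with a weight
sigma_b = 0.2
s = rng.exponential(1.0 / sigma_b, n)
w = (sigma / sigma_b) * np.exp(-(sigma - sigma_b) * s)
print("biased   :", (w * (s > depth)).mean())
print("exact    :", np.exp(-sigma * depth))
```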

  6. A Precise Drunk Driving Detection Using Weighted Kernel Based on Electrocardiogram.

    Science.gov (United States)

    Wu, Chung Kit; Tsang, Kim Fung; Chi, Hao Ran; Hung, Faan Hei

    2016-05-09

    Globally, 1.2 million people die and 50 million people are injured annually due to traffic accidents. These traffic accidents cost $500 billion. Drunk drivers are found in 40% of the traffic crashes. Existing drunk driving detection (DDD) systems do not provide accurate detection and pre-warning concurrently. Electrocardiogram (ECG) is a proven biosignal that accurately and simultaneously reflects a human's biological status. In this letter, a classifier for DDD based on ECG is investigated in an attempt to reduce traffic accidents caused by drunk drivers. At this point, it appears that there is no known research or literature on an ECG classifier for DDD. To identify drunk syndromes, the ECG signals from drunk drivers are studied and analyzed. As such, a precise ECG-based DDD (ECG-DDD) using a weighted kernel is developed. From the measurements, 10 key features of ECG signals were identified. To incorporate the important features, the feature vectors are weighted in the customization of kernel functions. Four commonly adopted kernel functions are studied. Results reveal that weighted feature vectors improve the accuracy by 11% compared to the computation using the prime kernel. Evaluation shows that ECG-DDD improved the accuracy by 8% to 18% compared to prevailing methods.
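
    One way to weight feature vectors inside a Gaussian kernel; the feature weights and data below are hypothetical stand-ins for the paper's 10 ECG features:

```python
import numpy as np
from sklearn.svm import SVC

def weighted_rbf(w, gamma=0.5):
    """Gaussian kernel over feature vectors rescaled by weights w."""
    def k(A, B):
        Aw, Bw = A * np.sqrt(w), B * np.sqrt(w)
        d2 = (Aw**2).sum(1)[:, None] - 2 * Aw @ Bw.T + (Bw**2).sum(1)
        return np.exp(-gamma * d2)
    return k

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 10))                  # 10 toy "ECG features"
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)   # toy sober/drunk label
w = np.array([3, 1, 1, 2, 1, 1, 1, 1, 1, 1], float)  # assumed weights

clf = SVC(kernel=weighted_rbf(w)).fit(X, y)
print(clf.score(X, y))
```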

  7. A Precise Drunk Driving Detection Using Weighted Kernel Based on Electrocardiogram

    Directory of Open Access Journals (Sweden)

    Chung Kit Wu

    2016-05-01

    Full Text Available Globally, 1.2 million people die and 50 million people are injured annually due to traffic accidents. These traffic accidents cost $500 billion. Drunk drivers are found in 40% of the traffic crashes. Existing drunk driving detection (DDD) systems do not provide accurate detection and pre-warning concurrently. Electrocardiogram (ECG) is a proven biosignal that accurately and simultaneously reflects a human’s biological status. In this letter, a classifier for DDD based on ECG is investigated in an attempt to reduce traffic accidents caused by drunk drivers. At this point, it appears that there is no known research or literature on an ECG classifier for DDD. To identify drunk syndromes, the ECG signals from drunk drivers are studied and analyzed. As such, a precise ECG-based DDD (ECG-DDD) using a weighted kernel is developed. From the measurements, 10 key features of ECG signals were identified. To incorporate the important features, the feature vectors are weighted in the customization of kernel functions. Four commonly adopted kernel functions are studied. Results reveal that weighted feature vectors improve the accuracy by 11% compared to the computation using the prime kernel. Evaluation shows that ECG-DDD improved the accuracy by 8% to 18% compared to prevailing methods.

  8. Finite Gaussian Mixture Approximations to Analytically Intractable Density Kernels

    DEFF Research Database (Denmark)

    Khorunzhina, Natalia; Richard, Jean-Francois

    The objective of the paper is that of constructing finite Gaussian mixture approximations to analytically intractable density kernels. The proposed method is adaptive in that terms are added one at the time and the mixture is fully re-optimized at each step using a distance measure that approxima...

  9. Magnetic resonance imaging of single rice kernels during cooking

    NARCIS (Netherlands)

    Mohoric, A.; Vergeldt, F.J.; Gerkema, E.; Jager, de P.A.; Duynhoven, van J.P.M.; Dalen, van G.; As, van H.

    2004-01-01

    The RARE imaging method was used to monitor the cooking of single rice kernels in real time and with high spatial resolution in three dimensions. The imaging sequence is optimized for rapid acquisition of signals with short relaxation times using centered out RARE. Short scan time and high spatial

  10. DuSK: A Dual Structure-preserving Kernel for Supervised Tensor Learning with Applications to Neuroimages.

    Science.gov (United States)

    He, Lifang; Kong, Xiangnan; Yu, Philip S; Ragin, Ann B; Hao, Zhifeng; Yang, Xiaowei

    With advances in data collection technologies, tensor data is assuming increasing prominence in many applications, and the problem of supervised tensor learning has emerged as a topic of critical significance in the data mining and machine learning community. Conventional methods for supervised tensor learning mainly focus on learning kernels by flattening the tensor into vectors or matrices; however, structural information within the tensors will be lost. In this paper, we introduce a new scheme to design structure-preserving kernels for supervised tensor learning. Specifically, we demonstrate how to leverage the naturally available structure within the tensorial representation to encode prior knowledge in the kernel. We propose a tensor kernel that can preserve tensor structures based upon dual-tensorial mapping. The dual-tensorial mapping function can map each tensor instance in the input space to another tensor in the feature space while preserving the tensorial structure. Theoretically, our approach is an extension of the conventional kernels in the vector space to tensor space. We applied our novel kernel in conjunction with SVM to real-world tensor classification problems including brain fMRI classification for three different diseases (i.e., Alzheimer's disease, ADHD and brain damage by HIV). Extensive empirical studies demonstrate that our proposed approach can effectively boost tensor classification performances, particularly with small sample sizes.

  11. Spent Fuel Pool Dose Rate Calculations Using Point Kernel and Hybrid Deterministic-Stochastic Shielding Methods

    International Nuclear Information System (INIS)

    Matijevic, M.; Grgic, D.; Jecmenica, R.

    2016-01-01

    This paper presents a comparison of the Krsko Power Plant simplified Spent Fuel Pool (SFP) dose rates using different computational shielding methodologies. The analysis was performed to estimate limiting gamma dose rates on wall-mounted level instrumentation in case of significant loss of cooling water. The SFP was represented with simple homogenized cylinders (point kernel and Monte Carlo (MC)) or cuboids (MC) using uranium, iron, water, and dry air as bulk region materials. The pool is divided into the old and new sections, where the old one has three additional subsections representing fuel assemblies (FAs) with different burnup/cooling times (60 days, 1 year and 5 years). The new section represents the FAs with a cooling time of 10 years. The time-dependent fuel assembly isotopic composition was calculated using the ORIGEN2 code applied to the depletion of one of the fuel assemblies present in the pool (AC-29). The source used in the Microshield calculation is based on the imported isotopic activities. The time-dependent photon spectra with total source intensity from the Microshield multigroup point kernel calculations were then prepared for two hybrid deterministic-stochastic sequences. One is based on the SCALE/MAVRIC (Monaco and Denovo) methodology and the other uses the Monte Carlo code MCNP6.1.1b and the ADVANTG3.0.1 code. Even though this model is a fairly simple one, the layers of shielding materials are thick enough to pose a significant shielding problem for the MC method without the use of an effective variance reduction (VR) technique. For that purpose the ADVANTG code was used to generate VR parameters (SB cards in SDEF and WWINP file) for the MCNP fixed-source calculation using continuous energy transport. ADVANTG employs the deterministic forward-adjoint transport solver Denovo, which implements the CADIS/FW-CADIS methodology. Denovo implements a structured, Cartesian-grid SN solver based on the Koch-Baker-Alcouffe parallel transport sweep algorithm across x-y domain blocks. This was first

  12. Wheat kernel dimensions: how do they contribute to kernel weight at ...

    Indian Academy of Sciences (India)

    2011-12-02

    Dec 2, 2011 ... yield components, is greatly influenced by kernel dimensions (KD), such as ... six linkage gaps, and it covered 3010.70 cM of the whole genome with an ... Ersoz E. et al. 2009 The genetic architecture of maize flowering.

  13. Fixed kernel regression for voltammogram feature extraction

    International Nuclear Information System (INIS)

    Acevedo Rodriguez, F J; López-Sastre, R J; Gil-Jiménez, P; Maldonado Bascón, S; Ruiz-Reyes, N

    2009-01-01

    Cyclic voltammetry is an electroanalytical technique for obtaining information about substances under analysis without the need for complex flow systems. However, classifying the information in voltammograms obtained using this technique is difficult. In this paper, we propose the use of fixed kernel regression as a method for extracting features from these voltammograms, reducing the information to a few coefficients. The proposed approach has been applied to a wine classification problem with accuracy rates of over 98%. Although the method is described here for extracting voltammogram information, it can be used for other types of signals
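
    A sketch of the idea under the assumption of Gaussian kernels fixed at evenly spaced points of the potential sweep: fit the voltammogram by least squares and keep the coefficients as a compact feature vector:

```python
import numpy as np

def fixed_kernel_features(signal, n_kernels=20, width=None):
    """Least-squares coefficients of fixed, evenly spaced Gaussian
    kernels; the coefficients summarise the 1-D voltammogram."""
    n = len(signal)
    centers = np.linspace(0, n - 1, n_kernels)
    width = width if width is not None else centers[1] - centers[0]
    t = np.arange(n)
    Phi = np.exp(-0.5 * ((t[:, None] - centers[None, :]) / width) ** 2)
    coef, *_ = np.linalg.lstsq(Phi, signal, rcond=None)
    return coef

features = fixed_kernel_features(np.sin(np.linspace(0, 3, 500)))
print(features.shape)   # (20,) coefficients instead of 500 samples
```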

  14. Intelligent Design of Metal Oxide Gas Sensor Arrays Using Reciprocal Kernel Support Vector Regression

    Science.gov (United States)

    Dougherty, Andrew W.

    Metal oxides are a staple of the sensor industry. The combination of their sensitivity to a number of gases and the electrical nature of their sensing mechanism makes them particularly attractive in solid state devices. The high temperature stability of the ceramic material also makes them ideal for detecting combustion byproducts where exhaust temperatures can be high. However, problems do exist with metal oxide sensors. They are not very selective, as they all tend to be sensitive to a number of reduction and oxidation reactions on the oxide's surface. This makes arrays with large numbers of sensors interesting to study as a method for introducing orthogonality to the system. Also, the sensors tend to suffer from long term drift for a number of reasons. In this thesis I will develop a system for intelligently modeling metal oxide sensors and determining their suitability for use in large arrays designed to analyze exhaust gas streams. It will introduce prior knowledge of the metal oxide sensors' response mechanisms in order to produce a response function for each sensor from sparse training data. The system will use the same technique to model and remove any long term drift from the sensor response. It will also provide an efficient means for determining the orthogonality of the sensors to determine whether they are useful in gas sensing arrays. The system is based on least squares support vector regression using the reciprocal kernel. The reciprocal kernel is introduced along with a method of optimizing the free parameters of the reciprocal kernel support vector machine. The reciprocal kernel is shown to be simpler and to perform better than an earlier kernel, the modified reciprocal kernel. Least squares support vector regression is chosen as it uses all of the training points, and an emphasis was placed throughout this research on extracting the maximum information from very sparse data. The reciprocal kernel is shown to be effective in modeling the sensor

  15. On Convergence of Kernel Density Estimates in Particle Filtering

    Czech Academy of Sciences Publication Activity Database

    Coufal, David

    2016-01-01

    Roč. 52, č. 5 (2016), s. 735-756 ISSN 0023-5954 Grant - others:GA ČR(CZ) GA16-03708S; SVV(CZ) 260334/2016 Institutional support: RVO:67985807 Keywords : Fourier analysis * kernel methods * particle filter Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.379, year: 2016

  16. A new kernel discriminant analysis framework for electronic nose recognition

    International Nuclear Information System (INIS)

    Zhang, Lei; Tian, Feng-Chun

    2014-01-01

    Graphical abstract: - Highlights: • This paper proposes a new discriminant analysis framework for feature extraction and recognition. • The principle of the proposed NDA is derived mathematically. • The NDA framework is coupled with kernel PCA for classification. • The proposed KNDA is compared with state-of-the-art e-Nose recognition methods. • The proposed KNDA shows the best performance in e-Nose experiments. - Abstract: Electronic nose (e-Nose) technology based on metal oxide semiconductor gas sensor arrays is widely studied for the detection of gas components. This paper proposes a new discriminant analysis framework (NDA) for dimension reduction and e-Nose recognition. In an NDA, the between-class and the within-class Laplacian scatter matrices are designed from sample to sample, respectively, to characterize the between-class separability and the within-class compactness, by seeking a discriminant matrix that simultaneously maximizes the between-class Laplacian scatter and minimizes the within-class Laplacian scatter. In terms of the linear separability in the high-dimensional kernel mapping space and the dimension reduction of principal component analysis (PCA), an effective kernel PCA plus NDA method (KNDA) is proposed for rapid detection of gas mixture components by an e-Nose. The NDA framework is derived in this paper, as well as the specific implementations of the proposed KNDA method in the training and recognition process. The KNDA is examined on e-Nose datasets of six kinds of gas components, and compared with state-of-the-art e-Nose classification methods. Experimental results demonstrate that the proposed KNDA method shows the best performance, with an average recognition rate and a total recognition rate of 94.14% and 95.06%, which points to promising feature extraction and multi-class recognition in e-Nose applications.
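
    The "kernel PCA plus discriminant analysis" structure can be approximated with off-the-shelf pieces; here classical LDA stands in for the paper's NDA, so this is only a structural sketch on stand-in data:

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import KernelPCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

X, y = load_iris(return_X_y=True)   # stand-in for e-Nose feature vectors
clf = make_pipeline(
    KernelPCA(n_components=3, kernel="rbf", gamma=0.1),
    LinearDiscriminantAnalysis(),
)
print(cross_val_score(clf, X, y, cv=5).mean())
```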

  17. Using the Intel Math Kernel Library on Peregrine

    Science.gov (United States)

    Learn how to use the Intel Math Kernel Library (MKL) with Peregrine system software. Core math functions in MKL include BLAS, LAPACK, ScaLAPACK, sparse solvers, and fast Fourier transforms.

  18. Fully-Automated High-Throughput NMR System for Screening of Haploid Kernels of Maize (Corn) by Measurement of Oil Content.

    Directory of Open Access Journals (Sweden)

    Hongzhi Wang

    Full Text Available One of the modern crop breeding techniques uses doubled haploid plants that contain an identical pair of chromosomes in order to accelerate the breeding process. A rapid haploid identification method is critical for large-scale selections of double haploids. The conventional methods based on the color of the endosperm and embryo seeds are slow, manual and prone to error. On the other hand, there exists a significant difference between diploid and haploid seeds generated by a high-oil inducer, which makes it possible to use oil content to identify haploids. This paper describes a fully-automated high-throughput NMR screening system for maize haploid kernel identification. The system is comprised of a sampler unit that selects a single kernel to feed for measurement of NMR and weight, and a kernel sorter that distributes the kernel according to the measurement result. Tests of the system show a consistent accuracy of 94% with an average screening time of 4 seconds per kernel. Field test results are described and directions for future improvement are discussed.

  19. Fully-Automated High-Throughput NMR System for Screening of Haploid Kernels of Maize (Corn) by Measurement of Oil Content

    Science.gov (United States)

    Xu, Xiaoping; Huang, Qingming; Chen, Shanshan; Yang, Peiqiang; Chen, Shaojiang; Song, Yiqiao

    2016-01-01

    One of the modern crop breeding techniques uses doubled haploid plants that contain an identical pair of chromosomes in order to accelerate the breeding process. A rapid haploid identification method is critical for large-scale selections of double haploids. The conventional methods based on the color of the endosperm and embryo seeds are slow, manual and prone to error. On the other hand, there exists a significant difference between diploid and haploid seeds generated by a high-oil inducer, which makes it possible to use oil content to identify haploids. This paper describes a fully-automated high-throughput NMR screening system for maize haploid kernel identification. The system is comprised of a sampler unit that selects a single kernel to feed for measurement of NMR and weight, and a kernel sorter that distributes the kernel according to the measurement result. Tests of the system show a consistent accuracy of 94% with an average screening time of 4 seconds per kernel. Field test results are described and directions for future improvement are discussed. PMID:27454427

  20. Investigation of tilted dose kernels for portal dose prediction in a-Si electronic portal imagers

    International Nuclear Information System (INIS)

    Chytyk, K.; McCurdy, B. M. C.

    2006-01-01

    The effect of beam divergence on dose calculation via Monte Carlo generated dose kernels was investigated in an amorphous silicon electronic portal imaging device (EPID). The flat-panel detector was simulated in EGSnrc with an additional 3.0 cm water buildup. The model included details of the detector's imaging cassette and the front cover upstream of it. To approximate the effect of the EPID's rear housing, a 2.1 cm air gap and 1.0 cm water slab were introduced into the simulation as equivalent backscatter material. Dose kernels were generated with an incident pencil beam of monoenergetic photons of energy 0.1, 2, 6, and 18 MeV. The orientation of the incident pencil beam was varied from 0° to 14° in 2° increments. Dose was scored in the phosphor layer of the detector in both cylindrical (at 0°) and Cartesian (at 0°-14°) geometries. To reduce statistical fluctuations in the Cartesian geometry simulations at large radial distances from the incident pencil beam, the voxels were first averaged bilaterally about the pencil beam and then combined into concentric square rings of voxels. Profiles of the EPID dose kernels displayed increasing asymmetry with increasing angle and energy. A comparison of the superposition (tilted kernels) and convolution (parallel kernels) dose calculation methods via the χ-comparison test (a derivative of the γ-evaluation) in worst-case-scenario geometries demonstrated agreement between the two methods within 0.0784 cm (one pixel width) distance-to-agreement and up to a 1.8% dose difference. More clinically typical field sizes and source-to-detector distances were also tested, yielding at most a 1.0% dose difference and the same distance-to-agreement. Therefore, the assumption of parallel dose kernels has less than a 1.8% dosimetric effect in extreme cases and less than a 1.0% dosimetric effect in most clinically relevant situations, and should be suitable for most clinical dosimetric applications. The

  1. Kernel bundle EPDiff

    DEFF Research Database (Denmark)

    Sommer, Stefan Horst; Lauze, Francois Bernard; Nielsen, Mads

    2011-01-01

    In the LDDMM framework, optimal warps for image registration are found as end-points of critical paths for an energy functional, and the EPDiff equations describe the evolution along such paths. The Large Deformation Diffeomorphic Kernel Bundle Mapping (LDDKBM) extension of LDDMM allows scale space...

  2. Proteome analysis of the almond kernel (Prunus dulcis).

    Science.gov (United States)

    Li, Shugang; Geng, Fang; Wang, Ping; Lu, Jiankang; Ma, Meihu

    2016-08-01

    Almond (Prunus dulcis) is a popular tree nut worldwide and offers many benefits to human health. However, the importance of almond kernel proteins in nutrition and in human health requires further evaluation. The present study presents a systematic evaluation of the proteins in the almond kernel using proteomic analysis. The nutrient and amino acid content in almond kernels from Xinjiang is similar to that of American varieties; however, Xinjiang varieties have a higher protein content. Two-dimensional electrophoresis analysis demonstrated a wide distribution of molecular weights and isoelectric points of almond kernel proteins. A total of 434 proteins were identified by LC-MS/MS, and most were proteins that were experimentally confirmed for the first time. Gene ontology (GO) analysis of the 434 proteins indicated that they are mainly involved in primary biological processes including metabolic processes (67.5%), cellular processes (54.1%), and single-organism processes (43.4%); that the main molecular functions of almond kernel proteins are catalytic activity (48.0%), binding (45.4%) and structural molecule activity (11.9%); and that the proteins are primarily distributed in the cell (59.9%), organelle (44.9%), and membrane (22.8%). The almond kernel is a source of a wide variety of proteins. This study provides important information contributing to the screening and identification of almond proteins, the understanding of almond protein function, and the development of almond protein products. © 2015 Society of Chemical Industry.

  3. Control Transfer in Operating System Kernels

    Science.gov (United States)

    1994-05-13

    microkernel system that runs less code in the kernel address space. To realize the performance benefit of allocating stacks in unmapped kseg0 memory, the ... review how I modified the Mach 3.0 kernel to use continuations. Because of Mach's message-passing microkernel structure, interprocess communication was ... critical control transfer paths, deeply-nested call chains are undesirable in any case because of the function call overhead.

  4. Bivariate discrete beta Kernel graduation of mortality data.

    Science.gov (United States)

    Mazza, Angelo; Punzo, Antonio

    2015-07-01

    Various parametric/nonparametric techniques have been proposed in the literature to graduate mortality data as a function of age. Nonparametric approaches, such as kernel smoothing regression, are often preferred because they do not assume any particular mortality law. Among the existing kernel smoothing approaches, the recently proposed (univariate) discrete beta kernel smoother has been shown to provide some benefits. Bivariate graduation, over age and calendar years or durations, is common practice in demography and actuarial sciences. In this paper, we generalize the discrete beta kernel smoother to the bivariate case, and we introduce an adaptive bandwidth variant that may provide additional benefits when data on exposures to the risk of death are available; furthermore, we outline a cross-validation procedure for bandwidth selection. Using simulation studies, we compare the bivariate approach proposed here with its corresponding univariate formulation and with two popular nonparametric bivariate graduation techniques, based on Epanechnikov kernels and on P-splines. To make the simulations realistic, a bivariate dataset, based on probabilities of dying recorded for US males, is used. The simulations confirm the gain in performance of the new bivariate approach with respect to both the univariate and the bivariate competitors.
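
    To make the graduation step concrete, here is a minimal Python sketch of a univariate discrete beta kernel smoother in the spirit of this record; the shape-parameter rule (x/(m·h) + 1) and the bandwidth value are illustrative assumptions, not necessarily the paper's exact parameterization.

        import numpy as np
        from scipy.stats import beta

        def discrete_beta_weights(ages, x, h):
            """Beta-density weights on a discrete age grid rescaled to [0, 1]."""
            m = ages.max()
            t = ages / m                                  # grid on [0, 1]
            a = x / (m * h) + 1                           # assumed shape rule
            b = (1 - x / m) / h + 1
            w = beta.pdf(t, a, b)
            return w / w.sum()                            # normalize weights

        def graduate(ages, raw_rates, h=0.05):
            """Nadaraya-Watson-style graduation of raw mortality rates."""
            return np.array([discrete_beta_weights(ages, x, h) @ raw_rates
                             for x in ages])

        ages = np.arange(101)
        raw = 0.0001 * np.exp(0.085 * ages) * (1 + 0.3 * np.random.rand(101))
        smooth = graduate(ages, raw)                      # graduated rates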

  5. Feature selection and multi-kernel learning for sparse representation on a manifold

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-03-01

    Sparse representation has been widely studied as a part-based data representation method and applied in many scientific and engineering fields, such as bioinformatics and medical imaging. It seeks to represent a data sample as a sparse linear combination of some basic items in a dictionary. Gao et al. (2013) recently proposed Laplacian sparse coding by regularizing the sparse codes with an affinity graph. However, due to the noisy features and nonlinear distribution of the data samples, the affinity graph constructed directly from the original feature space is not necessarily a reliable reflection of the intrinsic manifold of the data samples. To overcome this problem, we integrate feature selection and multiple kernel learning into the sparse coding on the manifold. To this end, unified objectives are defined for feature selection, multiple kernel learning, sparse coding, and graph regularization. By optimizing the objective functions iteratively, we develop novel data representation algorithms with feature selection and multiple kernel learning, respectively. Experimental results on two challenging tasks, N-linked glycosylation prediction and mammogram retrieval, demonstrate that the proposed algorithms outperform the traditional sparse coding methods. © 2013 Elsevier Ltd.

  6. Feature selection and multi-kernel learning for sparse representation on a manifold.

    Science.gov (United States)

    Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin

    2014-03-01

    Sparse representation has been widely studied as a part-based data representation method and applied in many scientific and engineering fields, such as bioinformatics and medical imaging. It seeks to represent a data sample as a sparse linear combination of some basic items in a dictionary. Gao et al. (2013) recently proposed Laplacian sparse coding by regularizing the sparse codes with an affinity graph. However, due to the noisy features and nonlinear distribution of the data samples, the affinity graph constructed directly from the original feature space is not necessarily a reliable reflection of the intrinsic manifold of the data samples. To overcome this problem, we integrate feature selection and multiple kernel learning into the sparse coding on the manifold. To this end, unified objectives are defined for feature selection, multiple kernel learning, sparse coding, and graph regularization. By optimizing the objective functions iteratively, we develop novel data representation algorithms with feature selection and multiple kernel learning respectively. Experimental results on two challenging tasks, N-linked glycosylation prediction and mammogram retrieval, demonstrate that the proposed algorithms outperform the traditional sparse coding methods. Copyright © 2013 Elsevier Ltd. All rights reserved.
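
    The computational core shared by both versions of this record is a learned convex combination of basis kernels. The sketch below shows that ingredient only, with illustrative kernel choices; the full method would alternate this step with the sparse coding, feature-selection, and graph-regularization updates.

        import numpy as np

        def base_kernels(X):
            """A few basic kernels on the rows of X (choices are illustrative)."""
            lin = X @ X.T                                        # linear
            d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
            rbf = np.exp(-d2 / d2.mean())                        # Gaussian
            poly = (1 + X @ X.T) ** 2                            # polynomial
            return [lin, rbf, poly]

        def combined_kernel(Ks, w):
            """Convex combination sum_k w_k K_k of the basis kernels."""
            w = np.clip(np.asarray(w, float), 0, None)
            w /= w.sum()
            return sum(wk * Kk for wk, Kk in zip(w, Ks))

        X = np.random.randn(50, 10)                              # toy samples
        K = combined_kernel(base_kernels(X), [1.0, 1.0, 1.0])
        # The full algorithm re-estimates w, the sparse codes, and the
        # affinity graph in turn until the joint objective converges.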

  7. Genomic-Enabled Prediction Kernel Models with Random Intercepts for Multi-environment Trials

    Science.gov (United States)

    Cuevas, Jaime; Granato, Italo; Fritsche-Neto, Roberto; Montesinos-Lopez, Osval A.; Burgueño, Juan; Bandeira e Sousa, Massaine; Crossa, José

    2018-01-01

    In this study, we compared the prediction accuracy of the main genotypic effect model (MM) without G×E interactions, the multi-environment single variance G×E deviation model (MDs), and the multi-environment environment-specific variance G×E deviation model (MDe), where the random genetic effects of the lines are modeled with the markers (or pedigree). With the objective of further modeling the genetic residual of the lines, we incorporated the random intercepts of the lines (l) and generated another three models. Each of these six models was fitted with a linear kernel method (Genomic Best Linear Unbiased Predictor, GB) and a Gaussian kernel (GK) method. We compared these 12 model-method combinations with another two multi-environment G×E interaction models with unstructured variance-covariances (MUC) using the GB and GK kernels (four additional model-method combinations). Thus, we compared the genomic-enabled prediction accuracy of a total of 16 model-method combinations on two maize data sets with positive phenotypic correlations among environments, and on two wheat data sets with complex G×E that includes some negative and close-to-zero phenotypic correlations among environments. The two models MDs and MDe with the random intercepts of the lines and the GK method were computationally efficient and gave high prediction accuracy in the two maize data sets. For the more complex G×E wheat data sets, the MDs and MDe model-method combinations with the random intercepts of the lines and the GK method yielded important savings in computing time compared with the multi-environment G×E models with unstructured variance-covariances, although with lower genomic prediction accuracy. PMID:29476023
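
    As a concrete illustration of the two kernel methods compared throughout this record, the following sketch builds a linear GBLUP-style kernel and a Gaussian kernel from a marker matrix; the centering, scaling, and median-distance bandwidth are common conventions assumed here, not necessarily the paper's exact choices.

        import numpy as np

        def gblup_kernel(X):
            """Linear (GB) genomic relationship kernel from markers."""
            Xc = X - X.mean(axis=0)                  # center marker codes
            return (Xc @ Xc.T) / Xc.shape[1]

        def gaussian_kernel(X, theta=1.0):
            """Nonlinear (GK) kernel: exp(-theta * d^2 / median(d^2))."""
            d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
            return np.exp(-theta * d2 / np.median(d2[d2 > 0]))

        X = np.random.binomial(2, 0.3, size=(100, 500)).astype(float)  # toy SNPs
        K_gb, K_gk = gblup_kernel(X), gaussian_kernel(X)
        # Either kernel enters the mixed model y = mu + u + e, u ~ N(0, s2 * K).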

  8. Measurement of Weight of Kernels in a Simulated Cylindrical Fuel Compact for HTGR

    International Nuclear Information System (INIS)

    Kim, Woong Ki; Lee, Young Woo; Kim, Young Min; Kim, Yeon Ku; Eom, Sung Ho; Jeong, Kyung Chai; Cho, Moon Sung; Cho, Hyo Jin; Kim, Joo Hee

    2011-01-01

    The TRISO-coated fuel particle for the high temperature gas-cooled reactor (HTGR) is composed of a nuclear fuel kernel and outer coating layers. The coated particles are mixed with a graphite matrix to form the HTGR fuel element. The weight of fuel kernels in an element is generally measured by chemical analysis or with a gamma-ray spectrometer. Although chemical analysis measures the weight of kernels accurately, the samples used in the analysis cannot be returned to the fabrication process. Furthermore, radioactive wastes are generated during the inspection procedure. The gamma-ray spectrometer requires an elaborate reference sample to reduce measurement errors induced by differences in geometry between the test sample and the reference sample. X-ray computed tomography (CT) is an alternative that can measure the weight of kernels in a compact nondestructively. In this study, X-ray CT is applied to measure the weight of kernels in a cylindrical compact containing simulated TRISO-coated particles with ZrO2 kernels. The volume of kernels as well as the number of kernels in the simulated compact is measured from the 3-D density information. The weight of kernels was calculated from the volume of kernels or from the number of kernels. The weight of kernels was also measured by extracting the kernels from a compact to verify the result of the X-ray CT application.

  9. 3-D waveform tomography sensitivity kernels for anisotropic media

    KAUST Repository

    Djebbi, Ramzi

    2014-01-01

    The complications in anisotropic multi-parameter inversion lie in the trade-off between the different anisotropy parameters. We compute the tomographic waveform sensitivity kernels for a VTI acoustic medium perturbation as a tool to investigate this ambiguity between the different parameters. We use dynamic ray tracing to efficiently handle the expensive computational cost for 3-D anisotropic models. Ray tracing also provides the ray direction information necessary for conditioning the sensitivity kernels to handle anisotropy. The NMO velocity and η parameter kernels showed a maximum sensitivity for diving waves, which makes those parameters a relevant choice for wave equation tomography. The δ parameter kernel showed zero sensitivity; therefore it can serve as a secondary parameter to fit the amplitude in the acoustic anisotropic inversion. Considering the limited penetration depth of diving waves, migration velocity analysis based kernels are introduced to fix the depth ambiguity with reflections and compute sensitivity maps in the deeper parts of the model.

  10. Visualization of nonlinear kernel models in neuroimaging by sensitivity maps

    DEFF Research Database (Denmark)

    Rasmussen, Peter Mondrup; Madsen, Kristoffer Hougaard; Lund, Torben Ellegaard

    2011-01-01

    There is significant current interest in decoding mental states from neuroimages. In this context kernel methods, e.g., support vector machines (SVM) are frequently adopted to learn statistical relations between patterns of brain activation and experimental conditions. In this paper we focus on v...

  11. Extraction of Phrase-Structure Fragments with a Linear Average Time Tree-Kernel

    NARCIS (Netherlands)

    van Cranenburgh, Andreas

    2014-01-01

    We present an algorithm and implementation for extracting recurring fragments from treebanks. Using a tree-kernel method the largest common fragments are extracted from each pair of trees. The algorithm presented achieves a thirty-fold speedup over the previously available method on the Wall Street

  12. The dipole form of the gluon part of the BFKL kernel

    International Nuclear Information System (INIS)

    Fadin, V.S.; Fiore, R.; Grabovsky, A.V.; Papa, A.

    2007-01-01

    The dipole form of the gluon part of the color singlet BFKL kernel in the next-to-leading order (NLO) is obtained in the coordinate representation by direct transfer from the momentum representation, where the kernel was calculated before. With this paper the transformation of the NLO BFKL kernel to the dipole form, started a few months ago with the quark part of the kernel, is completed

  13. Emotion Recognition from Single-Trial EEG Based on Kernel Fisher’s Emotion Pattern and Imbalanced Quasiconformal Kernel Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Yi-Hung Liu

    2014-07-01

    Full Text Available Electroencephalogram-based emotion recognition (EEG-ER) has received increasing attention in the fields of health care, affective computing, and brain-computer interface (BCI). However, satisfactory ER performance within a bi-dimensional and non-discrete emotional space using single-trial EEG data remains a challenging task. To address this issue, we propose a three-layer scheme for single-trial EEG-ER. In the first layer, a set of spectral powers of different EEG frequency bands are extracted from multi-channel single-trial EEG signals. In the second layer, the kernel Fisher’s discriminant analysis method is applied to further extract features with better discrimination ability from the EEG spectral powers. The feature vector produced by layer 2 is called a kernel Fisher’s emotion pattern (KFEP), and is sent into layer 3 for further classification, where the proposed imbalanced quasiconformal kernel support vector machine (IQK-SVM) serves as the emotion classifier. The outputs of the three-layer EEG-ER system are labels of emotional valence and arousal. Furthermore, to collect effective training and testing datasets for the current EEG-ER system, we also use an emotion-induction paradigm in which a set of pictures selected from the International Affective Picture System (IAPS) are employed as emotion induction stimuli. The performance of the proposed three-layer solution is compared with that of other EEG spectral power-based features and emotion classifiers. Results on 10 healthy participants indicate that the proposed KFEP feature performs better than other spectral power features, and IQK-SVM outperforms traditional SVM in terms of EEG-ER accuracy. Our findings also show that the proposed EEG-ER scheme achieves the highest classification accuracies of valence (82.68%) and arousal (84.79%) among all tested methods.

  14. Image re-sampling detection through a novel interpolation kernel.

    Science.gov (United States)

    Hilal, Alaa

    2018-06-01

    Image re-sampling involved in re-size and rotation transformations is an essential building block in typical digital image alteration. Fortunately, traces left by such processes are detectable, proving that the image has undergone a re-sampling transformation. Within this context, we present in this paper two original contributions. First, we propose a new re-sampling interpolation kernel. It depends on five independent parameters that control its amplitude, angular frequency, standard deviation, and duration. We then demonstrate its capacity to imitate the behavior of the most frequently used interpolation kernels in digital image re-sampling applications. Second, the proposed model is used to characterize and detect the correlation coefficients involved in re-sampling transformations. The process includes a minimization of an error function using the gradient method. The proposed method is assessed over a large database of 11,000 re-sampled images. Additionally, it is implemented within an algorithm in order to assess images that have undergone complex transformations. The obtained results demonstrate better performance and reduced processing time when compared to a reference method, validating the suitability of the proposed approaches. Copyright © 2018 Elsevier B.V. All rights reserved.
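
    The record does not give the kernel's closed form, so the sketch below uses a hypothetical damped-oscillation shape simply to illustrate a five-parameter family with amplitude, angular frequency, standard deviation, duration, and phase; fitting would then minimize the squared error to observed re-sampling correlations by gradient descent.

        import numpy as np

        def parametric_kernel(x, A=1.0, w=np.pi, sigma=1.0, T=2.0, phi=0.0):
            """Hypothetical interpolation kernel: damped cosine with finite
            support; A = amplitude, w = angular frequency, sigma = spread,
            T = duration (support half-width), phi = phase."""
            h = A * np.cos(w * x + phi) * np.exp(-x**2 / (2 * sigma**2))
            return np.where(np.abs(x) <= T, h, 0.0)

        x = np.linspace(-3, 3, 121)
        h = parametric_kernel(x)        # evaluate the kernel on a grid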

  15. Universal Algorithm for Online Trading Based on the Method of Calibration

    OpenAIRE

    V'yugin, Vladimir; Trunov, Vladimir

    2012-01-01

    We present a universal algorithm for online trading in the stock market which performs asymptotically at least as well as any stationary trading strategy that computes the investment at each step using a fixed function of the side information that belongs to a given RKHS (Reproducing Kernel Hilbert Space). Using a universal kernel, we extend this result to any continuous stationary strategy. In this learning process, a trader rationally chooses his gambles using predictions made by a randomized ...

  16. Automatic classification of retinal three-dimensional optical coherence tomography images using principal component analysis network with composite kernels.

    Science.gov (United States)

    Fang, Leyuan; Wang, Chong; Li, Shutao; Yan, Jun; Chen, Xiangdong; Rabbani, Hossein

    2017-11-01

    We present an automatic method, termed the principal component analysis network with composite kernel (PCANet-CK), for the classification of three-dimensional (3-D) retinal optical coherence tomography (OCT) images. Specifically, the proposed PCANet-CK method first utilizes the PCANet to automatically learn features from each B-scan of the 3-D retinal OCT images. Then, multiple kernels are separately applied to a set of very important features of the B-scans, and these kernels are fused together, which can jointly exploit the correlations among features of the 3-D OCT images. Finally, the fused (composite) kernel is incorporated into an extreme learning machine for the OCT image classification. We tested the proposed algorithm on two real 3-D spectral domain OCT (SD-OCT) datasets (of normal subjects and subjects with macular edema and age-related macular degeneration), which demonstrated its effectiveness. (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  17. Robust anti-synchronization of uncertain chaotic systems based on multiple-kernel least squares support vector machine modeling

    International Nuclear Information System (INIS)

    Chen Qiang; Ren Xuemei; Na Jing

    2011-01-01

    Highlights: Model uncertainty of the system is approximated by a multiple-kernel LSSVM. Approximation errors and disturbances are compensated for in the controller design. Asymptotic anti-synchronization is achieved in the presence of model uncertainty and disturbances. Abstract: In this paper, we propose a robust anti-synchronization scheme based on multiple-kernel least squares support vector machine (MK-LSSVM) modeling for two uncertain chaotic systems. The multiple-kernel regression, which is a linear combination of basic kernels, is designed to approximate system uncertainties by constructing a multiple-kernel Lagrangian function and computing the corresponding regression parameters. Then, a robust feedback control based on MK-LSSVM modeling is presented, and an improved update law is employed to estimate the unknown bound of the approximation error. The proposed control scheme can guarantee the asymptotic convergence of the anti-synchronization errors in the presence of system uncertainties and external disturbances. Numerical examples are provided to show the effectiveness of the proposed method.
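
    For orientation, the modeling ingredient of this record is least squares support vector regression on a combined kernel. A minimal sketch, assuming the standard LSSVM dual system (the paper's Lagrangian-based weight computation and control law are omitted):

        import numpy as np

        def mk_lssvm_fit(Ks, w, y, gamma=10.0):
            """LSSVM regression with combined kernel K = sum_k w_k K_k.

            Solves the standard dual system
                [ 0   1^T         ] [b]       [0]
                [ 1   K + I/gamma ] [alpha] = [y]."""
            K = sum(wk * Kk for wk, Kk in zip(w, Ks))
            n = len(y)
            A = np.zeros((n + 1, n + 1))
            A[0, 1:] = A[1:, 0] = 1.0
            A[1:, 1:] = K + np.eye(n) / gamma
            sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
            return sol[0], sol[1:]                    # bias b, coefficients

        X = np.random.randn(40, 3)
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        Ks = [X @ X.T, np.exp(-d2)]                   # linear + Gaussian
        b, alpha = mk_lssvm_fit(Ks, [0.5, 0.5], np.sin(X[:, 0]))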

  18. A new discrete dipole kernel for quantitative susceptibility mapping.

    Science.gov (United States)

    Milovic, Carlos; Acosta-Cabronero, Julio; Pinto, José Miguel; Mattern, Hendrik; Andia, Marcelo; Uribe, Sergio; Tejos, Cristian

    2018-09-01

    Most approaches for quantitative susceptibility mapping (QSM) are based on a forward model approximation that employs a continuous Fourier transform operator to solve a differential equation system. Such a formulation, however, is prone to high-frequency aliasing. The aim of this study was to reduce such errors using an alternative dipole kernel formulation based on the discrete Fourier transform and discrete operators. The impact of this approach on forward model calculation and susceptibility inversion was evaluated in contrast to the continuous formulation, both with synthetic phantoms and with in vivo MRI data. The discrete kernel demonstrated systematically better fits to analytic field solutions, and showed fewer over-oscillations and aliasing artifacts while preserving low- and medium-frequency responses relative to those obtained with the continuous kernel. In the context of QSM estimation, the use of the proposed discrete kernel resulted in error reduction and increased sharpness. This proof-of-concept study demonstrated that discretizing the dipole kernel is advantageous for QSM. The impact on small or narrow structures such as the venous vasculature might be particularly relevant to high-resolution QSM applications with ultra-high field MRI, a topic for future investigations. The proposed dipole kernel admits a straightforward implementation within existing QSM routines. Copyright © 2018 Elsevier Inc. All rights reserved.
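
    To fix ideas, the sketch below builds the conventional continuous dipole kernel D = 1/3 - kz^2/|k|^2 and a discrete variant in which squared frequencies are replaced by eigenvalues of the discrete second-difference operator; this captures the spirit of the discrete formulation, though the paper's exact operators may differ.

        import numpy as np

        def dipole_kernel(shape, discrete=False):
            """k-space dipole kernel on a unit-spacing grid."""
            freqs = [np.fft.fftfreq(n) for n in shape]
            if discrete:
                # eigenvalues of the discrete Laplacian along each axis
                k2s = [2 - 2 * np.cos(2 * np.pi * f) for f in freqs]
            else:
                k2s = [(2 * np.pi * f) ** 2 for f in freqs]
            KX2, KY2, KZ2 = np.meshgrid(*k2s, indexing="ij")
            k2 = KX2 + KY2 + KZ2
            with np.errstate(invalid="ignore", divide="ignore"):
                D = 1.0 / 3.0 - KZ2 / k2
            D[0, 0, 0] = 0.0                 # fix the undefined k = 0 term
            return D

        chi = np.random.randn(32, 32, 32)    # toy susceptibility map
        field = np.fft.ifftn(dipole_kernel(chi.shape, discrete=True)
                             * np.fft.fftn(chi)).real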

  19. Genetic Analysis of Kernel Traits in Maize-Teosinte Introgression Populations

    Directory of Open Access Journals (Sweden)

    Zhengbin Liu

    2016-08-01

    Full Text Available Seed traits have been targeted by human selection during the domestication of crop species as a way to increase the caloric and nutritional content of food during the transition from hunter-gatherer to early farming societies. The primary seed trait under selection was likely seed size/weight, as it is most directly related to overall grain yield. Additional seed traits involved in seed shape may have also contributed to larger grain. Maize (Zea mays ssp. mays) kernel weight has increased more than 10-fold in the 9000 years since domestication from its wild ancestor, teosinte (Z. mays ssp. parviglumis). In order to study how size and shape affect kernel weight, we analyzed kernel morphometric traits in a set of 10 maize-teosinte introgression populations using digital imaging software. We identified quantitative trait loci (QTL) for kernel area and length with moderate allelic effects that colocalize with kernel weight QTL. Several genomic regions with strong effects during maize domestication were detected, and a genetic framework for kernel traits was characterized by complex pleiotropic interactions. Our results both confirm prior reports of kernel domestication loci and identify previously uncharacterized QTL with a range of allelic effects, enabling future research into the genetic basis of these traits.

  20. Scientific opinion on the acute health risks related to the presence of cyanogenic glycosides in raw apricot kernels and products derived from raw apricot kernels

    DEFF Research Database (Denmark)

    Petersen, Annette

    of kernels promoted (10 and 60 kernels/day for the general population and cancer patients, respectively), exposures exceeded the ARfD 17–413 and 3–71 times in toddlers and adults, respectively. The estimated maximum quantity of apricot kernels (or raw apricot material) that can be consumed without exceeding...

  1. Genomic-Enabled Prediction in Maize Using Kernel Models with Genotype × Environment Interaction.

    Science.gov (United States)

    Bandeira E Sousa, Massaine; Cuevas, Jaime; de Oliveira Couto, Evellyn Giselly; Pérez-Rodríguez, Paulino; Jarquín, Diego; Fritsche-Neto, Roberto; Burgueño, Juan; Crossa, Jose

    2017-06-07

    Multi-environment trials are routinely conducted in plant breeding to select candidates for the next selection cycle. In this study, we compare the prediction accuracy of four developed genomic-enabled prediction models: (1) single-environment, main genotypic effect model (SM); (2) multi-environment, main genotypic effects model (MM); (3) multi-environment, single variance G×E deviation model (MDs); and (4) multi-environment, environment-specific variance G×E deviation model (MDe). Each of these four models was fitted using two kernel methods: a linear kernel, the Genomic Best Linear Unbiased Predictor (GB), and a nonlinear Gaussian kernel (GK). The eight model-method combinations were applied to two extensive Brazilian maize data sets (HEL and USP), having different numbers of maize hybrids evaluated in different environments for grain yield (GY), plant height (PH), and ear height (EH). Results show that the MDe and MDs models fitted with the Gaussian kernel (MDe-GK and MDs-GK) had the highest prediction accuracy. For GY in the HEL data set, the increase in prediction accuracy of SM-GK over SM-GB ranged from 9 to 32%. For the MM, MDs, and MDe models, the increase in prediction accuracy of GK over GB ranged from 9 to 49%. For GY in the USP data set, the increase in prediction accuracy of SM-GK over SM-GB ranged from 0 to 7%. For the MM, MDs, and MDe models, the increase in prediction accuracy of GK over GB ranged from 34 to 70%. For traits PH and EH, gains in prediction accuracy of models with GK compared to models with GB were smaller than those achieved in GY. Also, these gains in prediction accuracy decreased when a more difficult prediction problem was studied. Copyright © 2017 Bandeira e Sousa et al.

  2. Genomic-Enabled Prediction in Maize Using Kernel Models with Genotype × Environment Interaction

    Directory of Open Access Journals (Sweden)

    Massaine Bandeira e Sousa

    2017-06-01

    Full Text Available Multi-environment trials are routinely conducted in plant breeding to select candidates for the next selection cycle. In this study, we compare the prediction accuracy of four developed genomic-enabled prediction models: (1) single-environment, main genotypic effect model (SM); (2) multi-environment, main genotypic effects model (MM); (3) multi-environment, single variance G×E deviation model (MDs); and (4) multi-environment, environment-specific variance G×E deviation model (MDe). Each of these four models was fitted using two kernel methods: a linear kernel, the Genomic Best Linear Unbiased Predictor (GB), and a nonlinear Gaussian kernel (GK). The eight model-method combinations were applied to two extensive Brazilian maize data sets (HEL and USP), having different numbers of maize hybrids evaluated in different environments for grain yield (GY), plant height (PH), and ear height (EH). Results show that the MDe and MDs models fitted with the Gaussian kernel (MDe-GK and MDs-GK) had the highest prediction accuracy. For GY in the HEL data set, the increase in prediction accuracy of SM-GK over SM-GB ranged from 9 to 32%. For the MM, MDs, and MDe models, the increase in prediction accuracy of GK over GB ranged from 9 to 49%. For GY in the USP data set, the increase in prediction accuracy of SM-GK over SM-GB ranged from 0 to 7%. For the MM, MDs, and MDe models, the increase in prediction accuracy of GK over GB ranged from 34 to 70%. For traits PH and EH, gains in prediction accuracy of models with GK compared to models with GB were smaller than those achieved in GY. Also, these gains in prediction accuracy decreased when a more difficult prediction problem was studied.

  3. DuSK: A Dual Structure-preserving Kernel for Supervised Tensor Learning with Applications to Neuroimages

    Science.gov (United States)

    He, Lifang; Kong, Xiangnan; Yu, Philip S.; Ragin, Ann B.; Hao, Zhifeng; Yang, Xiaowei

    2015-01-01

    With advances in data collection technologies, tensor data is assuming increasing prominence in many applications, and the problem of supervised tensor learning has emerged as a topic of critical significance in the data mining and machine learning community. Conventional methods for supervised tensor learning mainly focus on learning kernels by flattening the tensor into vectors or matrices; however, structural information within the tensors is then lost. In this paper, we introduce a new scheme to design structure-preserving kernels for supervised tensor learning. Specifically, we demonstrate how to leverage the naturally available structure within the tensorial representation to encode prior knowledge in the kernel. We propose a tensor kernel that can preserve tensor structures based upon dual-tensorial mapping. The dual-tensorial mapping function maps each tensor instance in the input space to another tensor in the feature space while preserving the tensorial structure. Theoretically, our approach is an extension of the conventional kernels in vector space to tensor space. We applied our novel kernel in conjunction with SVM to real-world tensor classification problems including brain fMRI classification for three different diseases (i.e., Alzheimer's disease, ADHD and brain damage by HIV). Extensive empirical studies demonstrate that our proposed approach can effectively boost tensor classification performances, particularly with small sample sizes. PMID:25927014
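
    As a sketch of the dual-tensorial mapping idea, the kernel below compares two tensors through their CP factor matrices, taking a product of Gaussian factor kernels across modes and summing over rank-one components; the exact form used in the paper may differ in details such as normalization.

        import numpy as np

        def dusk_like_kernel(Xf, Yf, gamma=1.0):
            """Structure-preserving kernel from CP factors.

            Xf, Yf: lists with one (rank, dim_n) factor matrix per mode.
            K = sum_i sum_j prod_n exp(-gamma * ||x_i^(n) - y_j^(n)||^2)."""
            R = Xf[0].shape[0]
            K = np.ones((R, R))
            for Xn, Yn in zip(Xf, Yf):               # loop over tensor modes
                d2 = ((Xn[:, None, :] - Yn[None, :, :]) ** 2).sum(-1)
                K *= np.exp(-gamma * d2)             # product across modes
            return K.sum()

        dims = (4, 5, 6)                             # toy 3-mode tensors, rank 2
        Xf = [np.random.randn(2, d) for d in dims]
        Yf = [np.random.randn(2, d) for d in dims]
        k_val = dusk_like_kernel(Xf, Yf)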

  4. Efficient Kernel-Based Ensemble Gaussian Mixture Filtering

    KAUST Repository

    Liu, Bo

    2015-11-11

    We consider the Bayesian filtering problem for data assimilation following the kernel-based ensemble Gaussian-mixture filtering (EnGMF) approach introduced by Anderson and Anderson (1999). In this approach, the posterior distribution of the system state is propagated with the model using the ensemble Monte Carlo method, providing a forecast ensemble that is then used to construct a prior Gaussian-mixture (GM) based on the kernel density estimator. This results in two update steps: a Kalman filter (KF)-like update of the ensemble members and a particle filter (PF)-like update of the weights, followed by a resampling step to start a new forecast cycle. After formulating EnGMF for any observational operator, we analyze the influence of the bandwidth parameter of the kernel function on the covariance of the posterior distribution. We then focus on two aspects: i) the efficient implementation of EnGMF with (relatively) small ensembles, where we propose a new deterministic resampling strategy preserving the first two moments of the posterior GM to limit the sampling error; and ii) the analysis of the effect of the bandwidth parameter on contributions of KF and PF updates and on the weights variance. Numerical results using the Lorenz-96 model are presented to assess the behavior of EnGMF with deterministic resampling, study its sensitivity to different parameters and settings, and evaluate its performance against ensemble KFs. The proposed EnGMF approach with deterministic resampling suggests improved estimates in all tested scenarios, and is shown to require less localization and to be less sensitive to the choice of filtering parameters.
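
    The construction of the Gaussian-mixture prior from a forecast ensemble can be sketched in a few lines; the Silverman-style bandwidth used below is one common rule of thumb, not necessarily the tuning analyzed in this record.

        import numpy as np

        def engmf_prior(ensemble):
            """Kernel-density Gaussian mixture from an (N, n) ensemble.

            Each member is a component mean; all components share the
            covariance h^2 * P_ens, with h from a Silverman-style rule."""
            N, n = ensemble.shape
            P_ens = np.cov(ensemble, rowvar=False)
            h2 = (4.0 / (N * (n + 2))) ** (2.0 / (n + 4))   # squared bandwidth
            return ensemble, h2 * P_ens, np.full(N, 1.0 / N)

        means, cov, weights = engmf_prior(np.random.randn(50, 3))
        # The update step then applies a KF-like correction to the means and
        # a PF-like reweighting of the components, followed by resampling.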

  5. Broken rice kernels and the kinetics of rice hydration and texture during cooking.

    Science.gov (United States)

    Saleh, Mohammed; Meullenet, Jean-Francois

    2013-05-01

    During rice milling and processing, broken kernels are inevitably present, although to date it has been unclear how the presence of broken kernels affects rice hydration and cooked rice texture. Therefore, this work studied the effect of broken kernels in a rice sample on rice hydration and texture during cooking. Two medium-grain and two long-grain rice cultivars were harvested, dried and milled, and the broken kernels were separated from unbroken kernels. Broken rice kernels were subsequently combined with unbroken rice kernels, forming treatments of 0, 40, 150, 350 or 1000 g kg(-1) broken kernel ratios. Rice samples were then cooked, and the moisture content of the cooked rice, the moisture uptake rate, and rice hardness and stickiness were measured. As the amount of broken rice kernels increased, the texture of the cooked rice became increasingly softer, and rice hardness was negatively correlated to the percentage of broken kernels in the samples. Differences in the proportions of broken rice in a milled rice sample play a major role in determining the texture properties of cooked rice. Variations in the moisture migration kinetics between broken and unbroken kernels caused faster hydration of the cores of broken rice kernels, with greater starch leach-out during cooking affecting the texture of the cooked rice. The texture of cooked rice can be controlled, to some extent, by varying the proportion of broken kernels in milled rice. © 2012 Society of Chemical Industry.

  6. Multivariate realised kernels

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole; Hansen, Peter Reinhard; Lunde, Asger

    We propose a multivariate realised kernel to estimate the ex-post covariation of log-prices. We show this new consistent estimator is guaranteed to be positive semi-definite and is robust to measurement noise of certain types and can also handle non-synchronous trading. It is the first estimator...
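
    For reference, the realised-kernel estimator behind this record takes the familiar weighted-autocovariance form, recalled here from the realised-kernel literature (the paper's exact weight function and end-effect treatment may differ):

    $$ K(X) = \sum_{h=-H}^{H} k\left(\frac{h}{H+1}\right) \Gamma_h, \qquad \Gamma_h = \sum_{j} x_j x_{j-h}^{\top}, $$

    where the x_j are high-frequency vector returns, Γ_h is the h-th realised autocovariance matrix, and k(·) is a weight function (e.g., the Parzen kernel) chosen so that the estimate is positive semi-definite.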

  7. Process for producing metal oxide kernels and kernels so obtained

    International Nuclear Information System (INIS)

    Lelievre, Bernard; Feugier, Andre.

    1974-01-01

    The process described is for producing fissile or fertile metal oxide kernels used in the fabrication of fuels for high temperature nuclear reactors. This process consists of adding to an aqueous solution of at least one metallic salt, particularly actinide nitrates, at least one chemical compound capable of releasing ammonia, and then dispersing the resulting solution drop by drop into a hot organic phase to gel the drops and transform them into solid particles. These particles are then washed, dried and treated to turn them into oxide kernels. The organic phase used for the gel reaction is a mixture of two organic liquids, one acting as a solvent and the other being a product capable of extracting the anions from the metallic salt of the drop at the time of gelling. Preferably, an amine is used as the anion-extracting product. Additionally, an alcohol that causes a partial dehydration of the drops can be employed as the solvent, thus helping to increase the resistance of the particles. [fr]

  8. Ideal Gas Resonance Scattering Kernel Routine for the NJOY Code

    International Nuclear Information System (INIS)

    Rothenstein, W.

    1999-01-01

    In a recent publication an expression for the temperature-dependent double-differential ideal gas scattering kernel was derived for the case of scattering cross sections that are energy dependent. Some tabulations and graphical representations of the characteristics of these kernels are presented in Ref. 2. They demonstrate the increased probability that neutron scattering by a heavy nuclide near one of its pronounced resonances will bring the neutron energy nearer to the resonance peak. This enhances upscattering when a neutron with energy just below that of the resonance peak collides with such a nuclide. A routine for using the new kernel has now been introduced into the NJOY code. Here, its principal features are described, followed by comparisons between scattering data obtained with the new kernel and with the standard ideal gas kernel, when such comparisons are meaningful (i.e., for constant values of the scattering cross section at 0 K). The new ideal gas kernel for a variable 0 K scattering cross section σs,0(E) leads to the correct Doppler-broadened cross section σs,T(E) at temperature T.

  9. Early Detection of Aspergillus parasiticus Infection in Maize Kernels Using Near-Infrared Hyperspectral Imaging and Multivariate Data Analysis

    Directory of Open Access Journals (Sweden)

    Xin Zhao

    2017-01-01

    Full Text Available Fungal infection in maize kernels is a major concern worldwide due to toxic metabolites such as mycotoxins, so it is necessary to develop appropriate techniques for early detection of fungal infection in maize kernels. Thirty-six sterilised maize kernels were inoculated each day with Aspergillus parasiticus from one to seven days, and seven groups (D1, D2, D3, D4, D5, D6, D7) were defined by incubation time. Another 36 sterilised kernels without fungal inoculation were taken as the control (DC). Hyperspectral images of all kernels were acquired within the spectral range of 921–2529 nm. Background, labels and bad pixels were removed using principal component analysis (PCA) and masking. Separability computation for discrimination of fungal contamination levels indicated that a model based on data from the germ region of individual kernels performed more effectively than one based on the whole kernels. Moreover, samples with a two-day interval were separable. Thus, four groups, DC, D1–2 (consisting of D1 and D2), D3–4 (D3 and D4), and D5–7 (D5, D6, and D7), were defined for subsequent classification. Two separate sample sets were prepared to verify the influence of germ orientation on the classification model, that is, germ up and a 1:1 mixture of germ up and germ down. Two smoothing preprocessing methods (Savitzky-Golay smoothing and moving average smoothing) and three scatter-correction methods (normalization, standard normal variate, and multiple scatter correction) were compared according to the performance of the classification model built by support vector machines (SVM). The best model for kernels with germ up showed promising results, with accuracies of 97.92% and 91.67% for the calibration and validation data sets, respectively, while the accuracies of the best model for the mixed kernels were 95.83% and 84.38%. Moreover, five wavelengths (1145, 1408, 1935, 2103, and 2383 nm) were selected as the key wavelengths.

  10. Determination of active ingredients in corn silk, leaf, and kernel by capillary electrophoresis with electrochemical detection.

    Science.gov (United States)

    Lin, Miao; Chu, Qing-Cui; Tian, Xiu-Hui; Ye, Jian-Nong

    2007-01-01

    Corn has been known for its accumulation of flavones and phenolic acids; however, many parts of the corn plant other than the kernel have not drawn much attention. In this work, a method based on capillary zone electrophoresis with electrochemical detection was used for the separation and determination of epicatechin, rutin, ascorbic acid (Vc), kaempferol, chlorogenic acid, and quercetin in corn silk, leaf, and kernel. The distribution of these ingredients among silk, leaf, and kernel is compared and discussed. Several important factors, including running buffer acidity, separation voltage, and working electrode potential, were evaluated to establish the optimum analysis conditions. Under the optimum conditions, the analytes could be well separated within 19 min in a 40-mmol/L borate buffer (pH 9.2). The response was linear over three orders of magnitude, with detection limits (S/N = 3) ranging from 4.97 x 10(-8) to 9.75 x 10(-8) g/mL. The method has been successfully applied to the analysis of corn silk, leaf, and kernel with satisfactory results.

  11. Optimizing the performance of streaming numerical kernels on the IBM Blue Gene/P PowerPC 450 processor

    KAUST Repository

    Malas, Tareq Majed Yasin

    2012-05-21

    Several emerging petascale architectures use energy-efficient processors with vectorized computational units and in-order thread processing. On these architectures the sustained performance of streaming numerical kernels, ubiquitous in the solution of partial differential equations, represents a challenge despite the regularity of memory access. Sophisticated optimization techniques are required to fully utilize the CPU. We propose a new method for constructing streaming numerical kernels using a high-level assembly synthesis and optimization framework. We describe an implementation of this method in Python targeting the IBM® Blue Gene®/P supercomputer's PowerPC® 450 core. This paper details the high-level design, construction, simulation, verification, and analysis of these kernels utilizing a subset of the CPU's instruction set. We demonstrate the effectiveness of our approach by implementing several three-dimensional stencil kernels over a variety of cached memory scenarios and analyzing the mechanically scheduled variants, including a 27-point stencil achieving a 1.7× speedup over the best previously published results. © The Author(s) 2012.

  12. Robust Pedestrian Classification Based on Hierarchical Kernel Sparse Representation

    Directory of Open Access Journals (Sweden)

    Rui Sun

    2016-08-01

    Full Text Available Vision-based pedestrian detection has become an active topic in computer vision and autonomous vehicles. It aims at detecting pedestrians appearing ahead of the vehicle using a camera so that autonomous vehicles can assess the danger and take action. Due to varied illumination and appearance, complex backgrounds, and occlusion, pedestrian detection in outdoor environments is a difficult problem. In this paper, we propose a novel hierarchical feature extraction and weighted kernel sparse representation model for pedestrian classification. Initially, hierarchical feature extraction based on a CENTRIST descriptor is used to capture discriminative structures. A max pooling operation is used to enhance invariance to varying appearance. Then, a kernel sparse representation model is proposed to fully exploit the discriminative information embedded in the hierarchical local features, with a Gaussian weight function serving as the measure to effectively handle occlusion in pedestrian images. Extensive experiments are conducted on benchmark databases, including INRIA, Daimler, an artificially generated dataset and a real occluded dataset, demonstrating the more robust performance of the proposed method compared to state-of-the-art pedestrian classification methods.

  13. Geodesic exponential kernels: When Curvature and Linearity Conflict

    DEFF Research Database (Denmark)

    Feragen, Aase; Lauze, François; Hauberg, Søren

    2015-01-01

    manifold, the geodesic Gaussian kernel is only positive definite if the Riemannian manifold is Euclidean. This implies that any attempt to design geodesic Gaussian kernels on curved Riemannian manifolds is futile. However, we show that for spaces with conditionally negative definite distances the geodesic...
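
    In standard notation, the kernels at issue are the geodesic Gaussian and Laplacian exponential kernels

    $$ k_G(x, y) = \exp\left(-\frac{d(x, y)^2}{2\sigma^2}\right), \qquad k_L(x, y) = \exp(-\lambda\, d(x, y)), $$

    with d the geodesic distance. By Schoenberg's theorem, k_G is positive definite for all σ only when d² is conditionally negative definite, which for geodesic distances essentially forces the Euclidean case, whereas k_L stays positive definite whenever d itself is conditionally negative definite; this is the dichotomy the record refers to.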

  14. Guidelines for Reproducibly Building and Simulating Systems Biology Models.

    Science.gov (United States)

    Medley, J Kyle; Goldberg, Arthur P; Karr, Jonathan R

    2016-10-01

    Reproducibility is the cornerstone of the scientific method. However, currently, many systems biology models cannot easily be reproduced. This paper presents methods that address this problem. We analyzed the recent Mycoplasma genitalium whole-cell (WC) model to determine the requirements for reproducible modeling. We determined that reproducible modeling requires both repeatable model building and repeatable simulation. New standards and simulation software tools are needed to enhance and verify the reproducibility of modeling. New standards are needed to explicitly document every data source and assumption, and new deterministic parallel simulation tools are needed to quickly simulate large, complex models. We anticipate that these new standards and software will enable researchers to reproducibly build and simulate more complex models, including WC models.

  15. X-ray photoelectron spectroscopic analysis of rice kernels and flours: Measurement of surface chemical composition.

    Science.gov (United States)

    Nawaz, Malik A; Gaiani, Claire; Fukai, Shu; Bhandari, Bhesh

    2016-12-01

    The objectives of this study were to evaluate the ability of X-ray photoelectron spectroscopy (XPS) to differentiate rice macromolecules and to calculate the surface composition of rice kernels and flours. For the two selected rice varieties, Thadokkham-11 (TDK11) and Doongara (DG), the surface composition of uncooked kernels and flours demonstrated an over-expression of lipids and proteins and an under-expression of starch compared to the bulk composition. The results of the study showed that XPS was able to differentiate rice polysaccharides (mainly starch), proteins and lipids in uncooked rice kernels and flours. Nevertheless, it was unable to distinguish components in cooked rice samples, possibly due to complex interactions between gelatinized starch, denatured proteins and lipids. High resolution imaging methods (Scanning Electron Microscopy and Confocal Laser Scanning Microscopy) were employed to obtain complementary information about the properties and location of starch, proteins and lipids in rice kernels and flours. Copyright © 2016. Published by Elsevier Ltd.

  16. Pure endmember extraction using robust kernel archetypoid analysis for hyperspectral imagery

    Science.gov (United States)

    Sun, Weiwei; Yang, Gang; Wu, Ke; Li, Weiyue; Zhang, Dianfa

    2017-09-01

    A robust kernel archetypoid analysis (RKADA) method is proposed to extract pure endmembers from hyperspectral imagery (HSI). The RKADA assumes that each pixel is a sparse linear mixture of all endmembers and that each endmember corresponds to a real pixel in the image scene. First, it improves the regular archetypal analysis with a new binary sparse constraint, and the adoption of a kernel function constructs the principal convex hull in an infinite Hilbert space and enlarges the divergences between pairwise pixels. Second, the RKADA transfers the pure endmember extraction problem into an optimization problem by minimizing residual errors with the Huber loss function. The Huber loss function reduces the effects of big noises and outliers in the convergence procedure of RKADA and enhances the robustness of the optimization function. Third, random kernel sinks for fast kernel matrix approximation and a two-stage algorithm for optimizing the initial pure endmembers are utilized to improve its computational efficiency in realistic implementations of RKADA. The optimization equation of RKADA is solved using the block coordinate descent scheme, and the desired pure endmembers are finally obtained. Six state-of-the-art pure endmember extraction methods were employed for comparison with the RKADA on both synthetic and real Cuprite HSI datasets, including three geometrical algorithms, vertex component analysis (VCA), alternative volume maximization (AVMAX) and orthogonal subspace projection (OSP), and three matrix factorization algorithms, the preconditioning for successive projection algorithm (PreSPA), hierarchical clustering based on rank-two nonnegative matrix factorization (H2NMF) and self-dictionary multiple measurement vector (SDMMV). Experimental results show that the RKADA outperforms all six methods in terms of spectral angle distance (SAD) and root-mean-square error (RMSE). Moreover, the RKADA has short computational times in offline implementations.

  17. Ideal gas scattering kernel for energy dependent cross-sections

    International Nuclear Information System (INIS)

    Rothenstein, W.; Dagan, R.

    1998-01-01

    A third, and final, paper on the calculation of the joint kernel for neutron scattering by an ideal gas in thermal agitation is presented for the case in which the scattering cross-section is energy dependent. The kernel is a function of the neutron energy after scattering and of the cosine of the scattering angle, as in the case of the ideal gas kernel for a constant bound-atom scattering cross-section. The final expression is suitable for numerical calculations.

  18. Intelligent Control of a Sensor-Actuator System via Kernelized Least-Squares Policy Iteration

    Directory of Open Access Journals (Sweden)

    Bo Liu

    2012-02-01

    Full Text Available In this paper a new framework, called Compressive Kernelized Reinforcement Learning (CKRL), for computing near-optimal policies in sequential decision making with uncertainty is proposed via incorporating non-adaptive, data-independent random projections and nonparametric Kernelized Least-Squares Policy Iteration (KLSPI). Random projections are a fast, non-adaptive dimensionality reduction framework in which high-dimensional data is projected onto a random lower-dimensional subspace via spherically random rotation and coordinate sampling. KLSPI introduces the kernel trick into the LSPI framework for reinforcement learning, often achieving faster convergence and providing automatic feature selection via various kernel sparsification approaches. In this approach, policies are computed in a low-dimensional subspace generated by projecting the high-dimensional features onto a set of random basis vectors. We first show how random projections constitute an efficient sparsification technique and how our method often converges faster than regular LSPI, while at lower computational costs. The theoretical foundation underlying this approach is a fast approximation of the singular value decomposition (SVD). Finally, simulation results are exhibited on benchmark MDP domains, which confirm gains both in computation time and in performance in large feature spaces.
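
    The compressive step itself is a one-liner; a minimal sketch, assuming a Gaussian random basis (the paper's spherically random rotation and coordinate sampling are one specific construction):

        import numpy as np

        def random_project(Phi, d):
            """Project (num_samples, D) features onto d random directions;
            non-adaptive and data-independent, per the compressive step."""
            D = Phi.shape[1]
            R = np.random.randn(D, d) / np.sqrt(d)   # random basis, scaled
            return Phi @ R

        features = np.random.randn(1000, 2048)       # toy high-dim features
        low_dim = random_project(features, 64)
        # KLSPI then runs unchanged on the 64-dimensional features.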

  19. Landslide Susceptibility Mapping Based on Particle Swarm Optimization of Multiple Kernel Relevance Vector Machines: Case of a Low Hill Area in Sichuan Province, China

    Directory of Open Access Journals (Sweden)

    Yongliang Lin

    2016-10-01

    Full Text Available In this paper, we propose a multiple kernel relevance vector machine (RVM) method based on the adaptive cloud particle swarm optimization (PSO) algorithm to map landslide susceptibility in the low hill area of Sichuan Province, China. In the multi-kernel structure, the kernel selection problem can be solved by adjusting the kernel weights, which determine the contribution of each single kernel to the final kernel mapping. The weights and parameters of the multi-kernel function were optimized using the PSO algorithm. In addition, the convergence speed of the PSO algorithm was increased using cloud theory. To ensure the stability of the prediction model, the result of a five-fold cross-validation method was used as the fitness of the PSO algorithm. To verify the results, receiver operating characteristic (ROC) curves and landslide dot density (LDD) were used. The results show that the model that used a heterogeneous kernel (a combination of two different kernel functions) had a larger area under the ROC curve (0.7616) and a lower prediction error ratio (0.28%) than did the other types of kernel models employed in this study. In addition, both the sum of the two high susceptibility zone LDDs (6.71/100 km2) and the sum of the two low susceptibility zone LDDs (0.82/100 km2) demonstrated that the landslide susceptibility map based on the heterogeneous kernel model was closest to the historical landslide distribution. In conclusion, the results obtained in this study can provide very useful information for disaster prevention and land-use planning in the study area.

  20. Pixel-based meshfree modelling of skeletal muscles

    OpenAIRE

    Chen, Jiun-Shyan; Basava, Ramya Rao; Zhang, Yantao; Csapo, Robert; Malis, Vadim; Sinha, Usha; Hodgson, John; Sinha, Shantanu

    2015-01-01

    This paper introduces the meshfree Reproducing Kernel Particle Method (RKPM) for 3D image-based modeling of skeletal muscles. This approach allows for the construction of a simulation model based on pixel data obtained from medical images. The material properties and muscle fiber direction obtained from Diffusion Tensor Imaging (DTI) are input at each pixel point. The reproducing kernel (RK) approximation allows a representation of material heterogeneity with smooth transition. A ...
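
    For orientation, the reproducing kernel approximation that RKPM builds on has the standard form

    $$ u^h(x) = \sum_I \Psi_I(x)\, d_I, \qquad \Psi_I(x) = C(x;\, x - x_I)\, \Phi_a(x - x_I), $$

    where Φ_a is a kernel with support measure a and the correction function C enforces the reproducing conditions Σ_I Ψ_I(x) p(x_I) = p(x) for polynomials p up to the chosen degree; this is what lets image pixels serve directly as nodes while representing heterogeneous material data smoothly (notation recalled from the RKPM literature, not from this record).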