WorldWideScience

Sample records for kernel canonical correlation

  1. Pyrcca: regularized kernel canonical correlation analysis in Python and its applications to neuroimaging

    OpenAIRE

    Natalia Y Bilenko; Jack L Gallant; Jack L Gallant

    2016-01-01

    In this article we introduce Pyrcca, an open-source Python package for performing canonical correlation analysis (CCA). CCA is a multivariate analysis method for identifying relationships between sets of variables. Pyrcca supports CCA with or without regularization, and with or without linear, polynomial, or Gaussian kernelization. We first use an abstract example to describe Pyrcca functionality. We then demonstrate how Pyrcca can be used to analyze neuroimaging data. Specifically, we use Py...

  2. A new randomized Kaczmarz based kernel canonical correlation analysis algorithm with applications to information retrieval.

    Science.gov (United States)

    Cai, Jia; Tang, Yi

    2018-02-01

    Canonical correlation analysis (CCA) is a powerful statistical tool for detecting the linear relationship between two sets of multivariate variables. Its kernel generalization, kernel CCA, was proposed to describe the nonlinear relationship between the two sets of variables. Although kernel CCA can achieve dimensionality reduction for high-dimensional feature selection problems, it is also prone to the so-called over-fitting phenomenon. In this paper, we consider a new kernel CCA algorithm based on the randomized Kaczmarz method. The main contributions of the paper are: (1) a new kernel CCA algorithm is developed; (2) theoretical convergence of the proposed algorithm is addressed by means of the scaled condition number; (3) a lower bound on the minimum number of iterations is presented. We test on both a synthetic dataset and several real-world datasets in cross-language document retrieval and content-based image retrieval to demonstrate the effectiveness of the proposed algorithm. Numerical results demonstrate the performance and efficiency of the new algorithm, which is competitive with several state-of-the-art kernel CCA methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
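
    The randomized Kaczmarz method is a row-action iterative solver for linear systems. Below is a minimal generic sketch of that solver on a synthetic system, not the authors' kernel CCA formulation; rows are sampled with probability proportional to their squared norms, and the sizes and iteration count are arbitrary choices for the example.

      import numpy as np

      def randomized_kaczmarz(A, b, iters=5000, seed=0):
          """Solve A x = b by repeatedly projecting onto one randomly chosen row
          constraint; rows are sampled with probability proportional to their
          squared Euclidean norm."""
          rng = np.random.default_rng(seed)
          row_norms_sq = np.einsum("ij,ij->i", A, A)
          probs = row_norms_sq / row_norms_sq.sum()
          x = np.zeros(A.shape[1])
          for _ in range(iters):
              i = rng.choice(A.shape[0], p=probs)
              x += (b[i] - A[i] @ x) / row_norms_sq[i] * A[i]   # project onto row i's hyperplane
          return x

      rng = np.random.default_rng(1)
      A = rng.standard_normal((200, 20))
      x_true = rng.standard_normal(20)
      b = A @ x_true
      x_hat = randomized_kaczmarz(A, b)
      print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))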

  3. Applications of temporal kernel canonical correlation analysis in adherence studies.

    Science.gov (United States)

    John, Majnu; Lencz, Todd; Ferbinteanu, Janina; Gallego, Juan A; Robinson, Delbert G

    2017-10-01

    Adherence to medication is often measured as a continuous outcome but analyzed as a dichotomous outcome due to lack of appropriate tools. In this paper, we illustrate the use of the temporal kernel canonical correlation analysis (tkCCA) as a method to analyze adherence measurements and symptom levels on a continuous scale. The tkCCA is a novel method developed for studying the relationship between neural signals and hemodynamic response detected by functional MRI during spontaneous activity. Although the tkCCA is a powerful tool, it has not been utilized outside the application that it was originally developed for. In this paper, we simulate time series of symptoms and adherence levels for patients with a hypothetical brain disorder and show how the tkCCA can be used to understand the relationship between them. We also examine, via simulations, the behavior of the tkCCA under various missing value mechanisms and imputation methods. Finally, we apply the tkCCA to a real data example of psychotic symptoms and adherence levels obtained from a study based on subjects with a first episode of schizophrenia, schizophreniform or schizoaffective disorder.
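
    As a rough illustration of the temporal idea behind tkCCA, the sketch below builds time-lagged copies of a simulated adherence series and finds the combination of lags most correlated with a simulated symptom series. It is a simplified linear analogue using scikit-learn's CCA, not the kernelized tkCCA of the paper; the lag range, noise levels, and series are invented.

      import numpy as np
      from sklearn.cross_decomposition import CCA

      def lagged_matrix(x, lags):
          """Column j holds the series delayed by lags[j]; rows before the largest lag are dropped."""
          n, max_lag = len(x), max(lags)
          return np.column_stack([x[max_lag - l:n - l] for l in lags]), max_lag

      rng = np.random.default_rng(0)
      n, true_lag = 500, 3
      adherence = rng.standard_normal(n)                              # hypothetical adherence series
      symptoms = -0.8 * np.roll(adherence, true_lag) + 0.3 * rng.standard_normal(n)

      lags = list(range(8))
      X, max_lag = lagged_matrix(adherence, lags)                     # lagged adherence copies
      Y = symptoms[max_lag:, None]                                    # aligned symptom series

      cca = CCA(n_components=1).fit(X, Y)
      weights = cca.x_weights_[:, 0]
      print("lag weights:", np.round(weights, 2))
      print("most influential lag:", int(np.argmax(np.abs(weights))), "(true lag: 3)")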

  4. Pyrcca: Regularized Kernel Canonical Correlation Analysis in Python and Its Applications to Neuroimaging.

    Science.gov (United States)

    Bilenko, Natalia Y; Gallant, Jack L

    2016-01-01

    In this article we introduce Pyrcca, an open-source Python package for performing canonical correlation analysis (CCA). CCA is a multivariate analysis method for identifying relationships between sets of variables. Pyrcca supports CCA with or without regularization, and with or without linear, polynomial, or Gaussian kernelization. We first use an abstract example to describe Pyrcca functionality. We then demonstrate how Pyrcca can be used to analyze neuroimaging data. Specifically, we use Pyrcca to implement cross-subject comparison in a natural movie functional magnetic resonance imaging (fMRI) experiment by finding a data-driven set of functional response patterns that are similar across individuals. We validate this cross-subject comparison method in Pyrcca by predicting responses to novel natural movies across subjects. Finally, we show how Pyrcca can reveal retinotopic organization in brain responses to natural movies without the need for an explicit model.
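
    For readers who want to see the computation that packages of this kind wrap, here is a minimal sketch of regularized kernel CCA written directly as a generalized eigenvalue problem on centered Gram matrices. It is not Pyrcca's code; the Gaussian kernel width, regularization strength, and toy data are arbitrary choices for the example.

      import numpy as np
      from scipy.linalg import eigh
      from scipy.spatial.distance import cdist

      def gaussian_gram(X, sigma=1.0):
          """Gram matrix of a Gaussian (RBF) kernel."""
          return np.exp(-cdist(X, X, "sqeuclidean") / (2 * sigma ** 2))

      def center_gram(K):
          """Double-center a Gram matrix (centers the data in feature space)."""
          n = K.shape[0]
          H = np.eye(n) - np.ones((n, n)) / n
          return H @ K @ H

      def kernel_cca(Kx, Ky, reg=1e-2, n_components=2):
          """Regularized kernel CCA: maximize corr(Kx a, Ky b) by solving a symmetric
          generalized eigenproblem for the stacked coefficient vector [a; b]."""
          n = Kx.shape[0]
          Z = np.zeros((n, n))
          A = np.block([[Z, Kx @ Ky], [Ky @ Kx, Z]])                    # cross-covariance terms
          B = np.block([[Kx @ Kx + reg * np.eye(n), Z],                 # regularized
                        [Z, Ky @ Ky + reg * np.eye(n)]])                # auto-covariance terms
          vals, vecs = eigh(A, B)
          order = np.argsort(vals)[::-1][:n_components]                 # top eigenvalues
          return vals[order], vecs[:n, order], vecs[n:, order]          # corrs, a, b

      rng = np.random.default_rng(0)
      latent = rng.standard_normal((150, 1))                            # shared nonlinear source
      X = np.hstack([np.sin(latent), rng.standard_normal((150, 3))])
      Y = np.hstack([np.sin(latent) + 0.1 * rng.standard_normal((150, 1)),
                     rng.standard_normal((150, 3))])
      Kx, Ky = center_gram(gaussian_gram(X)), center_gram(gaussian_gram(Y))
      corrs, a, b = kernel_cca(Kx, Ky)
      print("leading regularized canonical correlations:", np.round(corrs, 3))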

  5. Adaptive Kernel Canonical Correlation Analysis Algorithms for Nonparametric Identification of Wiener and Hammerstein Systems

    Directory of Open Access Journals (Sweden)

    Ignacio Santamaría

    2008-04-01

    Full Text Available This paper treats the identification of nonlinear systems that consist of a cascade of a linear channel and a nonlinearity, such as the well-known Wiener and Hammerstein systems. In particular, we follow a supervised identification approach that simultaneously identifies both parts of the nonlinear system. Given the correct restrictions on the identification problem, we show how kernel canonical correlation analysis (KCCA) emerges as the logical solution to this problem. We then extend the proposed identification algorithm to an adaptive version that can deal with time-varying systems. In order to avoid overfitting problems, we discuss and compare three possible regularization techniques for both the batch and the adaptive versions of the proposed algorithm. Simulations are included to demonstrate the effectiveness of the presented algorithm.
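
    To make the identification idea concrete, the sketch below identifies a toy Wiener system (an FIR filter followed by a static nonlinearity) by running CCA between a delay-line embedding of the input and a polynomial expansion of the output. It is a simplified linear-kernel illustration of the general approach, not the adaptive KCCA algorithm of the paper; the filter, nonlinearity, model orders, and noise level are invented.

      import numpy as np
      from sklearn.cross_decomposition import CCA

      rng = np.random.default_rng(0)
      n, order = 2000, 4
      h_true = np.array([1.0, -0.6, 0.3, 0.1])           # unknown FIR channel (toy choice)
      u = rng.standard_normal(n)                          # input signal
      z = np.convolve(u, h_true)[:n]                      # output of the linear part
      y = np.tanh(z) + 0.02 * rng.standard_normal(n)      # static nonlinearity plus noise

      # Delay-line embedding of the input: row t holds [u(t), u(t-1), ..., u(t-order+1)]
      U = np.column_stack([np.r_[np.zeros(k), u[:n - k]] for k in range(order)])
      # Polynomial expansion of the output plays the role of an explicit "kernel" map
      Yp = np.column_stack([y, y ** 2, y ** 3])

      cca = CCA(n_components=1).fit(U[order:], Yp[order:])               # drop the start-up transient
      h_est = cca.x_weights_[:, 0]
      h_est = h_est / np.linalg.norm(h_est) * np.linalg.norm(h_true)     # the filter is identified
      if h_est @ h_true < 0:                                             # only up to scale and sign
          h_est = -h_est
      print("true filter:     ", h_true)
      print("estimated filter:", np.round(h_est, 3))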

  6. Pyrcca: regularized kernel canonical correlation analysis in Python and its applications to neuroimaging

    Directory of Open Access Journals (Sweden)

    Natalia Y Bilenko

    2016-11-01

    Full Text Available In this article we introduce Pyrcca, an open-source Python package for performing canonical correlation analysis (CCA). CCA is a multivariate analysis method for identifying relationships between sets of variables. Pyrcca supports CCA with or without regularization, and with or without linear, polynomial, or Gaussian kernelization. We first use an abstract example to describe Pyrcca functionality. We then demonstrate how Pyrcca can be used to analyze neuroimaging data. Specifically, we use Pyrcca to implement cross-subject comparison in a natural movie functional magnetic resonance imaging (fMRI) experiment by finding a data-driven set of functional response patterns that are similar across individuals. We validate this cross-subject comparison method in Pyrcca by predicting responses to novel natural movies across subjects. Finally, we show how Pyrcca can reveal retinotopic organization in brain responses to natural movies without the need for an explicit model.

  7. Influence Function and Robust Variant of Kernel Canonical Correlation Analysis

    OpenAIRE

    Alam, Md. Ashad; Fukumizu, Kenji; Wang, Yu-Ping

    2017-01-01

    Many unsupervised kernel methods rely on the estimation of the kernel covariance operator (kernel CO) or kernel cross-covariance operator (kernel CCO). Both kernel CO and kernel CCO are sensitive to contaminated data, even when bounded positive definite kernels are used. To the best of our knowledge, there are few well-founded robust kernel methods for statistical unsupervised learning. In addition, while the influence function (IF) of an estimator can characterize its robustness, asymptotic ...

  8. A kernel version of multivariate alteration detection

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Vestergaard, Jacob Schack

    2013-01-01

    Based on the established methods kernel canonical correlation analysis and multivariate alteration detection we introduce a kernel version of multivariate alteration detection. A case study with SPOT HRV data shows that the kMAD variates focus on extreme change observations.
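
    As background for the kernel variant, the sketch below implements the ordinary (linear) MAD transformation on simulated band data: canonical variates are computed between the two acquisitions and the MAD variates are their paired differences, whose squared magnitude flags change. It is a toy example, not the kMAD method or the SPOT HRV case study.

      import numpy as np
      from sklearn.cross_decomposition import CCA

      rng = np.random.default_rng(0)
      n_pixels, n_bands = 5000, 3

      X = rng.standard_normal((n_pixels, n_bands))                          # acquisition 1
      Y = 1.5 * X + 0.2 + 0.1 * rng.standard_normal((n_pixels, n_bands))    # acquisition 2 (gain/offset drift)
      changed = rng.choice(n_pixels, size=200, replace=False)
      Y[changed] += 3.0 * rng.standard_normal((200, n_bands))               # inject genuine change

      U, V = CCA(n_components=n_bands).fit_transform(X, Y)
      mad = (U - V) / (U - V).std(axis=0)          # MAD variates: differences of canonical variates
      change_score = (mad ** 2).sum(axis=1)        # large values indicate change

      detected = np.argsort(change_score)[::-1][:200]
      hits = np.intersect1d(detected, changed).size / 200
      print("fraction of injected changes among the 200 highest scores:", hits)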

  9. A multimodal stress monitoring system with canonical correlation analysis.

    Science.gov (United States)

    Unsoo Ha; Changhyeon Kim; Yongsu Lee; Hyunki Kim; Taehwan Roh; Hoi-Jun Yoo

    2015-08-01

    The multimodal stress monitoring headband is proposed for a mobile stress management system. It is composed of a headband and earplugs. Electroencephalography (EEG), hemoencephalography (HEG), and heart-rate variability (HRV) can be acquired simultaneously in the proposed system for user status estimation. With the canonical correlation analysis (CCA) and temporal-kernel CCA (tkCCA) algorithms, these different signals can be combined for maximum correlation. Thanks to the proposed combination algorithm, the accuracy of the proposed system increased by up to 19 percentage points over a unimodal monitoring system in an n-back task.

  10. Canonical Information Analysis

    DEFF Research Database (Denmark)

    Vestergaard, Jacob Schack; Nielsen, Allan Aasbjerg

    2015-01-01

    Canonical correlation analysis is an established multivariate statistical method in which correlation between linear combinations of multivariate sets of variables is maximized. In canonical information analysis introduced here, linear correlation as a measure of association between variables is replaced by the information theoretical, entropy based measure mutual information, which is a much more general measure of association. We make canonical information analysis feasible for large sample problems, including for example multispectral images, due to the use of a fast kernel density estimator for entropy estimation. Canonical information analysis is applied successfully to (1) simple simulated data to illustrate the basic idea and evaluate performance, (2) fusion of weather radar and optical geostationary satellite data in a situation with heavy precipitation, and (3) change detection in optical...
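
    The sketch below illustrates the core idea on a toy two-dimensional problem: instead of maximizing Pearson correlation between projections of the two sets, it scans one-dimensional projections and keeps the pair with the highest kernel-density-based mutual information estimate. The brute-force grid search and plug-in KDE estimator are simplifications chosen for clarity, not the fast estimator used in the paper.

      import numpy as np
      from scipy.stats import gaussian_kde

      def mutual_information(u, v, grid_size=30):
          """Plug-in MI estimate from Gaussian KDEs of the joint and marginal densities."""
          joint = gaussian_kde(np.vstack([u, v]))
          pu, pv = gaussian_kde(u), gaussian_kde(v)
          gu = np.linspace(u.min(), u.max(), grid_size)
          gv = np.linspace(v.min(), v.max(), grid_size)
          GU, GV = np.meshgrid(gu, gv, indexing="ij")
          pj = joint(np.vstack([GU.ravel(), GV.ravel()])).reshape(grid_size, grid_size)
          pm = np.outer(pu(gu), pv(gv))
          du, dv = gu[1] - gu[0], gv[1] - gv[0]
          mask = (pj > 1e-12) & (pm > 1e-12)
          return np.sum(pj[mask] * np.log(pj[mask] / pm[mask])) * du * dv

      rng = np.random.default_rng(0)
      t = rng.uniform(-2, 2, 300)
      X = np.column_stack([t, rng.standard_normal(300)])            # set 1
      Y = np.column_stack([t ** 2, rng.standard_normal(300)])       # set 2, nonlinearly linked to set 1

      # Brute-force scan over 1-D projections of each set (slow but illustrative)
      angles = np.linspace(0.0, np.pi, 19)
      best = max(((a, b, mutual_information(X @ [np.cos(a), np.sin(a)],
                                            Y @ [np.cos(b), np.sin(b)]))
                  for a in angles for b in angles), key=lambda r: r[2])
      print("best projection angles (rad): %.2f, %.2f   MI estimate: %.3f" % best)

    With the quadratic link used here, the best projections have almost no linear correlation yet clearly nonzero mutual information, which is exactly the situation where replacing correlation with mutual information pays off.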

  11. Functional Multiple-Set Canonical Correlation Analysis

    Science.gov (United States)

    Hwang, Heungsun; Jung, Kwanghee; Takane, Yoshio; Woodward, Todd S.

    2012-01-01

    We propose functional multiple-set canonical correlation analysis for exploring associations among multiple sets of functions. The proposed method includes functional canonical correlation analysis as a special case when only two sets of functions are considered. As in classical multiple-set canonical correlation analysis, computationally, the…

  12. Robust canonical correlations: A comparative study

    OpenAIRE

    Branco, JA; Croux, Christophe; Filzmoser, P; Oliveira, MR

    2005-01-01

    Several approaches for robust canonical correlation analysis will be presented and discussed. A first method is based on the definition of canonical correlation analysis as looking for linear combinations of two sets of variables having maximal (robust) correlation. A second method is based on alternating robust regressions. These methods are discussed in detail and compared with the more traditional approach to robust canonical correlation via covariance matrix estimates. A simulation study ...
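
    A minimal sketch of the second ("alternating regressions") route to the first canonical weight pair, with the ordinary least-squares steps swapped for Huber regression to limit the influence of outlying observations. It is a schematic of the idea on invented toy data, not the authors' estimators.

      import numpy as np
      from sklearn.linear_model import HuberRegressor

      def alternating_robust_cca(X, Y, n_iter=20, seed=0):
          """First canonical weight pair via alternating regressions: regress the current
          canonical score of one set on the other set, swap roles, and repeat.  Ordinary
          least squares is replaced by Huber regression to limit the influence of outliers."""
          rng = np.random.default_rng(seed)
          X = (X - X.mean(0)) / X.std(0)
          Y = (Y - Y.mean(0)) / Y.std(0)
          b = rng.standard_normal(Y.shape[1])
          a = np.zeros(X.shape[1])
          for _ in range(n_iter):
              v = Y @ b
              v = (v - v.mean()) / v.std()
              a = HuberRegressor().fit(X, v).coef_      # robust regression of the Y-score on X
              u = X @ a
              u = (u - u.mean()) / u.std()
              b = HuberRegressor().fit(Y, u).coef_      # robust regression of the X-score on Y
          u, v = X @ a, Y @ b
          return a, b, np.corrcoef(u, v)[0, 1]

      rng = np.random.default_rng(1)
      n = 300
      z = rng.standard_normal(n)                                       # shared latent signal
      X = np.column_stack([z + 0.3 * rng.standard_normal(n), rng.standard_normal(n)])
      Y = np.column_stack([z + 0.3 * rng.standard_normal(n), rng.standard_normal(n)])
      Y[:10, 0] += 8.0                                                 # a few outlying observations

      a, b, r = alternating_robust_cca(X, Y)
      print("X-side weights:", np.round(a / np.linalg.norm(a), 2))     # should load on the shared column
      print("Y-side weights:", np.round(b / np.linalg.norm(b), 2))
      print("correlation of the estimated canonical variates:", round(r, 3))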

  13. Generalized canonical correlation analysis with missing values

    NARCIS (Netherlands)

    M. van de Velden (Michel); Y. Takane

    2012-01-01

    Generalized canonical correlation analysis is a versatile technique that allows the joint analysis of several sets of data matrices. The generalized canonical correlation analysis solution can be obtained through an eigenequation and distributional assumptions are not required. When

  14. 3D spatially-adaptive canonical correlation analysis: Local and global methods.

    Science.gov (United States)

    Yang, Zhengshi; Zhuang, Xiaowei; Sreenivasan, Karthik; Mishra, Virendra; Curran, Tim; Byrd, Richard; Nandy, Rajesh; Cordes, Dietmar

    2018-04-01

    Local spatially-adaptive canonical correlation analysis (local CCA) with spatial constraints has been introduced to fMRI multivariate analysis for improved modeling of activation patterns. However, current algorithms require complicated spatial constraints that have only been applied to 2D local neighborhoods because the computational time would be exponentially increased if the same method is applied to 3D spatial neighborhoods. In this study, an efficient and accurate line search sequential quadratic programming (SQP) algorithm has been developed to efficiently solve the 3D local CCA problem with spatial constraints. In addition, a spatially-adaptive kernel CCA (KCCA) method is proposed to increase accuracy of fMRI activation maps. With oriented 3D spatial filters anisotropic shapes can be estimated during the KCCA analysis of fMRI time courses. These filters are orientation-adaptive leading to rotational invariance to better match arbitrary oriented fMRI activation patterns, resulting in improved sensitivity of activation detection while significantly reducing spatial blurring artifacts. The kernel method in its basic form does not require any spatial constraints and analyzes the whole-brain fMRI time series to construct an activation map. Finally, we have developed a penalized kernel CCA model that involves spatial low-pass filter constraints to increase the specificity of the method. The kernel CCA methods are compared with the standard univariate method and with two different local CCA methods that were solved by the SQP algorithm. Results show that SQP is the most efficient algorithm to solve the local constrained CCA problem, and the proposed kernel CCA methods outperformed univariate and local CCA methods in detecting activations for both simulated and real fMRI episodic memory data. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. Multicollinearity in canonical correlation analysis in maize.

    Science.gov (United States)

    Alves, B M; Cargnelutti Filho, A; Burin, C

    2017-03-30

    The objective of this study was to evaluate the effects of multicollinearity under two methods of canonical correlation analysis (with and without elimination of variables) in maize (Zea mays L.) crop. Seventy-six maize genotypes were evaluated in three experiments, conducted in a randomized block design with three replications, during the 2009/2010 crop season. Eleven agronomic variables (number of days from sowing until female flowering, number of days from sowing until male flowering, plant height, ear insertion height, ear placement, number of plants, number of ears, ear index, ear weight, grain yield, and one thousand grain weight), 12 protein-nutritional variables (crude protein, lysine, methionine, cysteine, threonine, tryptophan, valine, isoleucine, leucine, phenylalanine, histidine, and arginine), and 6 energetic-nutritional variables (apparent metabolizable energy, apparent metabolizable energy corrected for nitrogen, ether extract, crude fiber, starch, and amylose) were measured. A phenotypic correlation matrix was first generated among the 29 variables for each of the experiments. A multicollinearity diagnosis was later performed within each group of variables using methodologies such as variance inflation factor and condition number. Canonical correlation analysis was then performed, with and without the elimination of variables, among groups of agronomic and protein-nutritional, and agronomic and energetic-nutritional variables. The canonical correlation analysis in the presence of multicollinearity (without elimination of variables) overestimates the variability of canonical coefficients. The elimination of variables is an efficient method to circumvent multicollinearity in canonical correlation analysis.
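
    The two collinearity diagnostics mentioned, the variance inflation factor and the condition number, can be computed directly from the data matrix. The sketch below shows both on made-up data (not the maize dataset), using one common definition of the condition number as the ratio of the largest to the smallest eigenvalue of the correlation matrix.

      import numpy as np

      def vif(X):
          """Variance inflation factors: 1 / (1 - R^2) from regressing each column on the others."""
          out = []
          for j in range(X.shape[1]):
              y = X[:, j]
              Z = np.column_stack([np.ones(len(X)), np.delete(X, j, axis=1)])
              beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
              r2 = 1.0 - ((y - Z @ beta) ** 2).sum() / ((y - y.mean()) ** 2).sum()
              out.append(1.0 / (1.0 - r2))
          return np.array(out)

      def condition_number(X):
          """Ratio of the largest to the smallest eigenvalue of the correlation matrix."""
          eigvals = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))
          return eigvals.max() / eigvals.min()

      rng = np.random.default_rng(0)
      x1 = rng.standard_normal(200)
      x2 = rng.standard_normal(200)
      x3 = 0.95 * x1 + 0.05 * rng.standard_normal(200)    # nearly collinear with x1
      X = np.column_stack([x1, x2, x3])
      print("VIF per variable:", np.round(vif(X), 1))
      print("condition number:", round(condition_number(X), 1))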

  16. Canonical correlations between chemical and energetic characteristics of lignocellulosic wastes

    Directory of Open Access Journals (Sweden)

    Thiago de Paula Protásio

    2012-09-01

    Full Text Available Canonical correlation analysis is a statistical multivariate procedure that allows analyzing the linear correlation that may exist between two groups or sets of variables (X and Y). This paper aimed to provide a canonical correlation analysis between a group comprising lignin and total extractives contents and higher heating value (HHV) and a group of elemental components (carbon, hydrogen, nitrogen, and sulfur) for lignocellulosic wastes. The following wastes were used: eucalyptus shavings; pine shavings; red cedar shavings; sugar cane bagasse; residual bamboo cellulose pulp; coffee husk and parchment; maize harvesting wastes; and rice husk. Only the first canonical function was significant, but it presented a low canonical R². High carbon, hydrogen, and sulfur contents and low nitrogen contents seem to be related to high total extractives contents of the lignocellulosic wastes. The preliminary results found in this paper indicate that the canonical correlations were not efficient to explain the correlations between the chemical elemental components and the lignin contents and higher heating values.

  17. Canonical correlation analysis of course and teacher evaluation

    DEFF Research Database (Denmark)

    Sliusarenko, Tamara; Ersbøll, Bjarne Kjær

    2010-01-01

    At the Technical University of Denmark course evaluations are performed by the students on a questionnaire. On one form the students are asked specific questions regarding the course. On a second form they are asked specific questions about the teacher. This study investigates the extent to which information obtained from the course evaluation form overlaps with information obtained from the teacher evaluation form. Employing canonical correlation analysis it was found that course and teacher evaluations are correlated. However, the structure of the canonical correlation is subject to change...

  18. Interpreting canonical correlation analysis through biplots of structure correlations and weights

    NARCIS (Netherlands)

    Braak, ter C.J.F.

    1990-01-01

    This paper extends the biplot technique to canonical correlation analysis and redundancy analysis. The plot of structure correlations is shown to be optimal for displaying the pairwise correlations between the variables of one set and those of the second. The link between multivariate

  19. Structured Kernel Dictionary Learning with Correlation Constraint for Object Recognition.

    Science.gov (United States)

    Wang, Zhengjue; Wang, Yinghua; Liu, Hongwei; Zhang, Hao

    2017-06-21

    In this paper, we propose a new discriminative non-linear dictionary learning approach, called correlation constrained structured kernel KSVD, for object recognition. The objective function for dictionary learning contains a reconstructive term and a discriminative term. In the reconstructive term, signals are implicitly non-linearly mapped into a space, where a structured kernel dictionary, each sub-dictionary of which lies in the span of the mapped signals from the corresponding class, is established. In the discriminative term, by analyzing the classification mechanism, the correlation constraint is proposed in kernel form, constraining the correlations between different discriminative codes, and restricting the coefficient vectors to be transformed into a feature space, where the features are highly correlated within a class and nearly independent between classes. The objective function is optimized by the proposed structured kernel KSVD. During the classification stage, the specific form of the discriminative feature need not be known, while its inner product, with the kernel matrix embedded, is available and suitable for a linear SVM classifier. Experimental results demonstrate that the proposed approach outperforms many state-of-the-art dictionary learning approaches for face, scene and synthetic aperture radar (SAR) vehicle target recognition.

  20. DNA pattern recognition using canonical correlation algorithm.

    Science.gov (United States)

    Sarkar, B K; Chakraborty, Chiranjib

    2015-10-01

    We performed canonical correlation analysis as an unsupervised statistical tool to describe related views of the same semantic object for identifying patterns. A pattern recognition technique based on canonical correlation analysis (CCA) was proposed for finding a required genetic code in a DNA sequence. Two related but different objects were considered: one was a particular pattern, and the other was the test DNA sequence. CCA found correlations between two observations of the same semantic pattern and the test sequence. It is concluded that the relationship attains its maximum value at the position where the pattern exists. As a case study, the potential of CCA was demonstrated on sequences found from HIV-1 preferred integration sites. The subsequences flanking the integration site on the left and right were considered as the two views, and statistically significant relationships were established between these two views to elucidate the viral preference as an important factor for the correlation.
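
    A toy sketch of the scanning idea described above: the query pattern and each window of the test sequence are one-hot encoded, and the first canonical correlation between the two encodings is computed at every offset, peaking where the pattern was planted. The encoding, window length, sequences, and the small ridge term (added to keep the tiny covariance matrices invertible) are all choices made up for the illustration; this is not the authors' HIV-1 analysis.

      import numpy as np

      BASES = "ACGT"

      def one_hot(seq):
          M = np.zeros((len(seq), 4))
          for i, base in enumerate(seq):
              M[i, BASES.index(base)] = 1.0
          return M

      def first_canonical_corr(X, Y, ridge=1e-3):
          """Largest canonical correlation via the SVD of the whitened cross-covariance;
          the ridge keeps the small one-hot covariance matrices invertible."""
          X = X - X.mean(0)
          Y = Y - Y.mean(0)
          Cxx = X.T @ X / len(X) + ridge * np.eye(X.shape[1])
          Cyy = Y.T @ Y / len(Y) + ridge * np.eye(Y.shape[1])
          Cxy = X.T @ Y / len(X)
          Wx = np.linalg.inv(np.linalg.cholesky(Cxx))
          Wy = np.linalg.inv(np.linalg.cholesky(Cyy))
          return np.linalg.svd(Wx @ Cxy @ Wy.T, compute_uv=False)[0]

      rng = np.random.default_rng(0)
      pattern = "ACGTAGGCTAAC"
      background = "".join(rng.choice(list(BASES), size=120))
      sequence = background[:57] + pattern + background[57:]     # plant the pattern at offset 57

      P = one_hot(pattern)
      scores = [first_canonical_corr(P, one_hot(sequence[i:i + len(pattern)]))
                for i in range(len(sequence) - len(pattern) + 1)]
      print("highest-scoring offset:", int(np.argmax(scores)), "(pattern planted at 57)")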

  1. Kernel-Correlated Lévy Field Driven Forward Rate and Application to Derivative Pricing

    Energy Technology Data Exchange (ETDEWEB)

    Bo Lijun [Xidian University, Department of Mathematics (China); Wang Yongjin [Nankai University, School of Business (China); Yang Xuewei, E-mail: xwyangnk@yahoo.com.cn [Nanjing University, School of Management and Engineering (China)

    2013-08-01

    We propose a term structure of forward rates driven by a kernel-correlated Lévy random field under the HJM framework. The kernel-correlated Lévy random field is composed of a kernel-correlated Gaussian random field and a centered Poisson random measure. We shall give a criterion to preclude arbitrage under the risk-neutral pricing measure. As applications, an interest rate derivative with general payoff functional is priced under this pricing measure.

  2. Canonical correlation between the gamma and X-ray data of Swift GRBs

    International Nuclear Information System (INIS)

    Balazs, L. G.; Horvath, I.; Meszaros, P.; Tusnady, G.; Veres, P.

    2009-01-01

    We used canonical correlation analysis from multivariate statistics to study the interrelation between the gamma-ray (fluence, 1 s peak flux, duration) and X-ray (early X flux, 24 hour X flux, X decay index, X spectral index, X HI column density) data. We computed the canonical correlations and variables, showing that there is a significant interrelation between the gamma and X-ray data. Using the canonical variables resulting from the analysis, we computed their correlations (canonical loadings) with the original variables. The canonical loadings revealed that the gamma-ray fluence and the early X-ray flux give the strongest contribution to the correlation, in contrast to the X-ray decay index and spectral index. An interesting new result appears to be the strong contribution of the HI column density to the correlation. Accepting the collapsar model of long GRBs, this effect may be interpreted as an indication of the ejection of an HI envelope by the progenitor in the course of producing the GRB.

  3. Music recommendation according to human motion based on kernel CCA-based relationship

    Science.gov (United States)

    Ohkushi, Hiroyuki; Ogawa, Takahiro; Haseyama, Miki

    2011-12-01

    In this article, a method for recommendation of music pieces according to human motions based on their kernel canonical correlation analysis (CCA)-based relationship is proposed. In order to perform the recommendation between different types of multimedia data, i.e., recommendation of music pieces from human motions, the proposed method tries to estimate their relationship. Specifically, the correlation based on kernel CCA is calculated as the relationship in our method. Since human motions and music pieces have various time lengths, it is necessary to calculate the correlation between time series having different lengths. Therefore, new kernel functions for human motions and music pieces, which can provide similarities between data that have different time lengths, are introduced into the calculation of the kernel CCA-based correlation. This approach effectively provides a solution to the conventional problem of not being able to calculate the correlation from multimedia data that have various time lengths. Therefore, the proposed method can perform accurate recommendation of best matched music pieces according to a target human motion from the obtained correlation. Experimental results are shown to verify the performance of the proposed method.

  4. CANONICAL CORRELATION OF MORPHOLOGIC CHARACTERISTIC AND MOTORIC ABILITIES OF YOUNG JUDO ATHLETES

    Directory of Open Access Journals (Sweden)

    Lulzim Ibri

    2013-07-01

    Full Text Available In a sample of 80 young judo athletes aged 16-17 years, a system of 18 variables was applied, of which 10 were morphologic characteristics and 8 were motoric abilities, with the purpose of determining the mutual relations between the two sets; the data were analyzed using canonical correlation analysis. One statistically significant pair of canonical correlations was obtained. In the field of morphologic variables, the first canonical structure consists of the variables adipose tissue under the skin of the stomach (ATST), adipose tissue under the skin of the triceps (ATTR), adipose tissue under the skin of the biceps (ATBI), adipose tissue under the skin of the subscapular region (ATSS), adipose tissue under the skin of the subiliac region (ATSI), and adipose tissue under the skin of the calf (ATSL), and is therefore interpreted as a canonical factor of adipose tissue. The second canonical structure of the anthropometric characteristics consists of the variables body length (LEBO), length of the leg (LELE), and length of the arm (LEAR), and is interpreted as a canonical factor of longitudinal dimensionality. The first canonical structure of the motoric variables cannot be interpreted because of the low values of the motor variables, while the second structure consists of the variable palm squeeze (SQPA) and is interpreted as a canonical factor of palm strength. Based on the structural analysis of the matrix of canonical factors, the results show that, for young judo athletes of this age, there is a statistically significant correlation between the canonical factor of the anthropometric variables and the canonical factor of the motoric abilities (Rc = .77), significant at the level P = .00.

  5. Per-Sample Multiple Kernel Approach for Visual Concept Learning

    Directory of Open Access Journals (Sweden)

    Ling-Yu Duan

    2010-01-01

    Full Text Available Learning visual concepts from images is an important yet challenging problem in computer vision and multimedia research areas. Multiple kernel learning (MKL) methods have shown great advantages in visual concept learning. As a visual concept often exhibits great appearance variance, a canonical MKL approach may not generate satisfactory results when a uniform kernel combination is applied over the input space. In this paper, we propose a per-sample multiple kernel learning (PS-MKL) approach to take into account intraclass diversity for improving discrimination. PS-MKL determines sample-wise kernel weights according to kernel functions and training samples. Kernel weights as well as kernel-based classifiers are jointly learned. For efficient learning, PS-MKL employs a sample selection strategy. Extensive experiments are carried out over three benchmarking datasets of different characteristics including Caltech101, WikipediaMM, and Pascal VOC'07. PS-MKL has achieved encouraging performance, comparable to the state of the art, which has outperformed a canonical MKL.
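
    For contrast with the per-sample scheme proposed above, here is a minimal sketch of the "canonical" MKL baseline it improves on: several RBF kernels are combined with uniform weights and the summed Gram matrix is passed to an SVM through scikit-learn's precomputed-kernel interface. The bandwidths and toy data are arbitrary, and learning the kernel weights (let alone per-sample weights) is not shown.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.metrics.pairwise import rbf_kernel
      from sklearn.model_selection import train_test_split
      from sklearn.svm import SVC

      X, y = make_classification(n_samples=400, n_features=20, n_informative=8, random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

      gammas = [0.01, 0.1, 1.0]                      # one base kernel per bandwidth
      K_tr = sum(rbf_kernel(X_tr, X_tr, gamma=g) for g in gammas) / len(gammas)
      K_te = sum(rbf_kernel(X_te, X_tr, gamma=g) for g in gammas) / len(gammas)

      clf = SVC(kernel="precomputed", C=1.0).fit(K_tr, y_tr)
      print("uniform-kernel-combination accuracy:", round(clf.score(K_te, y_te), 3))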

  6. Per-Sample Multiple Kernel Approach for Visual Concept Learning

    Directory of Open Access Journals (Sweden)

    Tian Yonghong

    2010-01-01

    Full Text Available Learning visual concepts from images is an important yet challenging problem in computer vision and multimedia research areas. Multiple kernel learning (MKL) methods have shown great advantages in visual concept learning. As a visual concept often exhibits great appearance variance, a canonical MKL approach may not generate satisfactory results when a uniform kernel combination is applied over the input space. In this paper, we propose a per-sample multiple kernel learning (PS-MKL) approach to take into account intraclass diversity for improving discrimination. PS-MKL determines sample-wise kernel weights according to kernel functions and training samples. Kernel weights as well as kernel-based classifiers are jointly learned. For efficient learning, PS-MKL employs a sample selection strategy. Extensive experiments are carried out over three benchmarking datasets of different characteristics including Caltech101, WikipediaMM, and Pascal VOC'07. PS-MKL has achieved encouraging performance, comparable to the state of the art, which has outperformed a canonical MKL.

  7. Kernel-Correlated Lévy Field Driven Forward Rate and Application to Derivative Pricing

    International Nuclear Information System (INIS)

    Bo Lijun; Wang Yongjin; Yang Xuewei

    2013-01-01

    We propose a term structure of forward rates driven by a kernel-correlated Lévy random field under the HJM framework. The kernel-correlated Lévy random field is composed of a kernel-correlated Gaussian random field and a centered Poisson random measure. We shall give a criterion to preclude arbitrage under the risk-neutral pricing measure. As applications, an interest rate derivative with general payoff functional is priced under this pricing measure

  8. Generalized canonical correlation analysis with missing values

    NARCIS (Netherlands)

    M. van de Velden (Michel); Y. Takane

    2009-01-01

    Two new methods for dealing with missing values in generalized canonical correlation analysis are introduced. The first approach, which does not require iterations, is a generalization of the Test Equating method available for principal component analysis. In the second approach,

  9. Climate Prediction Center(CPC)Ensemble Canonical Correlation Analysis Forecast of Temperature

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Ensemble Canonical Correlation Analysis (ECCA) temperature forecast is a 90-day (seasonal) outlook of US surface temperature anomalies. The ECCA uses Canonical...

  10. Hard and soft tissue correlations in facial profiles: a canonical correlation study

    Directory of Open Access Journals (Sweden)

    Shamlan MA

    2015-01-01

    Full Text Available Manal A Shamlan,1 Abdullah M Aldrees2 1Faculty of Dentistry, King Abdulaziz University, Jeddah, 2Division of Orthodontics, Department of Pediatric Dentistry and Orthodontics, College of Dentistry, King Saud University, Riyadh, Saudi Arabia Background: The purpose of this study was to analyze the relationship between facial hard and soft tissues in normal Saudi individuals by studying the canonical correlation between specific hard tissue landmarks and their corresponding soft tissue landmarks. Methods: A retrospective, cross-sectional study was designed, with a sample size of 60 Saudi adults (30 males and 30 females who had a class I skeletal and dental relationship and normal occlusion. Lateral cephalometric radiographs of the study sample were investigated using a series of 29 linear and angular measurements of hard and soft tissue features. The measurements were calculated electronically using Dolphin® software, and the data were analyzed using canonical correlation. Results: Eighty-four percent of the variation in the soft tissue was explained by the variation in hard tissue. Conclusion: The position of the upper and lower incisors and inclination of the lower incisors influence upper lip length and lower lip position. The inclination of the upper incisors is associated with lower lip length. Keywords: facial profile, hard tissue, soft tissue, canonical correlation

  11. Spatio-chromatic adaptation via higher-order canonical correlation analysis of natural images.

    Science.gov (United States)

    Gutmann, Michael U; Laparra, Valero; Hyvärinen, Aapo; Malo, Jesús

    2014-01-01

    Independent component and canonical correlation analysis are two general-purpose statistical methods with wide applicability. In neuroscience, independent component analysis of chromatic natural images explains the spatio-chromatic structure of primary cortical receptive fields in terms of properties of the visual environment. Canonical correlation analysis explains similarly chromatic adaptation to different illuminations. But, as we show in this paper, neither of the two methods generalizes well to explain both spatio-chromatic processing and adaptation at the same time. We propose a statistical method which combines the desirable properties of independent component and canonical correlation analysis: It finds independent components in each data set which, across the two data sets, are related to each other via linear or higher-order correlations. The new method is as widely applicable as canonical correlation analysis, and also to more than two data sets. We call it higher-order canonical correlation analysis. When applied to chromatic natural images, we found that it provides a single (unified) statistical framework which accounts for both spatio-chromatic processing and adaptation. Filters with spatio-chromatic tuning properties as in the primary visual cortex emerged and corresponding-colors psychophysics was reproduced reasonably well. We used the new method to make a theory-driven testable prediction on how the neural response to colored patterns should change when the illumination changes. We predict shifts in the responses which are comparable to the shifts reported for chromatic contrast habituation.

  12. KERNEL MAD ALGORITHM FOR RELATIVE RADIOMETRIC NORMALIZATION

    Directory of Open Access Journals (Sweden)

    Y. Bai

    2016-06-01

    Full Text Available The multivariate alteration detection (MAD) algorithm is commonly used in relative radiometric normalization. This algorithm is based on linear canonical correlation analysis (CCA), which can analyze only linear relationships among bands. Therefore, we first introduce a new version of MAD in this study based on the established method known as kernel canonical correlation analysis (KCCA). The proposed method effectively extracts the non-linear and complex relationships among variables. We then conduct relative radiometric normalization experiments on both the linear CCA and KCCA versions of the MAD algorithm with the use of Landsat-8 data of Beijing, China, and Gaofen-1 (GF-1) data derived from South China. Finally, we analyze the difference between the two methods. Results show that the KCCA-based MAD can be satisfactorily applied to relative radiometric normalization; the algorithm describes the nonlinear relationship between multi-temporal images well. This work is the first attempt to apply a KCCA-based MAD algorithm to relative radiometric normalization.
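
    To connect the pieces, the sketch below runs the linear-CCA MAD workflow for relative radiometric normalization on simulated band data: MAD variates flag pixels with little change, and those invariant pixels drive a per-band linear regression that maps the subject image onto the reference image. It is a toy linear version, not the KCCA-based algorithm or the Landsat-8/GF-1 experiments of the record.

      import numpy as np
      from scipy.stats import chi2
      from sklearn.cross_decomposition import CCA
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(0)
      n_pixels, n_bands = 8000, 4

      reference = rng.gamma(shape=3.0, scale=20.0, size=(n_pixels, n_bands))     # "time 1" bands
      subject = 0.8 * reference + 15.0 + rng.normal(0.0, 2.0, reference.shape)   # radiometric drift
      changed = rng.choice(n_pixels, size=500, replace=False)
      subject[changed] *= rng.uniform(1.5, 2.5, (500, n_bands))                  # genuine change

      # Step 1: MAD variates flag pixels that actually changed
      U, V = CCA(n_components=n_bands).fit_transform(reference, subject)
      mad = (U - V) / (U - V).std(axis=0)
      change_score = (mad ** 2).sum(axis=1)                   # roughly chi-square distributed
      invariant = change_score < chi2.ppf(0.90, df=n_bands)   # keep likely no-change pixels

      # Step 2: per-band regression on invariant pixels normalizes the subject image
      normalized = np.empty_like(subject)
      for band in range(n_bands):
          reg = LinearRegression().fit(subject[invariant, band:band + 1], reference[invariant, band])
          normalized[:, band] = reg.predict(subject[:, band:band + 1])

      print(f"mean abs band difference before: {np.abs(subject - reference).mean():.2f}")
      print(f"mean abs band difference after:  {np.abs(normalized - reference).mean():.2f}")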

  13. Nonlinear canonical correlation analysis with k sets of variables

    NARCIS (Netherlands)

    van der Burg, Eeke; de Leeuw, Jan

    1987-01-01

    The multivariate technique OVERALS is introduced as a non-linear generalization of canonical correlation analysis (CCA). First, two sets CCA is introduced. Two sets CCA is a technique that computes linear combinations of sets of variables that correlate in an optimal way. Two sets CCA is then

  14. The Bargmann transform and canonical transformations

    International Nuclear Information System (INIS)

    Villegas-Blas, Carlos

    2002-01-01

    This paper concerns a relationship between the kernel of the Bargmann transform and the corresponding canonical transformation. We study this fact for a Bargmann transform introduced by Thomas and Wassell [J. Math. Phys. 36, 5480-5505 (1995)], when the configuration space is the two-sphere S^2, and for a Bargmann transform that we introduce for the three-sphere S^3. It is shown that the kernel of the Bargmann transform is a power series in a function which is a generating function of the corresponding canonical transformation (a classical analog of the Bargmann transform). We show in each case that our canonical transformation is a composition of two other canonical transformations involving the complex null quadric in C^3 or C^4. We also describe quantizations of those two other canonical transformations by dealing with spaces of holomorphic functions on the aforementioned null quadrics. Some of these quantizations have been studied by Bargmann and Todorov [J. Math. Phys. 18, 1141-1148 (1977)] and the other quantizations are related to the work of Guillemin [Integ. Eq. Operator Theory 7, 145-205 (1984)]. Since suitable infinite linear combinations of powers of the generating functions are coherent states for L^2(S^2) or L^2(S^3), we show finally that the studied Bargmann transforms are actually coherent states transforms

  15. Learning Rotation for Kernel Correlation Filter

    KAUST Repository

    Hamdi, Abdullah

    2017-08-11

    Kernel Correlation Filters have shown a very promising scheme for visual tracking in terms of speed and accuracy on several benchmarks. However, they suffer from problems that affect performance, such as occlusion, rotation and scale change. This paper tries to tackle the problem of rotation by reformulating the optimization problem for learning the correlation filter. This modification (RKCF) includes learning a rotation filter that utilizes the circulant structure of HOG features to guesstimate rotation from one frame to another and enhance the detection of KCF. Hence it gains a boost in overall accuracy in many of the OBT50 dataset videos with minimal additional computation.
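
    For readers unfamiliar with the underlying tracker, the following sketch shows the core kernelized correlation filter computation on a 1-D signal: training solves kernel ridge regression over all cyclic shifts in the Fourier domain, and detection evaluates the response to every shift of a new observation at once. It follows the standard KCF formulation with a Gaussian kernel; the signal, bandwidth, and regularization are made up, and the rotation extension proposed above is not included.

      import numpy as np

      def gaussian_correlation(x, z, sigma=0.2):
          """Gaussian kernel between x and every cyclic shift of z, computed with FFTs."""
          n = x.size
          cross = np.fft.ifft(np.fft.fft(x) * np.conj(np.fft.fft(z))).real   # x . roll(z, j) for all j
          d2 = (x @ x + z @ z - 2.0 * cross) / n
          return np.exp(-np.maximum(d2, 0.0) / (sigma ** 2))

      def kcf_train(x, y, lam=1e-4, sigma=0.2):
          """Kernel ridge regression over all cyclic shifts of x; returns alpha in the Fourier domain."""
          kxx = gaussian_correlation(x, x, sigma)
          return np.fft.fft(y) / (np.fft.fft(kxx) + lam)

      def kcf_detect(alpha_f, x, z, sigma=0.2):
          """Filter response for every cyclic shift of the new observation z."""
          kxz = gaussian_correlation(x, z, sigma)
          return np.fft.ifft(np.fft.fft(kxz) * alpha_f).real

      n = 128
      t = np.arange(n)
      target = np.exp(-0.5 * ((t - n // 2) / 4.0) ** 2)          # 1-D "appearance" template
      y = np.exp(-0.5 * (np.minimum(t, n - t) / 2.0) ** 2)       # cyclic Gaussian label, peak at shift 0

      alpha_f = kcf_train(target, y)
      true_shift = 17
      observed = np.roll(target, true_shift) + 0.01 * np.random.default_rng(0).standard_normal(n)
      response = kcf_detect(alpha_f, target, observed)
      estimated_shift = (n - int(np.argmax(response))) % n       # convert peak position to displacement
      print("true shift:", true_shift, " estimated shift:", estimated_shift)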

  16. DNA pattern recognition using canonical correlation algorithm

    Indian Academy of Sciences (India)

    2015-09-28

    Sep 28, 2015 ... were considered as the two views, and statistically significant relationships were established between these two ... Canonical correlation analysis is to find two sets of basis ...

  17. Attenuation of the Squared Canonical Correlation Coefficient under Varying Estimates of Score Reliability

    Science.gov (United States)

    Wilson, Celia M.

    2010-01-01

    Research pertaining to the distortion of the squared canonical correlation coefficient has traditionally been limited to the effects of sampling error and associated correction formulas. The purpose of this study was to compare the degree of attenuation of the squared canonical correlation coefficient under varying conditions of score reliability.…
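
    The sketch below reproduces the attenuation phenomenon in its simplest, bivariate form: measurement error shrinks the observed squared correlation, and the classical correction for attenuation divides by the square roots of the score reliabilities. It is a toy simulation with invented reliabilities, not the study's design.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 100_000
      true_r = 0.6
      rel_x, rel_y = 0.8, 0.7                       # assumed score reliabilities

      # Correlated true scores, then observed scores contaminated by measurement error
      tx = rng.standard_normal(n)
      ty = true_r * tx + np.sqrt(1 - true_r ** 2) * rng.standard_normal(n)
      x = np.sqrt(rel_x) * tx + np.sqrt(1 - rel_x) * rng.standard_normal(n)
      y = np.sqrt(rel_y) * ty + np.sqrt(1 - rel_y) * rng.standard_normal(n)

      r_obs = np.corrcoef(x, y)[0, 1]
      r_corrected = r_obs / np.sqrt(rel_x * rel_y)  # classical correction for attenuation
      print(f"true r^2      : {true_r ** 2:.3f}")
      print(f"observed r^2  : {r_obs ** 2:.3f}")    # attenuated by unreliability
      print(f"corrected r^2 : {r_corrected ** 2:.3f}")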

  18. Data analytics using canonical correlation analysis and Monte Carlo simulation

    Science.gov (United States)

    Rickman, Jeffrey M.; Wang, Yan; Rollett, Anthony D.; Harmer, Martin P.; Compson, Charles

    2017-07-01

    A canonical correlation analysis is a generic parametric model used in the statistical analysis of data involving interrelated or interdependent input and output variables. It is especially useful in data analytics as a dimensional reduction strategy that simplifies a complex, multidimensional parameter space by identifying relatively few combinations of variables that are maximally correlated. One shortcoming of the canonical correlation analysis, however, is that it provides only a linear combination of variables that maximizes these correlations. With this in mind, we describe here a versatile, Monte-Carlo based methodology that is useful in identifying non-linear functions of the variables that lead to strong input/output correlations. We demonstrate that our approach leads to a substantial enhancement of correlations, as illustrated by two experimental applications of substantial interest to the materials science community, namely: (1) determining the interdependence of processing and microstructural variables associated with doped polycrystalline aluminas, and (2) relating microstructural descriptors to the electrical and optoelectronic properties of thin-film solar cells based on CuInSe2 absorbers. Finally, we describe how this approach facilitates experimental planning and process control.
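
    The following sketch captures the spirit of that strategy on a toy problem: candidate power transforms of the input variables are drawn at random, the first canonical correlation with the outputs is recomputed for each draw, and the best-scoring transform is kept. The transform family, data, and number of trials are arbitrary choices for the example, not the materials-science analyses described.

      import numpy as np
      from sklearn.cross_decomposition import CCA

      def first_cc(X, Y):
          """First canonical correlation between two variable sets."""
          U, V = CCA(n_components=1).fit_transform(X, Y)
          return np.corrcoef(U[:, 0], V[:, 0])[0, 1]

      rng = np.random.default_rng(0)
      n = 400
      a = rng.uniform(0.2, 5.0, n)                                   # "processing" variables
      b = rng.uniform(0.2, 5.0, n)
      X = np.column_stack([a, b])
      Y = np.column_stack([1.0 / a + 0.05 * rng.standard_normal(n),  # nonlinear "output" variables
                           b ** 3 + 0.05 * rng.standard_normal(n)])

      baseline = first_cc(X, Y)
      best_r, best_powers = baseline, np.ones(X.shape[1])
      for _ in range(300):                                           # Monte Carlo over power transforms
          powers = rng.uniform(-2.0, 3.0, X.shape[1])
          r = first_cc(X ** powers, Y)
          if r > best_r:
              best_r, best_powers = r, powers

      print(f"linear first canonical correlation: {baseline:.3f}")
      print(f"best after random power transforms: {best_r:.3f}  (powers {np.round(best_powers, 2)})")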

  19. Generalized canonical analysis based on optimizing matrix correlations and a relation with IDIOSCAL

    NARCIS (Netherlands)

    Kiers, Henk A.L.; Cléroux, R.; Ten Berge, Jos M.F.

    1994-01-01

    Carroll's method for generalized canonical analysis of two or more sets of variables is shown to optimize the sum of squared inner-product matrix correlations between a consensus matrix and matrices with canonical variates for each set of variables. In addition, the method that analogously optimizes
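
    A minimal sketch of Carroll's construction for several sets: the consensus variate is the leading eigenvector of the sum of the sets' orthogonal projection matrices, and each set's canonical variate is its regression fit to that consensus. The three toy sets are invented, and this is the basic unweighted form rather than the matrix-correlation reformulation discussed in the record.

      import numpy as np

      def carroll_gcca(blocks, n_components=1):
          """Carroll's generalized canonical analysis (MAXVAR): the consensus scores z
          maximize the sum of squared correlations with one canonical variate per set."""
          centered = [X - X.mean(0) for X in blocks]
          # Sum of orthogonal projectors onto the column spaces of the centered sets
          P_sum = sum(Xc @ np.linalg.pinv(Xc.T @ Xc) @ Xc.T for Xc in centered)
          vals, vecs = np.linalg.eigh(P_sum)
          z = vecs[:, -n_components:][:, ::-1]                            # leading eigenvectors = consensus
          variates = [Xc @ np.linalg.pinv(Xc) @ z for Xc in centered]     # regression fit of z on each set
          return z, variates, vals[-n_components:][::-1]

      rng = np.random.default_rng(0)
      shared = rng.standard_normal((200, 1))                              # signal common to all sets
      blocks = [np.hstack([shared + 0.2 * rng.standard_normal((200, 1)),
                           rng.standard_normal((200, 2))]) for _ in range(3)]

      z, variates, eigvals = carroll_gcca(blocks)
      corrs = [np.corrcoef(z[:, 0], v[:, 0])[0, 1] for v in variates]
      print("leading eigenvalue (sum of squared correlations):", round(float(eigvals[0]), 3))
      print("correlation of each set's variate with the consensus:", np.round(corrs, 3))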

  20. A Top-Down Account of Linear Canonical Transforms

    Directory of Open Access Journals (Sweden)

    Kurt Bernardo Wolf

    2012-06-01

    Full Text Available We contend that what are called Linear Canonical Transforms (LCTs) should be seen as a part of the theory of unitary irreducible representations of the '2+1' Lorentz group. The integral kernel representation found by Collins, Moshinsky and Quesne, and the radial and hyperbolic LCTs introduced thereafter, belong to the discrete and continuous representation series of the Lorentz group in its parabolic subgroup reduction. The reduction by the elliptic and hyperbolic subgroups can also be considered to yield LCTs that act on functions, discrete or continuous in other Hilbert spaces. We gather the summation and integration kernels reported by Basu and Wolf when studying all discrete, continuous, and mixed representations of the linear group of 2×2 real matrices. We add some comments on why all should be considered canonical.

  1. Quantum correlations of ideal Bose and Fermi gases in the canonical ensemble

    International Nuclear Information System (INIS)

    Tsutsui, Kazumasa; Kita, Takafumi

    2016-01-01

    We derive an expression for the reduced density matrices of ideal Bose and Fermi gases in the canonical ensemble, which corresponds to the Bloch-De Dominicis (or Wick's) theorem in the grand canonical ensemble for normal-ordered products of operators. Using this expression, we study one- and two-body correlations of homogeneous ideal gases with N particles. The pair distribution function g^(2)(r) of fermions clearly exhibits antibunching with g^(2)(0) = 0 due to the Pauli exclusion principle at all temperatures, whereas that of normal bosons shows bunching with g^(2)(0) ≈ 2, corresponding to the Hanbury Brown-Twiss effect. For bosons below the Bose-Einstein condensation temperature T_0, an off-diagonal long-range order develops in the one-particle density matrix to reach g^(1)(r) = 1 at T = 0, and the pair correlation starts to decrease towards g^(2)(r) ≈ 1 at T = 0. The results for N → ∞ are seen to converge to those of the grand canonical ensemble obtained by assuming the average <ψ(r)> of the field operator ψ(r) below T_0. This fact justifies the introduction of the 'anomalous' average <ψ(r)> ≠ 0 below T_0 in the grand canonical ensemble as a mathematical means of removing unphysical particle-number fluctuations to reproduce the canonical results in the thermodynamic limit. (author)

  2. Association Study between Lead and Zinc Accumulation at Different Physiological Systems of Cattle by Canonical Correlation and Canonical Correspondence Analyses

    Science.gov (United States)

    Karmakar, Partha; Das, Pradip Kumar; Mondal, Seema Sarkar; Karmakar, Sougata; Mazumdar, Debasis

    2010-10-01

    Pb pollution from automobile exhausts around highways is a persistent problem in India. Pb intoxication in the mammalian body is a complex phenomenon which is influenced by agonistic and antagonistic interactions of several other heavy metals and micronutrients. An attempt has been made to study the association between Pb and Zn accumulation in different physiological systems of cattle (n = 200) by application of both canonical correlation and canonical correspondence analyses. Pb was estimated from plasma, liver, bone, muscle, kidney, blood and milk, whereas Zn was measured from all these systems except bone, blood and milk. Both statistical techniques demonstrated that there was a strong association among blood-Pb, liver-Zn, kidney-Zn and muscle-Zn. From these observations, it can be assumed that Zn accumulation in cattle muscle, liver and kidney directs Pb mobilization from those organs, which in turn increases the Pb pool in blood. It indicates antagonistic activity of Zn towards the accumulation of Pb. Although there were some contradictions between the observations obtained from the two different statistical methods, the overall pattern of Pb accumulation in various organs as influenced by Zn was the same. This is mainly due to the fact that canonical correlation is actually a special type of canonical correspondence analysis in which a linear relationship, instead of a Gaussian relationship, is assumed between the two groups of variables.

  3. Association Study between Lead and Zinc Accumulation at Different Physiological Systems of Cattle by Canonical Correlation and Canonical Correspondence Analyses

    International Nuclear Information System (INIS)

    Karmakar, Partha; Das, Pradip Kumar; Mondal, Seema Sarkar; Karmakar, Sougata; Mazumdar, Debasis

    2010-01-01

    Pb pollution from automobile exhausts around highways is a persistent problem in India. Pb intoxication in the mammalian body is a complex phenomenon which is influenced by agonistic and antagonistic interactions of several other heavy metals and micronutrients. An attempt has been made to study the association between Pb and Zn accumulation in different physiological systems of cattle (n = 200) by application of both canonical correlation and canonical correspondence analyses. Pb was estimated from plasma, liver, bone, muscle, kidney, blood and milk, whereas Zn was measured from all these systems except bone, blood and milk. Both statistical techniques demonstrated that there was a strong association among blood-Pb, liver-Zn, kidney-Zn and muscle-Zn. From these observations, it can be assumed that Zn accumulation in cattle muscle, liver and kidney directs Pb mobilization from those organs, which in turn increases the Pb pool in blood. It indicates antagonistic activity of Zn towards the accumulation of Pb. Although there were some contradictions between the observations obtained from the two different statistical methods, the overall pattern of Pb accumulation in various organs as influenced by Zn was the same. This is mainly due to the fact that canonical correlation is actually a special type of canonical correspondence analysis in which a linear relationship, instead of a Gaussian relationship, is assumed between the two groups of variables.

  4. Relationship between organisational commitment and burnout syndrome: a canonical correlation approach.

    Science.gov (United States)

    Enginyurt, Ozgur; Cankaya, Soner; Aksay, Kadir; Tunc, Taner; Koc, Bozkurt; Bas, Orhan; Ozer, Erdal

    2016-04-01

    Objective Burnout syndrome can significantly reduce the performance of health workers. Although many factors have been identified as antecedents of burnout, few studies have investigated the role of organisational commitment in its development. The purpose of the present study was to examine the relationships between subdimensions of burnout syndrome (emotional exhaustion, depersonalisation and personal accomplishment) and subdimensions of organisational commitment (affective commitment, continuance commitment and normative commitment). Methods The present study was a cross-sectional survey of physicians and other healthcare employees working in the Ministry of Health Ordu University Education and Research Hospital. The sample consisted of 486 healthcare workers. Data were collected using the Maslach Burnout Inventory and the Organisation Commitment Scale, and were analysed using the canonical correlation approach. Results The first of three canonical correlation coefficients between pairs of canonical variables (Ui , burnout syndrome and Vi, organisational commitment) was found to be statistically significant. Emotional exhaustion was found to contribute most towards the explanatory capacity of canonical variables estimated from the subdimensions of burnout syndrome, whereas affective commitment provided the largest contribution towards the explanatory capacity of canonical variables estimated from the subdimensions of organisational commitment. Conclusions The results of the present study indicate that affective commitment is the primary determinant of burnout syndrome in healthcare professionals. What is known about the topic? Organisational commitment and burnout syndrome are the most important criteria in predicting health workforce performance. An increasing number of studies in recent years have clearly indicated the field's continued relevance and importance. Conversely, canonical correlation analysis (CCA) is a technique for describing the relationship

  5. Multiset canonical correlations analysis and multispectral, truly multitemporal remote sensing data.

    Science.gov (United States)

    Nielsen, Allan Aasbjerg

    2002-01-01

    This paper describes two- and multiset canonical correlations analysis (CCA) for data fusion, multisource, multiset, or multitemporal exploratory data analysis. These techniques transform multivariate multiset data into new orthogonal variables called canonical variates (CVs) which, when applied in remote sensing, exhibit ever-decreasing similarity (as expressed by correlation measures) over sets consisting of 1) spectral variables at fixed points in time (R-mode analysis), or 2) temporal variables with fixed wavelengths (T-mode analysis). The CVs are invariant to linear and affine transformations of the original variables within sets which means, for example, that the R-mode CVs are insensitive to changes over time in offset and gain in a measuring device. In a case study, CVs are calculated from Landsat Thematic Mapper (TM) data with six spectral bands over six consecutive years. Both R- and T-mode CVs clearly exhibit the desired characteristic: they show maximum similarity for the low-order canonical variates and minimum similarity for the high-order canonical variates. These characteristics are seen both visually and in objective measures. The results from the multiset CCA R- and T-mode analyses are very different. This difference is ascribed to the noise structure in the data. The CCA methods are related to partial least squares (PLS) methods. This paper very briefly describes multiset CCA-based multiset PLS. Also, the CCA methods can be applied as multivariate extensions to empirical orthogonal functions (EOF) techniques. Multiset CCA is well-suited for inclusion in geographical information systems (GIS).

  6. Relationship between climatic variables and the variation in bulk tank milk composition using canonical correlation analysis.

    Science.gov (United States)

    Stürmer, Morgana; Busanello, Marcos; Velho, João Pedro; Heck, Vanessa Isabel; Haygert-Velho, Ione Maria Pereira

    2018-06-04

    A number of studies have addressed the relations between climatic variables and milk composition, but these works used univariate statistical approaches. In our study, we used a multivariate approach (canonical correlation) to study the impact of climatic variables on milk composition, price, and monthly milk production at a dairy farm using bulk tank milk data. Data on milk composition, price, and monthly milk production were obtained from a dairy company that purchased the milk from the farm, while climatic variable data were obtained from the National Institute of Meteorology (INMET). The data are from January 2014 to December 2016. Univariate correlation analysis and canonical correlation analysis were performed. Few correlations between the climatic variables and milk composition were found using a univariate approach. However, using canonical correlation analysis, we found a strong and significant correlation (r c  = 0.95, p value = 0.0029). Lactose, ambient temperature measures (mean, minimum, and maximum), and temperature-humidity index (THI) were found to be the most important variables for the canonical correlation. Our study indicated that 10.2% of the variation in milk composition, pricing, and monthly milk production can be explained by climatic variables. Ambient temperature variables, together with THI, seem to have the most influence on variation in milk composition.

  7. The Application of Canonical Correlation to Two-Dimensional Contingency Tables

    Directory of Open Access Journals (Sweden)

    Alberto F. Restori

    2010-03-01

    Full Text Available This paper re-introduces and demonstrates the use of Mickey’s (1970) canonical correlation method in analyzing large two-dimensional contingency tables. This method of analysis supplements the traditional analysis using the Pearson chi-square. Examples and a MATLAB source listing are provided.

  8. A canonical correlation neural network for multicollinearity and functional data.

    Science.gov (United States)

    Gou, Zhenkun; Fyfe, Colin

    2004-03-01

    We review a recent neural implementation of Canonical Correlation Analysis and show, using ideas suggested by Ridge Regression, how to make the algorithm robust. The network is shown to operate on data sets which exhibit multicollinearity. We develop a second model which not only performs as well on multicollinear data but also on general data sets. This model allows us to vary a single parameter so that the network is capable of performing anything from Partial Least Squares regression (at one extreme) to Canonical Correlation Analysis (at the other) and every intermediate operation between the two. On multicollinear data, the parameter setting is shown to be important but on more general data no particular parameter setting is required. Finally, we develop a second penalty term which acts on such data as a smoother in that the resulting weight vectors are much smoother and more interpretable than the weights without the robustification term. We illustrate our algorithms on both artificial and real data.

  9. Multiset Canonical Correlations Analysis and Multispectral, Truly Multitemporal Remote Sensing Data

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2002-01-01

    This paper describes two- and multiset canonical correlations analysis (CCA) for data fusion, multi-source, multiset or multi-temporal exploratory data analysis. These techniques transform multivariate multiset data into new orthogonal variables called canonical variates (CVs) which when applied in remote sensing exhibit ever decreasing similarity (as expressed by correlation measures) over sets consisting of 1) spectral variables at fixed points in time (R-mode analysis), or 2) temporal variables with fixed wavelengths (T-mode analysis). The CVs are invariant to linear and affine transformations of the original variables within sets which means, for example, that the R-mode CVs are insensitive to changes over time in offset and gain in a measuring device. In a case study CVs are calculated from Landsat TM data with six spectral bands over six consecutive years. Both R- and T-mode CVs clearly exhibit...

  10. Canonical correlation analysis of professional stress,social support,and professional burnout among low-rank army officers

    Directory of Open Access Journals (Sweden)

    Chuan-yun LI

    2011-12-01

    Full Text Available Objective: The present study investigates the influence of professional stress and social support on professional burnout among low-rank army officers. Methods: The professional stress, social support, and professional burnout scales for low-rank army officers were used as test tools. Moreover, the officers of established units (battalion, company, and platoon) were chosen as test subjects. Out of the 260 scales sent, 226 effective scales were received. Descriptive statistics and canonical correlation analysis models were used to analyze the influence of each variable. Results: The scores of low-rank army officers on the professional stress, social support, and professional burnout scales were above average, except on two factors, namely interpersonal support and de-individualization. The canonical analysis identified three groups of canonical correlation factors, of which two reached a significant level (P < 0.001). After further eliminating the social support variable, the canonical correlation analysis of professional stress and burnout showed that the first and second canonical correlation coefficients were 0.62 and 0.36, respectively, both at a very significant level (P < 0.001). Conclusion: The low-rank army officers experience higher professional stress and burnout levels, showing a lower sense of accomplishment, emotional exhaustion, and more serious depersonalization. However, social support can reduce the onset and seriousness of professional burnout among these officers by lessening pressure factors, such as career development, work features, salary conditions, and other personal factors.

  11. CANONICAL CORRELATION OF PHYSICAL AND CHEMICAL CHARACTERISTICS OF THE WOOD OF Eucalyptus grandis AND Eucalyptus saligna CLONES

    Directory of Open Access Journals (Sweden)

    Paulo Fernando Trugilho

    2003-01-01

    Full Text Available The analysis of canonical correlation measures the existence and the intensity of the association between two groups of variables. The research aimed to evaluate the canonical correlation between chemical and physical characteristics and fiber dimensions of the wood of Eucalyptus grandis and Eucalyptus saligna clones, verifying the interdependence among the groups of studied variables. The analysis indicated that the canonical correlations were high and that in two cases the first and second pairs were significant at the 1% probability level. The analysis of canonical correlation showed that the groups are not independent. The intergroup associations indicated that wood with high insoluble lignin content and low ash content is associated with high radial and tangential contraction and high basic density.

  12. Structured sparse canonical correlation analysis for brain imaging genetics: an improved GraphNet method.

    Science.gov (United States)

    Du, Lei; Huang, Heng; Yan, Jingwen; Kim, Sungeun; Risacher, Shannon L; Inlow, Mark; Moore, Jason H; Saykin, Andrew J; Shen, Li

    2016-05-15

    Structured sparse canonical correlation analysis (SCCA) models have been used to identify imaging genetic associations. These models either use group lasso or graph-guided fused lasso to conduct feature selection and feature grouping simultaneously. The group lasso based methods require prior knowledge to define the groups, which limits the capability when prior knowledge is incomplete or unavailable. The graph-guided methods overcome this drawback by using the sample correlation to define the constraint. However, they are sensitive to the sign of the sample correlation, which could introduce undesirable bias if the sign is wrongly estimated. We introduce a novel SCCA model with a new penalty, and develop an efficient optimization algorithm. Our method has a strong upper bound for the grouping effect for both positively and negatively correlated features. We show that our method performs better than or equally to three competing SCCA models on both synthetic and real data. In particular, our method identifies stronger canonical correlations and better canonical loading patterns, showing its promise for revealing interesting imaging genetic associations. The Matlab code and sample data are freely available at http://www.iu.edu/∼shenlab/tools/angscca/. Contact: shenli@iu.edu. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  13. Sparse canonical correlation analysis for identifying, connecting and completing gene-expression networks

    NARCIS (Netherlands)

    Waaijenborg, S.; Zwinderman, A.H.

    2009-01-01

    ABSTRACT: BACKGROUND: We generalized penalized canonical correlation analysis for analyzing microarray gene-expression measurements for checking completeness of known metabolic pathways and identifying candidate genes for incorporation in the pathway. We used Wold's method for calculation of the

  14. Methods of Weyl representation of the phase space and canonical transformations. 1

    International Nuclear Information System (INIS)

    Budanov, V.G.

    1984-01-01

    The kernel structure of canonical transformation and differential equation for the intertwining operator is found. The Weyl symbol of operators producing linear canonical transformations is associated with the Cayley transformation of classical canonical transformation. Due to the invariance of the Weyl formalism a complete study of singularity and factorization of these symbols is manageable. In particular, one can study the symbols of Green functions and elements of Lie groups and find the spectra of arbitrary stationary quadratic Hamiltonians with the help of the known classification of the spectra of classical systems

  15. Canonical correlation analysis of the career attitudes and strategies inventory and the adult career concerns inventory

    Directory of Open Access Journals (Sweden)

    Charlene C Lew

    2006-04-01

    Full Text Available This study investigated the relationships between the scales of the Adult Career Concerns Inventory (ACCI) and those of the Career Attitudes and Strategies Inventory (CASI). The scores of 202 South African adults on the two inventories were subjected to a canonical correlation analysis. Two canonical variates made statistically significant contributions to the explanation of the relationships between the two sets of variables. Inspection of the correlations of the original variables with the first canonical variate suggested that a high level of career concerns in general, as measured by the ACCI, is associated with high levels of career worries, more geographical barriers, a low risk-taking style and a non-dominant interpersonal style, as measured by the CASI. The second canonical variate suggested that concern with career exploration and advancement of one's career is associated with low job satisfaction, low family commitment, high work involvement, and a dominant style at work.

  16. Sparse canonical correlation analysis: new formulation and algorithm.

    Science.gov (United States)

    Chu, Delin; Liao, Li-Zhi; Ng, Michael K; Zhang, Xiaowei

    2013-12-01

    In this paper, we study canonical correlation analysis (CCA), which is a powerful tool in multivariate data analysis for finding the correlation between two sets of multidimensional variables. The main contributions of the paper are: 1) to reveal the equivalent relationship between a recursive formula and a trace formula for the multiple CCA problem, 2) to obtain the explicit characterization for all solutions of the multiple CCA problem even when the corresponding covariance matrices are singular, 3) to develop a new sparse CCA algorithm, and 4) to establish the equivalent relationship between the uncorrelated linear discriminant analysis and the CCA problem. We test several simulated and real-world datasets in gene classification and cross-language document retrieval to demonstrate the effectiveness of the proposed algorithm. The performance of the proposed method is competitive with the state-of-the-art sparse CCA algorithms.
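
    For reference, the plain (non-sparse) CCA solution that such algorithms build on can be obtained from the SVD of the whitened cross-covariance matrix; the small ridge term below is only there so that (near-)singular covariance matrices, the case treated in the paper, do not break the whitening. A generic sketch, not the authors' algorithm.

        import numpy as np

        def cca_svd(X, Y, n_components=2, eps=1e-8):
            # Ordinary CCA via SVD of the whitened cross-covariance matrix.
            X = X - X.mean(axis=0)
            Y = Y - Y.mean(axis=0)
            n = X.shape[0]
            Cxx = X.T @ X / n + eps * np.eye(X.shape[1])
            Cyy = Y.T @ Y / n + eps * np.eye(Y.shape[1])
            Cxy = X.T @ Y / n

            def inv_sqrt(C):
                vals, vecs = np.linalg.eigh(C)
                return vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T

            Kx, Ky = inv_sqrt(Cxx), inv_sqrt(Cyy)
            U, s, Vt = np.linalg.svd(Kx @ Cxy @ Ky)
            Wx = Kx @ U[:, :n_components]      # canonical weights for X
            Wy = Ky @ Vt[:n_components].T      # canonical weights for Y
            return Wx, Wy, s[:n_components]    # canonical correlations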

  17. Range-separated time-dependent density-functional theory with a frequency-dependent second-order Bethe-Salpeter correlation kernel

    Energy Technology Data Exchange (ETDEWEB)

    Rebolini, Elisa, E-mail: elisa.rebolini@kjemi.uio.no; Toulouse, Julien, E-mail: julien.toulouse@upmc.fr [Laboratoire de Chimie Théorique, Sorbonne Universités, UPMC Univ Paris 06, CNRS, 4 place Jussieu, F-75005 Paris (France)

    2016-03-07

    We present a range-separated linear-response time-dependent density-functional theory (TDDFT) which combines a density-functional approximation for the short-range response kernel and a frequency-dependent second-order Bethe-Salpeter approximation for the long-range response kernel. This approach goes beyond the adiabatic approximation usually used in linear-response TDDFT and aims at improving the accuracy of calculations of electronic excitation energies of molecular systems. A detailed derivation of the frequency-dependent second-order Bethe-Salpeter correlation kernel is given using many-body Green-function theory. Preliminary tests of this range-separated TDDFT method are presented for the calculation of excitation energies of the He and Be atoms and small molecules (H2, N2, CO2, H2CO, and C2H4). The results suggest that the addition of the long-range second-order Bethe-Salpeter correlation kernel overall slightly improves the excitation energies.

  18. Scalable and Flexible Multiview MAX-VAR Canonical Correlation Analysis

    Science.gov (United States)

    Fu, Xiao; Huang, Kejun; Hong, Mingyi; Sidiropoulos, Nicholas D.; So, Anthony Man-Cho

    2017-08-01

    Generalized canonical correlation analysis (GCCA) aims at finding latent low-dimensional common structure from multiple views (feature vectors in different domains) of the same entities. Unlike principal component analysis (PCA) that handles a single view, (G)CCA is able to integrate information from different feature spaces. Here we focus on MAX-VAR GCCA, a popular formulation which has recently gained renewed interest in multilingual processing and speech modeling. The classic MAX-VAR GCCA problem can be solved optimally via eigen-decomposition of a matrix that compounds the (whitened) correlation matrices of the views; but this solution has serious scalability issues, and is not directly amenable to incorporating pertinent structural constraints such as non-negativity and sparsity on the canonical components. We posit regularized MAX-VAR GCCA as a non-convex optimization problem and propose an alternating optimization (AO)-based algorithm to handle it. Our algorithm alternates between inexact solutions of a regularized least squares subproblem and a manifold-constrained non-convex subproblem, thereby achieving substantial memory and computational savings. An important benefit of our design is that it can easily handle structure-promoting regularization. We show that the algorithm globally converges to a critical point at a sublinear rate, and approaches a global optimal solution at a linear rate when no regularization is considered. Judiciously designed simulations and large-scale word embedding tasks are employed to showcase the effectiveness of the proposed algorithm.
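
    The classic eigen-decomposition route mentioned above can be sketched in a few lines; this is the baseline whose scalability problems motivate the paper's alternating optimization scheme. Names are illustrative and the views are assumed to be centered.

        import numpy as np

        def maxvar_gcca(views, k):
            # Classic MAX-VAR GCCA: the common representation G is given by the
            # top-k eigenvectors of the sum of projectors onto the column spaces
            # of the individual views.
            n = views[0].shape[0]
            M = np.zeros((n, n))
            for X in views:
                M += X @ np.linalg.pinv(X.T @ X) @ X.T
            _, vecs = np.linalg.eigh(M)
            G = vecs[:, -k:]                                        # common structure
            Q = [np.linalg.pinv(X.T @ X) @ X.T @ G for X in views]  # per-view maps
            return G, Q

    Forming and decomposing the n-by-n matrix M is exactly the step that does not scale to large n, which is why the paper works with inexact least-squares updates instead.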

  19. Generalized canonical correlation analysis of matrices with missing rows : A simulation study

    NARCIS (Netherlands)

    van de Velden, Michel; Bijmolt, Tammo H. A.

    A method is presented for generalized canonical correlation analysis of two or more matrices with missing rows. The method is a combination of Carroll's (1968) method and the missing data approach of the OVERALS technique (Van der Burg, 1988). In a simulation study we assess the performance of the

  20. An Extreme Learning Machine Based on the Mixed Kernel Function of Triangular Kernel and Generalized Hermite Dirichlet Kernel

    Directory of Open Access Journals (Sweden)

    Senyue Zhang

    2016-01-01

    Full Text Available Because the performance of an extreme learning machine (ELM) is strongly correlated with its kernel function, a novel extreme learning machine based on a generalized triangle Hermitian kernel function is proposed in this paper. First, the generalized triangle Hermitian kernel function was constructed as the product of the triangular kernel and the generalized Hermite Dirichlet kernel, and the proposed kernel function was proved to be a valid kernel function for the extreme learning machine. Then, the learning methodology of the extreme learning machine based on the proposed kernel function was presented. The biggest advantage of the proposed kernel is that its kernel parameter values are chosen only from the natural numbers, which greatly shortens the computational time of parameter optimization and retains more of the sample data structure information. Experiments were performed on a number of binary classification, multiclassification, and regression datasets from the UCI benchmark repository. The experimental results demonstrated that the robustness and generalization performance of the proposed method outperform those of other extreme learning machines with different kernels. Furthermore, the learning speed of the proposed method is faster than that of support vector machine (SVM) methods.
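
    The kernel-ELM training step the abstract relies on is a single regularized linear solve in the kernel space. The sketch below uses a stand-in mixed kernel (a product of an RBF and a polynomial kernel) in place of the paper's triangular x generalized Hermite Dirichlet kernel, which is not reproduced here; all names and parameters are illustrative.

        import numpy as np

        def mixed_kernel(A, B, gamma=0.5, degree=2):
            # Stand-in 'mixed' kernel: product of an RBF and a polynomial kernel
            # (products of valid kernels are themselves valid kernels).
            sq = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=2)
            return np.exp(-gamma * sq) * (1.0 + A @ B.T) ** degree

        def kernel_elm_fit(X, T, C=10.0):
            # Kernel ELM output weights: beta = (I / C + K)^-1 T
            K = mixed_kernel(X, X)
            return np.linalg.solve(np.eye(len(X)) / C + K, T)

        def kernel_elm_predict(X_new, X_train, beta):
            return mixed_kernel(X_new, X_train) @ beta

    For classification, T would hold one-hot targets and the predicted class is the argmax of kernel_elm_predict; the appeal of the paper's kernel is that its parameters range over the natural numbers only, which shrinks this tuning loop.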

  1. Climate Prediction Center (CPC)Ensemble Canonical Correlation Analysis 90-Day Seasonal Forecast of Precipitation

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Ensemble Canonical Correlation Analysis (ECCA) precipitation forecast is a 90-day (seasonal) outlook of US surface precipitation anomalies. The ECCA uses...

  2. Canonical correlation analysis of synchronous neural interactions and cognitive deficits in Alzheimer's dementia

    Science.gov (United States)

    Karageorgiou, Elissaios; Lewis, Scott M.; Riley McCarten, J.; Leuthold, Arthur C.; Hemmy, Laura S.; McPherson, Susan E.; Rottunda, Susan J.; Rubins, David M.; Georgopoulos, Apostolos P.

    2012-10-01

    In previous work (Georgopoulos et al 2007 J. Neural Eng. 4 349-55) we reported on the use of magnetoencephalographic (MEG) synchronous neural interactions (SNI) as a functional biomarker in Alzheimer's dementia (AD) diagnosis. Here we report on the application of canonical correlation analysis to investigate the relations between SNI and cognitive neuropsychological (NP) domains in AD patients. First, we performed individual correlations between each SNI and each NP, which provided an initial link between SNI and specific cognitive tests. Next, we performed factor analysis on each set, followed by a canonical correlation analysis between the derived SNI and NP factors. This last analysis optimally associated the entire MEG signal with cognitive function. The results revealed that SNI as a whole were mostly associated with memory and language, and, slightly less, executive function, processing speed and visuospatial abilities, thus differentiating functions subserved by the frontoparietal and the temporal cortices. These findings provide a direct interpretation of the information carried by the SNI and set the basis for identifying specific neural disease phenotypes according to cognitive deficits.

  3. Canonical transformations and hamiltonian path integrals

    International Nuclear Information System (INIS)

    Prokhorov, L.V.

    1982-01-01

    The behaviour of Hamiltonian path integrals under canonical transformations produced by a generator is investigated. An exact form is determined for the kernel of the unitary operator realizing the corresponding quantum transformation. Equivalence rules are found (Hamiltonian formalism, one-dimensional case) enabling one to exclude non-standard terms from the action. It is shown that the Hamiltonian path integral changes its form under canonical transformations: in the transformed expression, besides the classical Hamiltonian function, there appear some non-classical terms.

  4. Multi-template Scale-Adaptive Kernelized Correlation Filters

    KAUST Repository

    Bibi, Adel Aamer

    2015-12-07

    This paper identifies the major drawbacks of a very computationally efficient and state-of-the-art-tracker known as the Kernelized Correlation Filter (KCF) tracker. These drawbacks include an assumed fixed scale of the target in every frame, as well as, a heuristic update strategy of the filter taps to incorporate historical tracking information (i.e. simple linear combination of taps from the previous frame). In our approach, we update the scale of the tracker by maximizing over the posterior distribution of a grid of scales. As for the filter update, we prove and show that it is possible to use all previous training examples to update the filter taps very efficiently using fixed-point optimization. We validate the efficacy of our approach on two tracking datasets, VOT2014 and VOT2015.
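
    For orientation, the baseline KCF that the paper extends can be written in a few lines in the Fourier domain; the fixed scale and the simple per-frame linear update of the filter taps are exactly the limitations addressed above. A single-channel sketch with a Gaussian kernel; names are illustrative.

        import numpy as np

        def gaussian_correlation(x, z, sigma=0.5):
            # Kernel correlation of two equal-sized patches over all circular
            # shifts, computed with FFTs as in the KCF tracker.
            cross = np.fft.ifft2(np.conj(np.fft.fft2(x)) * np.fft.fft2(z)).real
            d2 = np.maximum(0.0, x.ravel() @ x.ravel() + z.ravel() @ z.ravel() - 2 * cross)
            return np.exp(-d2 / (sigma ** 2 * x.size))

        def kcf_train(x, y, lam=1e-4, sigma=0.5):
            # Dual ridge regression in the Fourier domain:
            #   alpha_hat = y_hat / (k_hat_xx + lambda)
            k = gaussian_correlation(x, x, sigma)
            return np.fft.fft2(y) / (np.fft.fft2(k) + lam)

        def kcf_detect(alpha_hat, x_model, z, sigma=0.5):
            # Response map over all circular shifts of the candidate patch z.
            k = gaussian_correlation(x_model, z, sigma)
            return np.fft.ifft2(alpha_hat * np.fft.fft2(k)).real

    Here x is the training patch, y a Gaussian-shaped regression target centered on the object, and the peak of the response map gives the new position; the paper's multi-template, scale-adaptive scheme replaces the naive linear interpolation of alpha_hat between frames and the fixed patch size.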

  5. Multi-template Scale-Adaptive Kernelized Correlation Filters

    KAUST Repository

    Bibi, Adel Aamer; Ghanem, Bernard

    2015-01-01

    This paper identifies the major drawbacks of a very computationally efficient and state-of-the-art-tracker known as the Kernelized Correlation Filter (KCF) tracker. These drawbacks include an assumed fixed scale of the target in every frame, as well as, a heuristic update strategy of the filter taps to incorporate historical tracking information (i.e. simple linear combination of taps from the previous frame). In our approach, we update the scale of the tracker by maximizing over the posterior distribution of a grid of scales. As for the filter update, we prove and show that it is possible to use all previous training examples to update the filter taps very efficiently using fixed-point optimization. We validate the efficacy of our approach on two tracking datasets, VOT2014 and VOT2015.

  6. Canonical correlations between agronomic traits and seed physiological quality in segregating soybean populations.

    Science.gov (United States)

    Pereira, E M; Silva, F M; Val, B H P; Pizolato Neto, A; Mauro, A O; Martins, C C; Unêda-Trevisoli, S H

    2017-04-13

    The objective of this study was to evaluate the relationship between agronomic traits and physiological traits of seeds in segregating soybean populations by canonical correlation analysis. Seven populations and two commercial cultivars in three generations were used: F3 plants and F4 seeds; F4 plants and F5 seeds; and F4 seeds and plants. The following agronomic traits (group I) were evaluated: number of days to maturity, plant height at maturity, insertion height of first pod, number of pods, grain yield, and oil content. The physiological quality of seeds (group II) was evaluated using germination, accelerated aging, emergence, and emergence rate index tests. The results showed that agronomic traits and physiological traits of seeds are not independent. Intergroup associations were established by the first canonical pair for the generation of F3 plants and F4 seeds, especially between more productive plants with a larger pod number and high oil content and seeds with a high germination percentage and emergence rate. For the generation of F4 plants and F5 seeds, the first canonical pair indicated an association between reduced maturity cycle, seeds with a high emergence percentage and a high percentage of normal seedlings after accelerated aging. According to the second canonical pair, more productive and taller plants were associated with seed vigor. For the generation of F4 seeds and plants, the associations established by the first canonical pair occurred between seed vigor and more productive plants with high oil content and reduced maturity cycle, and those established by the second canonical pair between seeds of high physiological quality and tall plants.

  7. Preliminary Results of Autotuning GEMM Kernels for the NVIDIA Kepler Architecture- GeForce GTX 680

    Energy Technology Data Exchange (ETDEWEB)

    Kurzak, Jakub [Univ. of Tennessee, Knoxville, TN (United States); Luszczek, Pitor [Univ. of Tennessee, Knoxville, TN (United States); Tomov, Stanimire [Univ. of Tennessee, Knoxville, TN (United States); Dongarra, Jack [Univ. of Tennessee, Knoxville, TN (United States); Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Univ. of Manchester (United Kingdom)

    2012-04-01

    Kepler is the newest GPU architecture from NVIDIA, and the GTX 680 is the first commercially available graphics card based on that architecture. Matrix multiplication is a canonical computational kernel, and often the main target of initial optimization efforts for a new chip. This article presents preliminary results of automatically tuning matrix multiplication kernels for the Kepler architecture using the GTX 680 card.

  8. Non-linear canonical correlation for joint analysis of MEG signals from two subjects

    Directory of Open Access Journals (Sweden)

    Cristina eCampi

    2013-06-01

    Full Text Available We consider the problem of analysing magnetoencephalography (MEG) data measured from two persons undergoing the same experiment, and we propose a method that searches for sources with maximally correlated energies. Our method is based on canonical correlation analysis (CCA), which provides linear transformations, one for each subject, such that the correlation between the transformed MEG signals is maximized. Here, we present a nonlinear version of CCA which measures the correlation of energies. Furthermore, we introduce a delay parameter in the model to analyse, e.g., leader-follower changes in experiments where the two subjects are engaged in social interaction.
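
    A rough sketch of the idea with off-the-shelf tools: apply linear CCA to the squared (band-passed) signals so that the canonical correlation reflects correlated energies, with a delay parameter shifting one subject's recording relative to the other's. This is illustrative preprocessing, not the authors' exact estimator.

        import numpy as np
        from sklearn.cross_decomposition import CCA

        def energy_cca(meg_a, meg_b, delay=0, n_components=1):
            # meg_a, meg_b: (n_times, n_channels) arrays, assumed band-passed.
            # Squaring makes the correlation act on instantaneous energies;
            # 'delay' (in samples) shifts subject B to probe leader-follower effects.
            Ea, Eb = meg_a ** 2, meg_b ** 2
            if delay > 0:
                Ea, Eb = Ea[:-delay], Eb[delay:]
            elif delay < 0:
                Ea, Eb = Ea[-delay:], Eb[:delay]
            cca = CCA(n_components=n_components)
            Ua, Ub = cca.fit_transform(Ea, Eb)
            return [np.corrcoef(Ua[:, k], Ub[:, k])[0, 1] for k in range(n_components)]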

  9. The Quality of Life of Hemodialysis Patients Is Affected Not Only by Medical but also Psychosocial Factors: a Canonical Correlation Study.

    Science.gov (United States)

    Kim, Kyungmin; Kang, Gun Woo; Woo, Jungmin

    2018-04-02

    The quality of life (QoL) of patients with end-stage renal disease (ESRD) is very poor, plausibly due to both psychosocial and medical factors. This study aimed to determine the relationship among psychosocial factors, medical factors, and QoL in patients with ESRD undergoing hemodialysis (HD). In total, 55 male and 47 female patients were evaluated (mean age, 57.1 ± 12.0 years). The QoL was evaluated using the Korean version of the World Health Organization Quality of Life Scale-Abbreviated Version. The psychosocial factors were evaluated using the Hospital Anxiety and Depression Scale, Multidimensional Scale of Perceived Social Support, Montreal Cognitive Assessment, Pittsburgh Sleep Quality Index, and Zarit Burden Interview. The medical factors were assessed using laboratory examinations. Correlation and canonical correlation analyses were performed to investigate the association patterns. The QoL was significantly correlated with the psychosocial factors, and to a lesser extent with the medical factors. The medical and psychosocial factors were also correlated. The canonical correlation analysis indicated a significant correlation between QoL and the psychosocial factors (1st canonical correlation = 0.696), and the medical and psychosocial factors were also significantly correlated (1st canonical correlation = 0.689). Psychosocial factors influence QoL in patients with ESRD, and should thus be carefully considered when caring for these patients in clinical practice. © 2018 The Korean Academy of Medical Sciences.

  10. Registration of prone and supine CT colonography scans using correlation optimized warping and canonical correlation analysis

    International Nuclear Information System (INIS)

    Wang Shijun; Yao Jianhua; Liu Jiamin; Petrick, Nicholas; Van Uitert, Robert L.; Periaswamy, Senthil; Summers, Ronald M.

    2009-01-01

    Purpose: In computed tomographic colonography (CTC), a patient will be scanned twice--once supine and once prone--to improve the sensitivity for polyp detection. To assist radiologists in CTC reading, in this paper we propose an automated method for colon registration from supine and prone CTC scans. Methods: We propose a new colon centerline registration method for prone and supine CTC scans using correlation optimized warping (COW) and canonical correlation analysis (CCA) based on the anatomical structure of the colon. Four anatomical salient points on the colon are first automatically distinguished. Then correlation optimized warping is applied to the segments defined by the anatomical landmarks to improve the global registration based on local correlation of segments. The COW method was modified by embedding canonical correlation analysis to allow multiple features along the colon centerline to be used in our implementation. Results: We tested the COW algorithm on a CTC data set of 39 patients with 39 polyps (19 training and 20 test cases) to verify the effectiveness of the proposed COW registration method. Experimental results on the test set show that the COW method significantly reduces the average estimation error in a polyp location between supine and prone scans by 67.6%, from 46.27 ± 52.97 mm to 14.98 ± 11.41 mm, compared to the normalized distance along the colon centerline algorithm (p<0.01). Conclusions: The proposed COW algorithm is more accurate for the colon centerline registration compared to the normalized distance along the colon centerline method and the dynamic time warping method. Comparison results showed that the feature combination of z-coordinate and curvature achieved the lowest registration error compared to the other feature combinations used by COW. The proposed method is tolerant to centerline errors because anatomical landmarks help prevent the propagation of errors across the entire colon centerline.

  11. Methods of Weyl representation of the phase space and canonical transformations

    International Nuclear Information System (INIS)

    Budanov, V.G.

    1986-01-01

    The author studies nonlinear canonical transformations realized in the space of Weyl symbols of quantum operators. The kernels of the transformations, the symbol of the intertwining operator of the group of inhomogeneous point transformations, and the group characters are constructed. The group of PL transformations, which is the free product of the group of point (p) and linear (L) transformations, is considered. The simplest PL complexes relating problems with different potentials, in particular containing a general Darboux transformation of the factorization method, are constructed. The kernel of an arbitrary element of the group PL is found.

  12. Group sparse canonical correlation analysis for genomic data integration.

    Science.gov (United States)

    Lin, Dongdong; Zhang, Jigang; Li, Jingyao; Calhoun, Vince D; Deng, Hong-Wen; Wang, Yu-Ping

    2013-08-12

    The emergence of high-throughput genomic datasets from different sources and platforms (e.g., gene expression, single nucleotide polymorphisms (SNP), and copy number variation (CNV)) has greatly enhanced our understanding of the interplay of these genomic factors as well as their influences on complex diseases. It is challenging to explore the relationship between these different types of genomic data sets. In this paper, we focus on a multivariate statistical method, the canonical correlation analysis (CCA) method, for this problem. The conventional CCA method does not work effectively if the number of data samples is significantly less than that of biomarkers, which is a typical case for genomic data (e.g., SNPs). Sparse CCA (sCCA) methods were introduced to overcome such difficulty, mostly using penalizations with the l-1 norm (CCA-l1) or the combination of the l-1 and l-2 norm (CCA-elastic net). However, they overlook the structural or group effect within genomic data in the analysis, which often exists and is important (e.g., SNPs spanning a gene interact and work together as a group). We propose a new group sparse CCA method (CCA-sparse group) along with an effective numerical algorithm to study the mutual relationship between two different types of genomic data (i.e., SNP and gene expression). We then extend the model to a more general formulation that can include the existing sCCA models. We apply the model to feature/variable selection from two data sets and compare our group sparse CCA method with existing sCCA methods on both simulation and two real datasets (human gliomas data and NCI60 data). We use a graphical representation of the samples with a pair of canonical variates to demonstrate the discriminating characteristic of the selected features. Pathway analysis is further performed for biological interpretation of those features. The CCA-sparse group method incorporates group effects of features into the correlation analysis while performs individual feature

  13. Localized Multiple Kernel Learning Via Sample-Wise Alternating Optimization.

    Science.gov (United States)

    Han, Yina; Yang, Kunde; Ma, Yuanliang; Liu, Guizhong

    2014-01-01

    Our objective is to train support vector machines (SVM)-based localized multiple kernel learning (LMKL), using the alternating optimization between the standard SVM solvers with the local combination of base kernels and the sample-specific kernel weights. The advantage of alternating optimization developed from the state-of-the-art MKL is the SVM-tied overall complexity and the simultaneous optimization on both the kernel weights and the classifier. Unfortunately, in LMKL, the sample-specific character makes the updating of kernel weights a difficult quadratic nonconvex problem. In this paper, starting from a new primal-dual equivalence, the canonical objective on which state-of-the-art methods are based is first decomposed into an ensemble of objectives corresponding to each sample, namely, sample-wise objectives. Then, the associated sample-wise alternating optimization method is conducted, in which the localized kernel weights can be independently obtained by solving their exclusive sample-wise objectives, either linear programming (for l1-norm) or with closed-form solutions (for lp-norm). At test time, the learnt kernel weights for the training data are deployed based on the nearest-neighbor rule. Hence, to guarantee their generality among the test part, we introduce the neighborhood information and incorporate it into the empirical loss when deriving the sample-wise objectives. Extensive experiments on four benchmark machine learning datasets and two real-world computer vision datasets demonstrate the effectiveness and efficiency of the proposed algorithm.

  14. A heat kernel proof of the index theorem for deformation quantization

    Science.gov (United States)

    Karabegov, Alexander

    2017-11-01

    We give a heat kernel proof of the algebraic index theorem for deformation quantization with separation of variables on a pseudo-Kähler manifold. We use normalizations of the canonical trace density of a star product and of the characteristic classes involved in the index formula for which this formula contains no extra constant factors.

  15. A heat kernel proof of the index theorem for deformation quantization

    OpenAIRE

    Karabegov, Alexander

    2017-01-01

    We give a heat kernel proof of the algebraic index theorem for deformation quantization with separation of variables on a pseudo-Kahler manifold. We use normalizations of the canonical trace density of a star product and of the characteristic classes involved in the index formula for which this formula contains no extra constant factors.

  16. Influence of wheat kernel physical properties on the pulverizing process.

    Science.gov (United States)

    Dziki, Dariusz; Cacak-Pietrzak, Grażyna; Miś, Antoni; Jończyk, Krzysztof; Gawlik-Dziki, Urszula

    2014-10-01

    The physical properties of wheat kernels were determined and related to pulverizing performance by correlation analysis. Nineteen samples of wheat cultivars with a similar level of protein content (11.2-12.8% w.b.), obtained from an organic farming system, were used for analysis. The kernels (moisture content 10% w.b.) were pulverized using a laboratory hammer mill equipped with a 1.0 mm round-hole screen. The specific grinding energy ranged from 120 kJ kg(-1) to 159 kJ kg(-1). On the basis of the data obtained, many significant correlations were found between kernel physical properties and the pulverizing process; in particular, the wheat kernel hardness index (obtained with the Single Kernel Characterization System) and vitreousness correlated significantly and positively with the grinding energy indices and the mass fraction of coarse particles (> 0.5 mm). Among the kernel mechanical properties determined by the uniaxial compression test, only the rupture force was correlated with the impact grinding results. The results also showed positive and significant relationships between kernel ash content and grinding energy requirements. On the basis of the wheat physical properties, a multiple linear regression was proposed for predicting the average particle size of the pulverized kernel.
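
    The analysis pipeline described here (pairwise correlations with the grinding indices, followed by a multiple linear regression predicting the average particle size) might look roughly as follows; the file name and column names are hypothetical placeholders, not the study's actual data.

        import pandas as pd
        import statsmodels.api as sm
        from scipy.stats import pearsonr

        # Hypothetical data layout: one row per cultivar sample.
        df = pd.read_csv("wheat_kernels.csv")
        predictors = ["hardness_index", "vitreousness", "ash_content", "rupture_force"]

        # Pairwise Pearson correlations with the specific grinding energy
        for col in predictors:
            r, p = pearsonr(df[col], df["specific_grinding_energy"])
            print(f"{col}: r = {r:.2f}, p = {p:.3f}")

        # Multiple linear regression predicting the average particle size
        X = sm.add_constant(df[predictors])
        print(sm.OLS(df["average_particle_size"], X).fit().summary())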

  17. Wigner Distribution Functions and the Representation of Canonical Transformations in Time-Dependent Quantum Mechanics

    Directory of Open Access Journals (Sweden)

    Marcos Moshinsky

    2008-07-01

    Full Text Available For classical canonical transformations, one can, using the Wigner transformation, pass from their representation in Hilbert space to a kernel in phase space. In this paper it will be discussed how the time-dependence of the uncertainties of the corresponding time-dependent quantum problems can be incorporated into this formalism.

  18. Comparison of JADE and canonical correlation analysis for ECG de-noising.

    Science.gov (United States)

    Kuzilek, Jakub; Kremen, Vaclav; Lhotska, Lenka

    2014-01-01

    This paper explores differences between two methods for blind source separation within the frame of ECG de-noising. The first method is joint approximate diagonalization of eigenmatrices (JADE), which is based on estimation of the fourth-order cross-cumulant tensor and its diagonalization. The second is the statistical method known as canonical correlation analysis, which is based on estimation of correlation matrices between two multidimensional variables. Both methods were used within a framework that combines the blind source separation algorithm with a decision tree. The evaluation was made on a large database of 382 long-term ECG signals and the results were examined. The biggest difference was found in the results for 50 Hz power line interference, where the CCA algorithm completely failed. Thus, the main power of CCA lies in estimation of unstructured noise within the ECG. The JADE algorithm has larger computational complexity, so CCA performed faster when estimating the components.
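
    In ECG work the CCA variant of blind source separation is commonly implemented by running CCA between the multichannel signal and a one-sample-delayed copy of itself, so that the extracted sources are ordered by autocorrelation; dropping the low-autocorrelation sources removes unstructured noise. A compact generic sketch (the decision tree of the evaluated method is not included):

        import numpy as np
        from sklearn.cross_decomposition import CCA

        def cca_denoise(ecg, keep=0.8):
            # ecg: (n_samples, n_channels); keep: fraction of sources retained.
            mean = ecg[:-1].mean(axis=0)
            X = ecg[:-1] - mean                  # signal
            Y = ecg[1:] - ecg[1:].mean(axis=0)   # one-sample-delayed copy
            n_comp = ecg.shape[1]
            cca = CCA(n_components=n_comp, scale=False).fit(X, Y)
            S = X @ cca.x_rotations_             # sources, ordered by autocorrelation
            n_keep = max(1, int(round(keep * n_comp)))
            S[:, n_keep:] = 0.0                  # discard noise-like sources
            return S @ np.linalg.pinv(cca.x_rotations_) + mean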

  19. Multidimensional correlation among plan complexity, quality and deliverability parameters for volumetric-modulated arc therapy using canonical correlation analysis.

    Science.gov (United States)

    Shen, Lanxiao; Chen, Shan; Zhu, Xiaoyang; Han, Ce; Zheng, Xiaomin; Deng, Zhenxiang; Zhou, Yongqiang; Gong, Changfei; Xie, Congying; Jin, Xiance

    2018-03-01

    A multidimensional exploratory statistical method, canonical correlation analysis (CCA), was applied to evaluate the impact of complexity parameters on the plan quality and deliverability of volumetric-modulated arc therapy (VMAT) and to determine parameters for the generation of an ideal VMAT plan. Canonical correlations among complexity, quality and deliverability parameters of VMAT, as well as the contribution weights of different parameters, were investigated with 71 two-arc VMAT nasopharyngeal cancer (NPC) patients, and further verified with 28 one-arc VMAT prostate cancer patients. The average MU and MU per control point (MU/CP) for two-arc VMAT plans were 702.6 ± 55.7 and 3.9 ± 0.3 versus 504.6 ± 99.2 and 5.6 ± 1.1 for one-arc VMAT plans, respectively. The individual volume-based 3D gamma passing rates of the clinical target volume (γCTV) and planning target volume (γPTV) for NPC and prostate cancer patients were 85.7% ± 9.0% vs 92.6% ± 7.8%, and 88.0% ± 7.6% vs 91.2% ± 7.7%, respectively. Plan complexity parameters of NPC patients were correlated with plan quality (P = 0.047) and with the individual volume-based 3D gamma indices γ(IV) (P = 0.01); MU/CP and segment area (SA) per control point (SA/CP) were weighted highly in the correlation with γ(IV), while SA/CP and other SA-related parameters were weighted highly in the correlation with plan quality (coefficients of 0.98, 0.68 and -0.99). Further verification with one-arc VMAT plans demonstrated similar results. In conclusion, MU, SA-related parameters and PTV volume were found to have strong effects on the plan quality and deliverability.

  20. Relativistic four-component calculations of indirect nuclear spin-spin couplings with efficient evaluation of the exchange-correlation response kernel

    Energy Technology Data Exchange (ETDEWEB)

    Křístková, Anežka; Malkin, Vladimir G. [Institute of Inorganic Chemistry, Slovak Academy of Sciences, Dúbravská cesta 9, SK-84536 Bratislava (Slovakia); Komorovsky, Stanislav; Repisky, Michal [Centre for Theoretical and Computational Chemistry, University of Tromsø - The Arctic University of Norway, N-9037 Tromsø (Norway); Malkina, Olga L., E-mail: olga.malkin@savba.sk [Institute of Inorganic Chemistry, Slovak Academy of Sciences, Dúbravská cesta 9, SK-84536 Bratislava (Slovakia); Department of Inorganic Chemistry, Comenius University, Bratislava (Slovakia)

    2015-03-21

    In this work, we report on the development and implementation of a new scheme for efficient calculation of indirect nuclear spin-spin couplings in the framework of the four-component matrix Dirac-Kohn-Sham approach, termed matrix Dirac-Kohn-Sham restricted magnetic balance resolution of identity for J and K, which takes advantage of the previous restricted magnetic balance formalism and the density fitting approach for the rapid evaluation of density functional theory exchange-correlation response kernels. The new approach is aimed at speeding up the bottleneck in the solution of the coupled perturbed equations: evaluation of the matrix elements of the kernel of the exchange-correlation potential. The performance of the new scheme has been tested on a representative set of indirect nuclear spin-spin couplings. The obtained results have been compared with the corresponding results of the reference method with traditional evaluation of the exchange-correlation kernel, i.e., without employing the fitted electron densities. Overall good agreement between both methods was observed, though the new approach tends to give values about 4%-5% higher than the reference method. On average, the solution of the coupled perturbed equations with the new scheme is about 8.5 times faster than with the reference method.

  1. Relativistic four-component calculations of indirect nuclear spin-spin couplings with efficient evaluation of the exchange-correlation response kernel

    International Nuclear Information System (INIS)

    Křístková, Anežka; Malkin, Vladimir G.; Komorovsky, Stanislav; Repisky, Michal; Malkina, Olga L.

    2015-01-01

    In this work, we report on the development and implementation of a new scheme for efficient calculation of indirect nuclear spin-spin couplings in the framework of the four-component matrix Dirac-Kohn-Sham approach, termed matrix Dirac-Kohn-Sham restricted magnetic balance resolution of identity for J and K, which takes advantage of the previous restricted magnetic balance formalism and the density fitting approach for the rapid evaluation of density functional theory exchange-correlation response kernels. The new approach is aimed at speeding up the bottleneck in the solution of the coupled perturbed equations: evaluation of the matrix elements of the kernel of the exchange-correlation potential. The performance of the new scheme has been tested on a representative set of indirect nuclear spin-spin couplings. The obtained results have been compared with the corresponding results of the reference method with traditional evaluation of the exchange-correlation kernel, i.e., without employing the fitted electron densities. Overall good agreement between both methods was observed, though the new approach tends to give values about 4%-5% higher than the reference method. On average, the solution of the coupled perturbed equations with the new scheme is about 8.5 times faster than with the reference method.

  2. Robust Kernel (Cross-) Covariance Operators in Reproducing Kernel Hilbert Space toward Kernel Methods

    OpenAIRE

    Alam, Md. Ashad; Fukumizu, Kenji; Wang, Yu-Ping

    2016-01-01

    To the best of our knowledge, there are no general well-founded robust methods for statistical unsupervised learning. Most of the unsupervised methods explicitly or implicitly depend on the kernel covariance operator (kernel CO) or kernel cross-covariance operator (kernel CCO). They are sensitive to contaminated data, even when using bounded positive definite kernels. First, we propose a robust kernel covariance operator (robust kernel CO) and a robust kernel cross-covariance operator (robust kern...

  3. Sparse and smooth canonical correlation analysis through rank-1 matrix approximation

    Science.gov (United States)

    Aïssa-El-Bey, Abdeldjalil; Seghouane, Abd-Krim

    2017-12-01

    Canonical correlation analysis (CCA) is a well-known technique used to characterize the relationship between two sets of multidimensional variables by finding linear combinations of variables with maximal correlation. Sparse CCA and smooth or regularized CCA are two widely used variants of CCA because of the improved interpretability of the former and the better performance of the latter. So far, the cross-matrix product of the two sets of multidimensional variables has been widely used for the derivation of these variants. In this paper, two new algorithms for sparse CCA and smooth CCA are proposed. These algorithms differ from the existing ones in their derivation, which is based on penalized rank-1 matrix approximation and on orthogonal projectors onto the spaces spanned by the two sets of multidimensional variables instead of the simple cross-matrix product. The performance and effectiveness of the proposed algorithms are tested in simulated experiments. These results show that they outperform state-of-the-art sparse CCA algorithms.
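
    The rank-1 viewpoint leads to a simple alternating scheme: power-iteration updates with soft-thresholding of the left and right vectors. The generic sketch below operates on the plain cross-matrix product, i.e. the construction the paper replaces with orthogonal projectors; it is in the spirit of penalized matrix decomposition rather than the proposed algorithms.

        import numpy as np

        def soft_threshold(v, lam):
            return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

        def sparse_cca_rank1(X, Y, lam_u=0.1, lam_v=0.1, n_iter=200):
            # Penalized rank-1 approximation of K = X^T Y: alternate thresholded
            # power-iteration updates of the weight vectors u and v.
            X = X - X.mean(axis=0)
            Y = Y - Y.mean(axis=0)
            K = X.T @ Y
            u = np.ones(K.shape[0]) / np.sqrt(K.shape[0])
            v = np.ones(K.shape[1]) / np.sqrt(K.shape[1])
            for _ in range(n_iter):
                u = soft_threshold(K @ v, lam_u)
                u /= np.linalg.norm(u) + 1e-12
                v = soft_threshold(K.T @ u, lam_v)
                v /= np.linalg.norm(v) + 1e-12
            rho = np.corrcoef(X @ u, Y @ v)[0, 1]
            return u, v, rho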

  4. Extension of Kirkwood-Buff theory to the canonical ensemble

    Science.gov (United States)

    Rogers, David M.

    2018-02-01

    Kirkwood-Buff (KB) integrals are notoriously difficult to converge from a canonical simulation because they require estimating the grand-canonical radial distribution. The same essential difficulty is encountered when attempting to estimate the direct correlation function of Ornstein-Zernike theory by inverting the pair correlation functions. We present a new theory that applies to the entire, finite, simulation volume, so that no cutoff issues arise at all. The theory gives the direct correlation function for closed systems, while smoothness of the direct correlation function in reciprocal space allows calculating canonical KB integrals via a well-posed extrapolation to the origin. The present analysis method represents an improvement over previous work because it makes use of the entire simulation volume and its convergence can be accelerated using known properties of the direct correlation function. Using known interaction energy functions can make this extrapolation near perfect accuracy in the low-density case. Because finite size effects are stronger in the canonical than in the grand-canonical ensemble, we state ensemble correction formulas for the chemical potential and the KB coefficients. The new theory is illustrated with both analytical and simulation results on the 1D Ising model and a supercritical Lennard-Jones fluid. For the latter, the finite-size corrections are shown to be small.
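
    The quantity at stake is the running KB integral computed from a simulated radial distribution function; the naive canonical-ensemble estimate below is the poorly converging baseline to which the paper's reciprocal-space extrapolation and finite-size corrections would be applied.

        import numpy as np

        def running_kb_integral(r, g):
            # G(R) = 4*pi * integral_0^R (g(r) - 1) r^2 dr, accumulated on the
            # radial grid by the trapezoidal rule.
            integrand = 4.0 * np.pi * (g - 1.0) * r ** 2
            steps = 0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r)
            return np.concatenate(([0.0], np.cumsum(steps)))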

  5. Viscozyme L pretreatment on palm kernels improved the aroma of palm kernel oil after kernel roasting.

    Science.gov (United States)

    Zhang, Wencan; Leong, Siew Mun; Zhao, Feifei; Zhao, Fangju; Yang, Tiankui; Liu, Shaoquan

    2018-05-01

    With an interest to enhance the aroma of palm kernel oil (PKO), Viscozyme L, an enzyme complex containing a wide range of carbohydrases, was applied to alter the carbohydrates in palm kernels (PK) to modulate the formation of volatiles upon kernel roasting. After Viscozyme treatment, the content of simple sugars and free amino acids in PK increased by 4.4-fold and 4.5-fold, respectively. After kernel roasting and oil extraction, significantly more 2,5-dimethylfuran, 2-[(methylthio)methyl]-furan, 1-(2-furanyl)-ethanone, 1-(2-furyl)-2-propanone, 5-methyl-2-furancarboxaldehyde and 2-acetyl-5-methylfuran but less 2-furanmethanol and 2-furanmethanol acetate were found in treated PKO; the correlation between their formation and simple sugar profile was estimated by using partial least square regression (PLS1). Obvious differences in pyrroles and Strecker aldehydes were also found between the control and treated PKOs. Principal component analysis (PCA) clearly discriminated the treated PKOs from that of control PKOs on the basis of all volatile compounds. Such changes in volatiles translated into distinct sensory attributes, whereby treated PKO was more caramelic and burnt after aqueous extraction and more nutty, roasty, caramelic and smoky after solvent extraction. Copyright © 2018 Elsevier Ltd. All rights reserved.

  6. A learning algorithm for adaptive canonical correlation analysis of several data sets.

    Science.gov (United States)

    Vía, Javier; Santamaría, Ignacio; Pérez, Jesús

    2007-01-01

    Canonical correlation analysis (CCA) is a classical tool in statistical analysis to find the projections that maximize the correlation between two data sets. In this work we propose a generalization of CCA to several data sets, which is shown to be equivalent to the classical maximum variance (MAXVAR) generalization proposed by Kettenring. The reformulation of this generalization as a set of coupled least squares regression problems is exploited to develop a neural structure for CCA. In particular, the proposed CCA model is a two layer feedforward neural network with lateral connections in the output layer to achieve the simultaneous extraction of all the CCA eigenvectors through deflation. The CCA neural model is trained using a recursive least squares (RLS) algorithm. Finally, the convergence of the proposed learning rule is proved by means of stochastic approximation techniques and their performance is analyzed through simulations.

  7. Prediction of East African Seasonal Rainfall Using Simplex Canonical Correlation Analysis.

    Science.gov (United States)

    Ntale, Henry K.; Yew Gan, Thian; Mwale, Davison

    2003-06-01

    A linear statistical model, canonical correlation analysis (CCA), was driven by the Nelder-Mead simplex optimization algorithm (called CCA-NMS) to predict the standardized seasonal rainfall totals of East Africa at 3-month lead time using SLP and SST anomaly fields of the Indian and Atlantic Oceans combined together by 24 simplex optimized weights, and then `reduced' by the principal component analysis. Applying the optimized weights to the predictor fields produced better March-April-May (MAM) and September-October-November (SON) seasonal rain forecasts than a direct application of the same, unweighted predictor fields to CCA at both calibration and validation stages. Northeastern Tanzania and south-central Kenya had the best SON prediction results with both validation correlation and Hanssen-Kuipers skill scores exceeding +0.3. The MAM season was better predicted in the western parts of East Africa. The CCA correlation maps showed that low SON rainfall in East Africa is associated with cold SSTs off the Somali coast and the Benguela (Angola) coast, and low MAM rainfall is associated with a buildup of low SSTs in the Indian Ocean adjacent to East Africa and the Gulf of Guinea.

  8. Sparse Canonical Correlation Analysis via Truncated ℓ1-norm with Application to Brain Imaging Genetics.

    Science.gov (United States)

    Du, Lei; Zhang, Tuo; Liu, Kefei; Yao, Xiaohui; Yan, Jingwen; Risacher, Shannon L; Guo, Lei; Saykin, Andrew J; Shen, Li

    2016-01-01

    Discovering bi-multivariate associations between genetic markers and neuroimaging quantitative traits is a major task in brain imaging genetics. Sparse Canonical Correlation Analysis (SCCA) is a popular technique in this area for its powerful capability in identifying bi-multivariate relationships coupled with feature selection. The existing SCCA methods impose either the ℓ1-norm or its variants. The ℓ0-norm is more desirable; however, it remains unexplored since ℓ0-norm minimization is NP-hard. In this paper, we impose the truncated ℓ1-norm to improve the performance of the ℓ1-norm based SCCA methods. In addition, we propose two efficient optimization algorithms and prove their convergence. The experimental results, compared with two benchmark methods, show that our method identifies better and meaningful canonical loading patterns in both simulated and real imaging genetic analyses.

  9. Data-based diffraction kernels for surface waves from convolution and correlation processes through active seismic interferometry

    Science.gov (United States)

    Chmiel, Malgorzata; Roux, Philippe; Herrmann, Philippe; Rondeleux, Baptiste; Wathelet, Marc

    2018-05-01

    We investigated the construction of diffraction kernels for surface waves using two-point convolution and/or correlation from land active seismic data recorded in the context of exploration geophysics. The high density of controlled sources and receivers, combined with the application of the reciprocity principle, allows us to retrieve two-dimensional phase-oscillation diffraction kernels (DKs) of surface waves between any two source or receiver points in the medium at each frequency (up to 15 Hz, at least). These DKs are purely data-based as no model calculations and no synthetic data are needed. They naturally emerge from the interference patterns of the recorded wavefields projected on the dense array of sources and/or receivers. The DKs are used to obtain multi-mode dispersion relations of Rayleigh waves, from which near-surface shear velocity can be extracted. Using convolution versus correlation with a grid of active sources is an important step in understanding the physics of the retrieval of surface wave Green's functions. This provides the foundation for future studies based on noise sources or active sources with a sparse spatial distribution.

  10. Relations between canonical and non-canonical inflation

    Energy Technology Data Exchange (ETDEWEB)

    Gwyn, Rhiannon [Max-Planck-Institut fuer Gravitationsphysik (Albert-Einstein-Institut), Potsdam (Germany); Rummel, Markus [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Westphal, Alexander [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany). Theory Group

    2012-12-15

    We look for potential observational degeneracies between canonical and non-canonical models of inflation of a single field φ. Non-canonical inflationary models are characterized by higher than linear powers of the standard kinetic term X in the effective Lagrangian p(X,φ) and arise for instance in the context of the Dirac-Born-Infeld (DBI) action in string theory. An on-shell transformation is introduced that transforms non-canonical inflationary theories to theories with a canonical kinetic term. The 2-point function observables of the original non-canonical theory and its canonical transform are found to match in the case of DBI inflation.

  11. A canonical correlation analysis based EMG classification algorithm for eliminating electrode shift effect.

    Science.gov (United States)

    Zhe Fan; Zhong Wang; Guanglin Li; Ruomei Wang

    2016-08-01

    Motion classification systems based on surface electromyography (sEMG) pattern recognition have achieved good results under experimental conditions, but clinical implementation and practical application remain a challenge. Many factors contribute to the difficulty of clinical use of EMG-based dexterous control. The most obvious and important is the noise in the EMG signal caused by electrode shift, muscle fatigue, motion artifact, inherent instability of the signal and biological signals such as the electrocardiogram. In this paper, a novel method based on Canonical Correlation Analysis (CCA) was developed to eliminate the reduction of classification accuracy caused by electrode shift. The average classification accuracy of our method was above 95% for the healthy subjects. In the process, we validated the influence of electrode shift on motion classification accuracy and discovered a strong correlation, with a correlation coefficient of >0.9, between shifted-position data and normal-position data.

  12. Canonical correlation analysis for gene-based pleiotropy discovery.

    Directory of Open Access Journals (Sweden)

    Jose A Seoane

    2014-10-01

    Full Text Available Genome-wide association studies have identified a wealth of genetic variants involved in complex traits and multifactorial diseases. There is now considerable interest in testing variants for association with multiple phenotypes (pleiotropy) and for testing multiple variants for association with a single phenotype (gene-based association tests). Such approaches can increase statistical power by combining evidence for association over multiple phenotypes or genetic variants, respectively. Canonical Correlation Analysis (CCA) measures the correlation between two sets of multidimensional variables, and thus offers the potential to combine these two approaches. To apply CCA, we must restrict the number of attributes relative to the number of samples. Hence we consider modules of genetic variation that can comprise a gene, a pathway or another biologically relevant grouping, and/or a set of phenotypes. In order to do this, we use an attribute selection strategy based on a binary genetic algorithm. Applied to a UK-based prospective cohort study of 4286 women (the British Women's Heart and Health Study), we find improved statistical power in the detection of previously reported genetic associations, and identify a number of novel pleiotropic associations between genetic variants and phenotypes. New discoveries include gene-based association of NSF with triglyceride levels and of several genes (ACSM3, ERI2, IL18RAP, IL23RAP and NRG1) with left ventricular hypertrophy phenotypes. In multiple-phenotype analyses we find association of NRG1 with left ventricular hypertrophy phenotypes, fibrinogen and urea, and pleiotropic relationships of F7 and F10 with Factor VII, Factor IX and cholesterol levels.

  13. Personality Traits as Predictors of Shopping Motivations and Behaviors: A Canonical Correlation Analysis

    Directory of Open Access Journals (Sweden)

    Ali Gohary

    2014-10-01

    Full Text Available This study examines the relationship between the Big Five personality traits and shopping motivation variables consisting of compulsive and impulsive buying and hedonic and utilitarian shopping values. Two hundred and forty-seven college students were recruited to participate in this research. Bivariate correlation demonstrated an overlap between personality traits; consequently, canonical correlation was performed to account for this overlap. The results of multiple regression analysis suggested conscientiousness, neuroticism and openness as predictors of compulsive buying, impulsive buying and utilitarian shopping values. In addition, the results showed significant differences between males and females on conscientiousness, neuroticism, openness, compulsive buying and hedonic shopping value. Finally, using hierarchical regression analysis, we examined sex as a moderator between the Big Five personality traits and the shopping variables, but did not find sufficient evidence for such an effect.

  14. The relationship between procrastination, learning strategies and statistics anxiety among Iranian college students: a canonical correlation analysis.

    Science.gov (United States)

    Vahedi, Shahrum; Farrokhi, Farahman; Gahramani, Farahnaz; Issazadegan, Ali

    2012-01-01

    Approximately 66-80% of graduate students experience statistics anxiety, and some researchers propose that many students identify statistics courses as the most anxiety-inducing courses in their academic curriculums. As such, it is likely that statistics anxiety is, in part, responsible for many students delaying enrollment in these courses for as long as possible. This paper proposes a canonical model treating academic procrastination (AP) and learning strategies (LS) as predictor variables and statistics anxiety (SA) as the set of explained variables. A questionnaire survey was used for data collection, and 246 female college students participated in this study. To examine the mutually independent relations between the procrastination, learning strategies and statistics anxiety variables, a canonical correlation analysis was computed. Findings show that two canonical functions were statistically significant. The set of variables (metacognitive self-regulation, source management, preparing homework, preparing for tests and preparing term papers) helped predict changes in statistics anxiety with respect to fearful behavior, attitude towards math and class, and performance, but not anxiety. These findings could be used in educational and psychological interventions in the context of statistics anxiety reduction.

  15. Canonical variate regression.

    Science.gov (United States)

    Luo, Chongliang; Liu, Jin; Dey, Dipak K; Chen, Kun

    2016-07-01

    In many fields, multi-view datasets, measuring multiple distinct but interrelated sets of characteristics on the same set of subjects, together with data on certain outcomes or phenotypes, are routinely collected. The objective in such a problem is often two-fold: both to explore the association structures of multiple sets of measurements and to develop a parsimonious model for predicting the future outcomes. We study a unified canonical variate regression framework to tackle the two problems simultaneously. The proposed criterion integrates multiple canonical correlation analysis with predictive modeling, balancing between the association strength of the canonical variates and their joint predictive power on the outcomes. Moreover, the proposed criterion seeks multiple sets of canonical variates simultaneously to enable the examination of their joint effects on the outcomes, and is able to handle multivariate and non-Gaussian outcomes. An efficient algorithm based on variable splitting and Lagrangian multipliers is proposed. Simulation studies show the superior performance of the proposed approach. We demonstrate the effectiveness of the proposed approach in an [Formula: see text] intercross mice study and an alcohol dependence study. © The Author 2016. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  16. Social inequality, lifestyles and health - a non-linear canonical correlation analysis based on the approach of Pierre Bourdieu.

    Science.gov (United States)

    Grosse Frie, Kirstin; Janssen, Christian

    2009-01-01

    Based on the theoretical and empirical approach of Pierre Bourdieu, a multivariate non-linear method is introduced as an alternative way to analyse the complex relationships between social determinants and health. The analysis is based on face-to-face interviews with 695 randomly selected respondents aged 30 to 59. Variables regarding socio-economic status, life circumstances, lifestyles, health-related behaviour and health were chosen for the analysis. In order to determine whether the respondents can be differentiated and described based on these variables, a non-linear canonical correlation analysis (OVERALS) was performed. The results can be described in three dimensions; the eigenvalues add up to a fit of 1.444, which can be interpreted as approximately 50% of explained variance. The three-dimensional space illustrates correspondences between variables and provides a framework for interpretation based on latent dimensions, which can be described by age, education, income and gender. Using non-linear canonical correlation analysis, health characteristics can be analysed in conjunction with socio-economic conditions and lifestyles. Based on Bourdieu's theoretical approach, the complex correlations between these variables can be interpreted and presented more substantially.

  17. fCCAC: functional canonical correlation analysis to evaluate covariance between nucleic acid sequencing datasets.

    Science.gov (United States)

    Madrigal, Pedro

    2017-03-01

    Computational evaluation of variability across DNA or RNA sequencing datasets is a crucial step in genomic science, as it allows both evaluation of the reproducibility of biological or technical replicates and comparison of different datasets to identify their potential correlations. Here we present fCCAC, an application of functional canonical correlation analysis to assess covariance of nucleic acid sequencing datasets such as chromatin immunoprecipitation followed by deep sequencing (ChIP-seq). We show how this method differs from other measures of correlation, and exemplify how it can reveal shared covariance between histone modifications and DNA binding proteins, such as the relationship between the H3K4me3 chromatin mark and its epigenetic writers and readers. An R/Bioconductor package is available at http://bioconductor.org/packages/fCCAC/ . pmb59@cam.ac.uk. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.

  18. Discrete canonical transforms that are Hadamard matrices

    International Nuclear Information System (INIS)

    Healy, John J; Wolf, Kurt Bernardo

    2011-01-01

    The group Sp(2,R) of symplectic linear canonical transformations has an integral kernel which has quadratic and linear phases, and which is realized by the geometric paraxial optical model. The discrete counterpart of this model is a finite Hamiltonian system that acts on N-point signals through N x N matrices whose elements also have a constant absolute value, although they do not form a representation of that group. Those matrices that are also unitary are Hadamard matrices. We investigate the manifolds of these N x N matrices under the Sp(2,R) equivalence imposed by the model, and find them to be on two-sided cosets. By means of an algorithm we determine representatives that lead to collections of mutually unbiased bases.
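
    For reference, the continuum Sp(2,R) linear canonical transform kernel that these N x N matrices discretize has, for a symplectic matrix with entries (a, b; c, d), ad - bc = 1 and b not equal to 0, the commonly quoted quadratic-phase form below; sign and normalization conventions vary between references, so treat this as one standard convention rather than the paper's exact notation.

      K_M(x, x') = \frac{1}{\sqrt{2\pi i\, b}} \exp\!\left[ \frac{i}{2b}\left( a\, x'^2 - 2\, x\, x' + d\, x^2 \right) \right]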

  19. The Relationship Between Procrastination, Learning Strategies and Statistics Anxiety Among Iranian College Students: A Canonical Correlation Analysis

    Science.gov (United States)

    Vahedi, Shahrum; Farrokhi, Farahman; Gahramani, Farahnaz; Issazadegan, Ali

    2012-01-01

    Objective: Approximately 66-80% of graduate students experience statistics anxiety, and some researchers propose that many students identify statistics courses as the most anxiety-inducing courses in their academic curriculums. As such, it is likely that statistics anxiety is, in part, responsible for many students delaying enrollment in these courses for as long as possible. This paper proposes a canonical model treating academic procrastination (AP) and learning strategies (LS) as predictor variables and statistics anxiety (SA) as the explained variable set. Methods: A questionnaire survey was used for data collection, and 246 female college students participated in this study. To examine the mutually independent relations between the procrastination, learning strategies and statistics anxiety variables, a canonical correlation analysis was computed. Results: Findings show that two canonical functions were statistically significant. The set of variables (metacognitive self-regulation, source management, preparing homework, preparing for tests and preparing term papers) helped predict changes in statistics anxiety with respect to fearful behavior, attitude towards math and class, and performance, but not anxiety. Conclusion: These findings could be used in educational and psychological interventions in the context of statistics anxiety reduction. PMID:24644468

  20. SU-F-P-64: The Impact of Plan Complexity Parameters On the Plan Quality and Deliverability of Volumetric Modulated Arc Therapy with Canonical Correlation Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Jin, X; Yi, J; Xie, C [The 1st Affiliated Hospital of Wenzhou Medical University, Wenzhou, Zhejiang (China)

    2016-06-15

    Purpose: To evaluate the impact of complexity indices on the plan quality and deliverability of volumetric modulated arc therapy (VMAT), and to determine the most significant parameters in the generation of an ideal VMAT plan. Methods: A multi-dimensional exploratory statistical method, canonical correlation analysis (CCA), was adopted to study the correlations between VMAT parameters of complexity, quality and deliverability, as well as their contribution weights, using 32 two-arc VMAT nasopharyngeal cancer (NPC) patients and 31 one-arc VMAT prostate cancer patients. Results: The MU per arc (MU/Arc) and MU per control point (MU/CP) of NPC were 337.8±25.2 and 3.7±0.3, respectively, which were significantly lower than those of prostate cancer patients (MU/Arc: 506.9±95.4, MU/CP: 5.6±1.1). The plan complexity indices indicated that two-arc VMAT plans were more complex than one-arc VMAT plans. Plan quality comparison confirmed that one-arc VMAT plans had a higher quality than two-arc VMAT plans. CCA results implied that plan complexity parameters were highly correlated with plan quality, with the first two canonical correlations of 0.96 and 0.88 (both p<0.001), and significantly correlated with deliverability, with the first canonical correlation of 0.79 (p<0.001); plan quality and deliverability were also correlated, with the first canonical correlation of 0.71 (p=0.02). The complexity parameters MU/CP, segment area (SA) per CP, percent of MU/CP less than 3, and planning target volume (PTV) were weighted heavily in the correlations with plan quality and deliverability. Similar results were obtained from the individual NPC and prostate CCA analyses. Conclusion: Relationships between complexity, quality, and deliverability parameters were investigated with CCA. MU- and SA-related parameters and PTV volume were found to have a strong effect on plan quality and deliverability. The presented correlations among the different quantified parameters could be used to improve the plan quality and the efficiency

  1. A canonical approach to forces in molecules

    Energy Technology Data Exchange (ETDEWEB)

    Walton, Jay R. [Department of Mathematics, Texas A& M University, College Station, TX 77843-3368 (United States); Rivera-Rivera, Luis A., E-mail: rivera@chem.tamu.edu [Department of Chemistry, Texas A& M University, College Station, TX 77843-3255 (United States); Lucchese, Robert R.; Bevan, John W. [Department of Chemistry, Texas A& M University, College Station, TX 77843-3255 (United States)

    2016-08-02

    Highlights: • Derivation of canonical representation of molecular force. • Correlation of derivations with accurate results from Born–Oppenheimer potentials. • Extension of methodology to Mg{sub 2}, benzene dimer, and water dimer. - Abstract: In previous studies, we introduced a generalized formulation for canonical transformations and spectra to investigate the concept of canonical potentials strictly within the Born–Oppenheimer approximation. Data for the most accurate available ground electronic state pairwise intramolecular potentials in H{sub 2}{sup +}, H{sub 2}, HeH{sup +}, and LiH were used to rigorously establish such conclusions. Now, a canonical transformation is derived for the molecular force, F(R), with H{sub 2}{sup +} as molecular reference. These transformations are demonstrated to be inherently canonical to high accuracy but distinctly different from those corresponding to the respective potentials of H{sub 2}, HeH{sup +}, and LiH. In this paper, we establish the canonical nature of the molecular force, which is key to fundamental generalization of canonical approaches to molecular bonding. As further examples, Mg{sub 2}, the benzene dimer and the water dimer are also considered within the radial limit as applications of the current methodology.

  2. Canonical correlation analysis of infant's size at birth and maternal factors: a study in rural northwest Bangladesh.

    Science.gov (United States)

    Kabir, Alamgir; Merrill, Rebecca D; Shamim, Abu Ahmed; Klemn, Rolf D W; Labrique, Alain B; Christian, Parul; West, Keith P; Nasser, Mohammed

    2014-01-01

    This analysis was conducted to explore the association between 5 birth size measurements (weight, length and head, chest and mid-upper arm [MUAC] circumferences) as dependent variables and 10 maternal factors as independent variables using canonical correlation analysis (CCA). CCA considers simultaneously sets of dependent and independent variables and, thus, generates a substantially reduced type 1 error. Data were from women delivering a singleton live birth (n = 14,506) while participating in a double-masked, cluster-randomized, placebo-controlled maternal vitamin A or β-carotene supplementation trial in rural Bangladesh. The first canonical correlation was 0.42 (P<0.001), demonstrating a moderate positive correlation mainly between the 5 birth size measurements and 5 maternal factors (preterm delivery, early pregnancy MUAC, infant sex, age and parity). A significant interaction between infant sex and preterm delivery on birth size was also revealed from the score plot. Thirteen percent of birth size variability was explained by the composite score of the maternal factors (Redundancy, RY/X = 0.131). Given an ability to accommodate numerous relationships and reduce complexities of multiple comparisons, CCA identified the 5 maternal variables able to predict birth size in this rural Bangladesh setting. CCA may offer an efficient, practical and inclusive approach to assessing the association between two sets of variables, addressing the innate complexity of interactions.
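
    A minimal sketch of the Stewart-Love redundancy index R_{Y/X} reported above, that is, the proportion of variance in the dependent set explained by the opposite set's canonical variates, computed on synthetic data; the dimensions and the injected association are illustrative assumptions, not the trial data.

      # Redundancy index R_{Y|X} from a fitted CCA (illustrative synthetic data).
      import numpy as np
      from sklearn.cross_decomposition import CCA

      rng = np.random.default_rng(2)
      n, p, q, k = 500, 10, 5, 3          # n samples, p "maternal factors", q "birth-size measures", k pairs
      X = rng.normal(size=(n, p))
      Y = 0.4 * X[:, :q] + rng.normal(size=(n, q))    # weakly related dependent set

      cca = CCA(n_components=k)
      U, V = cca.fit_transform(X, Y)      # U: X canonical variates, V: Y canonical variates

      redundancy = 0.0
      for j in range(k):
          rho = np.corrcoef(U[:, j], V[:, j])[0, 1]                            # canonical correlation
          loadings = [np.corrcoef(Y[:, m], V[:, j])[0, 1] for m in range(q)]   # Y structure loadings
          redundancy += rho**2 * np.mean(np.square(loadings))                  # Y variance explained via X
      print("R_{Y|X} approx.:", redundancy)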

  3. Canonical correlation analysis of infant's size at birth and maternal factors: a study in rural northwest Bangladesh.

    Directory of Open Access Journals (Sweden)

    Alamgir Kabir

    This analysis was conducted to explore the association between 5 birth size measurements (weight, length and head, chest and mid-upper arm [MUAC] circumferences) as dependent variables and 10 maternal factors as independent variables using canonical correlation analysis (CCA). CCA considers simultaneously sets of dependent and independent variables and, thus, generates a substantially reduced type 1 error. Data were from women delivering a singleton live birth (n = 14,506) while participating in a double-masked, cluster-randomized, placebo-controlled maternal vitamin A or β-carotene supplementation trial in rural Bangladesh. The first canonical correlation was 0.42 (P<0.001), demonstrating a moderate positive correlation mainly between the 5 birth size measurements and 5 maternal factors (preterm delivery, early pregnancy MUAC, infant sex, age and parity). A significant interaction between infant sex and preterm delivery on birth size was also revealed from the score plot. Thirteen percent of birth size variability was explained by the composite score of the maternal factors (Redundancy, RY/X = 0.131). Given an ability to accommodate numerous relationships and reduce complexities of multiple comparisons, CCA identified the 5 maternal variables able to predict birth size in this rural Bangladesh setting. CCA may offer an efficient, practical and inclusive approach to assessing the association between two sets of variables, addressing the innate complexity of interactions.

  4. Semiotic Analysis of Canon Camera Advertisements

    OpenAIRE

    INDRAWATI, SUSAN

    2015-01-01

    Keywords: Semiotic Analysis, Canon Camera, Advertisement. Advertisement is a medium to deliver messages to people with the goal of influencing them to use certain products. Semiotics is applied to develop correlations among the elements used in an advertisement. In this study, the writer chose the semiotic analysis of a Canon camera advertisement as the subject to be analyzed, using a semiotic study based on Peirce's theory. The semiotic approach is employed in interpreting the sign, symbol, icon, and index ...

  5. Analysis of input variables of an artificial neural network using bivariate correlation and canonical correlation

    Energy Technology Data Exchange (ETDEWEB)

    Costa, Valter Magalhaes; Pereira, Iraci Martinez, E-mail: valter.costa@usp.b [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2011-07-01

    The monitoring of variables and the diagnosis of sensor faults in nuclear power plants or process industries are very important, because an early diagnosis allows the fault to be corrected and thus prevents production stoppages, improving operator safety and avoiding economic losses. The objective of this work is to build, using bivariate correlation and canonical correlation, the set of input variables of an artificial neural network that allows the greatest number of variables to be monitored. This methodology was applied to the IEA-R1 Research Reactor at IPEN. Initially, for the input set of the neural network we selected the variables nuclear power, primary circuit flow rate, control/safety rod position and differential pressure in the reactor core, because almost all of the monitored variables are related to the variables just described, or their behaviour can result from the interaction of two or more of them. The nuclear power is related to the increase and decrease of temperatures as well as to the amount of radiation due to fission of the uranium; the rods control the power and influence the amount of radiation and the increase and decrease of temperatures; the primary circuit flow rate has the function of transporting energy by removing heat from the core. An artificial neural network was trained and the results were satisfactory: the IEA-R1 Data Acquisition System monitors 64 variables and, with a set of 9 input variables resulting from the correlation analysis, it was possible to monitor 51 variables. (author)

  6. Analysis of input variables of an artificial neural network using bivariate correlation and canonical correlation

    International Nuclear Information System (INIS)

    Costa, Valter Magalhaes; Pereira, Iraci Martinez

    2011-01-01

    The monitoring of variables and the diagnosis of sensor faults in nuclear power plants or process industries are very important, because an early diagnosis allows the fault to be corrected and thus prevents production stoppages, improving operator safety and avoiding economic losses. The objective of this work is to build, using bivariate correlation and canonical correlation, the set of input variables of an artificial neural network that allows the greatest number of variables to be monitored. This methodology was applied to the IEA-R1 Research Reactor at IPEN. Initially, for the input set of the neural network we selected the variables nuclear power, primary circuit flow rate, control/safety rod position and differential pressure in the reactor core, because almost all of the monitored variables are related to the variables just described, or their behaviour can result from the interaction of two or more of them. The nuclear power is related to the increase and decrease of temperatures as well as to the amount of radiation due to fission of the uranium; the rods control the power and influence the amount of radiation and the increase and decrease of temperatures; the primary circuit flow rate has the function of transporting energy by removing heat from the core. An artificial neural network was trained and the results were satisfactory: the IEA-R1 Data Acquisition System monitors 64 variables and, with a set of 9 input variables resulting from the correlation analysis, it was possible to monitor 51 variables. (author)

  7. The canonical and grand canonical models for nuclear ...

    Indian Academy of Sciences (India)

    Many observables seen in intermediate energy heavy-ion collisions can be explained on the basis of statistical equilibrium. Calculations based on statistical equilibrium can be implemented in microcanonical ensemble, canonical ensemble or grand canonical ensemble. This paper deals with calculations with canonical ...

  8. Canonical Correlation Analysis Between Supply Chain Quality Management And Competitive Advantages

    Directory of Open Access Journals (Sweden)

    Chaghooshi Ahmad Jafarnejad

    2015-06-01

    The competitive environment of today's organizations is more extensive than ever, and the major concern for managers is to preserve and promote a sustainable competitive advantage. Companies are obliged to improve their product quality and to cooperate extensively and closely with the other companies involved in the supply chain of their products. Supply chain quality management (SCQM) is a systematic approach to performance improvement that integrates supply chain partners, uses opportunities in the best way, establishes linkages between upstream and downstream flows, and focuses on creating value and satisfaction for intermediaries and final customers. Furthermore, achieving competitive advantages enables an organization to create a remarkable position in the market and differentiate itself from competitors. This paper aims to understand the relationships between SCQM and competitive advantage. Sixty-eight experts from 25 companies in the Sahami Alyaf (SA) supply chain participated in this research. The research method used for this article is descriptive correlation. To assess the relationships between the criteria, canonical correlation analysis was used. The results show that SCQM and competitive advantage have a meaningful relationship. They also show that the most important variables in the linear combinations of SCQM and competitive advantage are "customer focus" and "quality," respectively.

  9. Personality disorders in substance abusers: Validation of the DIP-Q through principal components factor analysis and canonical correlation analysis

    Directory of Open Access Journals (Sweden)

    Hesse Morten

    2005-05-01

    Background: Personality disorders are common in substance abusers. Self-report questionnaires that can aid in the assessment of personality disorders are commonly used in assessment, but are rarely validated. Methods: The Danish DIP-Q as a measure of co-morbid personality disorders in substance abusers was validated through principal components factor analysis and canonical correlation analysis. A 4-component structure was constructed based on 238 protocols, representing antagonism, neuroticism, introversion and conscientiousness. The structure was compared with (a) a 4-factor solution from the DIP-Q in a sample of Swedish drug and alcohol abusers (N = 133), and (b) a consensus 4-component solution based on a meta-analysis of published correlation matrices of dimensional personality disorder scales. Results: It was found that the 4-factor model of personality was congruent across the Danish and Swedish samples, and showed good congruence with the consensus model. A canonical correlation analysis was conducted on a subset of the Danish sample with staff ratings of pathology. Three factors that correlated highly between the two variable sets were found. These variables were highly similar to the first three factors from the principal components analysis: antagonism, neuroticism and introversion. Conclusion: The findings support the validity of the DIP-Q as a measure of DSM-IV personality disorders in substance abusers.

  10. Locally linear approximation for Kernel methods : the Railway Kernel

    OpenAIRE

    Muñoz, Alberto; González, Javier

    2008-01-01

    In this paper we present a new kernel, the Railway Kernel, that works properly for general (nonlinear) classification problems, with the interesting property that it acts locally as a linear kernel. In this way, we avoid potential problems due to the use of a general purpose kernel, like the RBF kernel, such as the high dimension of the induced feature space. As a consequence, following our methodology the number of support vectors is much lower and, therefore, the generalization capab...

  11. Canonical Correlational Models of Students’ Perceptions of Assessment Tasks, Motivational Orientations, and Learning Strategies

    Directory of Open Access Journals (Sweden)

    Hussain Alkharusi

    2013-01-01

    The present study aims at deriving correlational models of students' perceptions of assessment tasks, motivational orientations, and learning strategies using canonical analyses. Data were collected from 198 Omani tenth grade students. Results showed that high degrees of authenticity and transparency in assessment were associated with positive students' self-efficacy and task value. Also, high degrees of authenticity, transparency, and diversity in assessment were associated with a strong reliance on deep learning strategies; whereas a high degree of congruence with planned learning and a low degree of authenticity were associated with more reliance on surface learning strategies. Implications for classroom assessment practice and research were discussed.

  12. Multivariable Christoffel-Darboux Kernels and Characteristic Polynomials of Random Hermitian Matrices

    Directory of Open Access Journals (Sweden)

    Hjalmar Rosengren

    2006-12-01

    We study multivariable Christoffel-Darboux kernels, which may be viewed as reproducing kernels for antisymmetric orthogonal polynomials, and also as correlation functions for products of characteristic polynomials of random Hermitian matrices. Using their interpretation as reproducing kernels, we obtain simple proofs of Pfaffian and determinant formulas, as well as Schur polynomial expansions, for such kernels. In subsequent work, these results are applied in combinatorics (enumeration of marked shifted tableaux) and number theory (representation of integers as sums of squares).
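
    As a reminder of the single-variable object that these multivariable kernels generalize (notation here may differ from the paper): for orthonormal polynomials p_0, ..., p_{n+1} with leading coefficients k_0, ..., k_{n+1}, the classical Christoffel-Darboux kernel is

      K_n(x, y) = \sum_{j=0}^{n} p_j(x)\, p_j(y) = \frac{k_n}{k_{n+1}} \cdot \frac{p_{n+1}(x)\, p_n(y) - p_n(x)\, p_{n+1}(y)}{x - y}.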

  13. Kernel Machine SNP-set Testing under Multiple Candidate Kernels

    Science.gov (United States)

    Wu, Michael C.; Maity, Arnab; Lee, Seunggeun; Simmons, Elizabeth M.; Harmon, Quaker E.; Lin, Xinyi; Engel, Stephanie M.; Molldrem, Jeffrey J.; Armistead, Paul M.

    2013-01-01

    Joint testing for the cumulative effect of multiple single nucleotide polymorphisms grouped on the basis of prior biological knowledge has become a popular and powerful strategy for the analysis of large scale genetic association studies. The kernel machine (KM) testing framework is a useful approach that has been proposed for testing associations between multiple genetic variants and many different types of complex traits by comparing pairwise similarity in phenotype between subjects to pairwise similarity in genotype, with similarity in genotype defined via a kernel function. An advantage of the KM framework is its flexibility: choosing different kernel functions allows for different assumptions concerning the underlying model and can allow for improved power. In practice, it is difficult to know which kernel to use a priori since this depends on the unknown underlying trait architecture and selecting the kernel which gives the lowest p-value can lead to inflated type I error. Therefore, we propose practical strategies for KM testing when multiple candidate kernels are present based on constructing composite kernels and based on efficient perturbation procedures. We demonstrate through simulations and real data applications that the procedures protect the type I error rate and can lead to substantially improved power over poor choices of kernels and only modest differences in power versus using the best candidate kernel. PMID:23471868
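
    A sketch of one ingredient discussed above: forming a composite kernel as a convex combination of candidate kernel matrices (here linear and RBF on synthetic genotype-like data). The weights are illustrative, and the paper's perturbation-based testing procedure itself is not reproduced here.

      # Composite kernel as a weighted sum of candidate kernel matrices (illustrative).
      import numpy as np
      from sklearn.metrics.pairwise import linear_kernel, rbf_kernel

      rng = np.random.default_rng(3)
      G = rng.integers(0, 3, size=(100, 20)).astype(float)   # 100 subjects, 20 SNPs coded 0/1/2

      candidates = {
          "linear": linear_kernel(G),
          "rbf": rbf_kernel(G, gamma=1.0 / G.shape[1]),
      }

      weights = {"linear": 0.5, "rbf": 0.5}                  # illustrative equal weights
      K_composite = sum(w * candidates[name] for name, w in weights.items())
      print(K_composite.shape)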

  14. Relating covariant and canonical approaches to triangulated models of quantum gravity

    International Nuclear Information System (INIS)

    Arnsdorf, Matthias

    2002-01-01

    In this paper we explore the relation between covariant and canonical approaches to quantum gravity and BF theory. We will focus on the dynamical triangulation and spin-foam models, which have in common that they can be defined in terms of sums over spacetime triangulations. Our aim is to show how we can recover these covariant models from a canonical framework by providing two regularizations of the projector onto the kernel of the Hamiltonian constraint. This link is important for the understanding of the dynamics of quantum gravity. In particular, we will see how in the simplest dynamical triangulation model we can recover the Hamiltonian constraint via our definition of the projector. Our discussion of spin-foam models will show how the elementary spin-network moves in loop quantum gravity, which were originally assumed to describe the Hamiltonian constraint action, are in fact related to the time-evolution generated by the constraint. We also show that the Immirzi parameter is important for the understanding of a continuum limit of the theory

  15. Hadamard Kernel SVM with applications for breast cancer outcome predictions.

    Science.gov (United States)

    Jiang, Hao; Ching, Wai-Ki; Cheung, Wai-Shun; Hou, Wenpin; Yin, Hong

    2017-12-21

    Breast cancer is one of the leading causes of death for women. It is of great necessity to develop effective methods for breast cancer detection and diagnosis. Recent studies have focused on gene-based signatures for outcome predictions. Kernel SVM has attracted a lot of attention for its discriminative power in dealing with small-sample pattern recognition problems, but how to select or construct an appropriate kernel for a specific problem still needs further investigation. Here we propose a novel kernel (Hadamard kernel) in conjunction with support vector machines (SVMs) to address the problem of breast cancer outcome prediction using gene expression data. The Hadamard kernel outperforms the classical kernels and the correlation kernel in terms of area under the ROC curve (AUC) values on a number of real-world data sets adopted to test the performance of the different methods. The Hadamard kernel SVM is effective for breast cancer predictions, either in terms of prognosis or diagnosis. It may benefit patients by guiding therapeutic options. Apart from that, it would be a valuable addition to the current SVM kernel families. We hope it will contribute to the wider biology and related communities.
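
    The abstract does not spell out the Hadamard kernel's definition, so the sketch below only shows the general mechanism such work relies on: plugging a custom precomputed kernel matrix into an SVM, using a simple correlation kernel (one of the baselines mentioned) on synthetic expression-like data.

      # Precomputed custom kernel in an SVM (correlation kernel, illustrative data).
      import numpy as np
      from sklearn.svm import SVC

      rng = np.random.default_rng(4)
      X_train = rng.normal(size=(60, 200))            # 60 samples, 200 gene-expression-like features
      y_train = rng.integers(0, 2, size=60)
      X_test = rng.normal(size=(20, 200))

      def correlation_kernel(A, B):
          """Pairwise Pearson correlation between rows of A and rows of B."""
          A = (A - A.mean(axis=1, keepdims=True)) / A.std(axis=1, keepdims=True)
          B = (B - B.mean(axis=1, keepdims=True)) / B.std(axis=1, keepdims=True)
          return A @ B.T / A.shape[1]

      clf = SVC(kernel="precomputed")
      clf.fit(correlation_kernel(X_train, X_train), y_train)
      pred = clf.predict(correlation_kernel(X_test, X_train))   # test-vs-train kernel matrix
      print(pred[:5])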

  16. The integrated model of sport confidence: a canonical correlation and mediational analysis.

    Science.gov (United States)

    Koehn, Stefan; Pearce, Alan J; Morris, Tony

    2013-12-01

    The main purpose of the study was to examine crucial parts of Vealey's (2001) integrated framework hypothesizing that sport confidence is a mediating variable between sources of sport confidence (including achievement, self-regulation, and social climate) and athletes' affect in competition. The sample consisted of 386 athletes, who completed the Sources of Sport Confidence Questionnaire, Trait Sport Confidence Inventory, and Dispositional Flow Scale-2. Canonical correlation analysis revealed a confidence-achievement dimension underlying flow. Bias-corrected bootstrap confidence intervals in AMOS 20.0 were used in examining mediation effects between source domains and dispositional flow. Results showed that sport confidence partially mediated the relationship between achievement and self-regulation domains and flow, whereas no significant mediation was found for social climate. On a subscale level, full mediation models emerged for achievement and flow dimensions of challenge-skills balance, clear goals, and concentration on the task at hand.

  17. Structure-constrained sparse canonical correlation analysis with an application to microbiome data analysis.

    Science.gov (United States)

    Chen, Jun; Bushman, Frederic D; Lewis, James D; Wu, Gary D; Li, Hongzhe

    2013-04-01

    Motivated by studying the association between nutrient intake and human gut microbiome composition, we developed a method for structure-constrained sparse canonical correlation analysis (ssCCA) in a high-dimensional setting. ssCCA takes into account the phylogenetic relationships among bacteria, which provides important prior knowledge on evolutionary relationships among bacterial taxa. Our ssCCA formulation utilizes a phylogenetic structure-constrained penalty function to impose certain smoothness on the linear coefficients according to the phylogenetic relationships among the taxa. An efficient coordinate descent algorithm is developed for optimization. A human gut microbiome data set is used to illustrate this method. Both simulations and real data applications show that ssCCA performs better than the standard sparse CCA in identifying meaningful variables when there are structures in the data.

  18. Traveltime sensitivity kernels for wave equation tomography using the unwrapped phase

    KAUST Repository

    Djebbi, Ramzi

    2014-02-18

    Wave equation tomography attempts to improve on traveltime tomography by better adhering to the requirements of our finite-frequency data. Conventional wave equation tomography, based on the first-order Born approximation followed by cross-correlation traveltime lag measurement, or on the Rytov approximation for the phase, yields the popular hollow banana sensitivity kernel indicating that the measured traveltime at a point is insensitive to perturbations along the ray theoretical path at certain finite frequencies. Using the instantaneous traveltime, which is able to unwrap the phase of the signal, instead of the cross-correlation lag, we derive new finite-frequency traveltime sensitivity kernels. The kernel better reflects the model-data dependency we typically encounter in full waveform inversion. This result confirms that the hollow banana shape is borne of the cross-correlation lag measurement, which exposes the Born approximation's weakness in representing transmitted waves. The instantaneous traveltime can thus mitigate the additional component of nonlinearity introduced by the hollow banana sensitivity kernels in finite-frequency traveltime tomography. The instantaneous traveltime simply represents the unwrapped phase of the Rytov approximation, and thus is a good alternative to Born and Rytov to compute the misfit function for wave equation tomography. We show the limitations of the cross-correlation associated with the Born approximation for traveltime lag measurement when the source signatures of the measured and modelled data are different. The instantaneous traveltime is proven to be less sensitive to the distortions in the data signature. The unwrapped phase full banana shape of the sensitivity kernels shows a smoother update compared to the banana–doughnut kernels. The measurement of the traveltime delay caused by a small spherical anomaly, embedded into a 3-D homogeneous model, supports the full banana sensitivity assertion for the unwrapped phase.

  19. Traveltime sensitivity kernels for wave equation tomography using the unwrapped phase

    KAUST Repository

    Djebbi, Ramzi; Alkhalifah, Tariq Ali

    2014-01-01

    Wave equation tomography attempts to improve on traveltime tomography by better adhering to the requirements of our finite-frequency data. Conventional wave equation tomography, based on the first-order Born approximation followed by cross-correlation traveltime lag measurement, or on the Rytov approximation for the phase, yields the popular hollow banana sensitivity kernel indicating that the measured traveltime at a point is insensitive to perturbations along the ray theoretical path at certain finite frequencies. Using the instantaneous traveltime, which is able to unwrap the phase of the signal, instead of the cross-correlation lag, we derive new finite-frequency traveltime sensitivity kernels. The kernel better reflects the model-data dependency we typically encounter in full waveform inversion. This result confirms that the hollow banana shape is borne of the cross-correlation lag measurement, which exposes the Born approximation's weakness in representing transmitted waves. The instantaneous traveltime can thus mitigate the additional component of nonlinearity introduced by the hollow banana sensitivity kernels in finite-frequency traveltime tomography. The instantaneous traveltime simply represents the unwrapped phase of the Rytov approximation, and thus is a good alternative to Born and Rytov to compute the misfit function for wave equation tomography. We show the limitations of the cross-correlation associated with the Born approximation for traveltime lag measurement when the source signatures of the measured and modelled data are different. The instantaneous traveltime is proven to be less sensitive to the distortions in the data signature. The unwrapped phase full banana shape of the sensitivity kernels shows a smoother update compared to the banana–doughnut kernels. The measurement of the traveltime delay caused by a small spherical anomaly, embedded into a 3-D homogeneous model, supports the full banana sensitivity assertion for the unwrapped phase.

  20. Filter bank canonical correlation analysis for implementing a high-speed SSVEP-based brain-computer interface

    Science.gov (United States)

    Chen, Xiaogang; Wang, Yijun; Gao, Shangkai; Jung, Tzyy-Ping; Gao, Xiaorong

    2015-08-01

    Objective. Recently, canonical correlation analysis (CCA) has been widely used in steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs) due to its high efficiency, robustness, and simple implementation. However, a method with which to make use of harmonic SSVEP components to enhance the CCA-based frequency detection has not been well established. Approach. This study proposed a filter bank canonical correlation analysis (FBCCA) method to incorporate fundamental and harmonic frequency components to improve the detection of SSVEPs. A 40-target BCI speller based on frequency coding (frequency range: 8-15.8 Hz, frequency interval: 0.2 Hz) was used for performance evaluation. To optimize the filter bank design, three methods (M1: sub-bands with equally spaced bandwidths; M2: sub-bands corresponding to individual harmonic frequency bands; M3: sub-bands covering multiple harmonic frequency bands) were proposed for comparison. Classification accuracy and information transfer rate (ITR) of the three FBCCA methods and the standard CCA method were estimated using an offline dataset from 12 subjects. Furthermore, an online BCI speller adopting the optimal FBCCA method was tested with a group of 10 subjects. Main results. The FBCCA methods significantly outperformed the standard CCA method. The method M3 achieved the highest classification performance. At a spelling rate of ˜33.3 characters/min, the online BCI speller obtained an average ITR of 151.18 ± 20.34 bits min-1. Significance. By incorporating the fundamental and harmonic SSVEP components in target identification, the proposed FBCCA method significantly improves the performance of the SSVEP-based BCI, and thereby facilitates its practical applications such as high-speed spelling.
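
    A minimal sketch of the filter bank CCA idea on synthetic EEG: bandpass the signal into sub-bands, run standard CCA against sine/cosine references at each candidate frequency and its harmonics, and combine the squared correlations with a decaying weight. The sub-band edges, number of harmonics, and the weighting w(n) = n^(-a) + b with a = 1.25, b = 0.25 are assumptions for illustration, not necessarily the paper's exact settings.

      # FBCCA-style frequency detection on synthetic SSVEP-like data (illustrative settings).
      import numpy as np
      from scipy.signal import butter, filtfilt
      from sklearn.cross_decomposition import CCA

      fs, T, n_ch = 250, 2.0, 8                      # sampling rate (Hz), epoch length (s), channels
      t = np.arange(0, T, 1.0 / fs)
      rng = np.random.default_rng(5)
      eeg = rng.normal(size=(n_ch, t.size)) + 0.3 * np.sin(2 * np.pi * 10.0 * t)  # fake 10 Hz SSVEP

      def reference_signals(freq, n_harmonics=3):
          """Sine/cosine references at the stimulus frequency and its harmonics."""
          refs = []
          for h in range(1, n_harmonics + 1):
              refs += [np.sin(2 * np.pi * h * freq * t), np.cos(2 * np.pi * h * freq * t)]
          return np.array(refs)

      def max_canonical_corr(X, Y):
          cca = CCA(n_components=1)
          u, v = cca.fit_transform(X.T, Y.T)
          return np.corrcoef(u[:, 0], v[:, 0])[0, 1]

      sub_bands = [(8, 88), (16, 88), (24, 88)]       # assumed sub-band edges (Hz)
      a, b = 1.25, 0.25                               # assumed weighting constants

      def fbcca_score(eeg, freq):
          score = 0.0
          for n, (lo, hi) in enumerate(sub_bands, start=1):
              bb, ab = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
              filtered = filtfilt(bb, ab, eeg, axis=1)
              rho = max_canonical_corr(filtered, reference_signals(freq))
              score += (n ** (-a) + b) * rho ** 2
          return score

      candidates = [9.0, 10.0, 11.0]                  # candidate stimulus frequencies (Hz)
      print(max(candidates, key=lambda f: fbcca_score(eeg, f)))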

  1. Validation of Born Traveltime Kernels

    Science.gov (United States)

    Baig, A. M.; Dahlen, F. A.; Hung, S.

    2001-12-01

    Most inversions for Earth structure using seismic traveltimes rely on linear ray theory to translate observed traveltime anomalies into seismic velocity anomalies distributed throughout the mantle. However, ray theory is not an appropriate tool to use when velocity anomalies have scale lengths less than the width of the Fresnel zone. In the presence of these structures, we need to turn to a scattering theory in order to adequately describe all of the features observed in the waveform. By coupling the Born approximation to ray theory, the first-order dependence of the cross-correlated traveltimes on heterogeneity (described by the Fréchet derivative or, more colourfully, the banana-doughnut kernel) may be determined. To determine for what range of parameters these banana-doughnut kernels outperform linear ray theory, we generate several random media specified by their statistical properties, namely the RMS slowness perturbation and the scale length of the heterogeneity. Acoustic waves are numerically generated from a point source using a 3-D pseudo-spectral wave propagation code. These waves are then recorded at a variety of propagation distances from the source, introducing a third parameter to the problem: the number of wavelengths traversed by the wave. When all of the heterogeneity has scale lengths larger than the width of the Fresnel zone, ray theory does as good a job at predicting the cross-correlated traveltime as the banana-doughnut kernels do. Below this limit, wavefront healing becomes a significant effect and ray theory ceases to be effective even though the kernels remain relatively accurate provided the heterogeneity is weak. The study of wave propagation in random media is of a more general interest, and we will also show how our measurements of the velocity shift and the variance of traveltime compare to various theoretical predictions in a given regime.

  2. Kernel Multivariate Analysis Framework for Supervised Subspace Learning: A Tutorial on Linear and Kernel Multivariate Methods

    DEFF Research Database (Denmark)

    Arenas-Garcia, J.; Petersen, K.; Camps-Valls, G.

    2013-01-01

    correlation analysis (CCA), and orthonormalized PLS (OPLS), as well as their nonlinear extensions derived by means of the theory of reproducing kernel Hilbert spaces (RKHSs). We also review their connections to other methods for classification and statistical dependence estimation and introduce some recent...
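
    A minimal regularized kernel CCA sketch in the spirit of the RKHS extensions reviewed above: build RBF kernel matrices for two synthetic views, center them, and solve a regularized generalized eigenproblem. The kernel width, regularization constant and block formulation are common illustrative choices, not the tutorial's specific formulation.

      # Regularized kernel CCA via a block generalized eigenproblem (illustrative).
      import numpy as np
      from scipy.linalg import eigh
      from sklearn.metrics.pairwise import rbf_kernel

      rng = np.random.default_rng(6)
      n = 200
      z = rng.normal(size=n)                              # shared latent signal
      X = np.c_[z, rng.normal(size=(n, 2))]               # view 1
      Y = np.c_[np.sin(z), rng.normal(size=(n, 2))]       # view 2, nonlinearly related

      def center(K):
          H = np.eye(len(K)) - np.ones_like(K) / len(K)   # centering matrix
          return H @ K @ H

      Kx, Ky = center(rbf_kernel(X, gamma=0.5)), center(rbf_kernel(Y, gamma=0.5))
      reg = 1e-3 * n

      # [0, KxKy; KyKx, 0] w = rho * [Kx^2 + reg*I, 0; 0, Ky^2 + reg*I] w
      A = np.block([[np.zeros((n, n)), Kx @ Ky], [Ky @ Kx, np.zeros((n, n))]])
      B = np.block([[Kx @ Kx + reg * np.eye(n), np.zeros((n, n))],
                    [np.zeros((n, n)), Ky @ Ky + reg * np.eye(n)]])
      rho = eigh(A, B, eigvals_only=True)[-1]             # largest generalized eigenvalue
      print("first kernel canonical correlation approx.:", rho)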

  3. Aflatoxin contamination of developing corn kernels.

    Science.gov (United States)

    Amer, M A

    2005-01-01

    Preharvest infection of corn and its contamination with aflatoxin is a serious problem. Some environmental and cultural factors responsible for infection and subsequent aflatoxin production were investigated in this study. Stage of growth and location of kernels on corn ears were found to be among the important factors in the process of kernel infection with A. flavus and A. parasiticus. The results showed a positive correlation between the stage of growth and kernel infection. Treatment of corn with aflatoxin reduced germination, protein and total nitrogen contents. Total and reducing soluble sugars increased in corn kernels in response to infection. Sucrose and protein content were reduced in the case of both pathogens. Shoot length, seedling fresh weight and seedling dry weight were also affected. Both pathogens induced a reduction of starch content. Healthy corn seedlings treated with aflatoxin solution were badly affected: their leaves became yellow and then turned brown with further incubation. Moreover, their total chlorophyll and protein contents showed a pronounced decrease. On the other hand, total phenolic compounds increased. Histopathological studies indicated that A. flavus and A. parasiticus could colonize corn silks and invade developing kernels. Germination of A. flavus spores occurred and hyphae spread rapidly across the silk, producing extensive growth and lateral branching. Conidiophores and conidia were formed in and on the corn silk. Temperature and relative humidity greatly influenced the growth of A. flavus and A. parasiticus and aflatoxin production.

  4. Analysis of input variables of an artificial neural network using bivariate correlation and canonical correlation

    International Nuclear Information System (INIS)

    Costa, Valter Magalhaes

    2011-01-01

    The monitoring of variables and the diagnosis of sensor faults in nuclear power plants or process industries are very important, because an early diagnosis allows the fault to be corrected and thus prevents production interruptions, improving operator safety and avoiding economic losses. The objective of this work is to build, from the whole of the monitored variables of a nuclear power plant, a set, not necessarily minimal, which will be the set of input variables of an artificial neural network and, in this way, to monitor the largest number of variables. This methodology was applied to the IEA-R1 Research Reactor at IPEN. For this, the variables power, primary circuit flow rate, control/safety rod position and differential pressure in the reactor core (ΔP) were grouped, because, by hypothesis, almost all of the monitored variables are related to the variables just described, or their behaviour can result from the interaction of two or more of them. The power is related to the increase and decrease of temperatures as well as to the amount of radiation due to fission of the uranium; the rods control the power and influence the amount of radiation and the increase and decrease of temperatures; and the primary circuit flow rate has the function of transporting energy by removing heat from the core. Thus, labeling B = {Power, Primary circuit flow rate, Control/safety rod position, ΔP}, the correlation between B and every other monitored variable (the coefficient of multiple correlation, a tool of the theory of canonical correlations) was computed; that is, it was possible to compute how well the set B can predict each variable. When a satisfactory approximation by B in the prediction of some variable was not possible, one or more variables highly correlated with that variable were included to improve the quality of the prediction. In this work an artificial neural network
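
    A minimal sketch of the selection rule described above: for each monitored variable, compute the coefficient of multiple correlation with the base set B (here via the R^2 of a linear regression on B) and flag variables that B predicts poorly. The variable names, data and threshold are illustrative assumptions, not the IEA-R1 data.

      # Multiple correlation of each monitored variable with the base set B (illustrative).
      import numpy as np
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(7)
      n = 1000
      B = rng.normal(size=(n, 4))                    # stands in for {power, flow rate, rod position, dP}
      other = {
          "core_inlet_temp": B @ [0.8, 0.1, -0.3, 0.0] + 0.2 * rng.normal(size=n),
          "pool_level":      rng.normal(size=n),     # unrelated to B on purpose
      }

      threshold = 0.5                                # illustrative cut-off on R (not from the paper)
      for name, v in other.items():
          r2 = LinearRegression().fit(B, v).score(B, v)
          multiple_r = np.sqrt(max(r2, 0.0))
          status = "predicted by B" if multiple_r >= threshold else "needs an extra input variable"
          print(f"{name}: R = {multiple_r:.2f} -> {status}")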

  5. Symmetric minimally entangled typical thermal states for canonical and grand-canonical ensembles

    Science.gov (United States)

    Binder, Moritz; Barthel, Thomas

    2017-05-01

    Based on the density matrix renormalization group (DMRG), strongly correlated quantum many-body systems at finite temperatures can be simulated by sampling over a certain class of pure matrix product states (MPS) called minimally entangled typical thermal states (METTS). When a system features symmetries, these can be utilized to substantially reduce MPS computation costs. It is conceptually straightforward to simulate canonical ensembles using symmetric METTS. In practice, it is important to alternate between different symmetric collapse bases to decrease autocorrelations in the Markov chain of METTS. To this purpose, we introduce symmetric Fourier and Haar-random block bases that are efficiently mixing. We also show how grand-canonical ensembles can be simulated efficiently with symmetric METTS. We demonstrate these approaches for spin-1/2 XXZ chains and discuss how the choice of the collapse bases influences autocorrelations as well as the distribution of measurement values and, hence, convergence speeds.

  6. Data-variant kernel analysis

    CERN Document Server

    Motai, Yuichi

    2015-01-01

    Describes and discusses the variants of kernel analysis methods for data types that have been intensely studied in recent years This book covers kernel analysis topics ranging from the fundamental theory of kernel functions to its applications. The book surveys the current status, popular trends, and developments in kernel analysis studies. The author discusses multiple kernel learning algorithms and how to choose the appropriate kernels during the learning phase. Data-Variant Kernel Analysis is a new pattern analysis framework for different types of data configurations. The chapters include

  7. Rainfall prediction of Cimanuk watershed regions with canonical correlation analysis (CCA)

    Science.gov (United States)

    Rustiana, Shailla; Nurani Ruchjana, Budi; Setiawan Abdullah, Atje; Hermawan, Eddy; Berliana Sipayung, Sinta; Gede Nyoman Mindra Jaya, I.; Krismianto

    2017-10-01

    Rainfall prediction in Indonesia is very influential on various development sectors, such as agriculture, fisheries, water resources, industry, and other sectors. Inaccurate predictions can lead to negative effects. The Cimanuk watershed is one of the main pillars of water resources in West Java. This watershed is divided into three parts: the headwater Cimanuk sub-watershed, the middle Cimanuk sub-watershed and the downstream Cimanuk sub-watershed. The flow of this watershed passes through the Jatigede reservoir and will supply water to the north-coast area in the next few years. A reliable rainfall prediction model is therefore very much needed in this watershed. Rainfall prediction was conducted with the Canonical Correlation Analysis (CCA) method using the Climate Predictability Tool (CPT) software. Predictions were made every 3 months in 2016 (after January), based on Climate Hazards Group Infrared Precipitation with Stations (CHIRPS) data over West Java. The predictors used in CPT were the monthly index data of Nino3.4, the Dipole Mode Index (DMI), and the monsoon indices (AUSMI-ISMI-WNPMI-WYMI) with initial condition January. The initial condition was chosen according to the last data update. The predictand was the monthly CHIRPS rainfall data for the West Java region. The results of the rainfall prediction are shown by the skill map of the Pearson correlation. High correlations in the skill map occur in MAM (Mar-Apr-May), AMJ (Apr-May-Jun), and JJA (Jun-Jul-Aug), which means the model is reliable for forecasting the rainfall distribution over the Cimanuk watershed region (over West Java) in those seasons. The CCA scores for those season predictions are mostly above 0.7. The accuracy of the CPT model is also indicated by the Relative Operating Characteristic (ROC) curves of the Pearson correlation results at 3 representative points of the sub-watersheds (Sumedang, Majalengka, and Cirebon), which were mostly located above the no-skill line, and is evidenced by the similarity of the rainfall patterns between observation and forecast. So, the model of CPT with CCA method

  8. Quasi-Dual-Packed-Kerneled Au49 (2,4-DMBT)27 Nanoclusters and the Influence of Kernel Packing on the Electrochemical Gap.

    Science.gov (United States)

    Liao, Lingwen; Zhuang, Shengli; Wang, Pu; Xu, Yanan; Yan, Nan; Dong, Hongwei; Wang, Chengming; Zhao, Yan; Xia, Nan; Li, Jin; Deng, Haiteng; Pei, Yong; Tian, Shi-Kai; Wu, Zhikun

    2017-10-02

    Although face-centered cubic (fcc), body-centered cubic (bcc), hexagonal close-packed (hcp), and other structured gold nanoclusters have been reported, it was unclear whether gold nanoclusters with mix-packed (fcc and non-fcc) kernels exist, and the correlation between kernel packing and the properties of gold nanoclusters is unknown. An Au49(2,4-DMBT)27 nanocluster with a shell electron count of 22 has now been synthesized and structurally resolved by single-crystal X-ray crystallography, which revealed that Au49(2,4-DMBT)27 contains a unique Au34 kernel consisting of one quasi-fcc-structured Au21 and one non-fcc-structured Au13 unit (where 2,4-DMBTH = 2,4-dimethylbenzenethiol). Further experiments revealed that the kernel packing greatly influences the electrochemical gap (EG) and that the fcc structure has a larger EG than the investigated non-fcc structure. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  9. Approximate kernel competitive learning.

    Science.gov (United States)

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

    Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large scale data processing, because (1) it has to calculate and store the full kernel matrix that is too large to be calculated and kept in the memory and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large scale dataset. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL), which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling would work for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL can perform comparably as KCL, with a large reduction on computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches. Copyright © 2014 Elsevier Ltd. All rights reserved.
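
    Not the paper's AKCL/PAKCL algorithms; just a sketch of the underlying idea of subsampling-based kernel approximation for large-scale clustering, using scikit-learn's Nystroem feature map followed by k-means in the approximate feature space. All parameters are illustrative.

      # Nystroem kernel approximation followed by clustering (illustrative stand-in for AKCL).
      import numpy as np
      from sklearn.kernel_approximation import Nystroem
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(8)
      X = np.vstack([rng.normal(loc=-2, size=(500, 10)), rng.normal(loc=2, size=(500, 10))])

      feature_map = Nystroem(kernel="rbf", gamma=0.1, n_components=100, random_state=0)
      X_approx = feature_map.fit_transform(X)          # low-rank approximation of the kernel space

      labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_approx)
      print(np.bincount(labels))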

  10. Generalized Canonical Time Warping.

    Science.gov (United States)

    Zhou, Feng; De la Torre, Fernando

    2016-02-01

    Temporal alignment of human motion has been of recent interest due to its applications in animation, tele-rehabilitation and activity recognition. This paper presents generalized canonical time warping (GCTW), an extension of dynamic time warping (DTW) and canonical correlation analysis (CCA) for temporally aligning multi-modal sequences from multiple subjects performing similar activities. GCTW extends previous work on DTW and CCA in several ways: (1) it combines CCA with DTW to align multi-modal data (e.g., video and motion capture data); (2) it extends DTW by using a linear combination of monotonic functions to represent the warping path, providing a more flexible temporal warp. Unlike exact DTW, which has quadratic complexity, we propose a linear time algorithm to minimize GCTW. (3) GCTW allows simultaneous alignment of multiple sequences. Experimental results on aligning multi-modal data, facial expressions, motion capture data and video illustrate the benefits of GCTW. The code is available at http://humansensing.cs.cmu.edu/ctw.
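
    For reference, a plain dynamic time warping (DTW) routine, the quadratic-time building block that GCTW extends; this is not GCTW itself (no CCA coupling, no monotonic-basis parameterization of the warping path).

      # Textbook quadratic-time DTW on two 1-D sequences.
      import numpy as np

      def dtw(x, y):
          """Return the DTW alignment cost between 1-D sequences x and y."""
          n, m = len(x), len(y)
          D = np.full((n + 1, m + 1), np.inf)
          D[0, 0] = 0.0
          for i in range(1, n + 1):
              for j in range(1, m + 1):
                  cost = abs(x[i - 1] - y[j - 1])
                  D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
          return D[n, m]

      t = np.linspace(0, 2 * np.pi, 100)
      print(dtw(np.sin(t), np.sin(t ** 1.1)))   # small cost: same shape, warped time axis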

  11. Canonical analysis of sentinel-1 radar and sentinel-2 optical data

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Larsen, Rasmus

    2017-01-01

    This paper gives results from joint analyses of dual polarimetry synthetic aperture radar data from the Sentinel-1 mission and optical data from the Sentinel-2 mission. The analyses are carried out by means of traditional canonical correlation analysis (CCA) and canonical information analysis (CIA). Where CCA is based on maximising correlation between linear combinations of the two data sets, CIA maximises mutual information between the two. CIA is a conceptually more pleasing method for the analysis of data with very different modalities such as radar and optical data. Although a little...

  12. Classification With Truncated Distance Kernel.

    Science.gov (United States)

    Huang, Xiaolin; Suykens, Johan A K; Wang, Shuning; Hornegger, Joachim; Maier, Andreas

    2018-05-01

    This brief proposes a truncated distance (TL1) kernel, which results in a classifier that is nonlinear in the global region but is linear in each subregion. With this kernel, the subregion structure can be trained using all the training data and local linear classifiers can be established simultaneously. The TL1 kernel has good adaptiveness to nonlinearity and is suitable for problems which require different nonlinearities in different areas. Though the TL1 kernel is not positive semidefinite, some classical kernel learning methods are still applicable which means that the TL1 kernel can be directly used in standard toolboxes by replacing the kernel evaluation. In numerical experiments, the TL1 kernel with a pregiven parameter achieves similar or better performance than the radial basis function kernel with the parameter tuned by cross validation, implying the TL1 kernel a promising nonlinear kernel for classification tasks.
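
    A sketch of a truncated l1-distance kernel of the form K(x, y) = max(rho - ||x - y||_1, 0), used as a precomputed kernel in a standard SVM as the abstract suggests; treat this reading of the TL1 construction and the choice of rho as assumptions for illustration.

      # Truncated l1-distance kernel plugged into an SVM as a precomputed kernel (illustrative).
      import numpy as np
      from sklearn.svm import SVC
      from sklearn.datasets import make_moons
      from sklearn.model_selection import train_test_split
      from sklearn.metrics.pairwise import manhattan_distances

      X, y = make_moons(n_samples=300, noise=0.2, random_state=0)
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      rho = 0.7 * X.shape[1]          # heuristic scale tied to the input dimension; an assumption here

      def tl1_kernel(A, B):
          return np.maximum(rho - manhattan_distances(A, B), 0.0)

      clf = SVC(kernel="precomputed", C=1.0)
      clf.fit(tl1_kernel(X_tr, X_tr), y_tr)
      print("test accuracy:", clf.score(tl1_kernel(X_te, X_tr), y_te))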

  13. A canonical correlation analysis-based dynamic bayesian network prior to infer gene regulatory networks from multiple types of biological data.

    Science.gov (United States)

    Baur, Brittany; Bozdag, Serdar

    2015-04-01

    One of the challenging and important computational problems in systems biology is to infer gene regulatory networks (GRNs) of biological systems. Several methods that exploit gene expression data have been developed to tackle this problem. In this study, we propose the use of copy number and DNA methylation data to infer GRNs. We developed an algorithm that scores regulatory interactions between genes based on canonical correlation analysis. In this algorithm, copy number or DNA methylation variables are treated as potential regulator variables, and expression variables are treated as potential target variables. We first validated that the canonical correlation analysis method is able to infer true interactions with high accuracy. We showed that the use of DNA methylation or copy number datasets leads to improved inference over steady-state expression. Our results also showed that epigenetic and structural information could be used to infer the directionality of regulatory interactions. Additional improvements in GRN inference can be gleaned from incorporating the result in an informative prior in a dynamic Bayesian algorithm. This is the first study that incorporates copy number and DNA methylation into an informative prior in a dynamic Bayesian framework. By closely examining top-scoring interactions with different sources of epigenetic or structural information, we also identified potential novel regulatory interactions.

  14. Exact Heat Kernel on a Hypersphere and Its Applications in Kernel SVM

    Directory of Open Access Journals (Sweden)

    Chenchao Zhao

    2018-01-01

    Many contemporary statistical learning methods assume a Euclidean feature space. This paper presents a method for defining similarity based on hyperspherical geometry and shows that it often improves the performance of support vector machine compared to other competing similarity measures. Specifically, the idea of using heat diffusion on a hypersphere to measure similarity has been previously proposed and tested by Lafferty and Lebanon [1], demonstrating promising results based on a heuristic heat kernel obtained from the zeroth order parametrix expansion; however, how well this heuristic kernel agrees with the exact hyperspherical heat kernel remains unknown. This paper presents a higher order parametrix expansion of the heat kernel on a unit hypersphere and discusses several problems associated with this expansion method. We then compare the heuristic kernel with an exact form of the heat kernel expressed in terms of a uniformly and absolutely convergent series in high-dimensional angular momentum eigenmodes. Being a natural measure of similarity between sample points dwelling on a hypersphere, the exact kernel often shows superior performance in kernel SVM classifications applied to text mining, tumor somatic mutation imputation, and stock market analysis.
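
    As a reminder of the eigenmode series the abstract refers to (normalization conventions vary, so treat this as one common convention rather than the paper's exact expression), the heat kernel on the unit hypersphere S^{d-1} admits the spectral expansion

      K_t(\mathbf{x}, \mathbf{y}) = \sum_{\ell=0}^{\infty} e^{-\ell(\ell+d-2)\, t} \sum_{m} Y_{\ell m}(\mathbf{x})\, Y_{\ell m}^{*}(\mathbf{y}),

    where the Y_{\ell m} are hyperspherical harmonics with Laplace-Beltrami eigenvalue \ell(\ell+d-2); the inner sum collapses, via the addition theorem, into a Gegenbauer polynomial in the inner product \langle \mathbf{x}, \mathbf{y} \rangle.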

  15. Linked-cluster formulation of electron-hole interaction kernel in real-space representation without using unoccupied states.

    Science.gov (United States)

    Bayne, Michael G; Scher, Jeremy A; Ellis, Benjamin H; Chakraborty, Arindam

    2018-05-21

    Electron-hole or quasiparticle representation plays a central role in describing electronic excitations in many-electron systems. For charge-neutral excitation, the electron-hole interaction kernel is the quantity of interest for calculating important excitation properties such as optical gap, optical spectra, electron-hole recombination and electron-hole binding energies. The electron-hole interaction kernel can be formally derived from the density-density correlation function using both the Green's function and TDDFT formalisms. The accurate determination of the electron-hole interaction kernel remains a significant challenge for precise calculations of optical properties in the GW+BSE formalism. From the TDDFT perspective, the electron-hole interaction kernel has been viewed as a path to the systematic development of frequency-dependent exchange-correlation functionals. Traditional approaches, such as the MBPT formalism, use unoccupied states (which are defined with respect to the Fermi vacuum) to construct the electron-hole interaction kernel. However, the inclusion of unoccupied states has long been recognized as the leading computational bottleneck that limits the application of this approach to larger finite systems. In this work, an alternative derivation that avoids using unoccupied states to construct the electron-hole interaction kernel is presented. The central idea of this approach is to use explicitly correlated geminal functions to treat electron-electron correlation for both ground- and excited-state wave functions. Using this ansatz, it is shown, using both diagrammatic and algebraic techniques, that the electron-hole interaction kernel can be expressed only in terms of linked closed-loop diagrams. It is proved that the cancellation of unlinked diagrams is a consequence of the linked-cluster theorem in real-space representation. The electron-hole interaction kernel derived in this work was used to calculate excitation energies in many-electron systems and results

  16. The Peruvian literary canon (El canon literario peruano)

    Directory of Open Access Journals (Sweden)

    Carlos García-Bedoya Maguiña

    2011-05-01

    Full Text Available The canon is a key concept in literary history. This article reviews the historical evolution of the Peruvian literary canon. Only with the so-called Aristocratic Republic, in the first decades of the 20th century, can one speak of the formation of a genuine national canon in the Peruvian case. The author calls this first version of the Peruvian literary canon the oligarchic canon, and highlights the importance of the work of Riva Agüero and Ventura García Calderón in its configuration. Only later, from the 1920s onward and definitively from the 1950s, can one speak of the emergence of a new literary canon, which the author proposes to call the post-oligarchic canon.

  17. Finite frequency traveltime sensitivity kernels for acoustic anisotropic media: Angle dependent bananas

    KAUST Repository

    Djebbi, Ramzi

    2013-08-19

    Anisotropy is an inherent character of the Earth's subsurface. It should be considered for modeling and inversion. The acoustic VTI wave equation approximates the wave behavior in anisotropic media, and especially its kinematic characteristics. To analyze which parts of the model would affect the traveltime for anisotropic traveltime inversion methods, especially for wave equation tomography (WET), we derive the sensitivity kernels for anisotropic media using the VTI acoustic wave equation. A Born scattering approximation is first derived using the Fourier-domain acoustic wave equation as a function of perturbations in three anisotropy parameters. Using the instantaneous traveltime, which unwraps the phase, we compute the kernels. These kernels resemble those for isotropic media, with the η kernel directionally dependent. They also have a maximum sensitivity along the geometrical ray, which is more realistic compared to the cross-correlation based kernels. Focusing on diving waves, which are used more often, especially recently in waveform inversion, we show sensitivity kernels in anisotropic media for this case.

  18. Finite frequency traveltime sensitivity kernels for acoustic anisotropic media: Angle dependent bananas

    KAUST Repository

    Djebbi, Ramzi; Alkhalifah, Tariq Ali

    2013-01-01

    Anisotropy is an inherent character of the Earth's subsurface. It should be considered for modeling and inversion. The acoustic VTI wave equation approximates the wave behavior in anisotropic media, and especially its kinematic characteristics. To analyze which parts of the model would affect the traveltime for anisotropic traveltime inversion methods, especially for wave equation tomography (WET), we derive the sensitivity kernels for anisotropic media using the VTI acoustic wave equation. A Born scattering approximation is first derived using the Fourier-domain acoustic wave equation as a function of perturbations in three anisotropy parameters. Using the instantaneous traveltime, which unwraps the phase, we compute the kernels. These kernels resemble those for isotropic media, with the η kernel directionally dependent. They also have a maximum sensitivity along the geometrical ray, which is more realistic compared to the cross-correlation based kernels. Focusing on diving waves, which are used more often, especially recently in waveform inversion, we show sensitivity kernels in anisotropic media for this case.

  19. Human health implications of extreme precipitation events and water quality in California, USA: a canonical correlation analysis

    Directory of Open Access Journals (Sweden)

    Alexander Gershunov, PhD

    2018-05-01

    Full Text Available Background: Pathogens and pollutants collect on the land surface or in infrastructure between strong rainfall episodes and are delivered via storm runoff to areas of human exposure, such as coastal recreational waters. In California, USA, precipitation events are projected to become more extreme and simultaneously decrease in frequency as storm tracks move poleward due to polar-amplified global warming. Precipitation extremes in California are dominated by atmospheric rivers, which carry more moisture in warmer climates. Thus, the physical driver of extreme precipitation events is expected to grow stronger with climate change, and pollutant accumulation and runoff-generated exposure to those pollutants are expected to increase, particularly after prolonged dry spells. Microbiological contamination of coastal waters during winter storms exposes human populations to elevated concentrations of microorganisms such as faecal bacteria, which could cause gastrointestinal and ear infections, and lead to exposure to pathogens causing life-threatening conditions, such as hepatitis A. The aim of this study was to quantitatively assess the effect of precipitation on coastal water quality in California. Methods: We used a recently published catalogue of atmospheric rivers, in combination with historical daily precipitation data and levels of three indicators of faecal bacteria (total and faecal coliforms, and Escherichia coli) detected at roughly 500 monitoring locations in coastal waters along California's 840-mile coastline, to explore weekly associations between extreme precipitation events, particularly those related to atmospheric rivers, and the variability in water quality during 2003–09. We identified ten principal components (together explaining >90% of the variability) in the precipitation and faecal bacteria time-series to reduce the dimensionality of the datasets. We then performed canonical correlation analysis of the principal components to
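
    A minimal sketch of the analysis pattern described above (dimensionality reduction by principal components on each dataset, followed by canonical correlation analysis between the reduced blocks) is given below. The synthetic precipitation and bacteria arrays, the number of components and all names are placeholders, not the study's data or exact preprocessing.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(3)
n_weeks = 300
precip = rng.gamma(shape=1.5, size=(n_weeks, 40))     # weekly precipitation at 40 stations
bacteria = rng.lognormal(size=(n_weeks, 60))          # indicator-bacteria levels at 60 sites

# Step 1: reduce each block to a handful of principal components.
pc_precip = PCA(n_components=10).fit_transform(precip)
pc_bact = PCA(n_components=10).fit_transform(bacteria)

# Step 2: canonical correlation analysis between the two reduced blocks.
cca = CCA(n_components=3)
U, V = cca.fit_transform(pc_precip, pc_bact)
canonical_corrs = [np.corrcoef(U[:, k], V[:, k])[0, 1] for k in range(3)]
print(canonical_corrs)
```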

  20. Subsampling Realised Kernels

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Hansen, Peter Reinhard; Lunde, Asger

    2011-01-01

    In a recent paper we have introduced the class of realised kernel estimators of the increments of quadratic variation in the presence of noise. We showed that this estimator is consistent and derived its limit distribution under various assumptions on the kernel weights. In this paper we extend our...... that subsampling is impotent, in the sense that subsampling has no effect on the asymptotic distribution. Perhaps surprisingly, for the efficient smooth kernels, such as the Parzen kernel, we show that subsampling is harmful as it increases the asymptotic variance. We also study the performance of subsampled...
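
    For orientation, realised kernel estimators of this type are commonly written as a kernel-weighted sum of realised autocovariances of high-frequency returns, for example with Parzen weights. The sketch below shows that basic (non-subsampled) form under those assumptions; the weighting scheme, bandwidth H and simulated returns are ours and do not reproduce the paper's estimator or its subsampling analysis.

```python
import numpy as np

def parzen(x):
    """Parzen kernel weight function on [0, 1]."""
    x = abs(x)
    if x <= 0.5:
        return 1 - 6 * x**2 + 6 * x**3
    if x <= 1.0:
        return 2 * (1 - x)**3
    return 0.0

def realised_kernel(returns, H):
    """Sketch of a realised kernel: gamma_0 plus kernel-weighted realised
    autocovariances gamma_h of high-frequency returns."""
    r = np.asarray(returns)

    def gamma(h):
        h = abs(h)
        return np.dot(r[h:], r[:len(r) - h]) if h < len(r) else 0.0

    rk = gamma(0)
    for h in range(1, H + 1):
        rk += parzen((h - 1) / H) * 2 * gamma(h)   # gamma_h and gamma_{-h} combined
    return rk

# Toy usage on simulated noisy high-frequency returns.
rng = np.random.default_rng(4)
r = rng.normal(scale=1e-3, size=5_000) + np.diff(rng.normal(scale=5e-4, size=5_001))
print(realised_kernel(r, H=30))
```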

  1. Whose Canon? Culturalization versus Democratization

    Directory of Open Access Journals (Sweden)

    Erling Bjurström

    2012-06-01

    Full Text Available Current accounts – and particularly the critique – of canon formation are primarily based on some form of identity politics. In the 20th century a representational model of social identities replaced cultivation as the primary means to democratize the canons of the fine arts. In a parallel development, the discourse on canons has shifted its focus from processes of inclusion to those of exclusion. This shift corresponds, on the one hand, to the construction of so-called alternative canons or counter-canons, and, on the other hand, to attempts to restore the authority of canons considered to be in a state of crisis or decaying. Regardless of the democratic stance of these efforts, the construction of alternatives or the reestablishment of decaying canons does not seem to achieve their aims, since they break with the explicit and implicit rules of canon formation. Politically motivated attempts to revise or restore a specific canon make the workings of canon formation too visible, transparent and calculated, thereby breaking the spell of its imaginary character. Retracing the history of the canonization of the fine arts reveals that it was originally tied to the disembedding of artists and artworks from social and worldly affairs, whereas debates about canons of the fine arts since the end of the 20th century are heavily dependent on their social, cultural and historical reembedding. The latter has the character of disenchantment, but has also fettered the canon debate in notions of “our” versus “their” culture. However, by emphasizing the dedifferentiation of contemporary processes of culturalization, the advancing canonization of popular culture seems to be able to break with identity politics that foster notions of “our” culture in the present thinking on canons, and push it in a more transgressive, syncretic or hybrid direction.

  2. Broken rice kernels and the kinetics of rice hydration and texture during cooking.

    Science.gov (United States)

    Saleh, Mohammed; Meullenet, Jean-Francois

    2013-05-01

    During rice milling and processing, broken kernels are inevitably present, although to date it has been unclear how the presence of broken kernels affects rice hydration and cooked rice texture. Therefore, this work intended to study the effect of broken kernels in a rice sample on rice hydration and texture during cooking. Two medium-grain and two long-grain rice cultivars were harvested, dried and milled, and the broken kernels were separated from unbroken kernels. Broken rice kernels were subsequently combined with unbroken rice kernels, forming treatments with broken kernel ratios of 0, 40, 150, 350 or 1000 g kg^-1. Rice samples were then cooked and the moisture content of the cooked rice, the moisture uptake rate, and rice hardness and stickiness were measured. As the amount of broken rice kernels increased, rice sample texture became increasingly softer (P < 0.05) and hardness was negatively correlated to the percentage of broken kernels in rice samples. Differences in the proportions of broken rice in a milled rice sample play a major role in determining the texture properties of cooked rice. Variations in the moisture migration kinetics between broken and unbroken kernels caused faster hydration of the cores of broken rice kernels, with greater starch leach-out during cooking affecting the texture of the cooked rice. The texture of cooked rice can be controlled, to some extent, by varying the proportion of broken kernels in milled rice. © 2012 Society of Chemical Industry.

  3. Canonical correlation analysis of factors involved in the occurrence of peptic ulcers.

    Science.gov (United States)

    Bayyurt, Nizamettin; Abasiyanik, M Fatih; Sander, Ersan; Salih, Barik A

    2007-01-01

    The impact of risk factors on the development of peptic ulcers has been shown to vary among different populations. We sought to establish a correlation between these factors and their involvement in the occurrence of peptic ulcers, for which a canonical correlation analysis was applied. We included 7,014 patient records (48.6% women, 18.4% duodenal ulcer [DU], 4.6% gastric ulcer [GU]) of patients who underwent upper gastrointestinal endoscopy over the last 5 years. The variables measured were endoscopic findings (DU, GU, antral gastritis, erosive gastritis, pangastritis, pyloric deformity, bulbar deformity, bleeding, atrophy, Barrett esophagus and gastric polyp) and risk factors (age, gender, Helicobacter pylori infection, smoking, alcohol, and nonsteroidal anti-inflammatory drugs [NSAIDs] and aspirin intake). We found that DU had a significant positive correlation with bulbar deformity (P = 2.6 x 10^-23), pyloric deformity (P = 2.6 x 10^-23), gender (P = 2.6 x 10^-23), H. pylori (P = 1.4 x 10^-15), bleeding (P = 6.9 x 10^-15), smoking (P = 1.4 x 10^-7), aspirin use (P = 1.1 x 10^-4), alcohol intake (P = 7.7 x 10^-4), and NSAIDs (P = .01). GU had a significant positive correlation with pyloric deformity (P = 1.6 x 10^-15), age (P = 2.6 x 10^-14), bleeding (P = 3.7 x 10^-8), gender (P = 1.3 x 10^-7), aspirin use (P = 1.1 x 10^-6), bulbar deformity (P = 7.4 x 10^-4), alcohol intake (P = .03), smoking (P = .04), and Barrett esophagus (P = .03). The level of significance was much higher for some variables with DU than with GU, and the correlations with GU, although mostly highly significant, were small in magnitude. In conclusion, Turkish patients with the endoscopic findings of bulbar deformity and pyloric deformity are at high risk for peptic ulcers, with the risk of occurrence of DU being higher than that of GU. Factors such as H. pylori, smoking, alcohol use, and NSAID use (listed in a decreasing manner) are risk factors that have a significant impact on the occurrence of DU.

  4. Kernel abortion in maize. II. Distribution of 14C among kernel carboydrates

    International Nuclear Information System (INIS)

    Hanft, J.M.; Jones, R.J.

    1986-01-01

    This study was designed to compare the uptake and distribution of 14C among fructose, glucose, sucrose, and starch in the cob, pedicel, and endosperm tissues of maize (Zea mays L.) kernels induced to abort by high temperature with those that develop normally. Kernels cultured in vitro at 30 and 35°C were transferred to [14C]sucrose media 10 days after pollination. Kernels cultured at 35°C aborted prior to the onset of linear dry matter accumulation. Significant uptake into the cob, pedicel, and endosperm of radioactivity associated with the soluble and starch fractions of the tissues was detected after 24 hours in culture on labeled media. After 8 days in culture on [14C]sucrose media, 48 and 40% of the radioactivity associated with the cob carbohydrates was found in the reducing sugars at 30 and 35°C, respectively. Of the total carbohydrates, a higher percentage of label was associated with sucrose and a lower percentage with fructose and glucose in pedicel tissue of kernels cultured at 35°C compared to kernels cultured at 30°C. These results indicate that sucrose was not cleaved to fructose and glucose as rapidly during the unloading process in the pedicel of kernels induced to abort by high temperature. Kernels cultured at 35°C had a much lower proportion of label associated with endosperm starch (29%) than did kernels cultured at 30°C (89%). Kernels cultured at 35°C had a correspondingly higher proportion of 14C in endosperm fructose, glucose, and sucrose.

  5. Optimized Kernel Entropy Components.

    Science.gov (United States)

    Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau

    2017-06-01

    This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to a sorting of the importance of kernel eigenvectors by entropy instead of variance, as in the kernel principal components analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by means of compacting the information in very few features (often in just one or two). The proposed method produces features which have higher expressive power. In particular, it is based on the independent component analysis framework, and introduces an extra rotation to the eigen decomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both the methods is the selection of the kernel parameter, since it critically affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. The results of both the methods are illustrated in different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, the most successful rule for estimating the kernel parameter is based on maximum likelihood, and OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.

  6. Normalizing computed tomography data reconstructed with different filter kernels: effect on emphysema quantification

    Energy Technology Data Exchange (ETDEWEB)

    Gallardo-Estrella, Leticia; Prokop, Mathias [Radboud University Nijmegen Medical Center, Geert Grooteplein 10 (route 767), P.O. Box 9101, Nijmegen (766) (Netherlands); Lynch, David A.; Stinson, Douglas; Zach, Jordan [National Jewish Health, Denver, CO (United States); Judy, Philip F. [Brigham and Women' s Hospital, Boston, MA (United States); Ginneken, Bram van; Rikxoort, Eva M. van [Radboud University Nijmegen Medical Center, Geert Grooteplein 10 (route 767), P.O. Box 9101, Nijmegen (766) (Netherlands); Fraunhofer MEVIS, Bremen (Germany)

    2016-02-15

    To propose and evaluate a method to reduce variability in emphysema quantification among different computed tomography (CT) reconstructions by normalizing CT data reconstructed with varying kernels. We included 369 subjects from the COPDGene study. For each subject, spirometry and a chest CT reconstructed with two kernels were obtained using two different scanners. Normalization was performed by frequency band decomposition with hierarchical unsharp masking to standardize the energy in each band to a reference value. Emphysema scores (ES), the percentage of lung voxels below -950 HU, were computed before and after normalization. Bland-Altman analysis and correlation between ES and spirometry before and after normalization were compared. Two mixed cohorts, containing data from all scanners and kernels, were created to simulate heterogeneous acquisition parameters. The average difference in ES between kernels decreased for the scans obtained with both scanners after normalization (7.7 ± 2.7 to 0.3 ± 0.7; 7.2 ± 3.8 to -0.1 ± 0.5). Correlation coefficients between ES and FEV1 and FEV1/FVC increased significantly for the mixed cohorts. Normalization of chest CT data reduces variation in emphysema quantification due to reconstruction filters and improves correlation between ES and spirometry. (orig.)

  7. Normalizing computed tomography data reconstructed with different filter kernels: effect on emphysema quantification

    International Nuclear Information System (INIS)

    Gallardo-Estrella, Leticia; Prokop, Mathias; Lynch, David A.; Stinson, Douglas; Zach, Jordan; Judy, Philip F.; Ginneken, Bram van; Rikxoort, Eva M. van

    2016-01-01

    To propose and evaluate a method to reduce variability in emphysema quantification among different computed tomography (CT) reconstructions by normalizing CT data reconstructed with varying kernels. We included 369 subjects from the COPDGene study. For each subject, spirometry and a chest CT reconstructed with two kernels were obtained using two different scanners. Normalization was performed by frequency band decomposition with hierarchical unsharp masking to standardize the energy in each band to a reference value. Emphysema scores (ES), the percentage of lung voxels below -950 HU, were computed before and after normalization. Bland-Altman analysis and correlation between ES and spirometry before and after normalization were compared. Two mixed cohorts, containing data from all scanners and kernels, were created to simulate heterogeneous acquisition parameters. The average difference in ES between kernels decreased for the scans obtained with both scanners after normalization (7.7 ± 2.7 to 0.3 ± 0.7; 7.2 ± 3.8 to -0.1 ± 0.5). Correlation coefficients between ES and FEV1 and FEV1/FVC increased significantly for the mixed cohorts. Normalization of chest CT data reduces variation in emphysema quantification due to reconstruction filters and improves correlation between ES and spirometry. (orig.)
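
    The two records above describe normalization by frequency-band decomposition with hierarchical unsharp masking. The sketch below illustrates that general idea in a toy 2-D setting: split an image into bands with Gaussian filtering, rescale each band's energy to a reference value and recombine. The band definition, the energy measure (standard deviation) and the reference values are our assumptions, not the published implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def band_decompose(image, sigmas=(1, 2, 4, 8)):
    """Hierarchical unsharp masking: split an image into frequency bands."""
    bands, residual = [], image.astype(float)
    for s in sigmas:
        low = gaussian_filter(residual, sigma=s)
        bands.append(residual - low)       # band-pass detail at this scale
        residual = low
    bands.append(residual)                 # remaining low-frequency content
    return bands

def normalize_bands(image, reference_energies, sigmas=(1, 2, 4, 8)):
    """Rescale the energy (std) of each band to a reference value and recombine."""
    bands = band_decompose(image, sigmas)
    out = np.zeros_like(bands[0])
    for band, ref in zip(bands, reference_energies):
        energy = band.std() + 1e-12
        out += band * (ref / energy)
    return out

# Usage: normalize a slice reconstructed with a sharp kernel toward reference energies
# that would be estimated from scans reconstructed with a standard (soft) kernel.
ct_slice = np.random.default_rng(6).normal(size=(128, 128))   # stand-in for a HU image
refs = [1.0, 0.8, 0.6, 0.4, 1.0]                              # assumed reference energies
normalized = normalize_bands(ct_slice, refs)
```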

  8. Gaussian processes with optimal kernel construction for neuro-degenerative clinical onset prediction

    Science.gov (United States)

    Canas, Liane S.; Yvernault, Benjamin; Cash, David M.; Molteni, Erika; Veale, Tom; Benzinger, Tammie; Ourselin, Sébastien; Mead, Simon; Modat, Marc

    2018-02-01

    Gaussian Processes (GP) are a powerful tool to capture the complex time-variations of a dataset. In the context of medical imaging analysis, they allow a robust modelling even in case of highly uncertain or incomplete datasets. Predictions from GP are dependent on the covariance kernel function selected to explain the data variance. To overcome this limitation, we propose a framework to identify the optimal covariance kernel function to model the data. The optimal kernel is defined as a composition of base kernel functions used to identify correlation patterns between data points. Our approach includes a modified version of the Compositional Kernel Learning (CKL) algorithm, in which we score the kernel families using a new energy function that depends on both the Bayesian Information Criterion (BIC) and the explained variance score. We applied the proposed framework to model the progression of neurodegenerative diseases over time, in particular the progression of autosomal dominantly inherited Alzheimer's disease, and use it to predict the time to clinical onset of subjects carrying a genetic mutation.
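
    A hedged sketch of a CKL-style greedy search is shown below: candidate kernels are built by adding or multiplying base kernels and scored with BIC on a fitted Gaussian process (the record's energy function additionally uses the explained variance score, which is omitted here). The base kernel set, search depth, scoring details and toy progression data are assumptions.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, RationalQuadratic, ExpSineSquared, WhiteKernel

def bic_score(gp, n):
    """Bayesian Information Criterion for a fitted GP (lower is better)."""
    k = len(gp.kernel_.theta)                        # number of kernel hyperparameters
    return -2.0 * gp.log_marginal_likelihood_value_ + k * np.log(n)

def greedy_kernel_search(X, y, base_kernels, depth=2):
    """Greedy CKL-style search: extend the current kernel by + or * a base kernel."""
    best_kernel, best_bic, current = None, np.inf, None
    for _ in range(depth):
        candidates = []
        for b in base_kernels:
            if current is None:
                candidates.append(b)
            else:
                candidates += [current + b, current * b]
        for cand in candidates:
            gp = GaussianProcessRegressor(kernel=cand + WhiteKernel(), normalize_y=True)
            gp.fit(X, y)
            score = bic_score(gp, len(y))
            if score < best_bic:
                best_bic, best_kernel = score, cand
        current = best_kernel
    return best_kernel, best_bic

# Toy usage: model a disease-progression-like score against time.
rng = np.random.default_rng(7)
t = np.sort(rng.uniform(0, 10, 40)).reshape(-1, 1)
y = np.tanh(t[:, 0] - 5) + 0.1 * rng.normal(size=40)
kernel, bic = greedy_kernel_search(t, y, [RBF(), RationalQuadratic(), ExpSineSquared()])
```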

  9. Phenolic compounds and antioxidant activity of kernels and shells of Mexican pecan (Carya illinoinensis).

    Science.gov (United States)

    de la Rosa, Laura A; Alvarez-Parrilla, Emilio; Shahidi, Fereidoon

    2011-01-12

    The phenolic composition and antioxidant activity of pecan kernels and shells cultivated in three regions of the state of Chihuahua, Mexico, were analyzed. High concentrations of total extractable phenolics, flavonoids, and proanthocyanidins were found in kernels, and 5-20-fold higher concentrations were found in shells. Their concentrations were significantly affected by the growing region. Antioxidant activity was evaluated by the ORAC, DPPH•, HO•, and ABTS•+ scavenging (TAC) methods. Antioxidant activity was strongly correlated with the concentrations of phenolic compounds. A strong correlation existed among the results obtained using these four methods. Five individual phenolic compounds were positively identified and quantified in kernels: ellagic, gallic, protocatechuic, and p-hydroxybenzoic acids and catechin. Only ellagic and gallic acids could be identified in shells. Seven phenolic compounds were tentatively identified in kernels by means of MS and UV spectral comparison, namely, protocatechuic aldehyde, (epi)gallocatechin, one gallic acid-glucose conjugate, three ellagic acid derivatives, and valoneic acid dilactone.

  10. The dark sector from interacting canonical and non-canonical scalar fields

    International Nuclear Information System (INIS)

    De Souza, Rudinei C; Kremer, Gilberto M

    2010-01-01

    In this work general models with interactions between two canonical scalar fields and between one non-canonical (tachyon type) and one canonical scalar field are investigated. The potentials and couplings to the gravity are selected through the Noether symmetry approach. These general models are employed to describe interactions between dark energy and dark matter, with the fields being constrained by the astronomical data. The cosmological solutions of some cases are compared with the observed evolution of the late Universe.

  11. Bi-directional gene set enrichment and canonical correlation analysis identify key diet-sensitive pathways and biomarkers of metabolic syndrome

    Directory of Open Access Journals (Sweden)

    Gaora Peadar Ó

    2010-10-01

    Full Text Available Abstract Background Currently, a number of bioinformatics methods are available to generate appropriate lists of genes from a microarray experiment. While these lists represent an accurate primary analysis of the data, fewer options exist to contextualise those lists. The development and validation of such methods is crucial to the wider application of microarray technology in the clinical setting. Two key challenges in clinical bioinformatics involve appropriate statistical modelling of dynamic transcriptomic changes, and extraction of clinically relevant meaning from very large datasets. Results Here, we apply an approach to gene set enrichment analysis that allows for detection of bi-directional enrichment within a gene set. Furthermore, we apply canonical correlation analysis and Fisher's exact test, using plasma marker data with known clinical relevance to aid identification of the most important gene and pathway changes in our transcriptomic dataset. After a 28-day dietary intervention with high-CLA beef, a range of plasma markers indicated a marked improvement in the metabolic health of genetically obese mice. Tissue transcriptomic profiles indicated that the effects were most dramatic in liver (1270 genes significantly changed; p < 0.05). Conclusion Bi-directional gene set enrichment analysis more accurately reflects dynamic regulatory behaviour in biochemical pathways, and as such highlighted biologically relevant changes that were not detected using a traditional approach. In such cases where transcriptomic response to treatment is exceptionally large, canonical correlation analysis in conjunction with Fisher's exact test highlights the subset of pathways showing strongest correlation with the clinical markers of interest. In this case, we have identified selenoamino acid metabolism and steroid biosynthesis as key pathways mediating the observed relationship between metabolic health and high-CLA beef. These results indicate that this type of

  12. Oblique rotation in canonical correlation analysis reformulated as maximizing the generalized coefficient of determination.

    Science.gov (United States)

    Satomura, Hironori; Adachi, Kohei

    2013-07-01

    To facilitate the interpretation of canonical correlation analysis (CCA) solutions, procedures have been proposed in which CCA solutions are orthogonally rotated to a simple structure. In this paper, we consider oblique rotation for CCA to provide solutions that are much easier to interpret, though only orthogonal rotation is allowed in the existing formulations of CCA. Our task is thus to reformulate CCA so that its solutions have the freedom of oblique rotation. Such a task can be achieved using Yanai's (Jpn. J. Behaviormetrics 1:46-54, 1974; J. Jpn. Stat. Soc. 11:43-53, 1981) generalized coefficient of determination for the objective function to be maximized in CCA. The resulting solutions are proved to include the existing orthogonal ones as special cases and to be rotated obliquely without affecting the objective function value, where ten Berge's (Psychometrika 48:519-523, 1983) theorems on suborthonormal matrices are used. A real data example demonstrates that the proposed oblique rotation can provide simple, easily interpreted CCA solutions.

  13. Data-driven fault detection for industrial processes canonical correlation analysis and projection based methods

    CERN Document Server

    Chen, Zhiwen

    2017-01-01

    Zhiwen Chen aims to develop advanced fault detection (FD) methods for the monitoring of industrial processes. With the ever increasing demands on reliability and safety in industrial processes, fault detection has become an important issue. Although model-based fault detection theory has been well studied in the past decades, its application to large-scale industrial processes is limited because it is difficult to build accurate models. Furthermore, motivated by the limitations of existing data-driven FD methods, novel canonical correlation analysis (CCA) and projection-based methods are proposed from the perspectives of process input and output data, less engineering effort and wide application scope. For performance evaluation of FD methods, a new index is also developed. Contents: A New Index for Performance Evaluation of FD Methods; CCA-based FD Method for the Monitoring of Stationary Processes; Projection-based FD Method for the Monitoring of Dynamic Processes; Benchmark Study and Real-Time Implementat...

  14. A novel adaptive kernel method with kernel centers determined by a support vector regression approach

    NARCIS (Netherlands)

    Sun, L.G.; De Visser, C.C.; Chu, Q.P.; Mulder, J.A.

    2012-01-01

    The optimality of the kernel number and kernel centers plays a significant role in determining the approximation power of nearly all kernel methods. However, the process of choosing optimal kernels is always formulated as a global optimization task, which is hard to accomplish. Recently, an

  15. Both canonical and non-canonical Wnt signaling independently promote stem cell growth in mammospheres.

    Directory of Open Access Journals (Sweden)

    Alexander M Many

    Full Text Available The characterization of mammary stem cells, and signals that regulate their behavior, is of central importance in understanding developmental changes in the mammary gland and possibly for targeting stem-like cells in breast cancer. The canonical Wnt/β-catenin pathway is a signaling mechanism associated with maintenance of self-renewing stem cells in many tissues, including mammary epithelium, and can be oncogenic when deregulated. Wnt1 and Wnt3a are examples of ligands that activate the canonical pathway. Other Wnt ligands, such as Wnt5a, typically signal via non-canonical, β-catenin-independent, pathways that in some cases can antagonize canonical signaling. Since the role of non-canonical Wnt signaling in stem cell regulation is not well characterized, we set out to investigate this using mammosphere formation assays that reflect and quantify stem cell properties. Ex vivo mammosphere cultures were established from both wild-type and Wnt1 transgenic mice and were analyzed in response to manipulation of both canonical and non-canonical Wnt signaling. An increased level of mammosphere formation was observed in cultures derived from MMTV-Wnt1 versus wild-type animals, and this was blocked by treatment with Dkk1, a selective inhibitor of canonical Wnt signaling. Consistent with this, we found that a single dose of recombinant Wnt3a was sufficient to increase mammosphere formation in wild-type cultures. Surprisingly, we found that Wnt5a also increased mammosphere formation in these assays. We confirmed that this was not caused by an increase in canonical Wnt/β-catenin signaling but was instead mediated by non-canonical Wnt signals requiring the receptor tyrosine kinase Ror2 and activity of the Jun N-terminal kinase, JNK. We conclude that both canonical and non-canonical Wnt signals have positive effects promoting stem cell activity in mammosphere assays and that they do so via independent signaling mechanisms.

  16. Protein Subcellular Localization with Gaussian Kernel Discriminant Analysis and Its Kernel Parameter Selection.

    Science.gov (United States)

    Wang, Shunfang; Nie, Bing; Yue, Kun; Fei, Yu; Li, Wenjia; Xu, Dongshu

    2017-12-15

    Kernel discriminant analysis (KDA) is a dimension reduction and classification algorithm based on the nonlinear kernel trick, which can be used to treat high-dimensional and complex biological data before classification processes such as protein subcellular localization. Kernel parameters have a great impact on the performance of the KDA model. Specifically, for KDA with the popular Gaussian kernel, selecting the scale parameter is still a challenging problem. This paper therefore introduces the KDA method and proposes a new method for Gaussian kernel parameter selection, based on the principle that the differences between the reconstruction errors of edge normal samples and those of interior normal samples should be maximized for suitable kernel parameters. Experiments with various standard data sets for protein subcellular localization show that the overall accuracy of protein classification prediction with KDA is much higher than that without KDA. Meanwhile, the kernel parameter of KDA has a great impact on the efficiency, and the proposed method can produce an optimal parameter, which makes the new algorithm not only perform as effectively as the traditional ones, but also reduce the computational time and thus improve efficiency.
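
    The reconstruction-error criterion described above is specific to the paper; as a simpler, clearly substituted stand-in, the sketch below selects the Gaussian width of a KDA-like pipeline (kernel PCA followed by a linear discriminant) by cross-validated grid search. The pipeline composition, the gamma grid and the toy data are our assumptions.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import KernelPCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import GridSearchCV

# A KDA-like pipeline: nonlinear Gaussian-kernel mapping followed by a linear discriminant.
pipe = make_pipeline(
    KernelPCA(kernel="rbf", n_components=10),
    LinearDiscriminantAnalysis(),
)
param_grid = {"kernelpca__gamma": np.logspace(-3, 2, 12)}   # candidate Gaussian widths
search = GridSearchCV(pipe, param_grid, cv=5)

# Toy stand-in for a protein-localization dataset (features X, class labels y).
rng = np.random.default_rng(8)
X = rng.normal(size=(150, 20))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1.5).astype(int)
search.fit(X, y)
print("selected gamma:", search.best_params_["kernelpca__gamma"])
```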

  17. 7 CFR 981.7 - Edible kernel.

    Science.gov (United States)

    2010-01-01

    7 CFR 981.7 (Agriculture, Regulating Handling, Definitions): Edible kernel means a kernel, piece, or particle of almond kernel that is not inedible. [41 FR 26852, June 30, 1976]

  18. Three dimensional canonical transformations

    International Nuclear Information System (INIS)

    Tegmen, A.

    2010-01-01

    A generic construction of canonical transformations is given in three-dimensional phase spaces on which the Nambu bracket is imposed. First, the canonical transformations are defined as based on canonoid transformations. Second, it is shown that determination of the generating functions, and of the transformation itself for a given generating function, is possible by solving the corresponding Pfaffian differential equations. The possible types of generating functions are introduced and all of them are listed. Infinitesimal canonical transformations are also discussed as a complementary subject. Finally, it is shown that decomposition of canonical transformations is also possible in three-dimensional phase spaces, as in the usual two-dimensional ones.

  19. Statistical hadronization and hadronic micro-canonical ensemble II

    International Nuclear Information System (INIS)

    Becattini, F.; Ferroni, L.

    2004-01-01

    We present a Monte Carlo calculation of the micro-canonical ensemble of the ideal hadron-resonance gas including all known states up to a mass of about 1.8 GeV and full quantum statistics. The micro-canonical average multiplicities of the various hadron species are found to converge to the canonical ones for moderately low values of the total energy, around 8 GeV, thus bearing out previous analyses of hadronic multiplicities in the canonical ensemble. The main numerical computing method is an importance sampling Monte Carlo algorithm using the product of Poisson distributions to generate multi-hadronic channels. It is shown that the use of this multi-Poisson distribution allows for an efficient and fast computation of averages, which can be further improved in the limit of very large clusters. We have also studied the fitness of a previously proposed computing method, based on the Metropolis Monte Carlo algorithm, for event generation in the statistical hadronization model. We find that the use of the multi-Poisson distribution as proposal matrix dramatically improves the computation performance. However, due to the correlation of subsequent samples, this method proves to be generally less robust and effective than the importance sampling method. (orig.)

  20. Canonical methods in classical and quantum gravity: An invitation to canonical LQG

    Science.gov (United States)

    Reyes, Juan D.

    2018-04-01

    Loop Quantum Gravity (LQG) is a candidate quantum theory of gravity still under construction. LQG was originally conceived as a background independent canonical quantization of Einstein's general relativity theory. This contribution provides some physical motivations and an overview of some mathematical tools employed in canonical Loop Quantum Gravity. First, Hamiltonian classical methods are reviewed from a geometric perspective. Canonical Dirac quantization of general gauge systems is sketched next. The Hamiltonian formulation of gravity in geometric ADM and connection-triad variables is then presented to finally lay down the canonical loop quantization program. The presentation is geared toward advanced undergraduate or graduate students in physics and/or non-specialists curious about LQG.

  1. Kernel versions of some orthogonal transformations

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    Kernel versions of orthogonal transformations such as principal components are based on a dual formulation also termed Q-mode analysis in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products of the original data are replaced...... by inner products between nonlinear mappings into higher dimensional feature space. Via kernel substitution also known as the kernel trick these inner products between the mappings are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of this kernel...... function. This means that we need not know the nonlinear mappings explicitly. Kernel principal component analysis (PCA) and kernel minimum noise fraction (MNF) analyses handle nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function...
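
    A minimal sketch of the dual (Q-mode) formulation described above is given below for kernel PCA: only the Gram matrix of kernel evaluations is used, centered in feature space and eigendecomposed. The kernel choice, gamma and score scaling follow common convention and are assumptions rather than the report's exact derivation.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def kernel_pca(X, n_components=2, gamma=0.5):
    """Dual-formulation (Q-mode) kernel PCA: work only with the Gram matrix."""
    K = rbf_kernel(X, X, gamma=gamma)          # inner products in feature space
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering in feature space
    Kc = J @ K @ J
    eigvals, eigvecs = np.linalg.eigh(Kc)
    order = np.argsort(eigvals)[::-1][:n_components]
    # Scores: sqrt(lambda) * eigenvector, as in ordinary PCA scores.
    return eigvecs[:, order] * np.sqrt(np.clip(eigvals[order], 0, None))

scores = kernel_pca(np.random.default_rng(9).normal(size=(100, 6)))
```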

  2. Model Selection in Kernel Ridge Regression

    DEFF Research Database (Denmark)

    Exterkate, Peter

    Kernel ridge regression is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts. This paper investigates the influence of the choice of kernel and the setting of tuning parameters on forecast accuracy. We review several popular kernels......, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. We interpret the latter two kernels in terms of their smoothing properties, and we relate the tuning parameters associated to all these kernels to smoothness measures of the prediction function and to the signal-to-noise ratio. Based...... on these interpretations, we provide guidelines for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study confirms the practical usefulness of these rules of thumb. Finally, the flexible and smooth functional forms provided by the Gaussian and Sinc kernels makes them widely...

  3. Determination of the Iodine Value of Hydrogenated Palm Kernel Oil (HPKO) and Refined Bleached Deodorized Palm Kernel Oil (RBDPKO)

    OpenAIRE

    Sitompul, Monica Angelina

    2015-01-01

    The iodine value of several samples of Hydrogenated Palm Kernel Oil (HPKO) and Refined Bleached Deodorized Palm Kernel Oil (RBDPKO) was determined by titration. The analysis gave iodine values of 0.16 g I2/100 g for Hydrogenated Palm Kernel Oil (A), 0.20 g I2/100 g for Hydrogenated Palm Kernel Oil (B), and 0.24 g I2/100 g for Hydrogenated Palm Kernel Oil (C), and of 17.51 g I2/100 g for Refined Bleached Deodorized Palm Kernel Oil (A), Refined Bleached Deodorized Palm Kernel ...

  4. 7 CFR 981.8 - Inedible kernel.

    Science.gov (United States)

    2010-01-01

    7 CFR 981.8 (Agriculture, Regulating Handling, Definitions): Inedible kernel means a kernel, piece, or particle of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or...

  5. The Smoothing Artifact of Spatially Constrained Canonical Correlation Analysis in Functional MRI

    Directory of Open Access Journals (Sweden)

    Dietmar Cordes

    2012-01-01

    Full Text Available A wide range of studies show the capacity of multivariate statistical methods for fMRI to improve mapping of brain activations in a noisy environment. An advanced method uses local canonical correlation analysis (CCA to encompass a group of neighboring voxels instead of looking at the single voxel time course. The value of a suitable test statistic is used as a measure of activation. It is customary to assign the value to the center voxel; however, this is a choice of convenience and without constraints introduces artifacts, especially in regions of strong localized activation. To compensate for these deficiencies, different spatial constraints in CCA have been introduced to enforce dominance of the center voxel. However, even if the dominance condition for the center voxel is satisfied, constrained CCA can still lead to a smoothing artifact, often called the “bleeding artifact of CCA”, in fMRI activation patterns. In this paper a new method is introduced to measure and correct for the smoothing artifact for constrained CCA methods. It is shown that constrained CCA methods corrected for the smoothing artifact lead to more plausible activation patterns in fMRI as shown using data from a motor task and a memory task.

  6. 7 CFR 981.408 - Inedible kernel.

    Science.gov (United States)

    2010-01-01

    7 CFR 981.408 (Agriculture, Administrative Rules and Regulations): Pursuant to § 981.8, the definition of inedible kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as...

  7. Depth-time interpolation of feature trends extracted from mobile microelectrode data with kernel functions.

    Science.gov (United States)

    Wong, Stephen; Hargreaves, Eric L; Baltuch, Gordon H; Jaggi, Jurg L; Danish, Shabbar F

    2012-01-01

    Microelectrode recording (MER) is necessary for precision localization of target structures such as the subthalamic nucleus during deep brain stimulation (DBS) surgery. Attempts to automate this process have produced quantitative temporal trends (feature activity vs. time) extracted from mobile MER data. Our goal was to evaluate computational methods of generating spatial profiles (feature activity vs. depth) from temporal trends that would decouple automated MER localization from the clinical procedure and enhance functional localization in DBS surgery. We evaluated two methods of interpolation (standard vs. kernel) that generated spatial profiles from temporal trends. We compared interpolated spatial profiles to true spatial profiles that were calculated with depth windows, using correlation coefficient analysis. Excellent approximation of true spatial profiles is achieved by interpolation. Kernel-interpolated spatial profiles produced superior correlation coefficient values at optimal kernel widths (r = 0.932-0.940) compared to standard interpolation (r = 0.891). The choice of kernel function and kernel width resulted in trade-offs in smoothing and resolution. Interpolation of feature activity to create spatial profiles from temporal trends is accurate and can standardize and facilitate MER functional localization of subcortical structures. The methods are computationally efficient, enhancing localization without imposing additional constraints on the MER clinical procedure during DBS surgery. Copyright © 2012 S. Karger AG, Basel.
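
    As a hedged illustration of kernel interpolation of a temporal trend onto a depth grid, the sketch below uses a Gaussian (Nadaraya-Watson style) weighting, where the kernel width controls the smoothing/resolution trade-off mentioned above. The feature trend, depth units and width value are synthetic placeholders, not MER data or the authors' kernel choice.

```python
import numpy as np

def kernel_interpolate(depths, activity, query_depths, width=0.25):
    """Gaussian-kernel (Nadaraya-Watson) interpolation of feature activity vs. depth."""
    d = query_depths[:, None] - depths[None, :]
    w = np.exp(-0.5 * (d / width) ** 2)
    return (w @ activity) / w.sum(axis=1)

# Toy MER-like trend: irregularly sampled depths (mm) and a spike-derived feature.
rng = np.random.default_rng(10)
depths = np.sort(rng.uniform(-10, 5, 120))                 # electrode depth samples
activity = np.exp(-((depths + 2) / 1.5) ** 2) + 0.1 * rng.normal(size=depths.size)
grid = np.linspace(-10, 5, 200)
profile = kernel_interpolate(depths, activity, grid, width=0.3)
```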

  8. Grand canonical simulations of hard-disk systems by simulated tempering

    DEFF Research Database (Denmark)

    Döge, G.; Mecke, K.; Møller, Jesper

    2004-01-01

    The melting transition of hard disks in two dimensions is still an unsolved problem and improved simulation algorithms may be helpful for its investigation. We suggest the application of simulated tempering for grand canonical hard-disk systems as an efficient alternative to the commonly used Monte Carlo algorithms for canonical systems. This approach allows the direct study of the packing fraction as a function of the chemical potential, even in the vicinity of the melting transition. Furthermore, estimates of several spatial characteristics, including the pair correlation function, are studied.

  9. Adiabatic-connection fluctuation-dissipation DFT for the structural properties of solids—The renormalized ALDA and electron gas kernels

    Energy Technology Data Exchange (ETDEWEB)

    Patrick, Christopher E., E-mail: chripa@fysik.dtu.dk; Thygesen, Kristian S., E-mail: thygesen@fysik.dtu.dk [Center for Atomic-Scale Materials Design (CAMD), Department of Physics, Technical University of Denmark, DK—2800 Kongens Lyngby (Denmark)

    2015-09-14

    We present calculations of the correlation energies of crystalline solids and isolated systems within the adiabatic-connection fluctuation-dissipation formulation of density-functional theory. We perform a quantitative comparison of a set of model exchange-correlation kernels originally derived for the homogeneous electron gas (HEG), including the recently introduced renormalized adiabatic local-density approximation (rALDA) and also kernels which (a) satisfy known exact limits of the HEG, (b) carry a frequency dependence, or (c) display a 1/k^2 divergence for small wavevectors. After generalizing the kernels to inhomogeneous systems through a reciprocal-space averaging procedure, we calculate the lattice constants and bulk moduli of a test set of 10 solids consisting of tetrahedrally bonded semiconductors (C, Si, SiC), ionic compounds (MgO, LiCl, LiF), and metals (Al, Na, Cu, Pd). We also consider the atomization energy of the H2 molecule. We compare the results calculated with different kernels to those obtained from the random-phase approximation (RPA) and to experimental measurements. We demonstrate that the model kernels correct the RPA's tendency to overestimate the magnitude of the correlation energy whilst maintaining a high-accuracy description of structural properties.

  10. Model selection in kernel ridge regression

    DEFF Research Database (Denmark)

    Exterkate, Peter

    2013-01-01

    Kernel ridge regression is a technique to perform ridge regression with a potentially infinite number of nonlinear transformations of the independent variables as regressors. This method is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts....... The influence of the choice of kernel and the setting of tuning parameters on forecast accuracy is investigated. Several popular kernels are reviewed, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. The latter two kernels are interpreted in terms of their smoothing properties......, and the tuning parameters associated to all these kernels are related to smoothness measures of the prediction function and to the signal-to-noise ratio. Based on these interpretations, guidelines are provided for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study...
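
    A minimal sketch of the practical recipe discussed in these records (select the kernel tuning parameters from small grids by cross-validation) is given below using scikit-learn's KernelRidge with a Gaussian kernel. The grids, data and kernel choice are illustrative assumptions, not the paper's recommended values.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(11)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sinc(X[:, 0]) + 0.1 * rng.normal(size=200)

# Small grids for the tuning parameters, selected by cross-validation.
grid = {
    "alpha": np.logspace(-4, 1, 6),     # ridge penalty (related to signal-to-noise ratio)
    "gamma": np.logspace(-2, 2, 6),     # Gaussian kernel width (smoothness)
}
krr = GridSearchCV(KernelRidge(kernel="rbf"), grid, cv=5)
krr.fit(X, y)
print(krr.best_params_)
```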

  11. LZW-Kernel: fast kernel utilizing variable length code blocks from LZW compressors for protein sequence classification.

    Science.gov (United States)

    Filatov, Gleb; Bauwens, Bruno; Kertész-Farkas, Attila

    2018-05-07

    Bioinformatics studies often rely on similarity measures between sequence pairs, which often pose a bottleneck in large-scale sequence analysis. Here, we present a new convolutional kernel function for protein sequences called the LZW-Kernel. It is based on code words identified with the Lempel-Ziv-Welch (LZW) universal text compressor. The LZW-Kernel is an alignment-free method; it is always symmetric, is positive, always provides 1.0 for self-similarity, and it can be used directly with Support Vector Machines (SVMs) in classification problems, contrary to the normalized compression distance (NCD), which often violates the distance metric properties in practice and requires further techniques to be used with SVMs. The LZW-Kernel is a one-pass algorithm, which makes it particularly suitable for big data applications. Our experimental studies on remote protein homology detection and protein classification tasks reveal that the LZW-Kernel closely approaches the performance of the Local Alignment Kernel (LAK) and the SVM-pairwise method combined with Smith-Waterman (SW) scoring at a fraction of the time. Moreover, the LZW-Kernel outperforms the SVM-pairwise method when combined with BLAST scores, which indicates that the LZW code words might be a better basis for similarity measures than local alignment approximations found with BLAST. In addition, the LZW-Kernel outperforms n-gram based mismatch kernels, hidden Markov model based SAM and Fisher kernel, and protein family based PSI-BLAST, among others. Further advantages include the LZW-Kernel's reliance on a simple idea, its ease of implementation, and its high speed, three times faster than BLAST and several orders of magnitude faster than SW or LAK in our tests. LZW-Kernel is implemented as standalone C code and is a free open-source program distributed under the GPLv3 license and can be downloaded from https://github.com/kfattila/LZW-Kernel. akerteszfarkas@hse.ru. Supplementary data are available at Bioinformatics Online.
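
    As a toy illustration of the ingredient the method builds on, the sketch below constructs the LZW code-word dictionary of a sequence and compares two sequences through their shared code words with a simple cosine-style normalization that returns 1.0 for self-similarity. This is not the paper's exact kernel formula; the helper names and the normalization are our assumptions.

```python
def lzw_code_words(seq):
    """Return the set of variable-length code words the LZW compressor builds for seq."""
    dictionary = set(seq)            # start from the single characters present
    w = ""
    for c in seq:
        if w + c in dictionary:
            w += c
        else:
            dictionary.add(w + c)    # new code word discovered
            w = c
    return dictionary

def codeword_similarity(a, b):
    """Toy normalized similarity between two sequences via shared LZW code words."""
    A, B = lzw_code_words(a), lzw_code_words(b)
    return len(A & B) / (len(A) * len(B)) ** 0.5   # 1.0 for identical sequences

print(codeword_similarity("MKTAYIAKQR", "MKTAYIAKQL"))
```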

  12. Viscosity kernel of molecular fluids

    DEFF Research Database (Denmark)

    Puscasu, Ruslan; Todd, Billy; Daivis, Peter

    2010-01-01

    , temperature, and chain length dependencies of the reciprocal and real-space viscosity kernels are presented. We find that the density has a major effect on the shape of the kernel. The temperature range and chain lengths considered here have by contrast less impact on the overall normalized shape. Functional...... forms that fit the wave-vector-dependent kernel data over a large density and wave-vector range have also been tested. Finally, a structural normalization of the kernels in physical space is considered. Overall, the real-space viscosity kernel has a width of roughly 3–6 atomic diameters, which means...

  13. Kernel learning algorithms for face recognition

    CERN Document Server

    Li, Jun-Bao; Pan, Jeng-Shyang

    2013-01-01

    Kernel Learning Algorithms for Face Recognition covers the framework of kernel based face recognition. This book discusses the advanced kernel learning algorithms and its application on face recognition. This book also focuses on the theoretical deviation, the system framework and experiments involving kernel based face recognition. Included within are algorithms of kernel based face recognition, and also the feasibility of the kernel based face recognition method. This book provides researchers in pattern recognition and machine learning area with advanced face recognition methods and its new

  14. Quantum canonical ensemble: A projection operator approach

    Science.gov (United States)

    Magnus, Wim; Lemmens, Lucien; Brosens, Fons

    2017-09-01

    Knowing the exact number of particles N, and taking this knowledge into account, the quantum canonical ensemble imposes a constraint on the occupation number operators. The constraint particularly hampers the systematic calculation of the partition function and any relevant thermodynamic expectation value for arbitrary but fixed N. On the other hand, fixing only the average number of particles, one may remove the above constraint and simply factorize the traces in Fock space into traces over single-particle states. As is well known, that would be the strategy of the grand-canonical ensemble which, however, comes with an additional Lagrange multiplier to impose the average number of particles. The appearance of this multiplier can be avoided by invoking a projection operator that enables a constraint-free computation of the partition function and its derived quantities in the canonical ensemble, at the price of an angular or contour integration. Introduced in the recent past to handle various issues related to particle-number projected statistics, the projection operator approach proves beneficial to a wide variety of problems in condensed matter physics for which the canonical ensemble offers a natural and appropriate environment. In this light, we present a systematic treatment of the canonical ensemble that embeds the projection operator into the formalism of second quantization while explicitly fixing N, the very number of particles rather than the average. Being applicable to both bosonic and fermionic systems in arbitrary dimensions, transparent integral representations are provided for the partition function ZN and the Helmholtz free energy FN as well as for two- and four-point correlation functions. The chemical potential is not a Lagrange multiplier regulating the average particle number but can be extracted from FN+1 -FN, as illustrated for a two-dimensional fermion gas.
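
    For non-interacting fermions the projection described above can be written as an angular integral, Z_N = (1/2π) ∫ dφ e^{-iNφ} ∏_k (1 + e^{iφ} e^{-βε_k}), which a short script can evaluate on a φ grid. The sketch below assumes that standard form plus a toy single-particle spectrum; the discretization and level structure are ours, not the paper's derivation.

```python
import numpy as np

def canonical_partition_fermions(energies, N, beta, n_phi=2048):
    """Particle-number projection:
    Z_N = (1/2pi) integral dphi e^{-i N phi} prod_k (1 + e^{i phi} e^{-beta e_k})."""
    phi = np.linspace(0, 2 * np.pi, n_phi, endpoint=False)
    # Grand-canonical-like trace with fugacity e^{i phi}, evaluated on the phi grid.
    grand = np.prod(1.0 + np.exp(1j * phi)[:, None] * np.exp(-beta * energies)[None, :], axis=1)
    integrand = np.exp(-1j * N * phi) * grand
    return integrand.mean().real            # mean over the grid approximates (1/2pi) * integral

# Toy usage: N spinless fermions on equally spaced single-particle levels.
beta = 1.0
levels = np.arange(10) * 1.0
ZN = canonical_partition_fermions(levels, N=3, beta=beta)
FN = -np.log(ZN) / beta                     # Helmholtz free energy F_N = -(1/beta) ln Z_N
```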

  15. Fumonisins in corn: correlation with Fusarium sp. count, damaged kernels, protein and lipid content

    Directory of Open Access Journals (Sweden)

    Elisabete Yurie Sataque Ono

    2006-01-01

    Full Text Available Natural fungal and fumonisin contamination were evaluated in 109 freshly harvested corn samples from Paraná State and correlated with damaged kernels (%). In addition, healthy and damaged kernels from 24 corn samples were selected in order to compare the mycoflora profile and fumonisin levels. The correlation between protein/lipid content and fumonisin levels was also analyzed in the 15 most frequently cultivated corn hybrids. Total fungal colony counts in the 109 freshly harvested corn samples ranged from 1.9x10^4 to 3.5x10^6 CFU/g, Fusarium sp. counts from 1.0x10^3 to 2.2x10^6 CFU/g, and fumonisin levels from 0.13 to 20.38 µg/g. Total fungal colony/Fusarium sp. counts and fumonisin levels showed a positive correlation (p < 0.05). There was also a positive correlation between damaged kernels (%) and total fungal/Fusarium spp. counts (p < 0.05). Fumonisin levels in healthy kernels ranged from 0.57 to 20.38 µg/g, whereas in damaged kernels they ranged from 68.96 to 336.38 µg/g. No significant correlation was observed between fumonisin levels and protein/lipid content. These results confirm the importance of monitoring

  16. A Non-Local, Energy-Optimized Kernel: Recovering Second-Order Exchange and Beyond in Extended Systems

    Science.gov (United States)

    Bates, Jefferson; Laricchia, Savio; Ruzsinszky, Adrienn

    The Random Phase Approximation (RPA) is quickly becoming a standard method beyond semi-local Density Functional Theory that naturally incorporates weak interactions and eliminates self-interaction error. RPA is not perfect, however, and suffers from self-correlation error as well as an incorrect description of short-ranged correlation typically leading to underbinding. To improve upon RPA we introduce a short-ranged, exchange-like kernel that is one-electron self-correlation free for one and two electron systems in the high-density limit. By tuning the one free parameter in our model to recover an exact limit of the homogeneous electron gas correlation energy we obtain a non-local, energy-optimized kernel that reduces the errors of RPA for both homogeneous and inhomogeneous solids. To reduce the computational cost of the standard kernel-corrected RPA, we also implement RPA renormalized perturbation theory for extended systems, and demonstrate its capability to describe the dominant correlation effects with a low-order expansion in both metallic and non-metallic systems. Furthermore we stress that for norm-conserving implementations the accuracy of RPA and beyond RPA structural properties compared to experiment is inherently limited by the choice of pseudopotential. Current affiliation: King's College London.

  17. Canonical transformations of Kepler trajectories

    International Nuclear Information System (INIS)

    Mostowski, Jan

    2010-01-01

    In this paper, canonical transformations generated by constants of motion in the case of the Kepler problem are discussed. It is shown that canonical transformations generated by angular momentum are rotations of the trajectory. Particular attention is paid to canonical transformations generated by the Runge-Lenz vector. It is shown that these transformations change the eccentricity of the orbit. A method of obtaining elliptic trajectories from circular ones with the help of canonical transformations is discussed.

  18. Partial Deconvolution with Inaccurate Blur Kernel.

    Science.gov (United States)

    Ren, Dongwei; Zuo, Wangmeng; Zhang, David; Xu, Jun; Zhang, Lei

    2017-10-17

    Most non-blind deconvolution methods are developed under the error-free kernel assumption, and are not robust to inaccurate blur kernel. Unfortunately, despite the great progress in blind deconvolution, estimation error remains inevitable during blur kernel estimation. Consequently, severe artifacts such as ringing effects and distortions are likely to be introduced in the non-blind deconvolution stage. In this paper, we tackle this issue by suggesting: (i) a partial map in the Fourier domain for modeling kernel estimation error, and (ii) a partial deconvolution model for robust deblurring with inaccurate blur kernel. The partial map is constructed by detecting the reliable Fourier entries of estimated blur kernel. And partial deconvolution is applied to wavelet-based and learning-based models to suppress the adverse effect of kernel estimation error. Furthermore, an E-M algorithm is developed for estimating the partial map and recovering the latent sharp image alternatively. Experimental results show that our partial deconvolution model is effective in relieving artifacts caused by inaccurate blur kernel, and can achieve favorable deblurring quality on synthetic and real blurry images.

  19. Kernel methods for deep learning

    OpenAIRE

    Cho, Youngmin

    2012-01-01

    We introduce a new family of positive-definite kernels that mimic the computation in large neural networks. We derive the different members of this family by considering neural networks with different activation functions. Using these kernels as building blocks, we also show how to construct other positive-definite kernels by operations such as composition, multiplication, and averaging. We explore the use of these kernels in standard models of supervised learning, such as support vector mach...

  20. Canonical quantum gravity and consistent discretizations

    Indian Academy of Sciences (India)

    Abstract. This paper covers some developments in canonical quantum gravity that ... understanding the real Ashtekar variables four dimensionally [4], or the recent work ... Traditionally, canonical formulations of general relativity considered as canonical variables the metric on a spatial slice q_ab and a canonically conjugate momentum.

  1. Uncertainty relations, zero point energy and the linear canonical group

    Science.gov (United States)

    Sudarshan, E. C. G.

    1993-01-01

    The close relationship between the zero point energy, the uncertainty relations, coherent states, squeezed states, and correlated states for one mode is investigated. This group-theoretic perspective enables the parametrization and identification of their multimode generalization. In particular the generalized Schroedinger-Robertson uncertainty relations are analyzed. An elementary method of determining the canonical structure of the generalized correlated states is presented.

  2. 7 CFR 981.9 - Kernel weight.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Kernel weight. 981.9 Section 981.9 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Regulating Handling Definitions § 981.9 Kernel weight. Kernel weight means the weight of kernels, including...

  3. Kernel Tuning and Nonuniform Influence on Optical and Electrochemical Gaps of Bimetal Nanoclusters.

    Science.gov (United States)

    He, Lizhong; Yuan, Jinyun; Xia, Nan; Liao, Lingwen; Liu, Xu; Gan, Zibao; Wang, Chengming; Yang, Jinlong; Wu, Zhikun

    2018-03-14

    Fine tuning nanoparticles with atomic precision is exciting and challenging and is critical for tuning the properties, understanding the structure-property correlation and determining the practical applications of nanoparticles. Some ultrasmall thiolated metal nanoparticles (metal nanoclusters) have been shown to be precisely doped, and even the protecting staple metal atom could be precisely reduced. However, the precise addition or reduction of the kernel atom while the other metal atoms in the nanocluster remain the same has not been successful until now, to the best of our knowledge. Here, by carefully selecting the protecting ligand with adequate steric hindrance, we synthesized a novel nanocluster in which the kernel can be regarded as that formed by the addition of two silver atoms to both ends of the Pt@Ag12 icosahedral kernel of the Ag24Pt(SR)18 (SR: thiolate) nanocluster, as revealed by single crystal X-ray crystallography. Interestingly, compared with the previously reported Ag24Pt(SR)18 nanocluster, the as-obtained novel bimetal nanocluster exhibits a similar absorption but a different electrochemical gap. One possible explanation for this result is that the kernel tuning does not essentially change the electronic structure, but obviously influences the charge on the Pt@Ag12 kernel, as demonstrated by natural population analysis, thus possibly resulting in the large electrochemical gap difference between the two nanoclusters. This work not only provides a novel strategy to tune metal nanoclusters but also reveals that the kernel change does not necessarily alter the optical and electrochemical gaps in a uniform manner, which has important implications for the structure-property correlation of nanoparticles.

  4. Pollen source effects on growth of kernel structures and embryo chemical compounds in maize.

    Science.gov (United States)

    Tanaka, W; Mantese, A I; Maddonni, G A

    2009-08-01

    Previous studies have reported effects of pollen source on the oil concentration of maize (Zea mays) kernels through modifications to both the embryo/kernel ratio and embryo oil concentration. The present study expands upon previous analyses by addressing pollen source effects on the growth of kernel structures (i.e. pericarp, endosperm and embryo), allocation of embryo chemical constituents (i.e. oil, protein, starch and soluble sugars), and the anatomy and histology of the embryos. Maize kernels with different oil concentration were obtained from pollinations with two parental genotypes of contrasting oil concentration. The dynamics of the growth of kernel structures and allocation of embryo chemical constituents were analysed during the post-flowering period. Mature kernels were dissected to study the anatomy (embryonic axis and scutellum) and histology [cell number and cell size of the scutellums, presence of sub-cellular structures in scutellum tissue (starch granules, oil and protein bodies)] of the embryos. Plants of all crosses exhibited a similar kernel number and kernel weight. Pollen source modified neither the growth period of kernel structures, nor pericarp growth rate. By contrast, pollen source determined a trade-off between embryo and endosperm growth rates, which impacted on the embryo/kernel ratio of mature kernels. Modifications to the embryo size were mediated by scutellum cell number. Pollen source also significantly affected the allocation of embryo chemical compounds: embryo oil concentration was negatively correlated with starch concentration (|r| = 0.98), and embryos with low oil concentration showed a correspondingly increased starch allocation. The effect of pollen source on the embryo/kernel ratio and on the allocation of embryo chemical compounds seems to be related to the early established sink strength (i.e. sink size and sink activity) of the embryos.

  5. Veto-Consensus Multiple Kernel Learning

    NARCIS (Netherlands)

    Zhou, Y.; Hu, N.; Spanos, C.J.

    2016-01-01

    We propose Veto-Consensus Multiple Kernel Learning (VCMKL), a novel way of combining multiple kernels such that one class of samples is described by the logical intersection (consensus) of base kernelized decision rules, whereas the other classes by the union (veto) of their complements. The

  6. 7 CFR 51.2295 - Half kernel.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Half kernel. 51.2295 Section 51.2295 Agriculture... Standards for Shelled English Walnuts (Juglans Regia) Definitions § 51.2295 Half kernel. Half kernel means the separated half of a kernel with not more than one-eighth broken off. ...

  7. Mapping QTLs controlling kernel dimensions in a wheat inter-varietal RIL mapping population.

    Science.gov (United States)

    Cheng, Ruiru; Kong, Zhongxin; Zhang, Liwei; Xie, Quan; Jia, Haiyan; Yu, Dong; Huang, Yulong; Ma, Zhengqiang

    2017-07-01

    Seven kernel dimension QTLs were identified in wheat, and kernel thickness was found to be the most important dimension for grain weight improvement. Kernel morphology and weight of wheat (Triticum aestivum L.) affect both yield and quality; however, the genetic basis of these traits and their interactions has not been fully understood. In this study, to investigate the genetic factors affecting kernel morphology and the association of kernel morphology traits with kernel weight, kernel length (KL), width (KW) and thickness (KT) were evaluated, together with hundred-grain weight (HGW), in a recombinant inbred line population derived from Nanda2419 × Wangshuibai, with data from five trials (two different locations over 3 years). The results showed that HGW was more closely correlated with KT and KW than with KL. A whole genome scan revealed four QTLs for KL, one for KW and two for KT, distributed on five different chromosomes. Of them, QKl.nau-2D for KL, and QKt.nau-4B and QKt.nau-5A for KT were newly identified major QTLs for the respective traits, explaining up to 32.6 and 41.5% of the phenotypic variations, respectively. Increase of KW and KT and reduction of KL/KT and KW/KT ratios always resulted in significant higher grain weight. Lines combining the Nanda 2419 alleles of the 4B and 5A intervals had wider, thicker, rounder kernels and a 14% higher grain weight in the genotype-based analysis. A strong, negative linear relationship of the KW/KT ratio with grain weight was observed. It thus appears that kernel thickness is the most important kernel dimension factor in wheat improvement for higher yield. Mapping and marker identification of the kernel dimension-related QTLs definitely help realize the breeding goals.

  8. An Approximate Approach to Automatic Kernel Selection.

    Science.gov (United States)

    Ding, Lizhong; Liao, Shizhong

    2016-02-02

    Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.
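
    For orientation, the sketch below shows the exact (non-approximate) baseline that approximate kernel-selection schemes such as the one above aim to accelerate: choosing the Gaussian kernel width for kernel ridge regression by cross-validated grid search. The synthetic data, candidate grids, and scoring choice are assumptions; this is not the multilevel-circulant-matrix method of the paper.

import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

# Exact kernel selection: evaluate every candidate kernel width (gamma) by 5-fold CV
param_grid = {"gamma": np.logspace(-3, 2, 12), "alpha": [1e-3, 1e-2, 1e-1]}
search = GridSearchCV(KernelRidge(kernel="rbf"), param_grid, cv=5,
                      scoring="neg_mean_squared_error")
search.fit(X, y)
print("selected kernel parameters:", search.best_params_)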

  9. Iterative software kernels

    Energy Technology Data Exchange (ETDEWEB)

    Duff, I.

    1994-12-31

    This workshop focuses on kernels for iterative software packages. Specifically, the three speakers discuss various aspects of sparse BLAS kernels. Their topics are: `Current status of user level sparse BLAS`; `Current status of the sparse BLAS toolkit`; and `Adding matrix-matrix and matrix-matrix-matrix multiply to the sparse BLAS toolkit`.

  10. A kernel version of spatial factor analysis

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2009-01-01

    . Schölkopf et al. introduce kernel PCA. Shawe-Taylor and Cristianini is an excellent reference for kernel methods in general. Bishop and Press et al. describe kernel methods among many other subjects. Nielsen and Canty use kernel PCA to detect change in univariate airborne digital camera images. The kernel...... version of PCA handles nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. In this paper we shall apply kernel versions of PCA, maximum autocorrelation factor (MAF) analysis...

  11. Identification of associations between genotypes and longitudinal phenotypes via temporally-constrained group sparse canonical correlation analysis.

    Science.gov (United States)

    Hao, Xiaoke; Li, Chanxiu; Yan, Jingwen; Yao, Xiaohui; Risacher, Shannon L; Saykin, Andrew J; Shen, Li; Zhang, Daoqiang

    2017-07-15

    Neuroimaging genetics identifies the relationships between genetic variants (i.e., the single nucleotide polymorphisms) and brain imaging data to reveal the associations from genotypes to phenotypes. So far, most existing machine-learning approaches are widely used to detect the effective associations between genetic variants and brain imaging data at one time-point. However, those associations are based on static phenotypes and ignore the temporal dynamics of the phenotypical changes. The phenotypes across multiple time-points may exhibit temporal patterns that can be used to facilitate the understanding of the degenerative process. In this article, we propose a novel temporally constrained group sparse canonical correlation analysis (TGSCCA) framework to identify genetic associations with longitudinal phenotypic markers. The proposed TGSCCA method is able to capture the temporal changes in brain from longitudinal phenotypes by incorporating the fused penalty, which requires that the differences between two consecutive canonical weight vectors from adjacent time-points should be small. A new efficient optimization algorithm is designed to solve the objective function. Furthermore, we demonstrate the effectiveness of our algorithm on both synthetic and real data (i.e., the Alzheimer's Disease Neuroimaging Initiative cohort, including progressive mild cognitive impairment, stable MCI and Normal Control participants). In comparison with conventional SCCA, our proposed method can achieve strong associations and discover phenotypic biomarkers across multiple time-points to guide disease-progressive interpretation. The Matlab code is available at https://sourceforge.net/projects/ibrain-cn/files/.

  12. 7 CFR 51.1441 - Half-kernel.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Half-kernel. 51.1441 Section 51.1441 Agriculture... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume missing...

  13. Correlated Topic Vector for Scene Classification.

    Science.gov (United States)

    Wei, Pengxu; Qin, Fei; Wan, Fang; Zhu, Yi; Jiao, Jianbin; Ye, Qixiang

    2017-07-01

    Scene images usually involve semantic correlations, particularly when considering large-scale image data sets. This paper proposes a novel generative image representation, correlated topic vector, to model such semantic correlations. Oriented from the correlated topic model, correlated topic vector intends to naturally utilize the correlations among topics, which are seldom considered in the conventional feature encoding, e.g., Fisher vector, but do exist in scene images. It is expected that the involvement of correlations can increase the discriminative capability of the learned generative model and consequently improve the recognition accuracy. Incorporated with the Fisher kernel method, correlated topic vector inherits the advantages of Fisher vector. The contributions to the topics of visual words have been further employed by incorporating the Fisher kernel framework to indicate the differences among scenes. Combined with the deep convolutional neural network (CNN) features and Gibbs sampling solution, correlated topic vector shows great potential when processing large-scale and complex scene image data sets. Experiments on two scene image data sets demonstrate that correlated topic vector improves significantly the deep CNN features, and outperforms existing Fisher kernel-based features.

  14. Neuronal model with distributed delay: analysis and simulation study for gamma distribution memory kernel.

    Science.gov (United States)

    Karmeshu; Gupta, Varun; Kadambari, K V

    2011-06-01

    A single neuronal model incorporating distributed delay (memory) is proposed. The stochastic model has been formulated as a Stochastic Integro-Differential Equation (SIDE) which results in the underlying process being non-Markovian. A detailed analysis of the model when the distributed delay kernel has exponential form (weak delay) has been carried out. The selection of an exponential kernel has enabled the transformation of the non-Markovian model to a Markovian model in an extended state space. For the study of First Passage Time (FPT) with exponential delay kernel, the model has been transformed to a system of coupled Stochastic Differential Equations (SDEs) in two-dimensional state space. Simulation studies of the SDEs provide insight into the effect of the weak delay kernel on the Inter-Spike Interval (ISI) distribution. A measure based on Jensen-Shannon divergence is proposed which can be used to make a choice between two competing models, viz. the distributed delay model vis-à-vis the LIF model. An interesting feature of the model is that the behavior of the coefficient of variation CV_ISI(t) of the ISI distribution with respect to the memory kernel time constant parameter η reveals that the neuron can switch from a bursting state to a non-bursting state as the noise intensity parameter changes. The membrane potential exhibits a decaying auto-correlation structure with or without damped oscillatory behavior depending on the choice of parameters. This behavior is in agreement with the empirically observed pattern of spike count in a fixed time window. The power spectral density derived from the auto-correlation function is found to exhibit single and double peaks. The model is also examined for the case of strong delay with memory kernel having the form of a Gamma distribution. In contrast to the fast decay of damped oscillations of the ISI distribution for the model with weak delay kernel, the decay of damped oscillations is found to be slower for the model with strong delay kernel.
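
    As a rough illustration of how an exponential (weak) delay kernel yields a Markovian system in an extended state space, the sketch below simulates a leaky integrate-and-fire style neuron whose input is exponentially filtered with time constant eta, using an Euler-Maruyama scheme, and reports the coefficient of variation of the ISI distribution. All parameter values, the threshold/reset rule, and the placement of the noise are illustrative assumptions rather than the paper's exact SIDE formulation.

import numpy as np

rng = np.random.default_rng(1)
dt, n_steps = 0.1, 200000      # time step (ms) and number of steps (assumed)
tau, eta = 10.0, 5.0           # membrane and delay-kernel time constants (ms, assumed)
mu, sigma = 1.2, 0.8           # mean drive and noise intensity (assumed)
v_th, v_reset = 1.0, 0.0       # spike threshold and reset value (assumed)

v, u = 0.0, 0.0
spike_times = []
for k in range(n_steps):
    # u is the exponentially filtered (delayed) input: du = (-u + mu) dt/eta + noise
    u += dt * (-u + mu + sigma * rng.normal() / np.sqrt(dt)) / eta
    # membrane potential driven by the filtered input (Markovian in the (v, u) state)
    v += dt * (-v / tau + u)
    if v >= v_th:
        spike_times.append(k * dt)
        v = v_reset

isi = np.diff(spike_times)
print("mean ISI (ms):", isi.mean(), " CV of ISI:", isi.std() / isi.mean())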

  15. Analytic scattering kernels for neutron thermalization studies

    International Nuclear Information System (INIS)

    Sears, V.F.

    1990-01-01

    Current plans call for the inclusion of a liquid hydrogen or deuterium cold source in the NRU replacement vessel. This report is part of an ongoing study of neutron thermalization in such a cold source. Here, we develop a simple analytical model for the scattering kernel of monatomic and diatomic liquids. We also present the results of extensive numerical calculations based on this model for liquid hydrogen, liquid deuterium, and mixtures of the two. These calculations demonstrate the dependence of the scattering kernel on the incident and scattered-neutron energies, the behavior near rotational thresholds, the dependence on the centre-of-mass pair correlations, the dependence on the ortho concentration, and the dependence on the deuterium concentration in H2/D2 mixtures. The total scattering cross sections are also calculated and compared with available experimental results.

  16. Local Observed-Score Kernel Equating

    Science.gov (United States)

    Wiberg, Marie; van der Linden, Wim J.; von Davier, Alina A.

    2014-01-01

    Three local observed-score kernel equating methods that integrate methods from the local equating and kernel equating frameworks are proposed. The new methods were compared with their earlier counterparts with respect to such measures as bias--as defined by Lord's criterion of equity--and percent relative error. The local kernel item response…

  17. Real-Time EEG Signal Enhancement Using Canonical Correlation Analysis and Gaussian Mixture Clustering

    Directory of Open Access Journals (Sweden)

    Chin-Teng Lin

    2018-01-01

    Full Text Available Electroencephalogram (EEG signals are usually contaminated with various artifacts, such as signal associated with muscle activity, eye movement, and body motion, which have a noncerebral origin. The amplitude of such artifacts is larger than that of the electrical activity of the brain, so they mask the cortical signals of interest, resulting in biased analysis and interpretation. Several blind source separation methods have been developed to remove artifacts from the EEG recordings. However, the iterative process for measuring separation within multichannel recordings is computationally intractable. Moreover, manually excluding the artifact components requires a time-consuming offline process. This work proposes a real-time artifact removal algorithm that is based on canonical correlation analysis (CCA, feature extraction, and the Gaussian mixture model (GMM to improve the quality of EEG signals. The CCA was used to decompose EEG signals into components followed by feature extraction to extract representative features and GMM to cluster these features into groups to recognize and remove artifacts. The feasibility of the proposed algorithm was demonstrated by effectively removing artifacts caused by blinks, head/body movement, and chewing from EEG recordings while preserving the temporal and spectral characteristics of the signals that are important to cognitive research.
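
    A minimal offline sketch of the CCA-then-cluster idea is given below: canonical correlation between the multichannel signal and a one-sample-delayed copy yields components ordered by lag-one autocorrelation, that autocorrelation is used as a one-dimensional feature and clustered with a two-component Gaussian mixture, and the low-autocorrelation cluster is zeroed before back-projection. The synthetic two-source data, the single feature, and the two-cluster choice are simplifying assumptions, not the authors' full real-time pipeline.

import numpy as np
from scipy.linalg import eigh
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
n_ch, n_t = 8, 5000
t = np.arange(n_t) / 250.0
brain = np.sin(2 * np.pi * 10 * t)                 # smooth "cortical" rhythm
artifact = rng.normal(size=n_t)                    # broadband "muscle-like" noise
mixing = rng.normal(size=(n_ch, 2))
X = mixing @ np.vstack([brain, artifact]) + 0.05 * rng.normal(size=(n_ch, n_t))

# CCA between the signal and a one-sample-delayed copy (source separation by autocorrelation)
X = X - X.mean(axis=1, keepdims=True)
X1, X2 = X[:, 1:], X[:, :-1]
Cxx = X1 @ X1.T / n_t
Cyy = X2 @ X2.T / n_t
Cxy = X1 @ X2.T / n_t
M = Cxy @ np.linalg.solve(Cyy, Cxy.T)              # symmetric, positive semidefinite
rho2, W = eigh(M, Cxx)                             # generalized eigenproblem
order = np.argsort(rho2)[::-1]
rho, W = np.sqrt(np.clip(rho2[order], 0.0, 1.0)), W[:, order]

S = W.T @ X                                        # components, sorted by lag-1 autocorrelation
gmm = GaussianMixture(n_components=2, random_state=0).fit(rho.reshape(-1, 1))
labels = gmm.predict(rho.reshape(-1, 1))
artifact_cluster = np.argmin(gmm.means_.ravel())   # low autocorrelation = artifact-like
S_clean = S.copy()
S_clean[labels == artifact_cluster, :] = 0.0
X_clean = np.linalg.pinv(W.T) @ S_clean            # back-project the retained components
print("removed", int(np.sum(labels == artifact_cluster)), "of", n_ch, "components")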

  18. Credit scoring analysis using kernel discriminant

    Science.gov (United States)

    Widiharih, T.; Mukid, M. A.; Mustafid

    2018-05-01

    A credit scoring model is an important tool for reducing the risk of wrong decisions when granting credit facilities to applicants. This paper investigates the performance of the kernel discriminant model in assessing customer credit risk. Kernel discriminant analysis is a non-parametric method, which means that it does not require any assumptions about the probability distribution of the input. The main ingredient is a kernel that allows an efficient computation of the Fisher discriminant. We use several kernels, such as the normal, Epanechnikov, biweight, and triweight kernels. The accuracy of the models was compared using data from a financial institution in Indonesia. The results show that kernel discriminant analysis can be an alternative method for determining who is eligible for a credit loan. For the data we use, the normal kernel is the most suitable choice for credit scoring with the kernel discriminant model. Sensitivity and specificity reached 0.5556 and 0.5488, respectively.
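
    The sketch below illustrates a simplified kernel-density discriminant on synthetic data: one kernel density estimate per class, assignment to the class with the larger density times prior, and sensitivity/specificity on a held-out set. Gaussian and Epanechnikov kernels are shown because scikit-learn does not ship biweight or triweight kernels; the data, bandwidth, and class priors are assumptions, not the authors' model or data.

import numpy as np
from sklearn.neighbors import KernelDensity
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic applicants: two features (e.g. income, debt ratio); label 1 = default
X_good = rng.normal([2.0, 0.3], 0.6, size=(300, 2))
X_bad = rng.normal([1.0, 0.8], 0.6, size=(150, 2))
X = np.vstack([X_good, X_bad])
y = np.r_[np.zeros(300), np.ones(150)]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

for kernel in ("gaussian", "epanechnikov"):
    log_post = []
    for cls in (0, 1):
        kde = KernelDensity(kernel=kernel, bandwidth=0.3).fit(X_tr[y_tr == cls])
        prior = np.mean(y_tr == cls)
        log_post.append(kde.score_samples(X_te) + np.log(prior))   # log p(x|c) + log p(c)
    pred = (log_post[1] > log_post[0]).astype(float)
    sens = np.mean(pred[y_te == 1] == 1)    # true positive rate on defaulters
    spec = np.mean(pred[y_te == 0] == 0)    # true negative rate on good applicants
    print(f"{kernel}: sensitivity = {sens:.3f}, specificity = {spec:.3f}")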

  19. Survival rate of Plodia interpunctella (Lepidoptera: Pyralidae) on different states of wheat and rye kernels previously infested by beetle pests

    Directory of Open Access Journals (Sweden)

    Vukajlović Filip N.

    2017-01-01

    Full Text Available The present study was undertaken to determine the survival rate of Plodia interpunctella (Hübner, 1813) reared on different mechanical states of the Vizija winter wheat cultivar and the Raša winter rye cultivar, previously infested with different beetle pests. Wheat was previously infested with Rhyzopertha dominica, Sitophilus granarius, Oryzaephilus surinamensis and Cryptolestes ferrugineus, while rye was infested only with O. surinamensis. Kernels were tested in three different mechanical states: (A) whole undamaged kernels; (B) kernels already damaged by pests; and (C) original storage kernels (a mixture of types A and B). No P. interpunctella adult emerged on wheat kernels, while 36 adults developed on rye kernels. The highest abundance was reached by beetle species fed on the mixture of pest-damaged and whole undamaged kernels. The development and survival rate of the five storage insect pests depend on the type of kernels, and significant survivorship correlations exist among them.

  20. Analysis of heterosis and quantitative trait loci for kernel shape related traits using triple testcross population in maize.

    Directory of Open Access Journals (Sweden)

    Lu Jiang

    Full Text Available Kernel shape related traits (KSRTs) have been shown to have important influences on grain yield. The previous studies that emphasize kernel length (KL) and kernel width (KW) lack a comprehensive evaluation of characters affecting kernel shape. In this study, materials of the basic generations (B73, Mo17, and B73 × Mo17), 82 intermated B73 × Mo17 (IBM) individuals, and the corresponding triple testcross (TTC) populations were used to evaluate heterosis, investigate correlations, and characterize the quantitative trait loci (QTL) for six KSRTs: KL, KW, length to width ratio (LWR), perimeter length (PL), kernel area (KA), and circularity (CS). The results showed that the mid-parent heterosis (MPH) for most of the KSRTs was moderate. The performance of KL, KW, PL, and KA exhibited significant positive correlations with heterozygosity, but the Pearson's R values were low. Among KSRTs, the strongest significant correlation was found between PL and KA, with an R value of up to 0.964. In addition, KW, PL, KA, and CS showed significant positive correlations with 100-kernel weight (HKW). 28 QTLs were detected for KSRTs, of which nine were augmented additive, 13 were augmented dominant, and six were dominance × additive epistatic. The contribution of a single QTL to total phenotypic variation ranged from 2.1% to 32.9%. Furthermore, 19 additive × additive digenic epistatic interactions were detected for all KSRTs, with the highest total R2 for KW (78.8%), and nine dominance × dominance digenic epistatic interactions were detected for KL, LWR, and CS, with the highest total R2 of 55.3%. Among significant digenic interactions, most occurred between genomic regions not mapped with main-effect QTLs. These findings display the complexity of the genetic basis for KSRTs and enhance our understanding of heterosis of KSRTs from the quantitative genetic perspective.
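
    For reference, mid-parent heterosis (MPH) as used above is conventionally computed as 100 x (F1 - MP) / MP, with MP the mean of the two parents; the trait means in the snippet below are made-up values for illustration, not measurements from this population.

import numpy as np

def mid_parent_heterosis(f1, p1, p2):
    """MPH (%) = 100 * (F1 - MP) / MP, where MP = (P1 + P2) / 2."""
    f1, p1, p2 = map(np.asarray, (f1, p1, p2))
    mp = (p1 + p2) / 2.0
    return 100.0 * (f1 - mp) / mp

# Hypothetical trait means for two parents and their F1 (e.g. kernel length, mm)
print(mid_parent_heterosis(f1=11.2, p1=10.5, p2=10.1))   # about 8.7 % MPH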

  1. Canonical vs. micro-canonical sampling methods in a 2D Ising model

    International Nuclear Information System (INIS)

    Kepner, J.

    1990-12-01

    Canonical and micro-canonical Monte Carlo algorithms were implemented on a 2D Ising model. Expressions for the internal energy, U, inverse temperature, Z, and specific heat, C, are given. These quantities were calculated over a range of temperature, lattice sizes, and time steps. Both algorithms accurately simulate the Ising model. To obtain greater than three decimal accuracy from the micro-canonical method requires that the more complicated expression for Z be used. The overall difference between the algorithms is small. The physics of the problem under study should be the deciding factor in determining which algorithm to use. 13 refs., 6 figs., 2 tabs
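
    A compact canonical (Metropolis) sampler for the 2D Ising model is sketched below; it estimates the internal energy per spin U and the specific heat per spin C from energy fluctuations. The lattice size, inverse temperature, and sweep counts are arbitrary illustrative choices, and the micro-canonical variant compared in the record is not shown.

import numpy as np

rng = np.random.default_rng(0)
L, beta = 16, 0.40                       # lattice size and inverse temperature (illustrative)
spins = rng.choice([-1, 1], size=(L, L))

def total_energy(s):
    # nearest-neighbour coupling J = 1, periodic boundaries
    return -np.sum(s * (np.roll(s, 1, 0) + np.roll(s, 1, 1)))

E, samples = total_energy(spins), []
for sweep in range(2000):
    for _ in range(L * L):               # one Metropolis sweep
        i, j = rng.integers(L), rng.integers(L)
        nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2 * spins[i, j] * nb
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1
            E += dE
    if sweep >= 500:                     # discard burn-in
        samples.append(E)

samples = np.asarray(samples, dtype=float)
N = L * L
print("U per spin:", samples.mean() / N)
print("C per spin:", beta**2 * samples.var() / N)   # from energy fluctuations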

  2. Titchmarsh-Weyl theory for canonical systems

    Directory of Open Access Journals (Sweden)

    Keshav Raj Acharya

    2014-11-01

    Full Text Available The main purpose of this paper is to develop the Titchmarsh-Weyl theory of canonical systems. To this end, we first observe the fact that Schrodinger and Jacobi equations can be written as canonical systems. We then discuss the theory of the Weyl m-function for canonical systems and establish the relation between the Weyl m-functions of Schrodinger equations and those of canonical systems that involve Schrodinger equations.

  3. Kernel parameter dependence in spatial factor analysis

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2010-01-01

    kernel PCA. Shawe-Taylor and Cristianini [4] is an excellent reference for kernel methods in general. Bishop [5] and Press et al. [6] describe kernel methods among many other subjects. The kernel version of PCA handles nonlinearities by implicitly transforming data into high (even infinite) dimensional...... feature space via the kernel function and then performing a linear analysis in that space. In this paper we shall apply a kernel version of maximum autocorrelation factor (MAF) [7, 8] analysis to irregularly sampled stream sediment geochemistry data from South Greenland and illustrate the dependence...... of the kernel width. The 2,097 samples each covering on average 5 km2 are analyzed chemically for the content of 41 elements....

  4. Extending Local Canonical Correlation Analysis to Handle General Linear Contrasts for fMRI Data

    Directory of Open Access Journals (Sweden)

    Mingwu Jin

    2012-01-01

    Full Text Available Local canonical correlation analysis (CCA is a multivariate method that has been proposed to more accurately determine activation patterns in fMRI data. In its conventional formulation, CCA has several drawbacks that limit its usefulness in fMRI. A major drawback is that, unlike the general linear model (GLM, a test of general linear contrasts of the temporal regressors has not been incorporated into the CCA formalism. To overcome this drawback, a novel directional test statistic was derived using the equivalence of multivariate multiple regression (MVMR and CCA. This extension will allow CCA to be used for inference of general linear contrasts in more complicated fMRI designs without reparameterization of the design matrix and without reestimating the CCA solutions for each particular contrast of interest. With the proper constraints on the spatial coefficients of CCA, this test statistic can yield a more powerful test on the inference of evoked brain regional activations from noisy fMRI data than the conventional t-test in the GLM. The quantitative results from simulated and pseudoreal data and activation maps from fMRI data were used to demonstrate the advantage of this novel test statistic.
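
    For comparison with the multivariate statistic described above, the sketch below computes the ordinary univariate GLM t statistic for a general linear contrast c'beta, which is the quantity the CCA-based directional test generalizes to local multivariate neighborhoods; the design matrix, noise model, and contrast are arbitrary examples.

import numpy as np

def glm_contrast_t(X, y, c):
    """t statistic and degrees of freedom for the contrast c'beta in y = X beta + e."""
    XtX_inv = np.linalg.pinv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    dof = X.shape[0] - np.linalg.matrix_rank(X)
    sigma2 = resid @ resid / dof
    t = (c @ beta) / np.sqrt(sigma2 * c @ XtX_inv @ c)
    return t, dof

rng = np.random.default_rng(0)
n = 120
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])  # intercept + 2 regressors
y = X @ np.array([0.0, 1.0, 0.2]) + rng.normal(size=n)
print(glm_contrast_t(X, y, c=np.array([0.0, 1.0, -1.0])))   # test regressor 1 minus regressor 2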

  5. Parallelization Experience with Four Canonical Econometric Models Using ParMitISEM

    Directory of Open Access Journals (Sweden)

    Nalan Baştürk

    2016-03-01

    Full Text Available This paper presents the parallel computing implementation of the MitISEM algorithm, labeled Parallel MitISEM. The basic MitISEM algorithm provides an automatic and flexible method to approximate a non-elliptical target density using adaptive mixtures of Student-t densities, where only a kernel of the target density is required. The approximation can be used as a candidate density in Importance Sampling or Metropolis Hastings methods for Bayesian inference on model parameters and probabilities. We present and discuss four canonical econometric models using a Graphics Processing Unit and a multi-core Central Processing Unit version of the MitISEM algorithm. The results show that the parallelization of the MitISEM algorithm on Graphics Processing Units and multi-core Central Processing Units is straightforward and fast to program using MATLAB. Moreover the speed performance of the Graphics Processing Unit version is much higher than the Central Processing Unit one.

  6. ‘Dancing through the Minefield’: Canon Reinstatement Strategies for Women Authors

    Directory of Open Access Journals (Sweden)

    Dascăl Reghina

    2015-12-01

    Full Text Available The paper explores the limiting and detrimental effects of biographical criticism and exceptionalism in the efforts of reinstating women authors into the Renaissance canon, by looking into the literary merits of Elizabeth Cary’s The Tragedy of Mariam, The Fair Queen of Jewry and The History of The Life, Reign and Death of Edward II. Whereas the conflation of biography and fiction is a successful recipe for canonization and for the production of feminist icons, it renders the text impotent because of its resulting inability to compete with or to be seen in correlation and interplay with other contemporary texts.

  7. Multiple Kernel Learning with Data Augmentation

    Science.gov (United States)

    2016-11-22

    The motivations of the multiple kernel learning (MKL) approach are to increase kernel expressiveness capacity and to avoid the expensive grid search over a wide spectrum of kernels. A large amount of work has been proposed to

  8. OS X and iOS Kernel Programming

    CERN Document Server

    Halvorsen, Ole Henry

    2011-01-01

    OS X and iOS Kernel Programming combines essential operating system and kernel architecture knowledge with a highly practical approach that will help you write effective kernel-level code. You'll learn fundamental concepts such as memory management and thread synchronization, as well as the I/O Kit framework. You'll also learn how to write your own kernel-level extensions, such as device drivers for USB and Thunderbolt devices, including networking, storage and audio drivers. OS X and iOS Kernel Programming provides an incisive and complete introduction to the XNU kernel, which runs iPhones, i

  9. Model selection for Gaussian kernel PCA denoising

    DEFF Research Database (Denmark)

    Jørgensen, Kasper Winther; Hansen, Lars Kai

    2012-01-01

    We propose kernel Parallel Analysis (kPA) for automatic kernel scale and model order selection in Gaussian kernel PCA. Parallel Analysis [1] is based on a permutation test for covariance and has previously been applied for model order selection in linear PCA, we here augment the procedure to also...... tune the Gaussian kernel scale of radial basis function based kernel PCA.We evaluate kPA for denoising of simulated data and the US Postal data set of handwritten digits. We find that kPA outperforms other heuristics to choose the model order and kernel scale in terms of signal-to-noise ratio (SNR...
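
    A simplified sketch in the spirit of kPA is given below: eigenvalues of the centered Gram matrix are compared against a permutation null (each feature permuted independently) to pick the number of components, which is then used for kernel PCA denoising via pre-image reconstruction. The fixed kernel scale, the 95th-percentile rule, and the noisy-circle data are assumptions; the published procedure also tunes the kernel scale itself.

import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.metrics.pairwise import rbf_kernel

def centered_gram_eigvals(X, gamma):
    K = rbf_kernel(X, gamma=gamma)
    n = len(K)
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    return np.sort(np.linalg.eigvalsh(H @ K @ H))[::-1]

rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 300)
X = np.c_[np.cos(t), np.sin(t)] + 0.15 * rng.normal(size=(300, 2))   # noisy circle

gamma = 1.0                                       # kernel scale (would also be tuned in kPA)
ev = centered_gram_eigvals(X, gamma)
# Parallel-analysis-style null: permute each feature independently and recompute eigenvalues
null = np.array([centered_gram_eigvals(np.apply_along_axis(rng.permutation, 0, X), gamma)
                 for _ in range(20)])
q = max(1, int(np.sum(ev > np.percentile(null, 95, axis=0))))
print("retained components:", q)

kpca = KernelPCA(n_components=q, kernel="rbf", gamma=gamma, fit_inverse_transform=True)
X_denoised = kpca.inverse_transform(kpca.fit_transform(X))
print("mean displacement after denoising:", np.mean(np.linalg.norm(X - X_denoised, axis=1)))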

  10. Correlation of iris biometrics and DNA

    DEFF Research Database (Denmark)

    Harder, Stine; Clemmensen, Line Katrine Harder; Dahl, Anders Bjorholm

    2013-01-01

    The presented work concerns prediction of complex human phenotypes from genotypes. We were interested in correlating iris color and texture with DNA. Our data consist of 212 eye images along with DNA: 32 single-nucleotide polymorphisms (SNPs). We used two types of biometrics to describe the eye...... images: one for iris color and one for iris texture. Both biometrics were high dimensional and a sparse principal component analysis (SPCA) reduced the dimensions and resulted in a representation of data with good interpretability. The correlations between the sparse principal components (SPCs......) and the 32 SNPs were found using a canonical correlation analysis (CCA). The result was a single significant canonical correlation (CC) for both biometrics. Each CC comprised two correlated canonical variables, consisting of a linear combination of SPCs and a linear combination of SNPs, respectively...
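
    A minimal sketch of testing the significance of the first canonical correlation by permutation is shown below, using scikit-learn's CCA on synthetic stand-ins for the sparse principal components and the 32 SNPs; the data-generating step, dimensions, and permutation count are illustrative assumptions, not the study's data or exact procedure.

import numpy as np
from sklearn.cross_decomposition import CCA

def first_canonical_corr(X, Y):
    u, v = CCA(n_components=1).fit(X, Y).transform(X, Y)
    return np.corrcoef(u[:, 0], v[:, 0])[0, 1]

rng = np.random.default_rng(0)
n = 212
snps = rng.integers(0, 3, size=(n, 32)).astype(float)    # 32 SNPs coded 0/1/2 (synthetic)
iris_spcs = 0.3 * snps[:, :5] @ rng.normal(size=(5, 6)) + rng.normal(size=(n, 6))  # 6 "sparse PCs"

obs = first_canonical_corr(iris_spcs, snps)
# Null distribution: break the pairing between biometric and genetic rows
null = np.array([first_canonical_corr(iris_spcs, snps[rng.permutation(n)])
                 for _ in range(200)])
p_value = (np.sum(null >= obs) + 1) / (len(null) + 1)
print(f"first canonical correlation = {obs:.3f}, permutation p = {p_value:.3f}")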

  11. Adiabatic-connection fluctuation-dissipation DFT for the structural properties of solids - The renormalized ALDA and electron gas kernels

    DEFF Research Database (Denmark)

    Patrick, Christopher E.; Thygesen, Kristian Sommer

    2015-01-01

    the atomization energy of the H2 molecule. We compare the results calculated with different kernels to those obtained from the random-phase approximation (RPA) and to experimental measurements. We demonstrate that the model kernels correct the RPA's tendency to overestimate the magnitude of the correlation energy...

  12. Paramecium: An Extensible Object-Based Kernel

    NARCIS (Netherlands)

    van Doorn, L.; Homburg, P.; Tanenbaum, A.S.

    1995-01-01

    In this paper we describe the design of an extensible kernel, called Paramecium. This kernel uses an object-based software architecture which together with instance naming, late binding and explicit overrides enables easy reconfiguration. Determining which components reside in the kernel protection

  13. The gauge-invariant canonical energy-momentum tensor

    Science.gov (United States)

    Lorcé, Cédric

    2016-03-01

    The canonical energy-momentum tensor is often considered as a purely academic object because of its gauge dependence. However, it has recently been realized that canonical quantities can in fact be defined in a gauge-invariant way provided that strict locality is abandoned, the non-local aspect being dictated in high-energy physics by the factorization theorems. Using the general techniques for the parametrization of non-local parton correlators, we provide for the first time a complete parametrization of the energy-momentum tensor (generalizing the purely local parametrizations of Ji and Bakker-Leader-Trueman used for the kinetic energy-momentum tensor) and identify explicitly the parts accessible from measurable two-parton distribution functions (TMDs and GPDs). As by-products, we confirm the absence of model-independent relations between TMDs and parton orbital angular momentum, recover in a much simpler way the Burkardt sum rule and derive three similar new sum rules expressing the conservation of transverse momentum.

  14. The gauge-invariant canonical energy-momentum tensor

    International Nuclear Information System (INIS)

    Lorce, C.

    2016-01-01

    The canonical energy-momentum tensor is often considered as a purely academic object because of its gauge dependence. However, it has recently been realized that canonical quantities can in fact be defined in a gauge-invariant way provided that strict locality is abandoned, the non-local aspect being dictated in high-energy physics by the factorization theorems. Using the general techniques for the parametrization of non-local parton correlators, we provide for the first time a complete parametrization of the energy-momentum tensor (generalizing the purely local parametrizations of Ji and Bakker-Leader-Trueman used for the kinetic energy-momentum tensor) and identify explicitly the parts accessible from measurable two-parton distribution functions (TMD and GPD). As by-products, we confirm the absence of model-independent relations between TMDs and parton orbital angular momentum, recover in a much simpler way the Burkardt sum rule and derive 3 similar new sum rules expressing the conservation of transverse momentum. (author)

  15. Theory of reproducing kernels and applications

    CERN Document Server

    Saitoh, Saburou

    2016-01-01

    This book provides a large extension of the general theory of reproducing kernels published by N. Aronszajn in 1950, with many concrete applications. In Chapter 1, many concrete reproducing kernels are first introduced with detailed information. Chapter 2 presents a general and global theory of reproducing kernels with basic applications in a self-contained way. Many fundamental operations among reproducing kernel Hilbert spaces are dealt with. Chapter 2 is the heart of this book. Chapter 3 is devoted to the Tikhonov regularization using the theory of reproducing kernels with applications to numerical and practical solutions of bounded linear operator equations. In Chapter 4, the numerical real inversion formulas of the Laplace transform are presented by applying the Tikhonov regularization, where the reproducing kernels play a key role in the results. Chapter 5 deals with ordinary differential equations; Chapter 6 includes many concrete results for various fundamental partial differential equations. In Chapt...

  16. Multireference quantum chemistry through a joint density matrix renormalization group and canonical transformation theory.

    Science.gov (United States)

    Yanai, Takeshi; Kurashige, Yuki; Neuscamman, Eric; Chan, Garnet Kin-Lic

    2010-01-14

    We describe the joint application of the density matrix renormalization group and canonical transformation theory to multireference quantum chemistry. The density matrix renormalization group provides the ability to describe static correlation in large active spaces, while the canonical transformation theory provides a high-order description of the dynamic correlation effects. We demonstrate the joint theory in two benchmark systems designed to test the dynamic and static correlation capabilities of the methods, namely, (i) total correlation energies in long polyenes and (ii) the isomerization curve of the [Cu2O2]2+ core. The largest complete active spaces and atomic orbital basis sets treated by the joint DMRG-CT theory in these systems correspond to a (24e,24o) active space and 268 atomic orbitals in the polyenes and a (28e,32o) active space and 278 atomic orbitals in [Cu2O2]2+.

  17. Kernels for structured data

    CERN Document Server

    Gärtner, Thomas

    2009-01-01

    This book provides a unique treatment of an important area of machine learning and answers the question of how kernel methods can be applied to structured data. Kernel methods are a class of state-of-the-art learning algorithms that exhibit excellent learning results in several application domains. Originally, kernel methods were developed with data in mind that can easily be embedded in a Euclidean vector space. Much real-world data does not have this property but is inherently structured. An example of such data, often consulted in the book, is the (2D) graph structure of molecules formed by

  18. 7 CFR 981.401 - Adjusted kernel weight.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Adjusted kernel weight. 981.401 Section 981.401... Administrative Rules and Regulations § 981.401 Adjusted kernel weight. (a) Definition. Adjusted kernel weight... kernels in excess of five percent; less shells, if applicable; less processing loss of one percent for...

  19. Structured and Sparse Canonical Correlation Analysis as a Brain-Wide Multi-Modal Data Fusion Approach.

    Science.gov (United States)

    Mohammadi-Nejad, Ali-Reza; Hossein-Zadeh, Gholam-Ali; Soltanian-Zadeh, Hamid

    2017-07-01

    Multi-modal data fusion has recently emerged as a comprehensive neuroimaging analysis approach, which usually uses canonical correlation analysis (CCA). However, the current CCA-based fusion approaches face problems like high-dimensionality, multi-collinearity, unimodal feature selection, asymmetry, and loss of spatial information in reshaping the imaging data into vectors. This paper proposes a structured and sparse CCA (ssCCA) technique as a novel CCA method to overcome the above problems. To investigate the performance of the proposed algorithm, we have compared three data fusion techniques: standard CCA, regularized CCA, and ssCCA, and evaluated their ability to detect multi-modal data associations. We have used simulations to compare the performance of these approaches and probe the effects of non-negativity constraint, the dimensionality of features, sample size, and noise power. The results demonstrate that ssCCA outperforms the existing standard and regularized CCA-based fusion approaches. We have also applied the methods to real functional magnetic resonance imaging (fMRI) and structural MRI data of Alzheimer's disease (AD) patients (n = 34) and healthy control (HC) subjects (n = 42) from the ADNI database. The results illustrate that the proposed unsupervised technique differentiates the transition pattern between the subject-course of AD patients and HC subjects with a p-value of less than 1×10⁻⁶. Furthermore, we have depicted the brain mapping of functional areas that are most correlated with the anatomical changes in AD patients relative to HC subjects.
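
    For context, the sketch below implements the standard ridge-regularized CCA baseline that the proposed ssCCA is compared against: ridge terms are added to the within-set covariances and the leading canonical weights come from a generalized symmetric eigenproblem. The regularization values and the synthetic two-modality data are placeholders, and the structured/sparse penalties of ssCCA itself are not implemented here.

import numpy as np
from scipy.linalg import eigh

def regularized_cca(X, Y, lam_x=0.1, lam_y=0.1, n_comp=2):
    """Ridge-regularized CCA: returns X-weights, Y-weights and canonical correlations."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / n + lam_x * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + lam_y * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    M = Cxy @ np.linalg.solve(Cyy, Cxy.T)          # symmetric, positive semidefinite
    vals, Wx = eigh(M, Cxx)                        # solves M w = rho^2 Cxx w
    order = np.argsort(vals)[::-1][:n_comp]
    rho = np.sqrt(np.clip(vals[order], 0.0, 1.0))
    Wx = Wx[:, order]
    Wy = np.linalg.solve(Cyy, Cxy.T @ Wx) / np.maximum(rho, 1e-12)
    return Wx, Wy, rho

rng = np.random.default_rng(0)
latent = rng.normal(size=(76, 2))                  # shared signal across two modalities
X = latent @ rng.normal(size=(2, 50)) + rng.normal(size=(76, 50))   # e.g. fMRI-like features
Y = latent @ rng.normal(size=(2, 40)) + rng.normal(size=(76, 40))   # e.g. structural features
Wx, Wy, rho = regularized_cca(X, Y, lam_x=1.0, lam_y=1.0)
print("canonical correlations:", np.round(rho, 3))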

  20. Multiple Kernel Learning with Random Effects for Predicting Longitudinal Outcomes and Data Integration

    Science.gov (United States)

    Chen, Tianle; Zeng, Donglin

    2015-01-01

    Summary Predicting disease risk and progression is one of the main goals in many clinical research studies. Cohort studies on the natural history and etiology of chronic diseases span years and data are collected at multiple visits. Although kernel-based statistical learning methods are proven to be powerful for a wide range of disease prediction problems, these methods are only well studied for independent data but not for longitudinal data. It is thus important to develop time-sensitive prediction rules that make use of the longitudinal nature of the data. In this paper, we develop a novel statistical learning method for longitudinal data by introducing subject-specific short-term and long-term latent effects through a designed kernel to account for within-subject correlation of longitudinal measurements. Since the presence of multiple sources of data is increasingly common, we embed our method in a multiple kernel learning framework and propose a regularized multiple kernel statistical learning with random effects to construct effective nonparametric prediction rules. Our method allows easy integration of various heterogeneous data sources and takes advantage of correlation among longitudinal measures to increase prediction power. We use different kernels for each data source taking advantage of the distinctive feature of each data modality, and then optimally combine data across modalities. We apply the developed methods to two large epidemiological studies, one on Huntington's disease and the other on Alzheimer's Disease (Alzheimer's Disease Neuroimaging Initiative, ADNI) where we explore a unique opportunity to combine imaging and genetic data to study prediction of mild cognitive impairment, and show a substantial gain in performance while accounting for the longitudinal aspect of the data. PMID:26177419
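
    A minimal sketch of combining kernels from two data sources is given below: each source gets its own Gaussian kernel, the kernels are mixed by a convex weight chosen on a held-out split, and kernel ridge regression is fit on the precomputed combination. This shows only the weighted-sum-of-kernels idea on synthetic data; the subject-specific random effects and the longitudinal kernel design of the paper are not reproduced.

import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200
X1 = rng.normal(size=(n, 5))        # modality 1 (e.g. imaging-like features, synthetic)
X2 = rng.normal(size=(n, 10))       # modality 2 (e.g. genetic-like features, synthetic)
y = X1[:, 0] + 0.5 * np.tanh(X2[:, 0]) + 0.1 * rng.normal(size=n)

idx_tr, idx_te = train_test_split(np.arange(n), test_size=0.3, random_state=0)
K1, K2 = rbf_kernel(X1), rbf_kernel(X2)
best = None
for w in np.linspace(0.0, 1.0, 11):                 # convex combination of the two kernels
    K = w * K1 + (1.0 - w) * K2
    model = KernelRidge(alpha=1.0, kernel="precomputed")
    model.fit(K[np.ix_(idx_tr, idx_tr)], y[idx_tr])
    pred = model.predict(K[np.ix_(idx_te, idx_tr)])
    mse = np.mean((pred - y[idx_te]) ** 2)
    if best is None or mse < best[0]:
        best = (mse, w)
print("selected kernel weight:", best[1], " held-out MSE:", round(best[0], 4))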

  1. Testing Infrastructure for Operating System Kernel Development

    DEFF Research Database (Denmark)

    Walter, Maxwell; Karlsson, Sven

    2014-01-01

    Testing is an important part of system development, and to test effectively we require knowledge of the internal state of the system under test. Testing an operating system kernel is a challenge as it is the operating system that typically provides access to this internal state information. Multi......-core kernels pose an even greater challenge due to concurrency and their shared kernel state. In this paper, we present a testing framework that addresses these challenges by running the operating system in a virtual machine, and using virtual machine introspection to both communicate with the kernel...... and obtain information about the system. We have also developed an in-kernel testing API that we can use to develop a suite of unit tests in the kernel. We are using our framework for the development of our own multi-core research kernel....

  2. 7 CFR 51.1403 - Kernel color classification.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Kernel color classification. 51.1403 Section 51.1403... STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Kernel Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the color...

  3. Derivation of Mayer Series from Canonical Ensemble

    International Nuclear Information System (INIS)

    Wang Xian-Zhi

    2016-01-01

    Mayer derived the Mayer series from both the canonical ensemble and the grand canonical ensemble by use of the cluster expansion method. In 2002, we conjectured a recursion formula of the canonical partition function of a fluid (X.Z. Wang, Phys. Rev. E 66 (2002) 056102). In this paper we give a proof for this formula by developing an appropriate expansion of the integrand of the canonical partition function. We further derive the Mayer series solely from the canonical ensemble by use of this recursion formula. (paper)

  4. Derivation of Mayer Series from Canonical Ensemble

    Science.gov (United States)

    Wang, Xian-Zhi

    2016-02-01

    Mayer derived the Mayer series from both the canonical ensemble and the grand canonical ensemble by use of the cluster expansion method. In 2002, we conjectured a recursion formula of the canonical partition function of a fluid (X.Z. Wang, Phys. Rev. E 66 (2002) 056102). In this paper we give a proof for this formula by developing an appropriate expansion of the integrand of the canonical partition function. We further derive the Mayer series solely from the canonical ensemble by use of this recursion formula.
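
    For orientation, the display below records the textbook link between the canonical partition functions Q_N and the Mayer cluster integrals b_l, obtained by differentiating the grand-canonical generating function with respect to the fugacity z; whether this coincides exactly with the recursion formula conjectured and proved in the paper is an assumption, since the abstract does not state the formula explicitly.

\Xi(z,V,T) = \sum_{N \ge 0} Q_N\, z^{N} = \exp\!\Big( V \sum_{l \ge 1} b_l\, z^{l} \Big)
\quad\Longrightarrow\quad
N\, Q_N = V \sum_{l=1}^{N} l\, b_l\, Q_{N-l}, \qquad Q_0 = 1 .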

  5. Analysis of Linux kernel as a complex network

    International Nuclear Information System (INIS)

    Gao, Yichao; Zheng, Zheng; Qin, Fangyun

    2014-01-01

    An operating system (OS) acts as an intermediary between software and hardware in computer-based systems. In this paper, we analyze the core of the typical Linux OS, the Linux kernel, as a complex network to investigate its underlying design principles. It is found that the Linux Kernel Network (LKN) is a directed network and its out-degree follows an exponential distribution while the in-degree follows a power-law distribution. The correlation between topology and functions is also explored, by which we find that LKN is a highly modularized network with 12 key communities. Moreover, we investigate the robustness of LKN under random failures and intentional attacks. The result shows that the failure of the large in-degree nodes providing basic services will do more damage to the whole system. Our work may shed some light on the design of complex software systems.

  6. Evaluation of Biosynthesis, Accumulation and Antioxidant Activity of Vitamin E in Sweet Corn (Zea mays L.) during Kernel Development

    Directory of Open Access Journals (Sweden)

    Lihua Xie

    2017-12-01

    Full Text Available Sweet corn kernels were used in this research to study the dynamics of vitamin E, by evaluating the expression levels of genes involved in vitamin E synthesis, the accumulation of vitamin E, and the antioxidant activity during the different stages of kernel development. Results showed that expression levels of the ZmHPT and ZmTC genes increased, whereas the ZmTMT gene dramatically decreased during kernel development. The contents of all the types of vitamin E in sweet corn had a significant upward increase during kernel development, and reached the highest level at 30 days after pollination (DAP). Amongst the eight isomers of vitamin E, the content of γ-tocotrienol was the highest, and increased by 14.9 folds, followed by α-tocopherol with an increase of 22 folds, and the contents of the isomers γ-tocopherol, α-tocotrienol, δ-tocopherol, δ-tocotrienol, and β-tocopherol also followed during kernel development. The antioxidant activity of sweet corn during kernel development increased, and was up to 101.8 ± 22.3 μmol of α-tocopherol equivalent/100 g in fresh weight (FW) at 30 DAP. There was a positive correlation between vitamin E contents and antioxidant activity in sweet corn during kernel development, and a negative correlation between the expression of the ZmTMT gene and vitamin E contents. These results revealed the relations amongst the content of vitamin E isomers and the gene expression, vitamin E accumulation, and antioxidant activity. The study can provide a harvesting strategy for vitamin E bio-fortification in sweet corn.

  7. Evaluation of Biosynthesis, Accumulation and Antioxidant Activity of Vitamin E in Sweet Corn (Zea mays L.) during Kernel Development.

    Science.gov (United States)

    Xie, Lihua; Yu, Yongtao; Mao, Jihua; Liu, Haiying; Hu, Jian Guang; Li, Tong; Guo, Xinbo; Liu, Rui Hai

    2017-12-20

    Sweet corn kernels were used in this research to study the dynamics of vitamin E, by evaluating the expression levels of genes involved in vitamin E synthesis, the accumulation of vitamin E, and the antioxidant activity during the different stages of kernel development. Results showed that expression levels of the ZmHPT and ZmTC genes increased, whereas the ZmTMT gene dramatically decreased during kernel development. The contents of all the types of vitamin E in sweet corn had a significant upward increase during kernel development, and reached the highest level at 30 days after pollination (DAP). Amongst the eight isomers of vitamin E, the content of γ-tocotrienol was the highest, and increased by 14.9 folds, followed by α-tocopherol with an increase of 22 folds, and the contents of the isomers γ-tocopherol, α-tocotrienol, δ-tocopherol, δ-tocotrienol, and β-tocopherol also followed during kernel development. The antioxidant activity of sweet corn during kernel development increased, and was up to 101.8 ± 22.3 μmol of α-tocopherol equivalent/100 g in fresh weight (FW) at 30 DAP. There was a positive correlation between vitamin E contents and antioxidant activity in sweet corn during kernel development, and a negative correlation between the expression of the ZmTMT gene and vitamin E contents. These results revealed the relations amongst the content of vitamin E isomers and the gene expression, vitamin E accumulation, and antioxidant activity. The study can provide a harvesting strategy for vitamin E bio-fortification in sweet corn.

  8. The definition of kernel Oz

    OpenAIRE

    Smolka, Gert

    1994-01-01

    Oz is a concurrent language providing for functional, object-oriented, and constraint programming. This paper defines Kernel Oz, a semantically complete sublanguage of Oz. It was an important design requirement that Oz be definable by reduction to a lean kernel language. The definition of Kernel Oz introduces three essential abstractions: the Oz universe, the Oz calculus, and the actor model. The Oz universe is a first-order structure defining the values and constraints Oz computes with. The ...

  9. Fabrication of Uranium Oxycarbide Kernels for HTR Fuel

    International Nuclear Information System (INIS)

    Barnes, Charles; Richardson, Clay; Nagley, Scott; Hunn, John; Shaber, Eric

    2010-01-01

    Babcock and Wilcox (B and W) has been producing high quality uranium oxycarbide (UCO) kernels for Advanced Gas Reactor (AGR) fuel tests at the Idaho National Laboratory. In 2005, 350-µm, 19.7% 235U-enriched UCO kernels were produced for the AGR-1 test fuel. Following coating of these kernels and forming the coated-particles into compacts, this fuel was irradiated in the Advanced Test Reactor (ATR) from December 2006 until November 2009. B and W produced 425-µm, 14% enriched UCO kernels in 2008, and these kernels were used to produce fuel for the AGR-2 experiment that was inserted in ATR in 2010. B and W also produced 500-µm, 9.6% enriched UO2 kernels for the AGR-2 experiments. Kernels of the same size and enrichment as AGR-1 were also produced for the AGR-3/4 experiment. In addition to fabricating enriched UCO and UO2 kernels, B and W has produced more than 100 kg of natural uranium UCO kernels which are being used in coating development tests. Successive lots of kernels have demonstrated consistent high quality and also allowed for fabrication process improvements. Improvements in kernel forming were made subsequent to AGR-1 kernel production. Following fabrication of AGR-2 kernels, incremental increases in sintering furnace charge size have been demonstrated. Recently small scale sintering tests using a small development furnace equipped with a residual gas analyzer (RGA) has increased understanding of how kernel sintering parameters affect sintered kernel properties. The steps taken to increase throughput and process knowledge have reduced kernel production costs. Studies have been performed of additional modifications toward the goal of increasing capacity of the current fabrication line to use for production of first core fuel for the Next Generation Nuclear Plant (NGNP) and providing a basis for the design of a full scale fuel fabrication facility.

  10. A Unified Approach to Functional Principal Component Analysis and Functional Multiple-Set Canonical Correlation.

    Science.gov (United States)

    Choi, Ji Yeh; Hwang, Heungsun; Yamamoto, Michio; Jung, Kwanghee; Woodward, Todd S

    2017-06-01

    Functional principal component analysis (FPCA) and functional multiple-set canonical correlation analysis (FMCCA) are data reduction techniques for functional data that are collected in the form of smooth curves or functions over a continuum such as time or space. In FPCA, low-dimensional components are extracted from a single functional dataset such that they explain the most variance of the dataset, whereas in FMCCA, low-dimensional components are obtained from each of multiple functional datasets in such a way that the associations among the components are maximized across the different sets. In this paper, we propose a unified approach to FPCA and FMCCA. The proposed approach subsumes both techniques as special cases. Furthermore, it permits a compromise between the techniques, such that components are obtained from each set of functional data to maximize their associations across different datasets, while accounting for the variance of the data well. We propose a single optimization criterion for the proposed approach, and develop an alternating regularized least squares algorithm to minimize the criterion in combination with basis function approximations to functions. We conduct a simulation study to investigate the performance of the proposed approach based on synthetic data. We also apply the approach for the analysis of multiple-subject functional magnetic resonance imaging data to obtain low-dimensional components of blood-oxygen level-dependent signal changes of the brain over time, which are highly correlated across the subjects as well as representative of the data. The extracted components are used to identify networks of neural activity that are commonly activated across the subjects while carrying out a working memory task.

  11. Anisotropic hydrodynamics with a scalar collisional kernel

    Science.gov (United States)

    Almaalol, Dekrayat; Strickland, Michael

    2018-04-01

    Prior studies of nonequilibrium dynamics using anisotropic hydrodynamics have used the relativistic Anderson-Witting scattering kernel or some variant thereof. In this paper, we make the first study of the impact of using a more realistic scattering kernel. For this purpose, we consider a conformal system undergoing transversally homogeneous and boost-invariant Bjorken expansion and take the collisional kernel to be given by the leading-order 2 ↔ 2 scattering kernel in scalar λφ⁴ theory. We consider both classical and quantum statistics to assess the impact of Bose enhancement on the dynamics. We also determine the anisotropic nonequilibrium attractor of a system subject to this collisional kernel. We find that, when the near-equilibrium relaxation times in the Anderson-Witting and scalar collisional kernels are matched, the scalar kernel results in a higher degree of momentum-space anisotropy during the system's evolution, given the same initial conditions. Additionally, we find that taking into account Bose enhancement further increases the dynamically generated momentum-space anisotropy.

  12. Object classification and detection with context kernel descriptors

    DEFF Research Database (Denmark)

    Pan, Hong; Olsen, Søren Ingvor; Zhu, Yaping

    2014-01-01

    Context information is important in object representation. By embedding context cue of image attributes into kernel descriptors, we propose a set of novel kernel descriptors called Context Kernel Descriptors (CKD) for object classification and detection. The motivation of CKD is to use spatial...... consistency of image attributes or features defined within a neighboring region to improve the robustness of descriptor matching in kernel space. For feature selection, Kernel Entropy Component Analysis (KECA) is exploited to learn a subset of discriminative CKD. Different from Kernel Principal Component...

  13. Ranking Support Vector Machine with Kernel Approximation.

    Science.gov (United States)

    Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi

    2017-01-01

    Learning-to-rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, computational biology, and so forth. The ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been widely used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves much faster training than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms.
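
    As a rough, hedged illustration of the kernel-approximation idea described above (not the authors' RankSVM implementation), the sketch below builds random Fourier features for an RBF kernel so that a plain linear model could be trained on the mapped data; the feature dimension D, the bandwidth gamma, and the toy data are arbitrary assumptions.

```python
import numpy as np

def random_fourier_features(X, D=200, gamma=1.0, seed=None):
    """Approximate an RBF kernel k(x, z) = exp(-gamma * ||x - z||^2)
    with an explicit D-dimensional random feature map (Rahimi & Recht)."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, D))
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

# Toy check: inner products of the mapped features approximate the exact kernel.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
Z = random_fourier_features(X, D=5000, gamma=0.5, seed=0)
approx = Z @ Z.T
exact = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
print(np.abs(approx - exact).max())  # small approximation error
```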

  14. Ranking Support Vector Machine with Kernel Approximation

    Directory of Open Access Journals (Sweden)

    Kai Chen

    2017-01-01

    Full Text Available Learning-to-rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, computational biology, and so forth. The ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been widely used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves much faster training than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms.

  15. Auto-associative Kernel Regression Model with Weighted Distance Metric for Instrument Drift Monitoring

    International Nuclear Information System (INIS)

    Shin, Ho Cheol; Park, Moon Ghu; You, Skin

    2006-01-01

    Recently, many on-line approaches to instrument channel surveillance (drift monitoring and fault detection) have been reported worldwide. On-line monitoring (OLM) methods evaluate instrument channel performance by assessing its consistency with other plant indications through parametric or non-parametric models. The heart of an OLM system is the model giving an estimate of the true process parameter value against individual measurements. This model provides a process parameter estimate, calculated as a function of other plant measurements, which can be used to identify small sensor drifts that would require the sensor to be manually calibrated or replaced. This paper describes an improvement of auto-associative kernel regression (AAKR) obtained by introducing a correlation-coefficient weighting on the kernel distances. The prediction performance of the developed method is compared with conventional auto-associative kernel regression.
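
    A minimal sketch of the auto-associative kernel regression idea with a correlation-based weighting of the distance metric, in the spirit of the abstract; the Gaussian kernel, the bandwidth, and the particular weighting (mean absolute inter-channel correlation) are illustrative assumptions rather than the authors' exact formulation.

```python
import numpy as np

def aakr_predict(X_train, x_query, bandwidth=1.0, weights=None):
    """Auto-associative kernel regression: estimate the 'true' sensor vector
    for x_query as a kernel-weighted average of historical vectors X_train.
    `weights` (e.g. derived from inter-channel correlation coefficients)
    rescale each channel's contribution to the distance."""
    if weights is None:
        weights = np.ones(X_train.shape[1])
    diff = (X_train - x_query) * weights           # weighted residuals
    d2 = (diff ** 2).sum(axis=1)                   # weighted squared distances
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))       # Gaussian kernel weights
    return w @ X_train / w.sum()                   # kernel-weighted estimate

# Example: weight channels by their average absolute correlation with the others.
rng = np.random.default_rng(1)
X_train = rng.normal(size=(200, 4))
corr = np.abs(np.corrcoef(X_train, rowvar=False))
chan_w = corr.mean(axis=0)
x_query = X_train[0] + 0.1 * rng.normal(size=4)    # a slightly drifted measurement
print(aakr_predict(X_train, x_query, bandwidth=2.0, weights=chan_w))
```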

  16. Dose point kernels for beta-emitting radioisotopes

    International Nuclear Information System (INIS)

    Prestwich, W.V.; Chan, L.B.; Kwok, C.S.; Wilson, B.

    1986-01-01

    Knowledge of the dose point kernel corresponding to a specific radionuclide is required to calculate the spatial dose distribution produced in a homogeneous medium by a distributed source. Dose point kernels for commonly used radionuclides have been calculated previously using as a basis monoenergetic dose point kernels derived by numerical integration of a model transport equation. The treatment neglects fluctuations in energy deposition, an effect which has later been incorporated in dose point kernels calculated using Monte Carlo methods. This work describes new calculations of dose point kernels using the Monte Carlo results as a basis. An analytic representation of the monoenergetic dose point kernels has been developed. This provides a convenient method both for calculating the dose point kernel associated with a given beta spectrum and for incorporating the effect of internal conversion. An algebraic expression for allowed beta spectra has been obtained through an extension of the Bethe-Bacher approximation, and tested against the exact expression. Simplified expressions for first-forbidden shape factors have also been developed. A comparison of the calculated dose point kernel for 32P with experimental data indicates good agreement, with a significant improvement over the earlier results in this respect. An analytic representation of the dose point kernel associated with the spectrum of a single beta group has been formulated. 9 references, 16 figures, 3 tables

  17. Rare variant testing across methods and thresholds using the multi-kernel sequence kernel association test (MK-SKAT).

    Science.gov (United States)

    Urrutia, Eugene; Lee, Seunggeun; Maity, Arnab; Zhao, Ni; Shen, Judong; Li, Yun; Wu, Michael C

    Analysis of rare genetic variants has focused on region-based analysis wherein a subset of the variants within a genomic region is tested for association with a complex trait. Two important practical challenges have emerged. First, it is difficult to choose which test to use. Second, it is unclear which group of variants within a region should be tested. Both depend on the unknown true state of nature. Therefore, we develop the Multi-Kernel SKAT (MK-SKAT) which tests across a range of rare variant tests and groupings. Specifically, we demonstrate that several popular rare variant tests are special cases of the sequence kernel association test which compares pair-wise similarity in trait value to similarity in the rare variant genotypes between subjects as measured through a kernel function. Choosing a particular test is equivalent to choosing a kernel. Similarly, choosing which group of variants to test also reduces to choosing a kernel. Thus, MK-SKAT uses perturbation to test across a range of kernels. Simulations and real data analyses show that our framework controls type I error while maintaining high power across settings: MK-SKAT loses some power compared with the best kernel for a particular scenario but has much greater power than poor kernel choices.
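
    To make the kernel-choice point concrete, here is a hedged sketch of a SKAT-style score statistic Q = r' K r, where r are null-model residuals and K is a genotype kernel; the intercept-only null model, the two kernels, and the toy data are illustrative assumptions, and the perturbation-based combination across kernels used by MK-SKAT is not reproduced.

```python
import numpy as np

def skat_q(y, G, kernel="linear", weights=None):
    """Toy SKAT-style statistic: quadratic form of residuals in a genotype kernel.
    G is an (n subjects x p variants) genotype matrix with entries 0/1/2."""
    if weights is None:
        weights = np.ones(G.shape[1])
    Gw = G * weights
    if kernel == "linear":
        K = Gw @ Gw.T
    elif kernel == "ibs":                      # crude identity-by-state kernel
        K = np.array([[np.sum(2 - np.abs(a - b)) for b in G] for a in G])
    else:
        raise ValueError(kernel)
    resid = y - y.mean()                       # intercept-only null model
    return resid @ K @ resid

rng = np.random.default_rng(2)
G = rng.integers(0, 3, size=(100, 20)).astype(float)    # simulated genotypes
y = G[:, :3].sum(axis=1) * 0.3 + rng.normal(size=100)   # trait driven by 3 variants
for k in ("linear", "ibs"):
    print(k, skat_q(y, G, kernel=k))
```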

  18. Quaternion Linear Canonical Transform Application

    OpenAIRE

    Bahri, Mawardi

    2015-01-01

    Quaternion linear canonical transform (QLCT) is a generalization of the classical linear canonical transform (LCT) using quaternion algebra. The focus of this paper is to introduce an application of the QLCT to the study of generalized swept-frequency filters.

  19. Wigner functions defined with Laplace transform kernels.

    Science.gov (United States)

    Oh, Se Baek; Petruccelli, Jonathan C; Tian, Lei; Barbastathis, George

    2011-10-24

    We propose a new Wigner-type phase-space function using Laplace transform kernels--Laplace kernel Wigner function. Whereas momentum variables are real in the traditional Wigner function, the Laplace kernel Wigner function may have complex momentum variables. Due to the property of the Laplace transform, a broader range of signals can be represented in complex phase-space. We show that the Laplace kernel Wigner function exhibits similar properties in the marginals as the traditional Wigner function. As an example, we use the Laplace kernel Wigner function to analyze evanescent waves supported by surface plasmon polariton. © 2011 Optical Society of America

  20. Metabolic network prediction through pairwise rational kernels.

    Science.gov (United States)

    Roche-Lima, Abiel; Domaratzki, Michael; Fristensky, Brian

    2014-09-26

    Metabolic networks are represented by the set of metabolic pathways. Metabolic pathways are a series of biochemical reactions, in which the product (output) from one reaction serves as the substrate (input) to another reaction. Many pathways remain incompletely characterized. One of the major challenges of computational biology is to obtain better models of metabolic pathways. Existing models are dependent on the annotation of the genes. This propagates error accumulation when the pathways are predicted by incorrectly annotated genes. Pairwise classification methods are supervised learning methods used to classify new pair of entities. Some of these classification methods, e.g., Pairwise Support Vector Machines (SVMs), use pairwise kernels. Pairwise kernels describe similarity measures between two pairs of entities. Using pairwise kernels to handle sequence data requires long processing times and large storage. Rational kernels are kernels based on weighted finite-state transducers that represent similarity measures between sequences or automata. They have been effectively used in problems that handle large amount of sequence information such as protein essentiality, natural language processing and machine translations. We create a new family of pairwise kernels using weighted finite-state transducers (called Pairwise Rational Kernel (PRK)) to predict metabolic pathways from a variety of biological data. PRKs take advantage of the simpler representations and faster algorithms of transducers. Because raw sequence data can be used, the predictor model avoids the errors introduced by incorrect gene annotations. We then developed several experiments with PRKs and Pairwise SVM to validate our methods using the metabolic network of Saccharomyces cerevisiae. As a result, when PRKs are used, our method executes faster in comparison with other pairwise kernels. Also, when we use PRKs combined with other simple kernels that include evolutionary information, the accuracy

  1. Development of Cold Neutron Scattering Kernels for Advanced Moderators

    International Nuclear Information System (INIS)

    Granada, J. R.; Cantargi, F.

    2010-01-01

    The development of scattering kernels for a number of molecular systems was performed, including a set of hydrogenous methylated aromatics such as toluene, mesitylene, and mixtures of those. In order to partially validate those new libraries, we compared predicted total cross sections with experimental data obtained in our laboratory. In addition, we have introduced a new model to describe the interaction of slow neutrons with solid methane in phase II (the stable phase below T = 20.4 K at atmospheric pressure). Very recently, a new scattering kernel to describe the interaction of slow neutrons with solid deuterium was also developed. The main dynamical characteristics of that system are contained in the formalism; the elastic processes involving coherent and incoherent contributions are fully described, as well as the spin-correlation effects.

  2. Improving Change Detection in Forest Areas Based on Stereo Panchromatic Imagery Using Kernel MNF

    DEFF Research Database (Denmark)

    Tian, Jiaojiao; Nielsen, Allan Aasbjerg; Reinartz, Peter

    2014-01-01

    with other unrelated phenomena, e.g., seasonal changes of land covers such as grass and crops. Therefore, we propose an approach that exploits kernel Minimum Noise Fraction (kMNF) to transform simple change features into high-dimensional feature space. Digital surface models (DSMs) generated from stereo...... imagery are used to provide information on height difference, which is additionally used to separate forest changes from other land-cover changes. With very few training samples, a change mask is generated with iterated canonical discriminant analysis (ICDA). Two examples are presented to illustrate...... the approach and demonstrate its efficiency. It is shown that with the same amount of training samples, the proposed method can obtain more accurate change masks compared with algorithms based on k-means, one-class support vector machine, and random forests....

  3. Towards TDDFT for Strongly Correlated Materials

    Directory of Open Access Journals (Sweden)

    Shree Ram Acharya

    2016-09-01

    Full Text Available We present some details of our recently-proposed Time-Dependent Density-Functional Theory (TDDFT for strongly-correlated materials in which the exchange-correlation (XC kernel is derived from the charge susceptibility obtained using Dynamical Mean-Field Theory (the TDDFT + DMFT approach. We proceed with deriving the expression for the XC kernel for the one-band Hubbard model by solving DMFT equations via two approaches, the Hirsch–Fye Quantum Monte Carlo (HF-QMC and an approximate low-cost perturbation theory approach, and demonstrate that the latter gives results that are comparable to the exact HF-QMC solution. Furthermore, through a variety of applications, we propose a simple analytical formula for the XC kernel. Additionally, we use the exact and approximate kernels to examine the nonhomogeneous ultrafast response of two systems: a one-band Hubbard model and a Mott insulator YTiO3. We show that the frequency dependence of the kernel, i.e., memory effects, is important for dynamics at the femtosecond timescale. We also conclude that strong correlations lead to the presence of beats in the time-dependent electric conductivity in YTiO3, a feature that could be tested experimentally and that could help validate the few approximations used in our formulation. We conclude by proposing an algorithm for the generalization of the theory to non-linear response.

  4. Canonical transformations and generating functionals

    NARCIS (Netherlands)

    Broer, L.J.F.; Kobussen, J.A.

    1972-01-01

    It is shown that canonical transformations for field variables in hamiltonian partial differential equations can be obtained from generating functionals in the same way as classical canonical transformations from generating functions. A simple proof of the relation between infinitesimal invariant

  5. Combination of canonical correlation analysis and empirical mode decomposition applied to denoising the labor electrohysterogram.

    Science.gov (United States)

    Hassan, Mahmoud; Boudaoud, Sofiane; Terrien, Jérémy; Karlsson, Brynjar; Marque, Catherine

    2011-09-01

    The electrohysterogram (EHG) is often corrupted by electronic and electromagnetic noise as well as movement artifacts, skeletal electromyogram, and ECGs from both mother and fetus. The interfering signals are sporadic and/or have spectra overlapping the spectra of the signals of interest, rendering classical filtering ineffective. In the absence of efficient methods for denoising the monopolar EHG signal, bipolar methods are usually used. In this paper, we propose a novel combination of blind source separation using canonical correlation analysis (BSS_CCA) and empirical mode decomposition (EMD) methods to denoise monopolar EHG. We first extract the uterine bursts by using BSS_CCA, then most of any residual noise is removed from the bursts by EMD. Our algorithm, called CCA_EMD, was compared with wavelet filtering and independent component analysis. We also compared CCA_EMD with the corresponding bipolar signals to demonstrate that the new method does not degrade the underlying uterine activity. The proposed method successfully removed artifacts from the signal without altering the underlying uterine activity as observed by bipolar methods. The CCA_EMD algorithm performed considerably better than the comparison methods.
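
    As a rough sketch of the BSS_CCA step only (not the full CCA_EMD pipeline), canonical correlation between a multichannel recording and its one-sample-delayed copy yields components ordered by autocorrelation, and the least autocorrelated, noise-like components can be zeroed before reconstruction; the channel count, the synthetic mixture, and the number of discarded components are arbitrary assumptions.

```python
import numpy as np

def cca_bss(X):
    """Estimate sources by CCA between X (channels x samples) and its
    one-sample delay; sources come out ordered from most to least autocorrelated."""
    X = X - X.mean(axis=1, keepdims=True)
    Y, Z = X[:, :-1], X[:, 1:]
    Cyy, Czz, Cyz = Y @ Y.T, Z @ Z.T, Y @ Z.T
    M = np.linalg.solve(Cyy, Cyz) @ np.linalg.solve(Czz, Cyz.T)
    evals, evecs = np.linalg.eig(M)                 # may be complex; keep real part
    order = np.argsort(evals.real)[::-1]
    W = evecs.real[:, order].T                      # unmixing: sources = W @ X
    S = W @ X
    A = np.linalg.pinv(W)                           # mixing: X ~ A @ S
    return W, A, S

# Denoise a toy 4-channel mixture by zeroing the two noisiest components.
rng = np.random.default_rng(3)
t = np.linspace(0, 1, 1000)
clean = np.vstack([np.sin(2 * np.pi * 5 * t), np.sin(2 * np.pi * 11 * t)])
mix = rng.normal(size=(4, 2)) @ clean + 0.5 * rng.normal(size=(4, 1000))
W, A, S = cca_bss(mix)
S[2:] = 0.0                                         # drop low-autocorrelation sources
denoised = A @ S
print(denoised.shape)
```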

  6. Log canonical thresholds of smooth Fano threefolds

    International Nuclear Information System (INIS)

    Cheltsov, Ivan A; Shramov, Konstantin A

    2008-01-01

    The complex singularity exponent is a local invariant of a holomorphic function determined by the integrability of fractional powers of the function. The log canonical thresholds of effective Q-divisors on normal algebraic varieties are algebraic counterparts of complex singularity exponents. For a Fano variety, these invariants have global analogues. In the former case, it is the so-called α-invariant of Tian; in the latter case, it is the global log canonical threshold of the Fano variety, which is the infimum of log canonical thresholds of all effective Q-divisors numerically equivalent to the anticanonical divisor. An appendix to this paper contains a proof that the global log canonical threshold of a smooth Fano variety coincides with its α-invariant of Tian. The purpose of the paper is to compute the global log canonical thresholds of smooth Fano threefolds (altogether, there are 105 deformation families of such threefolds). The global log canonical thresholds are computed for every smooth threefold in 64 deformation families, and the global log canonical thresholds are computed for a general threefold in 20 deformation families. Some bounds for the global log canonical thresholds are computed for 14 deformation families. Appendix A is due to J.-P. Demailly.

  7. The Linux kernel as flexible product-line architecture

    NARCIS (Netherlands)

    M. de Jonge (Merijn)

    2002-01-01

    The Linux kernel source tree is huge (> 125 MB) and inflexible (because it is difficult to add new kernel components). We propose to make this architecture more flexible by assembling kernel source trees dynamically from individual kernel components. Users can then select what

  8. Classifying Linear Canonical Relations

    OpenAIRE

    Lorand, Jonathan

    2015-01-01

    In this Master's thesis, we consider the problem of classifying, up to conjugation by linear symplectomorphisms, linear canonical relations (lagrangian correspondences) from a finite-dimensional symplectic vector space to itself. We give an elementary introduction to the theory of linear canonical relations and present partial results toward the classification problem. This exposition should be accessible to undergraduate students with a basic familiarity with linear algebra.

  9. Are neoclassical canons valid for southern Chinese faces?

    Directory of Open Access Journals (Sweden)

    Yasas S N Jayaratne

    Full Text Available BACKGROUND: Proportions derived from neoclassical canons, initially described by Renaissance sculptors and painters, are still being employed as aesthetic guidelines during the clinical assessment of the facial morphology. OBJECTIVE: 1. to determine the applicability of neoclassical canons for Southern Chinese faces and 2. to explore gender differences in relation to the applicability of the neoclassical canons and their variants. METHODOLOGY: 3-D photographs acquired from 103 young adults (51 males and 52 females) without facial dysmorphology were used to test the applicability of four neoclassical canons. Standard anthropometric measurements that determine the facial canons were made on these 3-D images. The validity of the canons as well as their different variants was quantified. PRINCIPAL FINDINGS: The neoclassical canons seldom applied to these individuals, and the facial three-section and orbital canons did not apply at all. The orbitonasal canon was most frequently applicable, with a frequency of 19%. Significant sexual dimorphism was found relative to the prevalence of the variants of the facial three-section and orbitonasal canons. CONCLUSION: The neoclassical canons did not appear to apply to our sample when rigorous quantitative measurements were employed. Thus, they should not be used as esthetic goals for craniofacial surgical interventions.

  10. Exploiting graph kernels for high performance biomedical relation extraction.

    Science.gov (United States)

    Panyam, Nagesh C; Verspoor, Karin; Cohn, Trevor; Ramamohanarao, Kotagiri

    2018-01-30

    Relation extraction from biomedical publications is an important task in the area of semantic mining of text. Kernel methods for supervised relation extraction are often preferred over manual feature engineering methods, when classifying highly ordered structures such as trees and graphs obtained from syntactic parsing of a sentence. Tree kernels such as the Subset Tree Kernel and Partial Tree Kernel have been shown to be effective for classifying constituency parse trees and basic dependency parse graphs of a sentence. Graph kernels such as the All Path Graph kernel (APG) and Approximate Subgraph Matching (ASM) kernel have been shown to be suitable for classifying general graphs with cycles, such as the enhanced dependency parse graph of a sentence. In this work, we present a high performance Chemical-Induced Disease (CID) relation extraction system. We present a comparative study of kernel methods for the CID task and also extend our study to the Protein-Protein Interaction (PPI) extraction task, an important biomedical relation extraction task. We discuss novel modifications to the ASM kernel to boost its performance and a method to apply graph kernels for extracting relations expressed in multiple sentences. Our system for CID relation extraction attains an F-score of 60%, without using external knowledge sources or task specific heuristic or rules. In comparison, the state of the art Chemical-Disease Relation Extraction system achieves an F-score of 56% using an ensemble of multiple machine learning methods, which is then boosted to 61% with a rule based system employing task specific post processing rules. For the CID task, graph kernels outperform tree kernels substantially, and the best performance is obtained with APG kernel that attains an F-score of 60%, followed by the ASM kernel at 57%. The performance difference between the ASM and APG kernels for CID sentence level relation extraction is not significant. In our evaluation of ASM for the PPI task, ASM

  11. Minimal canonical comprehensive Gröbner systems

    OpenAIRE

    Manubens, Montserrat; Montes, Antonio

    2009-01-01

    This is the continuation of Montes' paper "On the canonical discussion of polynomial systems with parameters''. In this paper, we define the Minimal Canonical Comprehensive Gröbner System of a parametric ideal and fix under which hypothesis it exists and is computable. An algorithm to obtain a canonical description of the segments of the Minimal Canonical CGS is given, thus completing the whole MCCGS algorithm (implemented in Maple and Singular). We show its high utility for applications, suc...

  12. GRIM : Leveraging GPUs for Kernel integrity monitoring

    NARCIS (Netherlands)

    Koromilas, Lazaros; Vasiliadis, Giorgos; Athanasopoulos, Ilias; Ioannidis, Sotiris

    2016-01-01

    Kernel rootkits can exploit an operating system and enable future accessibility and control, despite all recent advances in software protection. A promising defense mechanism against rootkits is Kernel Integrity Monitor (KIM) systems, which inspect the kernel text and data to discover any malicious

  13. 7 CFR 51.2296 - Three-fourths half kernel.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Three-fourths half kernel. 51.2296 Section 51.2296 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards...-fourths half kernel. Three-fourths half kernel means a portion of a half of a kernel which has more than...

  14. Correlações canônicas de características agroindustriais em cana-de-açúcar = Canonical correlations of agro-industrial characteristics in sugarcane

    Directory of Open Access Journals (Sweden)

    José Wilson da Silva

    2007-07-01

    Full Text Available Canonical correlation analysis measures the existence and the intensity of the association between two groups of variables or characters of importance. This study aimed to estimate the intensity of the association between the groups of agronomic and industrial characters in sugarcane. The canonical correlation analysis showed that clones with a greater number of stools per plot and a greater number of stalks per stool tend to provide an increase in cane production (TCH), and that shorter clones with larger diameter and a greater number of stalks per stool are determinant in increasing the TCH, brix, and pol% characteristics.

  15. Abiotic stress growth conditions induce different responses in kernel iron concentration across genotypically distinct maize inbred varieties

    Science.gov (United States)

    Kandianis, Catherine B.; Michenfelder, Abigail S.; Simmons, Susan J.; Grusak, Michael A.; Stapleton, Ann E.

    2013-01-01

    The improvement of grain nutrient profiles for essential minerals and vitamins through breeding strategies is a target important for agricultural regions where nutrient poor crops like maize contribute a large proportion of the daily caloric intake. Kernel iron concentration in maize exhibits a broad range. However, the magnitude of genotype by environment (GxE) effects on this trait reduces the efficacy and predictability of selection programs, particularly when challenged with abiotic stress such as water and nitrogen limitations. Selection has also been limited by an inverse correlation between kernel iron concentration and the yield component of kernel size in target environments. Using 25 maize inbred lines for which extensive genome sequence data is publicly available, we evaluated the response of kernel iron density and kernel mass to water and nitrogen limitation in a managed field stress experiment using a factorial design. To further understand GxE interactions we used partition analysis to characterize response of kernel iron and weight to abiotic stressors among all genotypes, and observed two patterns: one characterized by higher kernel iron concentrations in control over stress conditions, and another with higher kernel iron concentration under drought and combined stress conditions. Breeding efforts for this nutritional trait could exploit these complementary responses through combinations of favorable allelic variation from these already well-characterized genetic stocks. PMID:24363659

  16. Examining Potential Boundary Bias Effects in Kernel Smoothing on Equating: An Introduction for the Adaptive and Epanechnikov Kernels.

    Science.gov (United States)

    Cid, Jaime A; von Davier, Alina A

    2015-05-01

    Test equating is a method of making the test scores from different test forms of the same assessment comparable. In the equating process, an important step involves continuizing the discrete score distributions. In traditional observed-score equating, this step is achieved using linear interpolation (or an unscaled uniform kernel). In the kernel equating (KE) process, this continuization process involves Gaussian kernel smoothing. It has been suggested that the choice of bandwidth in kernel smoothing controls the trade-off between variance and bias. In the literature on estimating density functions using kernels, it has also been suggested that the weight of the kernel depends on the sample size, and therefore, the resulting continuous distribution exhibits bias at the endpoints, where the samples are usually smaller. The purpose of this article is (a) to explore the potential effects of atypical scores (spikes) at the extreme ends (high and low) on the KE method in distributions with different degrees of asymmetry using the randomly equivalent groups equating design (Study I), and (b) to introduce the Epanechnikov and adaptive kernels as potential alternative approaches to reducing boundary bias in smoothing (Study II). The beta-binomial model is used to simulate observed scores reflecting a range of different skewed shapes.
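
    A hedged illustration of the continuization step discussed above, comparing a Gaussian kernel with the compact-support Epanechnikov kernel on a synthetic discrete score distribution; the bandwidth and the simulated score probabilities are arbitrary, and this is not the kernel equating software used in the studies.

```python
import numpy as np

def continuize(scores, probs, grid, bandwidth=1.0, kernel="gaussian"):
    """Smooth a discrete score distribution (scores with probabilities probs)
    into a continuous density evaluated on `grid`."""
    u = (grid[:, None] - scores[None, :]) / bandwidth
    if kernel == "gaussian":
        k = np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)
    elif kernel == "epanechnikov":              # compact support, less boundary spill
        k = 0.75 * np.clip(1 - u ** 2, 0, None)
    else:
        raise ValueError(kernel)
    return (k * probs[None, :]).sum(axis=1) / bandwidth

scores = np.arange(0, 41)                       # a 0..40 point test
probs = np.random.default_rng(4).dirichlet(np.ones(41))
grid = np.linspace(-5, 45, 501)
for name in ("gaussian", "epanechnikov"):
    dens = continuize(scores, probs, grid, bandwidth=1.5, kernel=name)
    print(name, dens.sum() * (grid[1] - grid[0]))   # total mass stays close to 1
```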

  17. Adaptive Kernel in Meshsize Boosting Algorithm in KDE ...

    African Journals Online (AJOL)

    This paper proposes the use of adaptive kernel in a meshsize boosting algorithm in kernel density estimation. The algorithm is a bias reduction scheme like other existing schemes but uses adaptive kernel instead of the regular fixed kernels. An empirical study for this scheme is conducted and the findings are comparatively ...

  18. Canonical Labelling of Site Graphs

    Directory of Open Access Journals (Sweden)

    Nicolas Oury

    2013-06-01

    Full Text Available We investigate algorithms for canonical labelling of site graphs, i.e. graphs in which edges bind vertices on sites with locally unique names. We first show that the problem of canonical labelling of site graphs reduces to the problem of canonical labelling of graphs with edge colourings. We then present two canonical labelling algorithms based on edge enumeration, and a third based on an extension of Hopcroft's partition refinement algorithm. All run in quadratic worst case time individually. However, one of the edge enumeration algorithms runs in sub-quadratic time for graphs with "many" automorphisms, and the partition refinement algorithm runs in sub-quadratic time for graphs with "few" bisimulation equivalences. This suite of algorithms was chosen based on the expectation that graphs fall in one of those two categories. If that is the case, a combined algorithm runs in sub-quadratic worst case time. Whether this expectation is reasonable remains an interesting open problem.

  19. Removal of eye blink artifacts in wireless EEG sensor networks using reduced-bandwidth canonical correlation analysis.

    Science.gov (United States)

    Somers, Ben; Bertrand, Alexander

    2016-12-01

    Chronic, 24/7 EEG monitoring requires the use of highly miniaturized EEG modules, which only measure a few EEG channels over a small area. For improved spatial coverage, a wireless EEG sensor network (WESN) can be deployed, consisting of multiple EEG modules, which interact through short-distance wireless communication. In this paper, we aim to remove eye blink artifacts in each EEG channel of a WESN by optimally exploiting the correlation between EEG signals from different modules, under stringent communication bandwidth constraints. We apply a distributed canonical correlation analysis (CCA-)based algorithm, in which each module only transmits an optimal linear combination of its local EEG channels to the other modules. The method is validated on both synthetic and real EEG data sets, with emulated wireless transmissions. While strongly reducing the amount of data that is shared between nodes, we demonstrate that the algorithm achieves the same eye blink artifact removal performance as the equivalent centralized CCA algorithm, which is at least as good as other state-of-the-art multi-channel algorithms that require a transmission of all channels. Due to their potential for extreme miniaturization, WESNs are viewed as an enabling technology for chronic EEG monitoring. However, multi-channel analysis is hampered in WESNs due to the high energy cost for wireless communication. This paper shows that multi-channel eye blink artifact removal is possible with a significantly reduced wireless communication between EEG modules.

  20. A hybrid correlation analysis with application to imaging genetics

    Science.gov (United States)

    Hu, Wenxing; Fang, Jian; Calhoun, Vince D.; Wang, Yu-Ping

    2018-03-01

    Investigating the association between brain regions and genes continues to be a challenging topic in imaging genetics. Current brain region of interest (ROI)-gene association studies normally reduce data dimension by averaging the value of voxels in each ROI. This averaging may lead to a loss of information due to the existence of functional sub-regions. Pearson correlation is widely used for association analysis. However, it only detects linear correlation whereas nonlinear correlation may exist among ROIs. In this work, we introduced distance correlation to ROI-gene association analysis, which can detect both linear and nonlinear correlations and overcome the limitation of averaging operations by taking advantage of the information at each voxel. Nevertheless, distance correlation usually has a much lower value than Pearson correlation. To address this problem, we proposed a hybrid correlation analysis approach, by applying canonical correlation analysis (CCA) to the distance covariance matrix instead of directly computing distance correlation. Incorporating CCA into distance correlation approach may be more suitable for complex disease study because it can detect highly associated pairs of ROI and gene groups, and may improve the distance correlation level and statistical power. In addition, we developed a novel nonlinear CCA, called distance kernel CCA, which seeks the optimal combination of features with the most significant dependence. This approach was applied to imaging genetic data from the Philadelphia Neurodevelopmental Cohort (PNC). Experiments showed that our hybrid approach produced more consistent results than conventional CCA across resampling and both the correlation and statistical significance were increased compared to distance correlation analysis. Further gene enrichment analysis and region of interest (ROI) analysis confirmed the associations of the identified genes with brain ROIs. Therefore, our approach provides a powerful tool for finding
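
    A small sketch of the distance-correlation building block that the hybrid approach above starts from (the subsequent CCA-on-distance-covariance step and the distance kernel CCA are not reproduced here); the quadratic nonlinearity in the toy data simply illustrates a dependence that Pearson correlation misses while distance correlation detects.

```python
import numpy as np

def _double_center(D):
    return D - D.mean(axis=0) - D.mean(axis=1, keepdims=True) + D.mean()

def distance_correlation(X, Y):
    """Biased sample distance correlation (Szekely et al.) between blocks X and Y."""
    A = _double_center(np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)))
    B = _double_center(np.sqrt(((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1)))
    dcov2 = (A * B).mean()
    dvarx, dvary = (A * A).mean(), (B * B).mean()
    return np.sqrt(dcov2 / np.sqrt(dvarx * dvary))

rng = np.random.default_rng(5)
x = rng.normal(size=(300, 1))
y = x ** 2 + 0.1 * rng.normal(size=(300, 1))        # purely nonlinear dependence
print("Pearson :", np.corrcoef(x[:, 0], y[:, 0])[0, 1])   # near zero
print("Distance:", distance_correlation(x, y))             # clearly positive
```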

  1. Far-red fluorescent probes for canonical and non-canonical nucleic acid structures: current progress and future implications.

    Science.gov (United States)

    Suseela, Y V; Narayanaswamy, Nagarjun; Pratihar, Sumon; Govindaraju, Thimmaiah

    2018-02-05

    The structural diversity and functional relevance of nucleic acids (NAs), mainly deoxyribonucleic acid (DNA) and ribonucleic acid (RNA), are indispensable for almost all living organisms, with minute aberrations in their structure and function becoming causative factors in numerous human diseases. The standard structures of NAs, termed canonical structures, are supported by Watson-Crick hydrogen bonding. Under special physiological conditions, NAs adopt distinct spatial organisations, giving rise to non-canonical conformations supported by hydrogen bonding other than the Watson-Crick type; such non-canonical structures have a definite function in controlling gene expression and are considered as novel diagnostic and therapeutic targets. Development of molecular probes for these canonical and non-canonical DNA/RNA structures has been an active field of research. Among the numerous probes studied, probes with turn-on fluorescence in the far-red (600-750 nm) region are highly sought-after due to minimal autofluorescence and cellular damage. Far-red fluorescent probes are vital for real-time imaging of NAs in live cells as they provide good resolution and minimal perturbation of the cell under investigation. In this review, we present recent advances in the area of far-red fluorescent probes of DNA/RNA and non-canonical G-quadruplex structures. For the sake of continuity and completeness, we provide a brief overview of visible fluorescent probes. Utmost importance is given to design criteria, characteristic properties and biological applications, including in cellulo imaging, apart from critical discussion on limitations of the far-red fluorescent probes. Finally, we offer current and future prospects in targeting canonical and non-canonical NAs specific to cellular organelles, through sequence- and conformation-specific far-red fluorescent probes. We also cover their implications in chemical and molecular biology, with particular focus on decoding various disease

  2. A class of kernel based real-time elastography algorithms.

    Science.gov (United States)

    Kibria, Md Golam; Hasan, Md Kamrul

    2015-08-01

    In this paper, a novel real-time kernel-based and gradient-based Phase Root Seeking (PRS) algorithm for ultrasound elastography is proposed. The signal-to-noise ratio of the strain image resulting from this method is improved by minimizing the cross-correlation discrepancy between the pre- and post-compression radio frequency signals with an adaptive temporal stretching method and employing built-in smoothing through an exponentially weighted neighborhood kernel in the displacement calculation. Unlike conventional PRS algorithms, displacement due to tissue compression is estimated from the root of the weighted average of the zero-lag cross-correlation phases of the pair of corresponding analytic pre- and post-compression windows in the neighborhood kernel. In addition to the proposed one, the other time- and frequency-domain elastography algorithms (Ara et al., 2013; Hussain et al., 2012; Hasan et al., 2012) proposed by our group are also implemented in real-time using Java where the computations are serially executed or parallely executed in multiple processors with efficient memory management. Simulation results using finite element modeling simulation phantom show that the proposed method significantly improves the strain image quality in terms of elastographic signal-to-noise ratio (SNRe), elastographic contrast-to-noise ratio (CNRe) and mean structural similarity (MSSIM) for strains as high as 4% as compared to other reported techniques in the literature. Strain images obtained for the experimental phantom as well as in vivo breast data of malignant or benign masses also show the efficacy of our proposed method over the other reported techniques in the literature. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.

    Science.gov (United States)

    Kwak, Nojun

    2016-05-20

    Recently, the nonlinear projection trick (NPT) was introduced, enabling direct computation of the coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so-called kernel trick. However, NPT is inherently difficult to implement incrementally because an ever-growing kernel matrix must be handled as additional training samples are introduced. In this paper, an incremental version of the NPT (INPT) is proposed based on the observation that the centerization step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can directly be used in any incremental methods to implement a kernel version of the incremental methods. The effectiveness of the INPT is shown by applying it to implement incremental versions of kernel methods such as kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis, which are utilized for problems of kernel matrix reconstruction, letter classification, and face image retrieval, respectively.
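
    A batch (non-incremental) sketch of the nonlinear projection trick: factor the kernel Gram matrix to obtain explicit sample coordinates, and map a new point from its kernel values against the training set; following the abstract's observation, no centering is applied. The RBF kernel, its width, and the toy data are arbitrary assumptions, and the incremental update itself is not shown.

```python
import numpy as np

def npt_coordinates(K, tol=1e-10):
    """Nonlinear projection trick (batch sketch): factor the Gram matrix
    K ~ Y.T @ Y and return explicit coordinates Y plus a projection helper P."""
    evals, evecs = np.linalg.eigh(K)
    keep = evals > tol
    L, U = evals[keep], evecs[:, keep]
    Y = np.sqrt(L)[:, None] * U.T              # coordinates, one column per sample
    P = U / np.sqrt(L)                         # maps kernel vectors to coordinates
    return Y, P

rng = np.random.default_rng(6)
X = rng.normal(size=(50, 3))
gamma = 0.5
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-gamma * sq)                        # RBF Gram matrix of the training set
Y, P = npt_coordinates(K)
print(np.abs(Y.T @ Y - K).max())               # small reconstruction error

x_new = rng.normal(size=3)
k_new = np.exp(-gamma * ((X - x_new) ** 2).sum(-1))
print((P.T @ k_new).shape)                     # explicit coordinates of the new sample
```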

  4. Mean propagation kernels for transport in correlated stochastic media at unresolved scales, illustration with a problem in atmospheric radiation

    International Nuclear Information System (INIS)

    Davis, A. B.

    2007-01-01

    A simple and effective framework is presented for modeling transport processes unfolding at computationally and/or observationally unresolved scales in scattering, absorbing and emitting media. The new approach acts directly on the spatial (i.e., propagation) part of the kernel in the integral formulation of the generic linear transport equation framed for stochastic media with a wide variety of spatial correlations, going far beyond the Markov-Poisson class used in the classic Pomraning-Levermore model. This statistical look at the extinction of un-collided particle beams takes us away from the standard exponential law of transmission. New transmission laws arise that are generally not exponential, often not even for asymptotically large jumps. This means that, from this perspective on random spatial variability, there is no 'effective medium' per se nor homogenization technique that can be used to describe the effects of unresolved fluctuations of the collision coefficient. However, one can still rewrite the transport equation, at least in its integral form, in a manner that looks like its counterpart for uniform media, but with a modified propagation kernel. Implementation in a Monte Carlo scheme is trivially simple and numerical results are presented that illustrate the bulk effect of the new parameterization for plane-parallel geometry. We survey time-domain diagnostics of solar radiative transfer in the Earth's cloudy atmosphere obtained recently from high-resolution ground-based spectroscopy, and it is shown that they are explained comprehensively by the new model. Finally, we discuss possible applications of this modeling framework in nuclear engineering. (authors)
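
    The abstract notes that implementing the modified propagation kernel in a Monte Carlo scheme is straightforward; the sketch below contrasts free-path sampling under the standard exponential transmission law with an illustrative non-exponential (power-law) law of the kind that arises for correlated media. The specific power-law form and its parameter are assumptions for illustration only, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(8)
n = 100_000
mfp = 1.0                                      # reference mean free path

# Standard exponential transmission: T(s) = exp(-s / mfp).
s_exp = rng.exponential(mfp, size=n)

# Illustrative non-exponential (power-law) transmission for correlated media:
# T(s) = (1 + s / (a * mfp))**(-a), sampled by inverting the CDF 1 - T(s).
a = 2.5
u = rng.random(n)
s_pow = a * mfp * (u ** (-1.0 / a) - 1.0)

for name, s in (("exponential", s_exp), ("power-law  ", s_pow)):
    print(name, "mean step:", round(float(s.mean()), 3),
          " P(step > 5):", float((s > 5).mean()))
```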

  5. Canonical forms for single-qutrit Clifford+T operators

    OpenAIRE

    Glaudell, Andrew N.; Ross, Neil J.; Taylor, Jacob M.

    2018-01-01

    We introduce canonical forms for single qutrit Clifford+T circuits and prove that every single-qutrit Clifford+T operator admits a unique such canonical form. We show that our canonical forms are T-optimal in the sense that among all the single-qutrit Clifford+T circuits implementing a given operator our canonical form uses the least number of T gates. Finally, we provide an algorithm which inputs the description of an operator (as a matrix or a circuit) and constructs the canonical form for ...

  6. Uranium kernel formation via internal gelation

    International Nuclear Information System (INIS)

    Hunt, R.D.; Collins, J.L.

    2004-01-01

    In the 1970s and 1980s, U.S. Department of Energy (DOE) conducted numerous studies on the fabrication of nuclear fuel particles using the internal gelation process. These amorphous kernels were prone to flaking or breaking when gases tried to escape from the kernels during calcination and sintering. These earlier kernels would not meet today's proposed specifications for reactor fuel. In the interim, the internal gelation process has been used to create hydrous metal oxide microspheres for the treatment of nuclear waste. With the renewed interest in advanced nuclear fuel by the DOE, the lessons learned from the nuclear waste studies were recently applied to the fabrication of uranium kernels, which will become tri-isotropic (TRISO) fuel particles. These process improvements included equipment modifications, small changes to the feed formulations, and a new temperature profile for the calcination and sintering. The modifications to the laboratory-scale equipment and its operation as well as small changes to the feed composition increased the product yield from 60% to 80%-99%. The new kernels were substantially less glassy, and no evidence of flaking was found. Finally, key process parameters were identified, and their effects on the uranium microspheres and kernels are discussed. (orig.)

  7. Canonical correlation analysis between collaborative networks and innovation: A case study in information technology companies in province of Tehran, Iran

    Directory of Open Access Journals (Sweden)

    Ahmad Jafar Nejad

    2013-07-01

    Full Text Available Increasing competition as well as technological advancement has motivated business owners to look for innovative ideas from outside their organizations. Many enterprises collaborate with other organizations to empower themselves through innovative ideas. These kinds of collaborations can be framed within the concept of a Regional Innovation System, and they involve inter-firm collaborations, research organizations, intermediary institutions, and governmental agencies. The primary objective of this paper is to evaluate the relationships between collaborative networks and innovation in information technology business units located in the province of Tehran, Iran. The research method utilized for the present study is descriptive-correlational. To evaluate the relationships between the independent and dependent variables, canonical correlation analysis (CCA) is used. The results confirm previous findings regarding the relationship between collaborative networks and innovation. Among the various dimensions of collaboration, collaboration with governmental agencies had a very small impact on the relationship between collaboration networks and innovation. In addition, the results show that besides affecting product innovation and process innovation, collaboration networks also affected management innovation.

  8. Kernel learning at the first level of inference.

    Science.gov (United States)

    Cawley, Gavin C; Talbot, Nicola L C

    2014-05-01

    Kernel learning methods, whether Bayesian or frequentist, typically involve multiple levels of inference, with the coefficients of the kernel expansion being determined at the first level and the kernel and regularisation parameters carefully tuned at the second level, a process known as model selection. Model selection for kernel machines is commonly performed via optimisation of a suitable model selection criterion, often based on cross-validation or theoretical performance bounds. However, if there are a large number of kernel parameters, as for instance in the case of automatic relevance determination (ARD), there is a substantial risk of over-fitting the model selection criterion, resulting in poor generalisation performance. In this paper we investigate the possibility of learning the kernel, for the Least-Squares Support Vector Machine (LS-SVM) classifier, at the first level of inference, i.e. parameter optimisation. The kernel parameters and the coefficients of the kernel expansion are jointly optimised at the first level of inference, minimising a training criterion with an additional regularisation term acting on the kernel parameters. The key advantage of this approach is that the values of only two regularisation parameters need be determined in model selection, substantially alleviating the problem of over-fitting the model selection criterion. The benefits of this approach are demonstrated using a suite of synthetic and real-world binary classification benchmark problems, where kernel learning at the first level of inference is shown to be statistically superior to the conventional approach, improves on our previous work (Cawley and Talbot, 2007) and is competitive with Multiple Kernel Learning approaches, but with reduced computational expense. Copyright © 2014 Elsevier Ltd. All rights reserved.
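
    A hedged sketch of the LS-SVM classifier the paper builds on, in its function-estimation form (one linear system in the bias and the expansion coefficients); in the paper's approach the RBF width would be optimised jointly with the coefficients at the first level of inference, which is not reproduced here. The kernel width, regularisation value, and toy data are arbitrary assumptions.

```python
import numpy as np

def lssvm_train(X, y, gamma_reg=1.0, kernel_gamma=0.5):
    """Train an LS-SVM on +/-1 targets by solving one (n+1)x(n+1) linear system."""
    n = len(y)
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-kernel_gamma * sq)             # RBF Gram matrix
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma_reg      # ridge term from the squared-error loss
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[0], sol[1:]                     # bias b, coefficients alpha

def lssvm_predict(X_train, b, alpha, X_test, kernel_gamma=0.5):
    sq = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    return np.sign(np.exp(-kernel_gamma * sq) @ alpha + b)

rng = np.random.default_rng(7)
X = rng.normal(size=(100, 2))
y = np.where(X[:, 0] * X[:, 1] > 0, 1.0, -1.0)         # XOR-like labels
b, alpha = lssvm_train(X, y)
print((lssvm_predict(X, b, alpha, X) == y).mean())     # training accuracy
```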

  9. Global Polynomial Kernel Hazard Estimation

    DEFF Research Database (Denmark)

    Hiabu, Munir; Miranda, Maria Dolores Martínez; Nielsen, Jens Perch

    2015-01-01

    This paper introduces a new bias reducing method for kernel hazard estimation. The method is called global polynomial adjustment (GPA). It is a global correction which is applicable to any kernel hazard estimator. The estimator works well from a theoretical point of view as it asymptotically redu...

  10. Removal of the ballistocardiographic artifact from EEG-fMRI data: a canonical correlation approach

    International Nuclear Information System (INIS)

    Assecondi, Sara; Hallez, Hans; Staelens, Steven; Lemahieu, Ignace; Bianchi, Anna M; Huiskamp, Geertjan M

    2009-01-01

    The simultaneous recording of electroencephalogram (EEG) and functional magnetic resonance imaging (fMRI) can give new insights into how the brain functions. However, the strong electromagnetic field of the MR scanner generates artifacts that obscure the EEG and diminish its readability. Among them, the ballistocardiographic artifact (BCGa) that appears on the EEG is believed to be related to blood flow in scalp arteries leading to electrode movements. Average artifact subtraction (AAS) techniques, used to remove the BCGa, assume a deterministic nature of the artifact. This assumption may be too strong, considering the blood flow related nature of the phenomenon. In this work we propose a new method, based on canonical correlation analysis (CCA) and blind source separation (BSS) techniques, to reduce the BCGa from simultaneously recorded EEG-fMRI. We optimized the method to reduce the user's interaction to a minimum. When tested on six subjects, recorded in 1.5 T or 3 T, the average artifact extracted with BSS-CCA and AAS did not show significant differences, proving the absence of systematic errors. On the other hand, when compared on the basis of intra-subject variability, we found significant differences and better performance of the proposed method with respect to AAS. We demonstrated that our method deals with the intrinsic subject variability specific to the artifact that may cause averaging techniques to fail.

  11. Single corn kernel wide-line NMR oil analysis for breeding purpose

    Energy Technology Data Exchange (ETDEWEB)

    Wilmers, M C.C.; Rettori, C; Vargas, H; Barberis, G E [Universidade Estadual de Campinas (Brazil). Inst. de Fisica]; da Silva, W J [Universidade Estadual de Campinas (Brazil). Inst. de Biologia]

    1978-12-01

    The Wide-Line NMR technique was used to determine the oil content in single corn seeds. Using distinct radio frequency (RF) power levels, a systematic study was carried out on kernels with about 10% moisture, and also on artificially dried seeds with approximately 5% moisture. For non-dried seeds, the NMR spectra clearly showed the presence of three resonances with different RF saturation factors. For dried seeds, the oil concentration determined by NMR was highly correlated (r = 0.997) with that determined by a gravimetric method. The largest discrepancy between the two methods was about 1.3%. When relative measurements are required, as in the case of single kernels in a recurrent selection program, the precision for an individual selected kernel will be about 2.5%. Applying this technique, a first cycle of recurrent selection using S1 lines for low and high oil content was performed in an open-pollinated variety. Gain from selection was 12.0 and 14.1% in the populations for high and low oil contents, respectively.

  12. Quantum tomography, phase-space observables and generalized Markov kernels

    International Nuclear Information System (INIS)

    Pellonpää, Juha-Pekka

    2009-01-01

    We construct a generalized Markov kernel which transforms the observable associated with the homodyne tomography into a covariant phase-space observable with a regular kernel state. Illustrative examples are given in the cases of a 'Schroedinger cat' kernel state and the Cahill-Glauber s-parametrized distributions. Also we consider an example of a kernel state when the generalized Markov kernel cannot be constructed.

  13. The Use of Canonical Correlation Analysis to Assess the Relationship Between Executive Functioning and Verbal Memory in Older Adults

    Directory of Open Access Journals (Sweden)

    Pedro Silva Moreira MSc

    2015-08-01

    Full Text Available Executive functioning (EF), which is considered to govern complex cognition, and verbal memory (VM) are constructs assumed to be related. However, the magnitude of the association between EF and VM is not known, nor is it known how sociodemographic and psychological factors may affect this relationship, including in normal aging. In this study, we assessed different EF and VM parameters via a battery of neurocognitive/psychological tests, and performed a Canonical Correlation Analysis (CCA) to explore the connection between these constructs in a sample of middle-aged and older healthy individuals without cognitive impairment (N = 563, 50+ years of age). The analysis revealed a positive and moderate association between EF and VM independently of gender, age, education, global cognitive performance level, and mood. These results confirm that EF presents a significant association with VM performance.
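
    A minimal illustration, on synthetic stand-ins for EF and VM score blocks, of the canonical correlation analysis used in the study; scikit-learn's CCA is assumed here, and the simulated shared factor only mimics the kind of moderate association reported.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# Synthetic stand-ins for executive-functioning (EF) and verbal-memory (VM) scores.
rng = np.random.default_rng(9)
n = 563
latent = rng.normal(size=(n, 1))                      # shared cognitive factor
EF = latent @ rng.normal(size=(1, 3)) + rng.normal(size=(n, 3))
VM = latent @ rng.normal(size=(1, 4)) + rng.normal(size=(n, 4))

cca = CCA(n_components=1)
U, V = cca.fit_transform(EF, VM)                      # first pair of canonical variates
r = np.corrcoef(U[:, 0], V[:, 0])[0, 1]
print(f"first canonical correlation: {r:.2f}")        # moderate positive association
```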

  14. On the coupling of statistic sum of canonical and large canonical ensemble of interacting particles

    International Nuclear Information System (INIS)

    Vall, A.N.

    2000-01-01

    The possibility of refining the known result, based on the analytic properties of the grand statistical sum as a function of the absolute activity, by means of the boundary integral contribution to the statistical sum is considered. A strict asymptotic relation between the statistical sums of the canonical and grand canonical ensembles of interacting particles is derived.

  15. Single pass kernel k-means clustering method

    Indian Academy of Sciences (India)

    This paper proposes a simple and faster version of the kernel k-means clustering ... It has been considered as an important tool ... On the other hand, kernel-based clustering methods, like kernel k-means clustering, ... available at the UCI machine learning repository (Murphy 1994). ... All the data sets have only numeric valued features.

  16. Relationship between attenuation coefficients and dose-spread kernels

    International Nuclear Information System (INIS)

    Boyer, A.L.

    1988-01-01

    Dose-spread kernels can be used to calculate the dose distribution in a photon beam by convolving the kernel with the primary fluence distribution. The theoretical relationships between various types and components of dose-spread kernels relative to photon attenuation coefficients are explored. These relations can be valuable as checks on the conservation of energy by dose-spread kernels calculated by analytic or Monte Carlo methods
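
    The convolution relationship described above can be sketched in one dimension as follows; the exponential kernel and the rectangular fluence profile are hypothetical stand-ins, not the paper's kernels.

```python
# Toy 1-D illustration of dose = primary fluence convolved with a dose-spread kernel.
import numpy as np

dx = 0.1                                            # grid spacing in cm (arbitrary)
x = np.arange(0, 20, dx)
fluence = np.where((x > 5) & (x < 15), 1.0, 0.0)    # idealized primary fluence profile

# Hypothetical exponential dose-spread kernel, normalized so it integrates to 1
# (which is what energy conservation requires of the total kernel).
k = np.exp(-np.abs(np.arange(-5, 5, dx)) / 1.0)
k /= k.sum() * dx

dose = np.convolve(fluence, k, mode="same") * dx    # discrete convolution
print(dose.max())                                   # close to 1 inside the field
```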

  17. Mixture Density Mercer Kernels: A Method to Learn Kernels

    Data.gov (United States)

    National Aeronautics and Space Administration — This paper presents a method of generating Mercer Kernels from an ensemble of probabilistic mixture models, where each mixture model is generated from a Bayesian...

  18. The Literary Canon in the Age of New Media

    DEFF Research Database (Denmark)

    Backe, Hans-Joachim

    2015-01-01

    The article offers a comparative overview of the diverging courses of the canon debate in Anglophone and Germanophone contexts. While the Anglophone canon debate has focused on the politics of canon composition, the Germanophone canon debate has been more concerned with the malleability and mediality of the canon. In a development that has largely gone unnoticed outside German-speaking countries, new approaches for discussing current and future processes of canonization have been developed in recent years. One pivotal element of this process has been a thorough re-evaluation of new media...

  19. Integral equations with contrasting kernels

    Directory of Open Access Journals (Sweden)

    Theodore Burton

    2008-01-01

    Full Text Available In this paper we study integral equations of the form $x(t)=a(t)-\int^t_0 C(t,s)x(s)\,ds$ with sharply contrasting kernels typified by $C^*(t,s)=\ln (e+(t-s))$ and $D^*(t,s)=[1+(t-s)]^{-1}$. The kernel assigns a weight to $x(s)$ and these kernels have exactly opposite effects of weighting. Each type is well represented in the literature. Our first project is to show that for $a\in L^2[0,\infty)$, solutions are largely indistinguishable regardless of which kernel is used. This is a surprise and it leads us to study the essential differences. In fact, those differences become large as the magnitude of $a(t)$ increases. The form of the kernel alone projects necessary conditions concerning the magnitude of $a(t)$ which could result in bounded solutions. Thus, the next project is to determine how close we can come to proving that the necessary conditions are also sufficient. The third project is to show that solutions will be bounded for given conditions on $C$ regardless of whether $a$ is chosen large or small; this is important in real-world problems since we would like to have $a(t)$ as the sum of a bounded, but badly behaved function, and a large well behaved function.
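
    A rough numerical sketch of the equation studied here, using simple trapezoidal quadrature, makes the comparison between the two kernels easy to reproduce; the forcing function a(t) below is a hypothetical square-integrable choice, not one taken from the paper.

```python
# Trapezoidal-rule sketch for x(t) = a(t) - int_0^t C(t,s) x(s) ds (illustrative only).
import numpy as np

def solve_volterra(a, C, T=20.0, n=2000):
    h = T / n
    t = np.linspace(0.0, T, n + 1)
    x = np.zeros(n + 1)
    x[0] = a(t[0])
    for i in range(1, n + 1):
        w = np.full(i + 1, h)                                # trapezoid weights
        w[0] = w[-1] = h / 2
        s = np.dot(w[:i], C(t[i], t[:i]) * x[:i])            # known part of the integral
        # Solve x_i = a(t_i) - s - w_i * C(t_i, t_i) * x_i for x_i.
        x[i] = (a(t[i]) - s) / (1.0 + w[i] * C(t[i], t[i]))
    return t, x

a = lambda t: np.exp(-0.1 * t) * np.sin(t)                   # hypothetical square-integrable forcing
C_star = lambda t, s: np.log(np.e + (t - s))                 # C*(t,s) = ln(e + (t - s))
D_star = lambda t, s: 1.0 / (1.0 + (t - s))                  # D*(t,s) = [1 + (t - s)]^{-1}

t, xC = solve_volterra(a, C_star)
_, xD = solve_volterra(a, D_star)
print(np.max(np.abs(xC - xD)))                               # how different the two solutions are
```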

  20. Kernel methods in orthogonalization of multi- and hypervariate data

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2009-01-01

    A kernel version of maximum autocorrelation factor (MAF) analysis is described very briefly and applied to change detection in remotely sensed hyperspectral image (HyMap) data. The kernel version is based on a dual formulation, also termed Q-mode analysis, in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products are replaced by inner products between nonlinear mappings into higher dimensional feature space of the original data. Via kernel substitution, also known as the kernel trick, these inner products between the mappings are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of this kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel PCA and MAF analysis handle nonlinearities by implicitly transforming data into high (even infinite...
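
    The Gram-matrix mechanics described above can be illustrated with kernel PCA, which the abstract mentions alongside kernel MAF; the following is a minimal sketch with a Gaussian kernel on synthetic data, not the author's implementation.

```python
# Kernel PCA via the Gram matrix: the data enter only through kernel evaluations.
import numpy as np

def rbf_gram(X, sigma=1.0):
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T      # pairwise squared distances
    return np.exp(-d2 / (2.0 * sigma**2))

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))                            # hypothetical data matrix

K = rbf_gram(X, sigma=2.0)
n = K.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
Kc = J @ K @ J                                           # center the Gram matrix in feature space

eigval, eigvec = np.linalg.eigh(Kc)
order = np.argsort(eigval)[::-1]
eigval, eigvec = eigval[order], eigvec[:, order]

# Projections of the training samples onto the first two kernel principal axes.
scores = eigvec[:, :2] * np.sqrt(np.maximum(eigval[:2], 0.0))
print(scores.shape)
```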

  1. Kernel based subspace projection of near infrared hyperspectral images of maize kernels

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Arngren, Morten; Hansen, Per Waaben

    2009-01-01

    In this paper we present an exploratory analysis of hyperspectral 900-1700 nm images of maize kernels. The imaging device is a line scanning hyperspectral camera using a broadband NIR illumination. In order to explore the hyperspectral data we compare a series of subspace projection methods, including principal component analysis and maximum autocorrelation factor analysis. The latter utilizes the fact that interesting phenomena in images exhibit spatial autocorrelation. However, linear projections often fail to grasp the underlying variability on the data. Therefore we propose to use so... ...tor transform outperform the linear methods as well as kernel principal components in producing interesting projections of the data.

  2. Sparse Event Modeling with Hierarchical Bayesian Kernel Methods

    Science.gov (United States)

    2016-01-05

    The research objective of this proposal was to develop a predictive Bayesian kernel approach to model count data based on several predictive variables. Such an approach, which we refer to as the Poisson Bayesian kernel model, is able to model the rate of occurrence of ... kernel methods made use of: (i) the Bayesian property of improving predictive accuracy as data are dynamically obtained, and (ii) the kernel function

  3. The Classification of Diabetes Mellitus Using Kernel k-means

    Science.gov (United States)

    Alamsyah, M.; Nafisah, Z.; Prayitno, E.; Afida, A. M.; Imah, E. M.

    2018-01-01

    Diabetes Mellitus is a metabolic disorder characterized by chronically elevated blood glucose (hyperglycemia). Automatic detection of diabetes mellitus is still challenging. This study detected diabetes mellitus by using the kernel k-means algorithm. Kernel k-means is an algorithm developed from the k-means algorithm. Kernel k-means uses kernel learning, which is able to handle non-linearly separable data; this is where it differs from common k-means. The performance of kernel k-means in detecting diabetes mellitus is also compared with the SOM algorithm. The experimental results show that kernel k-means has good performance and performs much better than SOM.
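
    A compact sketch of kernel k-means in the spirit described above is given below; it is an illustrative re-implementation on synthetic data, not the study's code, and the RBF kernel and cluster count are assumptions.

```python
# Kernel k-means sketch: distances in feature space computed from the Gram matrix only.
import numpy as np

def kernel_kmeans(K, k, n_iter=50, seed=0):
    n = K.shape[0]
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, k, size=n)                 # random initial assignment
    for _ in range(n_iter):
        dist = np.zeros((n, k))
        for c in range(k):
            idx = np.where(labels == c)[0]
            if idx.size == 0:
                dist[:, c] = np.inf
                continue
            Kcc = K[np.ix_(idx, idx)].mean()            # mean pairwise kernel within cluster c
            Kxc = K[:, idx].mean(axis=1)                # mean kernel of each point to cluster c
            dist[:, c] = np.diag(K) - 2.0 * Kxc + Kcc   # ||phi(x) - mu_c||^2
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels

# Hypothetical usage with an RBF kernel on synthetic two-cluster data.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (30, 2)), rng.normal(4, 1, (30, 2))])
sq = (X**2).sum(1)
K = np.exp(-(sq[:, None] + sq[None, :] - 2 * X @ X.T) / 2.0)
print(kernel_kmeans(K, k=2))
```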

  4. Evaluating the Application of Tissue-Specific Dose Kernels Instead of Water Dose Kernels in Internal Dosimetry : A Monte Carlo Study

    NARCIS (Netherlands)

    Moghadam, Maryam Khazaee; Asl, Alireza Kamali; Geramifar, Parham; Zaidi, Habib

    2016-01-01

    Purpose: The aim of this work is to evaluate the application of tissue-specific dose kernels instead of water dose kernels to improve the accuracy of patient-specific dosimetry by taking tissue heterogeneities into consideration. Materials and Methods: Tissue-specific dose point kernels (DPKs) and

  5. Parsimonious Wavelet Kernel Extreme Learning Machine

    Directory of Open Access Journals (Sweden)

    Wang Qin

    2015-11-01

    Full Text Available In this study, a parsimonious scheme for wavelet kernel extreme learning machine (named PWKELM) was introduced by combining wavelet theory and a parsimonious algorithm into kernel extreme learning machine (KELM). In the wavelet analysis, bases localized in time and frequency were used to represent various signals effectively. Wavelet kernel extreme learning machine (WELM) maximized its capability to capture the essential features in “frequency-rich” signals. The proposed parsimonious algorithm also incorporated significant wavelet kernel functions via iteration by virtue of the Householder matrix, thus producing a sparse solution that eased the computational burden and improved numerical stability. The experimental results achieved from the synthetic dataset and a gas furnace instance demonstrated that the proposed PWKELM is efficient and feasible in terms of improving generalization accuracy and real-time performance.
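
    For concreteness, one commonly used translation-invariant wavelet kernel, built from a Morlet-type mother wavelet, is sketched below; it is not necessarily the exact kernel employed in PWKELM.

```python
# A common translation-invariant wavelet kernel (Morlet-type mother wavelet), for illustration.
import numpy as np

def wavelet_kernel(x, y, a=1.0):
    """K(x, y) = prod_i h((x_i - y_i) / a), with h(t) = cos(1.75 t) * exp(-t^2 / 2)."""
    t = (np.asarray(x, dtype=float) - np.asarray(y, dtype=float)) / a
    return float(np.prod(np.cos(1.75 * t) * np.exp(-t**2 / 2.0)))

print(wavelet_kernel([0.2, 0.5], [0.1, 0.9], a=0.5))
```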

  6. Sparse canonical methods for biological data integration: application to a cross-platform study

    Directory of Open Access Journals (Sweden)

    Robert-Granié Christèle

    2009-01-01

    Full Text Available Abstract Background In the context of systems biology, few sparse approaches have been proposed so far to integrate several data sets. It is however an important and fundamental issue that will be widely encountered in post genomic studies, when simultaneously analyzing transcriptomics, proteomics and metabolomics data using different platforms, so as to understand the mutual interactions between the different data sets. In this high dimensional setting, variable selection is crucial to give interpretable results. We focus on a sparse Partial Least Squares approach (sPLS) to handle two-block data sets, where the relationship between the two types of variables is known to be symmetric. Sparse PLS has been developed either for a regression or a canonical correlation framework and includes a built-in procedure to select variables while integrating data. To illustrate the canonical mode approach, we analyzed the NCI60 data sets, where two different platforms (cDNA and Affymetrix chips) were used to study the transcriptome of sixty cancer cell lines. Results We compare the results obtained with two other sparse or related canonical correlation approaches: CCA with Elastic Net penalization (CCA-EN) and Co-Inertia Analysis (CIA). The latter does not include a built-in procedure for variable selection and requires a two-step analysis. We stress the lack of statistical criteria to evaluate canonical correlation methods, which makes biological interpretation absolutely necessary to compare the different gene selections. We also propose comprehensive graphical representations of both samples and variables to facilitate the interpretation of the results. Conclusion sPLS and CCA-EN selected highly relevant genes and complementary findings from the two data sets, which enabled a detailed understanding of the molecular characteristics of several groups of cell lines. These two approaches were found to bring similar results, although they highlighted the same

  7. Difference between standard and quasi-conformal BFKL kernels

    International Nuclear Information System (INIS)

    Fadin, V.S.; Fiore, R.; Papa, A.

    2012-01-01

    As it was recently shown, the colour singlet BFKL kernel, taken in Möbius representation in the space of impact parameters, can be written in quasi-conformal shape, which is unbelievably simple compared with the conventional form of the BFKL kernel in momentum space. It was also proved that the total kernel is completely defined by its Möbius representation. In this paper we calculated the difference between standard and quasi-conformal BFKL kernels in momentum space and discovered that it is rather simple. Therefore we come to the conclusion that the simplicity of the quasi-conformal kernel is caused mainly by using the impact parameter space.

  8. An introduction to the theory of canonical matrices

    CERN Document Server

    Turnbull, H W

    2004-01-01

    Thorough and self-contained, this penetrating study of the theory of canonical matrices presents a detailed consideration of all the theory's principal features. Topics include elementary transformations and bilinear and quadratic forms; canonical reduction of equivalent matrices; subgroups of the group of equivalent transformations; and rational and classical canonical forms. The final chapters explore several methods of canonical reduction, including those of unitary and orthogonal transformations. 1952 edition. Index. Appendix. Historical notes. Bibliographies. 275 problems.

  9. Fan fiction, early Greece, and the historicity of canon

    Directory of Open Access Journals (Sweden)

    Ahuvia Kahane

    2016-03-01

    Full Text Available The historicity of canon is considered with an emphasis on contemporary fan fiction and early Greek oral epic traditions. The essay explores the idea of canon by highlighting historical variance, exposing wider conceptual isomorphisms, and formulating a revised notion of canonicity. Based on an analysis of canon in early Greece, the discussion moves away from the idea of canon as a set of valued works and toward canon as a practice of containment in response to inherent states of surplus. This view of canon is applied to the practice of fan fiction, reestablishing the idea of canonicity in fluid production environments within a revised, historically specific understanding in early oral traditions on the one hand and in digital cultures and fan fiction on the other. Several examples of early epigraphic Greek texts embedded in oral environments are analyzed and assessed in terms of their implications for an understanding of fan fiction and its modern contexts.

  10. A laser optical method for detecting corn kernel defects

    Energy Technology Data Exchange (ETDEWEB)

    Gunasekaran, S.; Paulsen, M. R.; Shove, G. C.

    1984-01-01

    An opto-electronic instrument was developed to examine individual corn kernels and detect various kernel defects according to reflectance differences. A low power helium-neon (He-Ne) laser (632.8 nm, red light) was used as the light source in the instrument. Reflectance from good and defective parts of corn kernel surfaces differed by approximately 40%. Broken, chipped, and starch-cracked kernels were detected with nearly 100% accuracy; while surface-split kernels were detected with about 80% accuracy. (author)

  11. Canonical cortical circuits: current evidence and theoretical implications

    Directory of Open Access Journals (Sweden)

    Capone F

    2016-04-01

    Full Text Available Fioravante Capone,1,2 Matteo Paolucci,1,2 Federica Assenza,1,2 Nicoletta Brunelli,1,2 Lorenzo Ricci,1,2 Lucia Florio,1,2 Vincenzo Di Lazzaro1,2 1Unit of Neurology, Neurophysiology, Neurobiology, Department of Medicine, Università Campus Bio-Medico di Roma, Rome, Italy; 2Fondazione Alberto Sordi – Research Institute for Aging, Rome, Italy. Abstract: Neurophysiological and neuroanatomical studies have found that the same basic structural and functional organization of neuronal circuits exists throughout the cortex. This kind of cortical organization, termed canonical circuit, has been functionally demonstrated primarily by studies involving visual striate cortex, and then, the concept has been extended to different cortical areas. In brief, the canonical circuit is composed of superficial pyramidal neurons of layers II/III receiving different inputs and deep pyramidal neurons of layer V that are responsible for cortex output. Superficial and deep pyramidal neurons are reciprocally connected, and inhibitory interneurons participate in modulating the activity of the circuit. The main intuition of this model is that the entire cortical network could be modeled as the repetition of relatively simple modules composed of relatively few types of excitatory and inhibitory, highly interconnected neurons. We will review the origin and the application of the canonical cortical circuit model in the six sections of this paper. The first section (The origins of the concept of canonical circuit: the cat visual cortex) reviews the experiments performed in the cat visual cortex, from the origin of the concept of canonical circuit to the most recent developments in the modelization of cortex. The second (The canonical circuit in neocortex) and third (Toward a canonical circuit in agranular cortex) sections try to extend the concept of canonical circuit to other cortical areas, providing some significant examples of circuit functioning in different cytoarchitectonic

  12. Kernel maximum autocorrelation factor and minimum noise fraction transformations

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2010-01-01

    in hyperspectral HyMap scanner data covering a small agricultural area, and 3) maize kernel inspection. In the cases shown, the kernel MAF/MNF transformation performs better than its linear counterpart as well as linear and kernel PCA. The leading kernel MAF/MNF variates seem to possess the ability to adapt...

  13. Grey Language Hesitant Fuzzy Group Decision Making Method Based on Kernel and Grey Scale.

    Science.gov (United States)

    Li, Qingsheng; Diao, Yuzhu; Gong, Zaiwu; Hu, Aqin

    2018-03-02

    Based on grey language multi-attribute group decision making, a kernel and grey scale scoring function is put forward according to the definition of grey language and the meaning of the kernel and grey scale. The function introduces grey scale into the decision-making method to avoid information distortion. This method is applied to grey language hesitant fuzzy group decision making, and the grey correlation degree is used to rank the schemes. The effectiveness and practicability of the decision-making method are further verified by an example evaluating the sustainable development ability of a circular-economy industry chain. Moreover, its simplicity and feasibility are verified by comparing it with the traditional grey language decision-making method and the grey language hesitant fuzzy weighted arithmetic averaging (GLHWAA) operator integration method after determining the index weights based on grey correlation.

  14. Identification of Fusarium damaged wheat kernels using image analysis

    Directory of Open Access Journals (Sweden)

    Ondřej Jirsa

    2011-01-01

    Full Text Available Visual evaluation of kernels damaged by Fusarium spp. pathogens is labour intensive and, due to a subjective approach, it can lead to inconsistencies. Digital imaging technology combined with appropriate statistical methods can provide much faster and more accurate evaluation of the proportion of visually scabby kernels. The aim of the present study was to develop a discrimination model to identify wheat kernels infected by Fusarium spp. using digital image analysis and statistical methods. Winter wheat kernels from field experiments were evaluated visually as healthy or damaged. Deoxynivalenol (DON) content was determined in individual kernels using an ELISA method. Images of individual kernels were produced using a digital camera on a dark background. Colour and shape descriptors were obtained by image analysis from the area representing the kernel. Healthy and damaged kernels differed significantly in DON content and kernel weight. Various combinations of individual shape and colour descriptors were examined during the development of the model using linear discriminant analysis. In addition to basic descriptors of the RGB colour model (red, green, blue), very good classification was also obtained using hue from the HSL colour model (hue, saturation, luminance). The accuracy of classification using the developed discrimination model based on RGBH descriptors was 85%. The shape descriptors themselves were not specific enough to distinguish individual kernels.
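
    As a hedged sketch of the discrimination step, a linear discriminant model over RGB-plus-hue descriptors could be fitted as follows; the descriptor means and spreads are invented for illustration and do not come from the study.

```python
# Illustrative linear discriminant analysis on hypothetical RGBH kernel descriptors.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 200
# Hypothetical mean R, G, B (0-255) and hue (degrees) per class, with made-up spreads.
healthy = rng.normal([140, 120, 80, 45], [15, 15, 15, 8], size=(n, 4))
damaged = rng.normal([170, 150, 110, 60], [15, 15, 15, 8], size=(n, 4))
X = np.vstack([healthy, damaged])
y = np.array([0] * n + [1] * n)                                 # 0 = healthy, 1 = Fusarium-damaged

lda = LinearDiscriminantAnalysis()
acc = cross_val_score(lda, X, y, cv=5).mean()                   # cross-validated accuracy
print(f"classification accuracy: {acc:.2f}")
```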

  15. Unified heat kernel regression for diffusion, kernel smoothing and wavelets on manifolds and its application to mandible growth modeling in CT images.

    Science.gov (United States)

    Chung, Moo K; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K

    2015-05-01

    We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel method is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, the method is applied to characterize the localized growth pattern of mandible surfaces obtained in CT images between ages 0 and 20 by regressing the length of displacement vectors with respect to a surface template. Copyright © 2015 Elsevier B.V. All rights reserved.
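
    The core idea, smoothing as a heat-kernel-weighted eigenfunction expansion, can be sketched generically on any Laplacian; the toy path-graph Laplacian below stands in for the Laplace-Beltrami operator on a surface and is not the authors' implementation.

```python
# Heat kernel smoothing f_t = sum_k exp(-lambda_k * t) <f, psi_k> psi_k, on a toy 1-D chain graph.
import numpy as np

n, t = 100, 5.0
# Laplacian of a simple path graph as a stand-in for the Laplace-Beltrami operator.
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
L[0, 0] = L[-1, -1] = 1

lam, psi = np.linalg.eigh(L)                    # eigenvalues/eigenfunctions of the Laplacian
f = np.sin(np.linspace(0, 3 * np.pi, n)) + 0.3 * np.random.default_rng(4).normal(size=n)

coeffs = psi.T @ f                              # expansion coefficients <f, psi_k>
f_smooth = psi @ (np.exp(-lam * t) * coeffs)    # heat-kernel-weighted eigenfunction expansion
print(np.linalg.norm(f - f_smooth))
```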

  16. The canon as text for a biblical theology

    Directory of Open Access Journals (Sweden)

    James A. Loader

    2005-10-01

    Full Text Available The novelty of the canonical approach is questioned and its fascination at least partly traced to the Reformation, as well as to the post-Reformation’s need for a clear and authoritative canon to perform the function previously performed by the church. This does not minimise the elusiveness and deeply contradictory positions both within the canon and triggered by it. On the one hand, the canon itself is a centripetal phenomenon and does play an important role in exegesis and theology. Even so, on the other hand, it not only contains many difficulties, but also causes various additional problems of a formal as well as a theological nature. The question is mooted whether the canonical approach alleviates or aggravates the dilemma. Since this approach has become a major factor in Christian theology, aspects of the Christian canon are used to gauge whether “canon” is an appropriate category for eliminating difficulties that arise by virtue of its own existence. Problematic uses and appropriations of several Old Testament canons are advanced, as well as evidence in the New Testament of a consciousness that the “old” has been surpassed (“Überbietungsbewußtsein”). It is maintained that at least the Childs version of the canonical approach fails to smooth out these and similar difficulties. As a method it can cater for the New Testament’s (superior) role as the hermeneutical standard for evaluating the Old, but flounders on its inability to create the theological unity it claims can solve religious problems exposed by Old Testament historical criticism. It is concluded that canon as a category cannot be dispensed with, but is useful for the opposite of the purpose to which it is conventionally put: far from bringing about theological “unity” or producing a standard for “correct” exegesis, it requires different readings of different canons.

  17. Digital signal processing with kernel methods

    CERN Document Server

    Rojo-Alvarez, José Luis; Muñoz-Marí, Jordi; Camps-Valls, Gustavo

    2018-01-01

    A realistic and comprehensive review of joint approaches to machine learning and signal processing algorithms, with application to communications, multimedia, and biomedical engineering systems Digital Signal Processing with Kernel Methods reviews the milestones in the mixing of classical digital signal processing models and advanced kernel machines statistical learning tools. It explains the fundamental concepts from both fields of machine learning and signal processing so that readers can quickly get up to speed in order to begin developing the concepts and application software in their own research. Digital Signal Processing with Kernel Methods provides a comprehensive overview of kernel methods in signal processing, without restriction to any application field. It also offers example applications and detailed benchmarking experiments with real and synthetic datasets throughout. Readers can find further worked examples with Matlab source code on a website developed by the authors. * Presents the necess...

  18. Higher-Order Hybrid Gaussian Kernel in Meshsize Boosting Algorithm

    African Journals Online (AJOL)

    In this paper, we shall use a higher-order hybrid Gaussian kernel in a meshsize boosting algorithm for kernel density estimation. Bias reduction is guaranteed in this scheme, as in other existing schemes, but it uses the higher-order hybrid Gaussian kernel instead of the regular fixed kernels. A numerical verification of this scheme ...

  19. Adaptive Kernel In The Bootstrap Boosting Algorithm In KDE ...

    African Journals Online (AJOL)

    This paper proposes the use of an adaptive kernel in a bootstrap boosting algorithm for kernel density estimation. The algorithm is a bias reduction scheme like other existing schemes but uses an adaptive kernel instead of the regular fixed kernels. An empirical study for this scheme is conducted and the findings are comparatively ...
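
    To make "adaptive kernel" concrete, the sketch below uses one standard adaptive-bandwidth scheme, Abramson's square-root law, on synthetic data; the paper's bootstrap boosting algorithm itself is not reproduced here.

```python
# Adaptive kernel density estimation with Abramson's square-root bandwidths (illustrative).
import numpy as np

def gauss_kde(x_eval, data, h):
    # h may be a scalar (fixed kernel) or a per-data-point vector (adaptive kernel).
    u = (x_eval[:, None] - data[None, :]) / h
    return np.mean(np.exp(-0.5 * u**2) / (np.sqrt(2 * np.pi) * h), axis=1)

rng = np.random.default_rng(5)
data = np.concatenate([rng.normal(-2, 0.5, 200), rng.normal(3, 1.5, 200)])

h0 = 1.06 * data.std() * data.size ** (-1 / 5)         # Silverman's rule for the pilot estimate
pilot = gauss_kde(data, data, h0)
g = np.exp(np.mean(np.log(pilot)))                     # geometric mean of pilot densities
h_local = h0 * np.sqrt(g / pilot)                      # smaller bandwidth where density is high

x = np.linspace(-5, 8, 400)
f_adaptive = gauss_kde(x, data, h_local)
print(f_adaptive.max())
```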

  20. Windows Vista Kernel-Mode: Functions, Security Enhancements and Flaws

    Directory of Open Access Journals (Sweden)

    Mohammed D. ABDULMALIK

    2008-06-01

    Full Text Available Microsoft has made substantial enhancements to the kernel of the Microsoft Windows Vista operating system. Kernel improvements are significant because the kernel provides low-level operating system functions, including thread scheduling, interrupt and exception dispatching, multiprocessor synchronization, and a set of routines and basic objects. This paper describes some of the kernel security enhancements for the 64-bit edition of Windows Vista. We also point out some weak areas (flaws) that can be attacked by malicious code, leading to compromise of the kernel.

  1. A Canonical Approach to the Argument/Adjunct Distinction

    Directory of Open Access Journals (Sweden)

    Diana Forker

    2014-01-01

    Full Text Available This paper provides an account of the argument/adjunct distinction implementing the 'canonical approach'. I identify five criteria (obligatoriness, latency, co-occurrence restrictions, grammatical relations, and iterability) and seven diagnostic tendencies that can be used to distinguish canonical arguments from canonical adjuncts. I then apply the criteria and tendencies to data from the Nakh-Daghestanian language Hinuq. Hinuq makes extensive use of spatial cases for marking adjunct-like and argument-like NPs. By means of the criteria and tendencies it is possible to distinguish spatial NPs that come close to canonical arguments from those that are canonical adjuncts, and to place the remaining NPs bearing spatial cases within the argument-adjunct continuum.

  2. Generalization Performance of Regularized Ranking With Multiscale Kernels.

    Science.gov (United States)

    Zhou, Yicong; Chen, Hong; Lan, Rushi; Pan, Zhibin

    2016-05-01

    The regularized kernel method for the ranking problem has attracted increasing attention in machine learning. The previous regularized ranking algorithms are usually based on reproducing kernel Hilbert spaces with a single kernel. In this paper, we go beyond this framework by investigating the generalization performance of regularized ranking with multiscale kernels. A novel ranking algorithm with multiscale kernels is proposed and its representer theorem is proved. We establish the upper bound of the generalization error in terms of the complexity of hypothesis spaces. It shows that the multiscale ranking algorithm can achieve satisfactory learning rates under mild conditions. Experiments demonstrate the effectiveness of the proposed method for drug discovery and recommendation tasks.

  3. A quadratically regularized functional canonical correlation analysis for identifying the global structure of pleiotropy with NGS data.

    Science.gov (United States)

    Lin, Nan; Zhu, Yun; Fan, Ruzong; Xiong, Momiao

    2017-10-01

    Investigating the pleiotropic effects of genetic variants can increase statistical power, provide important information to achieve deep understanding of the complex genetic structures of disease, and offer powerful tools for designing effective treatments with fewer side effects. However, the current multiple phenotype association analysis paradigm lacks breadth (number of phenotypes and genetic variants jointly analyzed at the same time) and depth (hierarchical structure of phenotype and genotypes). A key issue for high dimensional pleiotropic analysis is to effectively extract informative internal representation and features from high dimensional genotype and phenotype data. To explore correlation information of genetic variants, effectively reduce data dimensions, and overcome critical barriers in advancing the development of novel statistical methods and computational algorithms for genetic pleiotropic analysis, we propose a new statistical method, referred to as quadratically regularized functional CCA (QRFCCA), for association analysis, which combines three approaches: (1) quadratically regularized matrix factorization, (2) functional data analysis and (3) canonical correlation analysis (CCA). Large-scale simulations show that the QRFCCA has a much higher power than that of the ten competing statistics while retaining appropriate type I error rates. To further evaluate performance, the QRFCCA and ten other statistics are applied to the whole genome sequencing dataset from the TwinsUK study. We identify a total of 79 genes with rare variants and 67 genes with common variants significantly associated with the 46 traits using QRFCCA. The results show that the QRFCCA substantially outperforms the ten other statistics.

  4. Multineuron spike train analysis with R-convolution linear combination kernel.

    Science.gov (United States)

    Tezuka, Taro

    2018-06-01

    A spike train kernel provides an effective way of decoding information represented by a spike train. Some spike train kernels have been extended to multineuron spike trains, which are simultaneously recorded spike trains obtained from multiple neurons. However, most of these multineuron extensions were carried out in a kernel-specific manner. In this paper, a general framework is proposed for extending any single-neuron spike train kernel to multineuron spike trains, based on the R-convolution kernel. Special subclasses of the proposed R-convolution linear combination kernel are explored. These subclasses have a smaller number of parameters and make optimization tractable when the size of data is limited. The proposed kernel was evaluated using Gaussian process regression for multineuron spike trains recorded from an animal brain. It was compared with the sum kernel and the population Spikernel, which are existing ways of decoding multineuron spike trains using kernels. The results showed that the proposed approach performs better than these kernels and also other commonly used neural decoding methods. Copyright © 2018 Elsevier Ltd. All rights reserved.

  5. An analysis of 1-D smoothed particle hydrodynamics kernels

    International Nuclear Information System (INIS)

    Fulk, D.A.; Quinn, D.W.

    1996-01-01

    In this paper, the smoothed particle hydrodynamics (SPH) kernel is analyzed, resulting in measures of merit for one-dimensional SPH. Various methods of obtaining an objective measure of the quality and accuracy of the SPH kernel are addressed. Since the kernel is the key element in the SPH methodology, this should be of primary concern to any user of SPH. The results of this work are two measures of merit, one for smooth data and one near shocks. The measure of merit for smooth data is shown to be quite accurate and a useful delineator of better and poorer kernels. The measure of merit for non-smooth data is not quite as accurate, but results indicate the kernel is much less important for these types of problems. In addition to the theory, 20 kernels are analyzed using the measure of merit demonstrating the general usefulness of the measure of merit and the individual kernels. In general, it was decided that bell-shaped kernels perform better than other shapes. 12 refs., 16 figs., 7 tabs
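
    For reference, a commonly used bell-shaped SPH kernel in one dimension is the cubic B-spline; the sketch below uses the standard 1-D normalization 2/(3h) and is offered only as an illustration of the kind of kernel analyzed, not as part of the paper's measure of merit.

```python
# Standard 1-D cubic spline (B-spline) SPH kernel with compact support 2h.
import numpy as np

def cubic_spline_kernel_1d(r, h):
    q = np.abs(r) / h
    sigma = 2.0 / (3.0 * h)                       # 1-D normalization so the kernel integrates to 1
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

r = np.linspace(-3, 3, 601)
W = cubic_spline_kernel_1d(r, h=1.0)
print(np.trapz(W, r))                             # should be close to 1.0
```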

  6. Putting Priors in Mixture Density Mercer Kernels

    Science.gov (United States)

    Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd

    2004-01-01

    This paper presents a new methodology for automatic knowledge driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn the kernel function directly from data, rather than using predefined kernels. These data adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing for physical information to be encoded in the model. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS). The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code. The results show that the Mixture Density Mercer-Kernel described here outperforms tree-based classification in distinguishing high-redshift galaxies from low-redshift galaxies by approximately 16% on test data, bagged trees by approximately 7%, and bagged trees built on a much larger sample of data by approximately 2%.
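
    One simple way to turn an ensemble of mixture models into a valid Mercer kernel, in the spirit described above, is to average the agreement of posterior component memberships across the ensemble; this is a sketch under my own assumptions and not necessarily the paper's exact construction.

```python
# Sketch: a PSD kernel built from posterior memberships of an ensemble of Gaussian mixture models.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(6)
X = np.vstack([rng.normal(0, 1, (60, 2)), rng.normal(4, 1, (60, 2))])   # hypothetical data

# Ensemble of mixtures with different sizes/seeds (a crude stand-in for Bayesian sampling).
ensemble = [GaussianMixture(n_components=k, random_state=s).fit(X)
            for k, s in [(2, 0), (3, 1), (4, 2)]]

def mixture_density_kernel(A, B, models):
    K = np.zeros((A.shape[0], B.shape[0]))
    for m in models:
        Pa, Pb = m.predict_proba(A), m.predict_proba(B)   # posterior component memberships
        K += Pa @ Pb.T                                    # dot products of posteriors -> PSD term
    return K / len(models)

K = mixture_density_kernel(X, X, ensemble)
print(K.shape, bool(np.all(np.linalg.eigvalsh(K) > -1e-8)))  # symmetric PSD Gram matrix
```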

  7. Stability Performance of Inductively Coupled Plasma Mass Spectrometry-Phenotyped Kernel Minerals Concentration and Grain Yield in Maize in Different Agro-Climatic Zones.

    Science.gov (United States)

    Mallikarjuna, Mallana Gowdra; Thirunavukkarasu, Nepolean; Hossain, Firoz; Bhat, Jayant S; Jha, Shailendra K; Rathore, Abhishek; Agrawal, Pawan Kumar; Pattanayak, Arunava; Reddy, Sokka S; Gularia, Satish Kumar; Singh, Anju Mahendru; Manjaiah, Kanchikeri Math; Gupta, Hari Shanker

    2015-01-01

    Deficiency of iron and zinc causes micronutrient malnutrition or hidden hunger, which severely affects ~25% of the global population. Genetic biofortification of maize has emerged as a cost-effective and sustainable approach to addressing iron and zinc deficiency. Therefore, understanding the genetic variation and stability of kernel micronutrients and grain yield of maize inbreds is a prerequisite in breeding micronutrient-rich, high yielding hybrids to alleviate micronutrient malnutrition. We report here the genetic variability and stability of the kernel micronutrient concentrations and grain yield in a panel of 50 maize inbreds selected from national and international centres that were raised at six different maize growing regions of India. Phenotyping of kernels using inductively coupled plasma mass spectrometry (ICP-MS) revealed considerable variability for kernel mineral concentrations (iron: 18.88 to 47.65 mg kg⁻¹; zinc: 5.41 to 30.85 mg kg⁻¹; manganese: 3.30 to 17.73 mg kg⁻¹; copper: 0.53 to 5.48 mg kg⁻¹) and grain yield (826.6 to 5413 kg ha⁻¹). Significant positive correlation was observed between kernel iron and zinc within (r = 0.37 to r = 0.52, p ...) ... kernel mineral concentrations and grain yield. Most of the variation was contributed by genotype main effect for kernel iron (39.6%), manganese (41.34%) and copper (41.12%), and environment main effects for both kernel zinc (40.5%) and grain yield (37.0%). Genotype main effect plus genotype-by-environment interaction (GGE) biplot identified several mega environments for kernel minerals and grain yield. Comparison of stability parameters revealed AMMI stability value (ASV) as the better representative of the AMMI stability parameters. Dynamic stability parameter GGE distance (GGED) showed strong and positive correlation with both mean kernel concentrations and grain yield. Inbreds (CM-501, SKV-775, HUZM-185) identified from the present investigation will be useful in

  8. An approach for generating trajectory-based dynamics which conserves the canonical distribution in the phase space formulation of quantum mechanics. II. Thermal correlation functions.

    Science.gov (United States)

    Liu, Jian; Miller, William H

    2011-03-14

    We show the exact expression of the quantum mechanical time correlation function in the phase space formulation of quantum mechanics. The trajectory-based dynamics that conserves the quantum canonical distribution, equilibrium Liouville dynamics (ELD), proposed in Paper I is then used to approximately evaluate the exact expression. It gives exact thermal correlation functions (even of nonlinear operators, i.e., nonlinear functions of position or momentum operators) in the classical, high-temperature, and harmonic limits. Various methods have been presented for the implementation of ELD. Numerical tests of the ELD approach in the Wigner or Husimi phase space have been made for a harmonic oscillator and two strongly anharmonic model problems; for each potential, autocorrelation functions of both linear and nonlinear operators have been calculated. This suggests that ELD can be a potentially useful approach for describing quantum effects in complex systems in the condensed phase.

  9. NLO corrections to the Kernel of the BKP-equations

    Energy Technology Data Exchange (ETDEWEB)

    Bartels, J. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Fadin, V.S. [Budker Institute of Nuclear Physics, Novosibirsk (Russian Federation); Novosibirskij Gosudarstvennyj Univ., Novosibirsk (Russian Federation); Lipatov, L.N. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Petersburg Nuclear Physics Institute, Gatchina, St. Petersburg (Russian Federation); Vacca, G.P. [INFN, Sezione di Bologna (Italy)

    2012-10-02

    We present results for the NLO kernel of the BKP equations for composite states of three reggeized gluons in the Odderon channel, both in QCD and in N=4 SYM. The NLO kernel consists of the NLO BFKL kernel in the color octet representation and the connected 3→3 kernel, computed in the tree approximation.

  10. A shortest-path graph kernel for estimating gene product semantic similarity

    Directory of Open Access Journals (Sweden)

    Alvarez Marco A

    2011-07-01

    Full Text Available Abstract Background Existing methods for calculating semantic similarity between gene products using the Gene Ontology (GO often rely on external resources, which are not part of the ontology. Consequently, changes in these external resources like biased term distribution caused by shifting of hot research topics, will affect the calculation of semantic similarity. One way to avoid this problem is to use semantic methods that are "intrinsic" to the ontology, i.e. independent of external knowledge. Results We present a shortest-path graph kernel (spgk method that relies exclusively on the GO and its structure. In spgk, a gene product is represented by an induced subgraph of the GO, which consists of all the GO terms annotating it. Then a shortest-path graph kernel is used to compute the similarity between two graphs. In a comprehensive evaluation using a benchmark dataset, spgk compares favorably with other methods that depend on external resources. Compared with simUI, a method that is also intrinsic to GO, spgk achieves slightly better results on the benchmark dataset. Statistical tests show that the improvement is significant when the resolution and EC similarity correlation coefficient are used to measure the performance, but is insignificant when the Pfam similarity correlation coefficient is used. Conclusions Spgk uses a graph kernel method in polynomial time to exploit the structure of the GO to calculate semantic similarity between gene products. It provides an alternative to both methods that use external resources and "intrinsic" methods with comparable performance.
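
    A much-simplified shortest-path kernel, comparing histograms of shortest-path lengths between two graphs, is sketched below as a toy stand-in for spgk; the example graphs and the histogram cut-off are arbitrary, and the real method operates on induced GO subgraphs.

```python
# Toy shortest-path kernel: compare histograms of shortest-path lengths between two graphs.
import networkx as nx
import numpy as np

def sp_length_histogram(G, max_len=10):
    hist = np.zeros(max_len + 1)
    for _, lengths in nx.all_pairs_shortest_path_length(G):
        for d in lengths.values():
            if 0 < d <= max_len:
                hist[d] += 1                       # count node pairs at each shortest-path distance
    return hist

def shortest_path_kernel(G1, G2):
    return float(np.dot(sp_length_histogram(G1), sp_length_histogram(G2)))

# Hypothetical small graphs standing in for GO annotation subgraphs.
G1 = nx.path_graph(5)
G2 = nx.cycle_graph(5)
print(shortest_path_kernel(G1, G2))
```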

  11. A Fast and Simple Graph Kernel for RDF

    NARCIS (Netherlands)

    de Vries, G.K.D.; de Rooij, S.

    2013-01-01

    In this paper we study a graph kernel for RDF based on constructing a tree for each instance and counting the number of paths in that tree. In our experiments this kernel shows comparable classification performance to the previously introduced intersection subtree kernel, but is significantly faster

  12. An SVM model with hybrid kernels for hydrological time series

    Science.gov (United States)

    Wang, C.; Wang, H.; Zhao, X.; Xie, Q.

    2017-12-01

    Support Vector Machine (SVM) models have been widely applied to the forecast of climate/weather and its impact on other environmental variables such as hydrologic response to climate/weather. When using SVM, the choice of the kernel function plays the key role. Conventional SVM models mostly use one single type of kernel function, e.g., radial basis kernel function. Provided that there are several featured kernel functions available, each having its own advantages and drawbacks, a combination of these kernel functions may give more flexibility and robustness to SVM approach, making it suitable for a wide range of application scenarios. This paper presents such a linear combination of radial basis kernel and polynomial kernel for the forecast of monthly flowrate in two gaging stations using SVM approach. The results indicate significant improvement in the accuracy of predicted series compared to the approach with either individual kernel function, thus demonstrating the feasibility and advantages of such hybrid kernel approach for SVM applications.
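
    A hedged sketch of such a hybrid kernel, a convex combination of an RBF and a polynomial kernel passed to scikit-learn's SVR as a callable, is shown below on toy data; the mixing weight and kernel parameters are assumptions, not values from the paper.

```python
# SVR with a hybrid kernel: K = alpha * RBF + (1 - alpha) * polynomial (illustrative).
import numpy as np
from sklearn.svm import SVR

def hybrid_kernel(X, Y, alpha=0.7, gamma=0.5, degree=2, coef0=1.0):
    sqx, sqy = (X**2).sum(1)[:, None], (Y**2).sum(1)[None, :]
    rbf = np.exp(-gamma * (sqx + sqy - 2 * X @ Y.T))     # radial basis part
    poly = (X @ Y.T + coef0) ** degree                   # polynomial part
    return alpha * rbf + (1 - alpha) * poly

rng = np.random.default_rng(7)
X = rng.uniform(-2, 2, size=(120, 3))                    # hypothetical predictors
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.normal(size=120)

model = SVR(kernel=hybrid_kernel).fit(X, y)
print(model.predict(X[:5]))
```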

  13. Kernel based eigenvalue-decomposition methods for analysing ham

    DEFF Research Database (Denmark)

    Christiansen, Asger Nyman; Nielsen, Allan Aasbjerg; Møller, Flemming

    2010-01-01

    methods, such as PCA, MAF or MNF. We therefore investigated the applicability of kernel based versions of these transformations. This meant implementing the kernel based methods and developing new theory, since kernel based MAF and MNF are not described in the literature yet. The traditional methods only have two factors that are useful for segmentation and none of them can be used to segment the two types of meat. The kernel based methods have a lot of useful factors and they are able to capture the subtle differences in the images. This is illustrated in Figure 1. You can see a comparison of the most useful factor of PCA and kernel based PCA respectively in Figure 2. The factor of the kernel based PCA turned out to be able to segment the two types of meat and in general that factor is much more distinct, compared to the traditional factor. After the orthogonal transformation a simple thresholding...

  14. Analysis of chlorophyll content and its correlation with yield attributing traits on early varieties of maize (Zea mays L.

    Directory of Open Access Journals (Sweden)

    Bikal Ghimire

    2015-12-01

    Full Text Available Chlorophyll has direct roles in photosynthesis and hence closely relates to the photosynthetic capacity, development and yield of crops. With the objective of exploring the role of chlorophyll content and its relation with other yield-attributing traits, a field experiment was conducted using fourteen early genotypes of maize in an RCBD design with three replications. Observations were made for Soil Plant Analysis Development (SPAD) reading, ear weight, number of kernel rows/ear, number of kernels/row, five hundred kernel weight and grain yield/hectare, and these traits were analyzed using Analysis of Variance (ANOVA) and correlation coefficient analysis. SPAD reading showed non-significant variation among the genotypes, while it revealed significant correlation with no. of kernels/row and grain yield/hectare, and highly significant correlation with no. of kernel rows/ear and ear weight, which are the most yield-determinative traits. For the trait grain yield/ha, followed by number of kernel rows/ear, genotype ARUN-1EV was found comparatively superior to ARUN-2 (standard check). Grain yield/hectare was highly heritable (>0.6), while no. of kernels/row, SPAD reading, ear weight and number of kernel rows/ear were moderately heritable (0.3-0.6). Correlation analysis and ANOVA revealed that ARUN-1EV, comparatively superior to ARUN-2 (standard check), had a higher SPAD reading than the mean SPAD reading, with significant correlation with no. of kernels/row, no. of kernel rows/ear, ear weight and grain yield/ha, which are all yield-determinative traits. This showed a positive and significant effect of chlorophyll content on the grain yield of maize.

  15. Reduced multiple empirical kernel learning machine.

    Science.gov (United States)

    Wang, Zhe; Lu, MingZhe; Gao, Daqi

    2015-02-01

    Multiple kernel learning (MKL) has been demonstrated to be flexible and effective in depicting heterogeneous data sources, since MKL can introduce multiple kernels rather than a single fixed kernel into applications. However, MKL incurs a high time and space complexity in contrast to single kernel learning, which is not desirable in real-world applications. Meanwhile, it is known that the kernel mapping in MKL generally takes two forms, implicit kernel mapping and empirical kernel mapping (EKM), where the latter has attracted less attention. In this paper, we focus on MKL with EKM, and propose a reduced multiple empirical kernel learning machine, named RMEKLM for short. To the best of our knowledge, this is the first work to reduce both the time and space complexity of MKL with EKM. Different from existing MKL, the proposed RMEKLM adopts the Gauss Elimination technique to extract a set of feature vectors, and it is validated that doing so does not lose much information from the original feature space. RMEKLM then uses the extracted feature vectors to span a reduced orthonormal subspace of the feature space, which is visualized in terms of its geometric structure. It can be demonstrated that the spanned subspace is isomorphic to the original feature space, which means that the dot product of two vectors in the original feature space is equal to that of the two corresponding vectors in the generated orthonormal subspace. More importantly, the proposed RMEKLM requires simpler computation and less storage space, especially in testing. Finally, the experimental results show that RMEKLM achieves efficient and effective performance in terms of both complexity and classification. The contributions of this paper can be given as follows: (1) by mapping the input space into an orthonormal subspace, the geometry of the generated subspace is visualized; (2) this paper first reduces both the time and space complexity of the EKM-based MKL; (3

  16. Kernel principal component analysis for change detection

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Morton, J.C.

    2008-01-01

    region acquired at two different time points. If change over time does not dominate the scene, the projection of the original two bands onto the second eigenvector will show change over time. In this paper a kernel version of PCA is used to carry out the analysis. Unlike ordinary PCA, kernel PCA with a Gaussian kernel successfully finds the change observations in a case where nonlinearities are introduced artificially.

  17. Enhanced gluten properties in soft kernel durum wheat

    Science.gov (United States)

    Soft kernel durum wheat is a relatively recent development (Morris et al. 2011 Crop Sci. 51:114). The soft kernel trait exerts profound effects on kernel texture, flour milling including break flour yield, milling energy, and starch damage, and dough water absorption (DWA). With the caveat of reduce...

  18. 7 CFR 981.61 - Redetermination of kernel weight.

    Science.gov (United States)

    2010-01-01

    ... GROWN IN CALIFORNIA; Order Regulating Handling; Volume Regulation; § 981.61 Redetermination of kernel weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds...

  19. Adaptive metric kernel regression

    DEFF Research Database (Denmark)

    Goutte, Cyril; Larsen, Jan

    2000-01-01

    Kernel smoothing is a widely used non-parametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this contribution, we propose an algorithm that adapts the input metric used in multivariate...... regression by minimising a cross-validation estimate of the generalisation error. This allows to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms...
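
    A bare-bones Nadaraya-Watson regressor with a per-dimension metric, the kind of model whose metric could then be tuned by cross-validation, is sketched below; it is my own illustration, not the paper's algorithm.

```python
# Nadaraya-Watson kernel regression with a diagonal (per-dimension) input metric.
import numpy as np

def nw_predict(X_train, y_train, X_test, scales):
    # scales: per-dimension length-scales; adapting them changes each input's importance.
    d = (X_test[:, None, :] - X_train[None, :, :]) / scales
    w = np.exp(-0.5 * np.sum(d**2, axis=2))              # Gaussian kernel weights
    return (w @ y_train) / np.clip(w.sum(axis=1), 1e-12, None)

rng = np.random.default_rng(8)
X = rng.uniform(-1, 1, size=(200, 3))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.normal(size=200)    # only the first input is relevant

# A cross-validation loop over `scales` would shrink the metric for irrelevant inputs;
# here we simply compare two hand-picked metrics.
for scales in (np.array([0.2, 0.2, 0.2]), np.array([0.2, 5.0, 5.0])):
    pred = nw_predict(X, y, X, scales)
    print(scales, np.mean((pred - y) ** 2))
```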

  20. Consistent Estimation of Pricing Kernels from Noisy Price Data

    OpenAIRE

    Vladislav Kargin

    2003-01-01

    If pricing kernels are assumed non-negative then the inverse problem of finding the pricing kernel is well-posed. The constrained least squares method provides a consistent estimate of the pricing kernel. When the data are limited, a new method is suggested: relaxed maximization of the relative entropy. This estimator is also consistent. Keywords: $\\epsilon$-entropy, non-parametric estimation, pricing kernel, inverse problems.

  1. Stable Kernel Representations as Nonlinear Left Coprime Factorizations

    NARCIS (Netherlands)

    Paice, A.D.B.; Schaft, A.J. van der

    1994-01-01

    A representation of nonlinear systems based on the idea of representing the input-output pairs of the system as elements of the kernel of a stable operator has been recently introduced. This has been denoted the kernel representation of the system. In this paper it is demonstrated that the kernel

  2. 7 CFR 981.60 - Determination of kernel weight.

    Science.gov (United States)

    2010-01-01

    ... Regulating Handling; Volume Regulation; § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which settlement...

  3. End-use quality of soft kernel durum wheat

    Science.gov (United States)

    Kernel texture is a major determinant of end-use quality of wheat. Durum wheat has very hard kernels. We developed soft kernel durum wheat via Ph1b-mediated homoeologous recombination. The Hardness locus was transferred from Chinese Spring to Svevo durum wheat via back-crossing. ‘Soft Svevo’ had SKC...

  4. Discrete non-parametric kernel estimation for global sensitivity analysis

    International Nuclear Information System (INIS)

    Senga Kiessé, Tristan; Ventura, Anne

    2016-01-01

    This work investigates the discrete kernel approach for evaluating the contribution of the variance of discrete input variables to the variance of model output, via analysis of variance (ANOVA) decomposition. Until recently only the continuous kernel approach has been applied as a metamodeling approach within the sensitivity analysis framework, for both discrete and continuous input variables. Now the discrete kernel estimation is known to be suitable for smoothing discrete functions. We present a discrete non-parametric kernel estimator of the ANOVA decomposition of a given model. An estimator of sensitivity indices is also presented with its asymptotic convergence rate. Some simulations on a test function analysis and a real case study from agriculture have shown that the discrete kernel approach outperforms the continuous kernel one for evaluating the contribution of moderate or most influential discrete parameters to the model output. - Highlights: • We study a discrete kernel estimation for sensitivity analysis of a model. • A discrete kernel estimator of ANOVA decomposition of the model is presented. • Sensitivity indices are calculated for discrete input parameters. • An estimator of sensitivity indices is also presented with its convergence rate. • An application is realized for improving the reliability of environmental models.

  5. Sorption Kinetics for the Removal of Cadmium and Zinc onto Palm Kernel Shell Based Activated Carbon

    Directory of Open Access Journals (Sweden)

    Muhammad Muhammad

    2010-12-01

    Full Text Available The kinetics and mechanism of cadmium and zinc adsorption on palm kernel shell based activated carbons (PKSAC) have been studied. A series of batch laboratory studies were conducted in order to investigate the suitability of palm kernel shell based activated carbon (PKSAC) for the removal of cadmium (cadmium ions) and zinc (zinc ions) from their aqueous solutions. All batch experiments were carried out at pH 7.0 and a constant temperature of 30 ± 1 °C using an incubator shaker operated at 150 rpm. The kinetic models investigated include the pseudo-first-order, the pseudo-second-order and the intraparticle diffusion models. The pseudo-second-order model correlated the experimental data excellently, suggesting that chemisorption processes could be the rate-limiting step. Keywords: adsorption, cadmium, kinetics, palm kernel shell, zinc
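
    To make the pseudo-second-order model concrete, its integrated form q(t) = k·qe²·t / (1 + k·qe·t) can be fitted to hypothetical uptake data as sketched below; the numbers are invented, not the study's measurements.

```python
# Fitting the pseudo-second-order kinetic model q(t) = k*qe^2*t / (1 + k*qe*t) to toy data.
import numpy as np
from scipy.optimize import curve_fit

def pseudo_second_order(t, qe, k):
    return k * qe**2 * t / (1.0 + k * qe * t)

# Hypothetical uptake data (mg/g vs. min); not the experimental values from the study.
t = np.array([5, 10, 20, 40, 60, 90, 120], dtype=float)
q = np.array([8.1, 12.5, 17.0, 20.3, 21.6, 22.4, 22.8])

(qe, k), _ = curve_fit(pseudo_second_order, t, q, p0=[25.0, 0.01])
print(f"qe = {qe:.1f} mg/g, k = {k:.4f} g/(mg*min)")
```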

  6. A model of individualized canonical microcircuits supporting cognitive operations.

    Directory of Open Access Journals (Sweden)

    Tim Kunze

    Full Text Available Major cognitive functions such as language, memory, and decision-making are thought to rely on distributed networks of a large number of basic elements, called canonical microcircuits. In this theoretical study we propose a novel canonical microcircuit model and find that it supports two basic computational operations: a gating mechanism and working memory. By means of bifurcation analysis we systematically investigate the dynamical behavior of the canonical microcircuit with respect to parameters that govern the local network balance, that is, the relationship between excitation and inhibition, and key intrinsic feedback architectures of canonical microcircuits. We relate the local behavior of the canonical microcircuit to cognitive processing and demonstrate how a network of interacting canonical microcircuits enables the establishment of spatiotemporal sequences in the context of syntax parsing during sentence comprehension. This study provides a framework for using individualized canonical microcircuits for the construction of biologically realistic networks supporting cognitive operations.

  7. Learning Rotation for Kernel Correlation Filter

    KAUST Repository

    Hamdi, Abdullah; Ghanem, Bernard

    2017-01-01

    This paper tries to tackle the problem of rotation by reformulating the optimization problem for learning the correlation filter. This modification (RKCF) includes learning a rotation filter that utilizes the circulant structure of the HOG feature to guesstimate

  8. Deep Restricted Kernel Machines Using Conjugate Feature Duality.

    Science.gov (United States)

    Suykens, Johan A K

    2017-08-01

    The aim of this letter is to propose a theory of deep restricted kernel machines offering new foundations for deep learning with kernel machines. From the viewpoint of deep learning, it is partially related to restricted Boltzmann machines, which are characterized by visible and hidden units in a bipartite graph without hidden-to-hidden connections and deep learning extensions as deep belief networks and deep Boltzmann machines. From the viewpoint of kernel machines, it includes least squares support vector machines for classification and regression, kernel principal component analysis (PCA), matrix singular value decomposition, and Parzen-type models. A key element is to first characterize these kernel machines in terms of so-called conjugate feature duality, yielding a representation with visible and hidden units. It is shown how this is related to the energy form in restricted Boltzmann machines, with continuous variables in a nonprobabilistic setting. In this new framework of so-called restricted kernel machine (RKM) representations, the dual variables correspond to hidden features. Deep RKM are obtained by coupling the RKMs. The method is illustrated for deep RKM, consisting of three levels with a least squares support vector machine regression level and two kernel PCA levels. In its primal form also deep feedforward neural networks can be trained within this framework.

  9. Improved modeling of clinical data with kernel methods.

    Science.gov (United States)

    Daemen, Anneleen; Timmerman, Dirk; Van den Bosch, Thierry; Bottomley, Cecilia; Kirk, Emma; Van Holsbeke, Caroline; Valentin, Lil; Bourne, Tom; De Moor, Bart

    2012-02-01

    Despite the rise of high-throughput technologies, clinical data such as age, gender and medical history guide clinical management for most diseases and examinations. To improve clinical management, available patient information should be fully exploited. This requires appropriate modeling of relevant parameters. When kernel methods are used, traditional kernel functions such as the linear kernel are often applied to the set of clinical parameters. These kernel functions, however, have their disadvantages due to the specific characteristics of clinical data, being a mix of variable types, each with its own range. We propose a new kernel function specifically adapted to the characteristics of clinical data. The clinical kernel function provides a better representation of patients' similarity by equalizing the influence of all variables and taking into account the range r of the variables. Moreover, it is robust with respect to changes in r. Incorporated in a least squares support vector machine, the new kernel function results in significantly improved diagnosis, prognosis and prediction of therapy response. This is illustrated on four clinical data sets within gynecology, with an average increase in test area under the ROC curve (AUC) of 0.023, 0.021, 0.122 and 0.019, respectively. Moreover, when combining clinical parameters and expression data in three case studies on breast cancer, results improved overall with use of the new kernel function and when considering both data types in a weighted fashion, with a larger weight assigned to the clinical parameters. The increase in AUC with respect to a standard kernel function and/or unweighted data combination was at most 0.127, 0.042 and 0.118 for the three case studies. For clinical data consisting of variables of different types, the proposed kernel function--which takes into account the type and range of each variable--has been shown to be a better alternative for linear and nonlinear classification problems.
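
    To make the range-normalization idea concrete, here is a simplified sketch of such a kernel for continuous variables only, with ranges estimated from the data; the function name and toy values are our own, and the paper's handling of other variable types is not reproduced.

```python
import numpy as np

def clinical_kernel(X, Z=None):
    """Sketch of a range-normalized kernel for continuous clinical variables:
    each variable contributes (r_i - |x_i - z_i|) / r_i, with r_i the variable's
    range estimated from X, and contributions are averaged so that every
    variable has equal influence regardless of its scale.  Nominal variables
    and other refinements from the paper are not handled here."""
    Z = X if Z is None else Z
    r = X.max(axis=0) - X.min(axis=0)
    r[r == 0] = 1.0                      # guard against constant columns
    K = np.zeros((X.shape[0], Z.shape[0]))
    for i in range(X.shape[1]):
        diff = np.abs(X[:, i, None] - Z[None, :, i])
        K += (r[i] - diff) / r[i]
    return K / X.shape[1]

# Toy usage: age in years and a 0-10 symptom score end up on the same footing.
X = np.array([[34.0, 2.0], [61.0, 7.5], [48.0, 5.0]])
print(np.round(clinical_kernel(X), 2))
```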

  10. Clinical Trials With Large Numbers of Variables: Important Advantages of Canonical Analysis.

    Science.gov (United States)

    Cleophas, Ton J

    2016-01-01

    Canonical analysis assesses the combined effects of a set of predictor variables on a set of outcome variables, but it is little used in clinical trials despite the omnipresence of multiple variables. The aim of this study was to assess the performance of canonical analysis as compared with traditional multivariate methods using multivariate analysis of covariance (MANCOVA). As an example, a simulated data file with 12 gene expression levels and 4 drug efficacy scores was used. The correlation coefficient between the 12 predictor and 4 outcome variables was 0.87 (P = 0.0001), meaning that 76% of the variability in the outcome variables was explained by the 12 covariates. Repeated testing after the removal of 5 unimportant predictor and 1 outcome variable produced virtually the same overall result. The MANCOVA identified identical unimportant variables, but it was unable to provide overall statistics. (1) Canonical analysis is remarkable, because it can handle many more variables than traditional multivariate methods such as MANCOVA can. (2) At the same time, it accounts for the relative importance of the separate variables, their interactions and differences in units. (3) Canonical analysis provides overall statistics of the effects of sets of variables, whereas traditional multivariate methods only provide the statistics of the separate variables. (4) Unlike other methods for combining the effects of multiple variables such as factor analysis/partial least squares, canonical analysis is scientifically entirely rigorous. (5) Limitations include that it is less flexible than factor analysis/partial least squares, because only 2 sets of variables are used and because multiple solutions instead of one are offered. We do hope that this article will stimulate clinical investigators to start using this remarkable method.
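
    Readers who want to try canonical analysis on data of this shape can do so in a few lines with scikit-learn; the simulated predictors and outcomes below merely mimic the 12-by-4 setup of the example and are not the study's data.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n = 200
# Simulated stand-ins for the example: 12 "gene expression" predictors and
# 4 "drug efficacy" outcomes sharing one latent factor.
latent = rng.normal(size=(n, 1))
X = latent @ rng.normal(size=(1, 12)) + rng.normal(size=(n, 12))
Y = latent @ rng.normal(size=(1, 4)) + rng.normal(size=(n, 4))

cca = CCA(n_components=1).fit(X, Y)
U, V = cca.transform(X, Y)
r = np.corrcoef(U[:, 0], V[:, 0])[0, 1]
print(f"first canonical correlation: {r:.2f}; shared variance: {100 * r**2:.0f}%")
```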

  11. Measurements of the Canonical Helicity Evolution of a Gyrating Kinked Flux Rope

    Science.gov (United States)

    von der Linden, J.; Sears, J.; Intrator, T.; You, S.

    2017-12-01

    Magnetic structures in the solar corona and planetary magnetospheres are often modelled as magnetic flux ropes governed by magnetohydrodynamics (MHD); however, inside these structures, as exhibited in reconnection, conversions between magnetic and kinetic energies occur over a wide range of scales. Flux ropes based on the flux of canonical momentum circulation extend the flux rope concept to include effects of finite particle momentum and present the distinct advantage of reconciling all plasma regimes - e.g. kinetic, two-fluid, and MHD - with the topological concept of helicity: twists, writhes, and linkages. This presentation shows the first visualization and analysis of the 3D dynamics of canonical flux ropes and their relative helicity evolution from laboratory measurements. Ion and electron canonical flux ropes are visualized from a dataset of Mach, triple, and Ḃ probe measurements at over 10,000 spatial locations of a gyrating kinked flux rope. The flux ropes co-gyrate with the peak density and electron temperature in and out of a measurement volume. The electron and ion canonical flux ropes twist with opposite handedness and the ion flux ropes writhe around the electron flux ropes. The relative cross helicity between the magnetic and ion flow vorticity flux ropes dominates the relative ion canonical helicity and is anti-correlated with the relative magnetic helicity. The 3D nature of the kink and a reverse eddy current affect the helicity evolution. This work is supported by DOE Grant DE-SC0010340 and the DOE Office of Science Graduate Student Research Program and prepared in part by LLNL under Contract DE-AC52-07NA27344. LLNL-ABS-735426

  12. Unsupervised detection and removal of muscle artifacts from scalp EEG recordings using canonical correlation analysis, wavelets and random forests.

    Science.gov (United States)

    Anastasiadou, Maria N; Christodoulakis, Manolis; Papathanasiou, Eleftherios S; Papacostas, Savvas S; Mitsis, Georgios D

    2017-09-01

    This paper proposes supervised and unsupervised algorithms for automatic muscle artifact detection and removal from long-term EEG recordings, which combine canonical correlation analysis (CCA) and wavelets with random forests (RF). The proposed algorithms first perform CCA and continuous wavelet transform of the canonical components to generate a number of features which include component autocorrelation values and wavelet coefficient magnitude values. A subset of the most important features is subsequently selected using RF and labelled observations (supervised case) or synthetic data constructed from the original observations (unsupervised case). The proposed algorithms are evaluated using realistic simulation data as well as 30-min epochs of non-invasive EEG recordings obtained from ten patients with epilepsy. We assessed the performance of the proposed algorithms using classification performance and goodness-of-fit values for noisy and noise-free signal windows. In the simulation study, where the ground truth was known, the proposed algorithms yielded almost perfect performance. In the case of experimental data, where expert marking was performed, the results suggest that both the supervised and unsupervised algorithm versions were able to remove artifacts without affecting noise-free channels considerably, outperforming standard CCA, independent component analysis (ICA) and Lagged Auto-Mutual Information Clustering (LAMIC). The proposed algorithms achieved excellent performance for both simulation and experimental data. Importantly, for the first time to our knowledge, we were able to perform entirely unsupervised artifact removal, i.e. without using already marked noisy data segments, achieving performance that is comparable to the supervised case. Overall, the results suggest that the proposed algorithms yield significant future potential for improving EEG signal quality in research or clinical settings without the need for marking by experts.

  13. Linear and kernel methods for multi- and hypervariate change detection

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Canty, Morton J.

    2010-01-01

    Principal component analysis (PCA) as well as maximum autocorrelation factor (MAF) and minimum noise fraction (MNF) analyses of IR-MAD images, both linear and kernel-based (which are nonlinear), may further enhance change signals relative to no-change background. The kernel versions are based on a dual formulation, also termed Q-mode analysis, in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products of the original data are replaced by inner products between nonlinear mappings into higher dimensional feature space. Via kernel substitution, also known as the kernel trick, these inner products between the mappings are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of the kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel principal component...
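
    The dual (Q-mode) formulation described above is easy to illustrate in code: everything goes through a centred Gram matrix and the nonlinear mapping never appears explicitly. The sketch below is generic kernel PCA on random data, not the authors' IR-MAD/MAF/MNF implementation.

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """Generic dual-form (Q-mode) kernel PCA sketch: the data enter only through
    inner products, here replaced by a Gaussian kernel (the kernel trick)."""
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centering in feature space
    Kc = J @ K @ J
    w, V = np.linalg.eigh(Kc)
    idx = np.argsort(w)[::-1][:n_components]  # leading eigenpairs
    return Kc @ (V[:, idx] / np.sqrt(w[idx])) # projections of the training samples

X = np.random.default_rng(1).normal(size=(100, 5))
print(kernel_pca(X).shape)   # (100, 2)
```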

  14. Kernel based orthogonalization for change detection in hyperspectral images

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    ...function and all quantities needed in the analysis are expressed in terms of this kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel PCA and MNF analyses handle nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via ... All 126 spectral bands of the HyMap are included in the analysis. Changes on the ground are most likely due to harvest having taken place between the two acquisitions and solar effects (both solar elevation and azimuth have changed). Both types of kernel analysis emphasize change and, unlike kernel PCA, kernel MNF...

  15. Mitigation of artifacts in rtm with migration kernel decomposition

    KAUST Repository

    Zhan, Ge; Schuster, Gerard T.

    2012-01-01

    The migration kernel for reverse-time migration (RTM) can be decomposed into four component kernels using Born scattering and migration theory. Each component kernel has a unique physical interpretation and can be interpreted differently

  16. Semi-Supervised Kernel PCA

    DEFF Research Database (Denmark)

    Walder, Christian; Henao, Ricardo; Mørup, Morten

    We present three generalisations of Kernel Principal Components Analysis (KPCA) which incorporate knowledge of the class labels of a subset of the data points. The first, MV-KPCA, penalises within-class variances, similar to Fisher discriminant analysis. The second, LSKPCA, is a hybrid of least squares regression and kernel PCA. The final, LR-KPCA, is an iteratively reweighted version of the previous which achieves a sigmoid loss function on the labeled points. We provide a theoretical risk bound as well as illustrative experiments on real and toy data sets.

  17. Adaptive Metric Kernel Regression

    DEFF Research Database (Denmark)

    Goutte, Cyril; Larsen, Jan

    1998-01-01

    Kernel smoothing is a widely used nonparametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this paper, we propose an algorithm that adapts the input metric used in multivariate regression by minimising a cross-validation estimate of the generalisation error. This allows one to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms the standard...
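
    A crude way to reproduce the flavour of this idea is to give the Nadaraya-Watson estimator one bandwidth per input dimension and choose the bandwidths by a leave-one-out criterion. In the sketch below a grid search stands in for the gradient-based adaptation of the paper, and all names and values are illustrative.

```python
import numpy as np

def nw_predict(Xtr, ytr, Xte, bandwidth):
    """Nadaraya-Watson regression with one bandwidth per input dimension,
    i.e. a diagonal input metric."""
    d2 = np.sum(((Xte[:, None, :] - Xtr[None, :, :]) / bandwidth) ** 2, axis=2)
    W = np.exp(-0.5 * d2)
    return (W @ ytr) / W.sum(axis=1)

def loo_error(X, y, bandwidth):
    """Leave-one-out squared error, the criterion used here to adapt the metric."""
    d2 = np.sum(((X[:, None, :] - X[None, :, :]) / bandwidth) ** 2, axis=2)
    W = np.exp(-0.5 * d2)
    np.fill_diagonal(W, 0.0)
    return np.mean(((W @ y) / W.sum(axis=1) - y) ** 2)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=200)   # second input is irrelevant

# Crude grid search over per-dimension bandwidths: a large bandwidth on the
# irrelevant dimension effectively removes it from the metric.
grid = [0.1, 0.3, 1.0, 3.0, 10.0]
best = min((loo_error(X, y, np.array([b1, b2])), (b1, b2)) for b1 in grid for b2 in grid)
print("best per-dimension bandwidths:", best[1])
print("prediction at (0.2, 0.0):", nw_predict(X, y, np.array([[0.2, 0.0]]), np.array(best[1])))
```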

  18. QTL Mapping of Kernel Number-Related Traits and Validation of One Major QTL for Ear Length in Maize.

    Science.gov (United States)

    Huo, Dongao; Ning, Qiang; Shen, Xiaomeng; Liu, Lei; Zhang, Zuxin

    2016-01-01

    The kernel number is a grain yield component and an important maize breeding goal. Ear length, kernel number per row and ear row number are highly correlated with the kernel number per ear, which eventually determines the ear weight and grain yield. In this study, two sets of F2:3 families developed from two bi-parental crosses sharing one inbred line were used to identify quantitative trait loci (QTL) for four kernel number-related traits: ear length, kernel number per row, ear row number and ear weight. A total of 39 QTLs for the four traits were identified in the two populations. The phenotypic variance explained by a single QTL ranged from 0.4% to 29.5%. Additionally, 14 overlapping QTLs formed 5 QTL clusters on chromosomes 1, 4, 5, 7, and 10. Intriguingly, six QTLs for ear length and kernel number per row overlapped in a region on chromosome 1. This region was designated qEL1.10 and was validated as being simultaneously responsible for ear length, kernel number per row and ear weight in a near isogenic line-derived population, suggesting that qEL1.10 was a pleiotropic QTL with large effects. Furthermore, the performance of hybrids generated by crossing 6 elite inbred lines with two near isogenic lines at qEL1.10 showed the breeding value of qEL1.10 for the improvement of the kernel number and grain yield of maize hybrids. This study provides a basis for further fine mapping, molecular marker-aided breeding and functional studies of kernel number-related traits in maize.

  19. Restoring canonical partition functions from imaginary chemical potential

    Science.gov (United States)

    Bornyakov, V. G.; Boyda, D.; Goy, V.; Molochkov, A.; Nakamura, A.; Nikolaev, A.; Zakharov, V. I.

    2018-03-01

    Using GPGPU techniques and multi-precision calculation we developed code to study the QCD phase transition line in the canonical approach. The canonical approach is a powerful tool to investigate the sign problem in lattice QCD. The central part of the canonical approach is the fugacity expansion of the grand canonical partition function. Canonical partition functions Zn(T) are the coefficients of this expansion. Using various methods we study the properties of Zn(T). In the last step we perform a cubic spline fit of the temperature dependence of Zn(T) at fixed n and compute the baryon number susceptibility χB/T² as a function of temperature. We then compute ∂χ/∂T numerically and restore the crossover line in the QCD phase diagram. We use improved Wilson fermions and the Iwasaki gauge action on a 16³ × 4 lattice with mπ/mρ = 0.8 as a sandbox to check the canonical approach. In this framework we obtain the coefficient in the parametrization of the crossover line Tc(µB²) = Tc(C - ĸµB²/Tc²) with ĸ = -0.0453 ± 0.0099.

  20. 21 CFR 176.350 - Tamarind seed kernel powder.

    Science.gov (United States)

    2010-04-01

    Title 21, Food and Drugs, Substances for Use Only as Components of Paper and Paperboard, § 176.350 Tamarind seed kernel powder: Tamarind seed kernel powder may be safely used as a component of articles intended for use in producing...

  1. Dense Medium Machine Processing Method for Palm Kernel/ Shell ...

    African Journals Online (AJOL)

    ADOWIE PERE

    Cracked palm kernel is a mixture of kernels, broken shells, dusts and other impurities. In ... machine processing method using dense medium, a separator, a shell collector and a kernel .... efficiency, ease of maintenance and uniformity of.

  2. Canonical ensembles and nonzero density quantum chromodynamics

    International Nuclear Information System (INIS)

    Hasenfratz, A.; Toussaint, D.

    1992-01-01

    We study QCD with nonzero chemical potential on 4⁴ lattices by averaging over the canonical partition functions, or sectors with fixed quark number. We derive a condensed matrix of size 2×3×L³ whose eigenvalues can be used to find the canonical partition functions. We also experiment with a weight for configuration generation which respects the Z(3) symmetry which forces the canonical partition function to be zero for quark numbers that are not multiples of three. (orig.)

  3. Multivariate and semiparametric kernel regression

    OpenAIRE

    Härdle, Wolfgang; Müller, Marlene

    1997-01-01

    The paper gives an introduction to theory and application of multivariate and semiparametric kernel smoothing. Multivariate nonparametric density estimation is an often used pilot tool for examining the structure of data. Regression smoothing helps in investigating the association between covariates and responses. We concentrate on kernel smoothing using local polynomial fitting which includes the Nadaraya-Watson estimator. Some theory on the asymptotic behavior and bandwidth selection is pro...

  4. Notes on the gamma kernel

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole E.

    The density function of the gamma distribution is used as shift kernel in Brownian semistationary processes modelling the timewise behaviour of the velocity in turbulent regimes. This report presents exact and asymptotic properties of the second order structure function under such a model, and relates these to results of von Kármán and Howarth. But first it is shown that the gamma kernel is interpretable as a Green’s function.

  5. Calculation of the thermal neutron scattering kernel using the synthetic model. Pt. 2. Zero-order energy transfer kernel

    International Nuclear Information System (INIS)

    Drozdowicz, K.

    1995-01-01

    A comprehensive unified description of the application of Granada's Synthetic Model to slow-neutron scattering by molecular systems is continued. Detailed formulae for the zero-order energy transfer kernel are presented, based on the general formalism of the model. An explicit analytical formula for the total scattering cross section as a function of the incident neutron energy is also obtained. Expressions of the free gas model for the zero-order scattering kernel and for the total scattering kernel are considered as a sub-case of the Synthetic Model. (author). 10 refs

  6. Convergence of barycentric coordinates to barycentric kernels

    KAUST Repository

    Kosinka, Jiří

    2016-02-12

    We investigate the close correspondence between barycentric coordinates and barycentric kernels from the point of view of the limit process when finer and finer polygons converge to a smooth convex domain. We show that any barycentric kernel is the limit of a set of barycentric coordinates and prove that the convergence rate is quadratic. Our convergence analysis extends naturally to barycentric interpolants and mappings induced by barycentric coordinates and kernels. We verify our theoretical convergence results numerically on several examples.

  7. Convergence of barycentric coordinates to barycentric kernels

    KAUST Repository

    Kosinka, Jiří ; Barton, Michael

    2016-01-01

    We investigate the close correspondence between barycentric coordinates and barycentric kernels from the point of view of the limit process when finer and finer polygons converge to a smooth convex domain. We show that any barycentric kernel is the limit of a set of barycentric coordinates and prove that the convergence rate is quadratic. Our convergence analysis extends naturally to barycentric interpolants and mappings induced by barycentric coordinates and kernels. We verify our theoretical convergence results numerically on several examples.

  8. Backlund transformations as canonical transformations

    International Nuclear Information System (INIS)

    Villani, A.; Zimerman, A.H.

    1977-01-01

    Toda and Wadati as well as Kodama and Wadati have shown that the Backlund transformations for the exponential lattice equation, sine-Gordon equation, K-dV (Korteweg-de Vries) equation and modified K-dV equation are canonical transformations. It is shown that the Backlund transformations for the Boussinesq equation, for a generalized K-dV equation, for a model equation for shallow water waves and for the nonlinear Schroedinger equation are also canonical transformations [pt

  9. El Escritor y las Normas del Canon Literario (The Writer and the Norms of the Literary Canon).

    Science.gov (United States)

    Policarpo, Alcibiades

    This paper speculates about whether a literary canon exists in contemporary Latin American literature, particularly in the prose genre. The paper points to Carlos Fuentes, Gabriel Garcia Marquez, and Mario Vargas Llosa as the three authors who might form this traditional and liberal canon with their works "La Muerte de Artemio Cruz"…

  10. Periodicity, the Canon and Sport

    Directory of Open Access Journals (Sweden)

    Thomas F. Scanlon

    2015-10-01

    Full Text Available The topic according to this title is admittedly a broad one, embracing two very general concepts of time and of the cultural valuation of artistic products. Both phenomena are, in the present view, largely constructed by their contemporary cultures, and given authority to a great extent from the prestige of the past. The antiquity of tradition brings with it a certain cachet. Even though there may be peripheral debates in any given society which question the specifics of periodization or canonicity, individuals generally accept the consensus designation of a sequence of historical periods and they accept a list of highly valued artistic works as canonical or authoritative. We will first examine some of the processes of periodization and of canon-formation, after which we will discuss some specific examples of how these processes have worked in the sport of two ancient cultures, namely Greece and Mesoamerica.

  11. Constructing canonical bases of quantized enveloping algebras

    OpenAIRE

    Graaf, W.A. de

    2001-01-01

    An algorithm for computing the elements of a given weight of the canonical basis of a quantized enveloping algebra is described. Subsequently, a similar algorithm is presented for computing the canonical basis of a finite-dimensional module.

  12. Kernel Korner : The Linux keyboard driver

    NARCIS (Netherlands)

    Brouwer, A.E.

    1995-01-01

    Our Kernel Korner series continues with an article describing the Linux keyboard driver. This article is not for "Kernel Hackers" only--in fact, it will be most useful to those who wish to use their own keyboard to its fullest potential, and those who want to write programs to take advantage of the

  13. Canonical and Non-Canonical NF-κB Signaling Promotes Breast Cancer Tumor-Initiating Cells

    Science.gov (United States)

    Kendellen, Megan F.; Bradford, Jennifer W.; Lawrence, Cortney L.; Clark, Kelly S.; Baldwin, Albert S.

    2014-01-01

    Tumor-initiating cells (TICs) are a sub-population of cells that exhibit a robust ability to self-renew and contribute to the formation of primary tumors, the relapse of previously treated tumors, and the development of metastases. TICs have been identified in various tumors, including those of the breast, and are particularly enriched in the basal-like and claudin-low subtypes of breast cancer. The signaling pathways that contribute to the function and maintenance of TICs are under intense study. We explored the potential involvement of the NF-κB family of transcription factors in TICs in cell lines that are representative of basal-like and claudin-low breast cancer. NF-κB was found to be activated in breast cancer cells that form tumorspheres efficiently. Moreover, both canonical and non-canonical NF-κB signaling is required for these cells to self-renew in vitro and to form xenograft tumors efficiently in vivo using limiting dilutions of cells. Consistent with this, canonical and non-canonical NF-κB signaling is activated in TICs isolated from breast cancer cell lines. Experimental results indicate that NF-κB promotes the function of TICs by stimulating epithelial-to-mesenchymal transition (EMT) and by upregulating the expression of the inflammatory cytokines IL-1β and IL-6. The results suggest the use of NF-κB inhibitors for clinical therapy of certain breast cancers. PMID:23474754

  14. The heating of UO_2 kernels in argon gas medium on the physical properties of sintered UO_2 kernels

    International Nuclear Information System (INIS)

    Damunir; Sri Rinanti Susilowati; Ariyani Kusuma Dewi

    2015-01-01

    The effect of heating UO_2 kernels in an argon gas medium on the physical properties of sintered UO_2 kernels was studied. The heating of the UO_2 kernels was conducted in a bed-type sintering reactor. The sample used was UO_2 kernels resulting from reduction at 800 °C for 3 hours, which had a density of 8.13 g/cm³, porosity of 0.26, O/U ratio of 2.05, diameter of 1146 μm and sphericity of 1.05. The sample was put into the sintering reactor, which was then purged with argon gas at 180 mmHg to remove the air from the reactor. After that, the cooling water and argon gas were flowed continuously at a pressure of 5 mPa and a flow rate of 1.5 liters/minute. The reactor temperature was increased and varied between 1200 and 1500 °C for 1-4 hours. The sintered UO_2 kernels resulting from the study were analyzed in terms of their physical properties, including density, porosity, diameter, sphericity and specific surface area. The density was analyzed using a pycnometer with CCl_4 solution. The porosity was determined using the Haynes equation. The diameter and sphericity were measured using a Dino-Lite microscope. The specific surface area was determined using a Nova-1000 surface area meter. The results showed that heating UO_2 kernels in an argon gas medium influenced the physical properties of the sintered UO_2 kernels. The best conditions were obtained at a temperature of 1400 °C and a time of 2 hours, which produced sintered UO_2 kernels with a density of 10.14 g/ml, porosity of 7%, diameter of 893 μm, sphericity of 1.07 and specific surface area of 4.68 m²/g, with a shrinkage of 22%. (author)

  15. Canonization in early twentieth-century Chinese art history’

    Directory of Open Access Journals (Sweden)

    Guo Hui

    2014-06-01

    Full Text Available Since the 1980s, the discussion of canons has been a dominant theme in the discipline of Western art history. Various concerns have emerged regarding ‘questions of artistic judgment’, ‘the history genesis of masterpieces’, ‘variations in taste’, ‘the social instruments of canonicity’, and ‘how canons disappear’. Western art historians have considered how the canon’s appearance in Western visual art embodies aesthetic, ideological, cultural, social, and symbolic values. In Chinese art history, the idea of a canon including masterpieces, important artists, and forms of art, dates back to the mid ninth century when Zhang Yanyuan wrote his painting history Record of Famous Painters of All the Dynasties. Faced with quite different political, economic, and social conditions amid the instability of the early twentieth century, Chinese scholars attempted to discover new canons for cultural orthodoxy and authority. Modern means for canonization, such as museums and exhibition displays, cultural and academic institutions, and massive art publications with image reproduction in good quality, brought the process up to an unprecedented speed. It is true that most of these means have comparable counterparts in pre-modern times. However, their enormous scope and overwhelming influence are far beyond the reach of their imperial counterparts. Through an inter-textual reading of the publications on Chinese art history in early twentieth-century China, this paper explores the transformation of canons in order to shed light on why and how canonical formation happened during the Republican period of China. Despite the diverse styles and strategies which Chinese writers used in their narratives, Chinese art historical books produced during the Republican period canonized and de-canonized artworks. In this paper, the discussion of these texts, with reference to other art historical works, comprises three parts: 1 canon formation of artistic forms

  16. The degeneracy problem in non-canonical inflation

    International Nuclear Information System (INIS)

    Easson, Damien A.; Powell, Brian A.

    2013-01-01

    While attempting to connect inflationary theories to observational physics, a potential difficulty is the degeneracy problem: a single set of observables maps to a range of different inflaton potentials. Two important classes of models affected by the degeneracy problem are canonical and non-canonical models, the latter marked by the presence of a non-standard kinetic term that generates observables beyond the scalar and tensor two-point functions on CMB scales. The degeneracy problem is manifest when these distinguishing observables go undetected. We quantify the size of the resulting degeneracy in this case by studying the most well-motivated non-canonical theory having a Dirac-Born-Infeld Lagrangian. Beyond the scalar and tensor two-point functions on CMB scales, we then consider the possible detection of equilateral non-Gaussianity at Planck precision and a measurement of primordial gravitational waves from prospective space-based laser interferometers. The former detection breaks the degeneracy with canonical inflation but results in poor reconstruction prospects, while the latter measurement enables a determination of n_T which, while not breaking the degeneracy, can be shown to greatly improve the non-canonical reconstruction.

  17. Mitigation of artifacts in rtm with migration kernel decomposition

    KAUST Repository

    Zhan, Ge

    2012-01-01

    The migration kernel for reverse-time migration (RTM) can be decomposed into four component kernels using Born scattering and migration theory. Each component kernel has a unique physical interpretation and can be interpreted differently. In this paper, we present a generalized diffraction-stack migration approach for reducing RTM artifacts via decomposition of migration kernel. The decomposition leads to an improved understanding of migration artifacts and, therefore, presents us with opportunities for improving the quality of RTM images.

  18. Automatic classification of retinal three-dimensional optical coherence tomography images using principal component analysis network with composite kernels.

    Science.gov (United States)

    Fang, Leyuan; Wang, Chong; Li, Shutao; Yan, Jun; Chen, Xiangdong; Rabbani, Hossein

    2017-11-01

    We present an automatic method, termed as the principal component analysis network with composite kernel (PCANet-CK), for the classification of three-dimensional (3-D) retinal optical coherence tomography (OCT) images. Specifically, the proposed PCANet-CK method first utilizes the PCANet to automatically learn features from each B-scan of the 3-D retinal OCT images. Then, multiple kernels are separately applied to a set of very important features of the B-scans and these kernels are fused together, which can jointly exploit the correlations among features of the 3-D OCT images. Finally, the fused (composite) kernel is incorporated into an extreme learning machine for the OCT image classification. We tested our proposed algorithm on two real 3-D spectral domain OCT (SD-OCT) datasets (of normal subjects and subjects with the macular edema and age-related macular degeneration), which demonstrated its effectiveness. (2017) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
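
    The composite-kernel step can be sketched independently of the PCANet features: kernels computed on different feature sets of the same samples are fused by a weighted sum and passed to a kernel classifier. The snippet below uses random stand-in features and an SVM with a precomputed kernel in place of the extreme learning machine used in the paper.

```python
import numpy as np
from sklearn.svm import SVC

def rbf(X, Z, gamma):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
# Two hypothetical feature sets extracted from the same 120 samples
# (stand-ins for the per-B-scan PCANet features of the paper).
F1 = rng.normal(size=(120, 10))
F2 = rng.normal(size=(120, 6))
y = (F1[:, 0] + F2[:, 0] > 0).astype(int)

# Composite kernel: convex combination of the per-feature-set kernels.
w = 0.6
K = w * rbf(F1, F1, gamma=0.1) + (1 - w) * rbf(F2, F2, gamma=0.1)

# Any kernel classifier can consume the fused Gram matrix; an SVM with a
# precomputed kernel is used here instead of the extreme learning machine.
clf = SVC(kernel="precomputed").fit(K, y)
print("training accuracy:", clf.score(K, y))
```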

  19. Stable micron-scale holes are a general feature of canonical holins.

    Science.gov (United States)

    Savva, Christos G; Dewey, Jill S; Moussa, Samir H; To, Kam H; Holzenburg, Andreas; Young, Ry

    2014-01-01

    At a programmed time in phage infection cycles, canonical holins suddenly trigger to cause lethal damage to the cytoplasmic membrane, resulting in the cessation of respiration and the non-specific release of pre-folded, fully active endolysins to the periplasm. For the paradigm holin S105 of lambda, triggering is correlated with the formation of micron-scale membrane holes, visible as interruptions in the bilayer in cryo-electron microscopic images and tomographic reconstructions. Here we report that the size distribution of the holes is stable for long periods after triggering. Moreover, early triggering caused by an early lysis allele of S105 formed approximately the same number of holes, but the lesions were significantly smaller. In contrast, early triggering prematurely induced by energy poisons resulted in many fewer visible holes, consistent with previous sizing studies. Importantly, the unrelated canonical holins P2 Y and T4 T were found to cause the formation of holes of approximately the same size and number as for lambda. In contrast, no such lesions were visible after triggering of the pinholin S(21) 68. These results generalize the hole formation phenomenon for canonical holins. A model is presented suggesting the unprecedentedly large size of these holes is related to the timing mechanism. © 2013 John Wiley & Sons Ltd.

  20. Spanish Literature and Spectrality : Notes on a Haunted Canon

    NARCIS (Netherlands)

    Valdivia, Pablo

    In Spanish Literature, Crisis and Spectrality: Notes on a Haunted Canon, Prof. Dr. Pablo Valdivia analyses the contradictions and complexities of the Spanish traditional canon from a transnational approach. Valdivia explores this particular canon as a 'haunted house' by focusing on the specific

  1. Realized kernels in practice

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Hansen, P. Reinhard; Lunde, Asger

    2009-01-01

    Realized kernels use high-frequency data to estimate daily volatility of individual stock prices. They can be applied to either trade or quote data. Here we provide the details of how we suggest implementing them in practice. We compare the estimates based on trade and quote data for the same stock ... and find a remarkable level of agreement. We identify some features of the high-frequency data, which are challenging for realized kernels. They are when there are local trends in the data, over periods of around 10 minutes, where the prices and quotes are driven up or down. These can be associated...
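
    A stripped-down realized-kernel computation is sketched below on a simulated noisy price path: realized autocovariances of high-frequency returns are combined with Parzen weights. Bandwidth selection, end-point treatment and the other practical refinements discussed in the paper are omitted, and the data are synthetic.

```python
import numpy as np

def parzen(x):
    """Parzen weight function commonly used in realized-kernel estimators."""
    x = abs(x)
    if x <= 0.5:
        return 1 - 6 * x**2 + 6 * x**3
    if x <= 1.0:
        return 2 * (1 - x)**3
    return 0.0

def realized_kernel(returns, H):
    """Sketch of a realized kernel: Parzen-weighted sum of realized
    autocovariances of high-frequency returns, with bandwidth H."""
    r = np.asarray(returns)
    rk = np.dot(r, r)                            # realized variance (h = 0 term)
    for h in range(1, H + 1):
        rk += 2 * parzen(h / (H + 1)) * np.dot(r[h:], r[:-h])
    return rk

# Toy example: efficient price plus i.i.d. microstructure noise, 390 one-minute returns.
rng = np.random.default_rng(0)
true_var = 1e-4
price = np.cumsum(rng.normal(0.0, np.sqrt(true_var / 390), 390))
noisy = price + rng.normal(0.0, 5e-4, 390)
r = np.diff(noisy)
print("realized variance:", round(np.dot(r, r), 6),
      " realized kernel:", round(realized_kernel(r, H=10), 6))
```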

  2. Stability Performance of Inductively Coupled Plasma Mass Spectrometry-Phenotyped Kernel Minerals Concentration and Grain Yield in Maize in Different Agro-Climatic Zones.

    Directory of Open Access Journals (Sweden)

    Mallana Gowdra Mallikarjuna

    Full Text Available Deficiency of iron and zinc causes micronutrient malnutrition or hidden hunger, which severely affects ~25% of the global population. Genetic biofortification of maize has emerged as a cost-effective and sustainable approach to addressing iron and zinc deficiency. Therefore, understanding the genetic variation and stability of kernel micronutrients and grain yield of maize inbreds is a prerequisite for breeding micronutrient-rich, high-yielding hybrids to alleviate micronutrient malnutrition. We report here the genetic variability and stability of kernel micronutrient concentrations and grain yield in a panel of 50 maize inbreds selected from national and international centres and raised at six different maize growing regions of India. Phenotyping of kernels using inductively coupled plasma mass spectrometry (ICP-MS) revealed considerable variability for kernel mineral concentrations (iron: 18.88 to 47.65 mg kg⁻¹; zinc: 5.41 to 30.85 mg kg⁻¹; manganese: 3.30 to 17.73 mg kg⁻¹; copper: 0.53 to 5.48 mg kg⁻¹) and grain yield (826.6 to 5413 kg ha⁻¹). Significant positive correlation was observed between kernel iron and zinc within (r = 0.37 to r = 0.52, p < 0.05) and across locations (r = 0.44, p < 0.01). Variance components of the additive main effects and multiplicative interactions (AMMI) model showed significant genotype and genotype × environment interaction for kernel mineral concentrations and grain yield. Most of the variation was contributed by the genotype main effect for kernel iron (39.6%), manganese (41.34%) and copper (41.12%), and by environment main effects for both kernel zinc (40.5%) and grain yield (37.0%). Genotype main effect plus genotype-by-environment interaction (GGE) biplot identified several mega-environments for kernel minerals and grain yield. Comparison of stability parameters revealed the AMMI stability value (ASV) as the better representative of the AMMI stability parameters. Dynamic stability...

  3. Anatomically-aided PET reconstruction using the kernel method.

    Science.gov (United States)

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T; Catana, Ciprian; Qi, Jinyi

    2016-09-21

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.

  4. Embedded real-time operating system micro kernel design

    Science.gov (United States)

    Cheng, Xiao-hui; Li, Ming-qiang; Wang, Xin-zheng

    2005-12-01

    Embedded systems usually require real-time behavior. Based on an 8051 microcontroller, an embedded real-time operating system micro kernel is proposed consisting of six parts: critical section handling, task scheduling, interrupt handling, semaphore and message mailbox communication, clock management and memory management. CPU time and other resources are distributed among tasks rationally according to their importance and urgency. The design proposed here provides the position, definition, function and principle of the micro kernel. The kernel runs on the platform of an ATMEL AT89C51 microcontroller. Simulation results show that the designed micro kernel is stable and reliable and responds quickly while operating in an application system.

  5. Kernel Temporal Differences for Neural Decoding

    Science.gov (United States)

    Bae, Jihye; Sanchez Giraldo, Luis G.; Pohlmeyer, Eric A.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.

    2015-01-01

    We study the feasibility and capability of the kernel temporal difference (KTD)(λ) algorithm for neural decoding. KTD(λ) is an online, kernel-based learning algorithm, which has been introduced to estimate value functions in reinforcement learning. This algorithm combines kernel-based representations with the temporal difference approach to learning. One of our key observations is that by using strictly positive definite kernels, the algorithm's convergence can be guaranteed for policy evaluation. The algorithm's nonlinear functional approximation capabilities are shown in both simulations of policy evaluation and neural decoding problems (policy improvement). KTD can handle high-dimensional neural states containing spatiotemporal information at reasonable computational complexity, allowing real-time applications. When the algorithm seeks a proper mapping between a monkey's neural states and desired positions of a computer cursor or a robot arm, in both open-loop and closed-loop experiments, it can effectively learn the neural-state-to-action mapping. Finally, a visualization of the coadaptation process between the decoder and the subject shows the algorithm's capabilities in reinforcement learning brain-machine interfaces. PMID:25866504
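
    To make the flavour of kernel temporal-difference learning concrete, the sketch below runs a bare-bones kernel TD(0) value estimator on a toy random-walk chain; sparsification, eligibility traces (the λ part) and the neural-decoding application are all left out, and the constants are arbitrary.

```python
import numpy as np

def kernel_td0(episodes, gamma=0.95, eta=0.1, kwidth=0.5):
    """Bare-bones kernel TD(0) sketch for policy evaluation: the value function
    is a Gaussian-kernel expansion over visited states, and every TD error adds
    one new kernel unit (no sparsification, no eligibility traces)."""
    centers, weights = [], []

    def value(s):
        if not centers:
            return 0.0
        c = np.array(centers)
        k = np.exp(-np.sum((c - s) ** 2, axis=1) / (2 * kwidth ** 2))
        return float(np.dot(weights, k))

    for episode in episodes:
        for s, r, s_next in episode:
            target = r + (gamma * value(s_next) if s_next is not None else 0.0)
            delta = target - value(s)          # TD error
            centers.append(np.atleast_1d(s))
            weights.append(eta * delta)
    return value

# Toy chain MDP with states 0..4; reward 1 for reaching the right end.
rng = np.random.default_rng(0)
episodes = []
for _ in range(200):
    s, ep = 2, []
    while 0 < s < 4:
        s2 = s + rng.choice([-1, 1])
        ep.append((float(s), 1.0 if s2 == 4 else 0.0, None if s2 in (0, 4) else float(s2)))
        s = s2
    episodes.append(ep)

V = kernel_td0(episodes)
print([round(V(float(s)), 2) for s in range(1, 4)])
```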

  6. Collision kernels in the eikonal approximation for Lennard-Jones interaction potential

    International Nuclear Information System (INIS)

    Zielinska, S.

    1985-03-01

    Velocity-changing collisions are conveniently described by collision kernels. These kernels depend on an interaction potential, so there is a need to evaluate them for realistic interatomic potentials. Using the collision kernels, we are able to investigate the redistribution of atomic populations caused by the laser light and velocity-changing collisions. In this paper we present a method for evaluating the collision kernels in the eikonal approximation. We discuss the influence of the potential parameters R_o^i and ε_o^i on the kernel width for a given atomic state. It turns out that, unlike the collision kernel for the hard-sphere model of scattering, the Lennard-Jones kernel is not as sensitive to changes of R_o^i. Contrary to the general tendency of approximating collision kernels by a Gaussian curve, kernels for the Lennard-Jones potential do not exhibit such behaviour. (author)

  7. Classification of maize kernels using NIR hyperspectral imaging

    DEFF Research Database (Denmark)

    Williams, Paul; Kucheryavskiy, Sergey V.

    2016-01-01

    NIR hyperspectral imaging was evaluated to classify maize kernels of three hardness categories: hard, medium and soft. Two approaches, pixel-wise and object-wise, were investigated to group kernels according to hardness. The pixel-wise classification assigned a class to every pixel from individual...... and specificity of 0.95 and 0.93). Both feature extraction methods can be recommended for classification of maize kernels on production scale....

  8. Evolution kernel for the Dirac field

    International Nuclear Information System (INIS)

    Baaquie, B.E.

    1982-06-01

    The evolution kernel for the free Dirac field is calculated using the Wilson lattice fermions. We discuss the difficulties due to which this calculation has not been previously performed in the continuum theory. The continuum limit is taken, and the complete energy eigenfunctions as well as the propagator are then evaluated in a new manner using the kernel. (author)

  9. Gradient-based adaptation of general gaussian kernels.

    Science.gov (United States)

    Glasmachers, Tobias; Igel, Christian

    2005-10-01

    Gradient-based optimization of Gaussian kernel functions is considered. The gradient for the adaptation of scaling and rotation of the input space is computed to achieve invariance against linear transformations. This is done by using the exponential map as a parameterization of the kernel parameter manifold. By restricting the optimization to a constant trace subspace, the kernel size can be controlled. This is, for example, useful to prevent overfitting when minimizing radius-margin generalization performance measures. The concepts are demonstrated by training hard-margin support vector machines on toy data.
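
    The object being adapted can be written down directly: k(x, z) = exp(−(x − z)ᵀM(x − z)) with M = BᵀB, so that B encodes both scaling and rotation of the input space. The sketch below only evaluates such a kernel for a fixed, hypothetical B; the exponential-map parameterization and the gradient computation of the paper are not reproduced.

```python
import numpy as np

def general_gaussian_kernel(X, Z, B):
    """General Gaussian kernel k(x, z) = exp(-(x - z)^T M (x - z)) with
    M = B^T B, so B encodes both scaling and rotation of the input space.
    Only the kernel evaluation is shown; the exponential-map parameterization
    and gradient computation from the paper are omitted."""
    D = X[:, None, :] - Z[None, :, :]
    M = B.T @ B
    return np.exp(-np.einsum("ijk,kl,ijl->ij", D, M, D))

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
B = np.eye(3) + 0.1 * rng.normal(size=(3, 3))   # hypothetical adapted transform
print(np.round(general_gaussian_kernel(X, X, B), 2))
```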

  10. Analog forecasting with dynamics-adapted kernels

    Science.gov (United States)

    Zhao, Zhizhen; Giannakis, Dimitrios

    2016-09-01

    Analog forecasting is a nonparametric technique introduced by Lorenz in 1969 which predicts the evolution of states of a dynamical system (or observables defined on the states) by following the evolution of the sample in a historical record of observations which most closely resembles the current initial data. Here, we introduce a suite of forecasting methods which improve traditional analog forecasting by combining ideas from kernel methods developed in harmonic analysis and machine learning and state-space reconstruction for dynamical systems. A key ingredient of our approach is to replace single-analog forecasting with weighted ensembles of analogs constructed using local similarity kernels. The kernels used here employ a number of dynamics-dependent features designed to improve forecast skill, including Takens’ delay-coordinate maps (to recover information in the initial data lost through partial observations) and a directional dependence on the dynamical vector field generating the data. Mathematically, our approach is closely related to kernel methods for out-of-sample extension of functions, and we discuss alternative strategies based on the Nyström method and the multiscale Laplacian pyramids technique. We illustrate these techniques in applications to forecasting in a low-order deterministic model for atmospheric dynamics with chaotic metastability, and interannual-scale forecasting in the North Pacific sector of a comprehensive climate model. We find that forecasts based on kernel-weighted ensembles have significantly higher skill than the conventional approach following a single analog.
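
    A kernel-weighted analog forecast can be written in a few lines: embed the history in delay coordinates, weight past analogs of the current state with a Gaussian similarity kernel, and average their futures. The sketch below does exactly this on a toy signal; the delay length, bandwidth and data are arbitrary choices, and none of the dynamics-adapted refinements of the paper (directional dependence, Nyström or Laplacian-pyramid extensions) are included.

```python
import numpy as np

def kernel_analog_forecast(history, query, lead, epsilon, delays=3):
    """Kernel-weighted analog forecast sketch: embed the history in delay
    coordinates, weight past analogs of the query state with a Gaussian
    similarity kernel, and average their values 'lead' steps ahead."""
    idx = range(delays - 1, len(history) - lead)
    H = np.array([history[i - delays + 1:i + 1] for i in idx])
    targets = np.array([history[i + lead] for i in idx])
    w = np.exp(-np.sum((H - query) ** 2, axis=1) / epsilon)
    return np.sum(w * targets) / np.sum(w)

rng = np.random.default_rng(0)
t = np.arange(2000)
x = np.sin(0.1 * t) + 0.05 * rng.normal(size=t.size)   # toy observable
query = x[-3:]                                          # last 3 samples as the delay vector
print("forecast 5 steps ahead:",
      round(kernel_analog_forecast(x[:-1], query, lead=5, epsilon=0.1), 3))
```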

  11. Deviations from Wick's theorem in the canonical ensemble

    Science.gov (United States)

    Schönhammer, K.

    2017-07-01

    Wick's theorem for the expectation values of products of field operators for a system of noninteracting fermions or bosons plays an important role in the perturbative approach to the quantum many-body problem. A finite-temperature version holds in the framework of the grand canonical ensemble, but not for the canonical ensemble appropriate for systems with fixed particle number such as ultracold quantum gases in optical lattices. Here we present formulas for expectation values of products of field operators in the canonical ensemble using a method in the spirit of Gaudin's proof of Wick's theorem for the grand canonical case. The deviations from Wick's theorem are examined quantitatively for two simple models of noninteracting fermions.

  12. Support vector machines for nonlinear kernel ARMA system identification.

    Science.gov (United States)

    Martínez-Ramón, Manel; Rojo-Alvarez, José Luis; Camps-Valls, Gustavo; Muñioz-Marí, Jordi; Navia-Vázquez, Angel; Soria-Olivas, Emilio; Figueiras-Vidal, Aníbal R

    2006-11-01

    Nonlinear system identification based on support vector machines (SVM) has been usually addressed by means of the standard SVM regression (SVR), which can be seen as an implicit nonlinear autoregressive and moving average (ARMA) model in some reproducing kernel Hilbert space (RKHS). The proposal of this letter is twofold. First, the explicit consideration of an ARMA model in an RKHS (SVM-ARMA2K) is proposed. We show that stating the ARMA equations in an RKHS leads to solving the regularized normal equations in that RKHS, in terms of the autocorrelation and cross correlation of the (nonlinearly) transformed input and output discrete time processes. Second, a general class of SVM-based system identification nonlinear models is presented, based on the use of composite Mercer's kernels. This general class can improve model flexibility by emphasizing the input-output cross information (SVM-ARMA4K), which leads to straightforward and natural combinations of implicit and explicit ARMA models (SVR-ARMA2K and SVR-ARMA4K). Capabilities of these different SVM-based system identification schemes are illustrated with two benchmark problems.
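
    The baseline construction that the letter starts from, namely an implicit nonlinear model obtained by feeding lagged inputs and outputs to a standard SVR with an RBF kernel, is easy to sketch. The explicit SVM-ARMA2K/ARMA4K formulations with composite kernels are not reproduced here, and the toy system below is our own.

```python
import numpy as np
from sklearn.svm import SVR

# Toy nonlinear system (made up for illustration): the output depends on two
# lagged outputs and two lagged inputs.
rng = np.random.default_rng(0)
n = 600
u = rng.normal(size=n)
y = np.zeros(n)
for k in range(2, n):
    y[k] = (0.5 * y[k - 1] - 0.3 * np.tanh(y[k - 2])
            + 0.8 * u[k - 1] + 0.1 * u[k - 2] + 0.05 * rng.normal())

# Implicit nonlinear identification with standard SVR: regress y_t on
# (y_{t-1}, y_{t-2}, u_{t-1}, u_{t-2}) through an RBF kernel.
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])
target = y[2:]

model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(Phi[:500], target[:500])
print("held-out R^2:", round(model.score(Phi[500:], target[500:]), 3))
```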

  13. Open Problem: Kernel methods on manifolds and metric spaces

    DEFF Research Database (Denmark)

    Feragen, Aasa; Hauberg, Søren

    2016-01-01

    Radial kernels are well-suited for machine learning over general geodesic metric spaces, where pairwise distances are often the only computable quantity available. We have recently shown that geodesic exponential kernels are only positive definite for all bandwidths when the input space has strong...... linear properties. This negative result hints that radial kernel are perhaps not suitable over geodesic metric spaces after all. Here, however, we present evidence that large intervals of bandwidths exist where geodesic exponential kernels have high probability of being positive definite over finite...... datasets, while still having significant predictive power. From this we formulate conjectures on the probability of a positive definite kernel matrix for a finite random sample, depending on the geometry of the data space and the spread of the sample....

  14. Genetic dissection of the maize kernel development process via conditional QTL mapping for three developing kernel-related traits in an immortalized F2 population.

    Science.gov (United States)

    Zhang, Zhanhui; Wu, Xiangyuan; Shi, Chaonan; Wang, Rongna; Li, Shengfei; Wang, Zhaohui; Liu, Zonghua; Xue, Yadong; Tang, Guiliang; Tang, Jihua

    2016-02-01

    Kernel development is an important dynamic trait that determines the final grain yield in maize. To dissect the genetic basis of maize kernel development process, a conditional quantitative trait locus (QTL) analysis was conducted using an immortalized F2 (IF2) population comprising 243 single crosses at two locations over 2 years. Volume (KV) and density (KD) of dried developing kernels, together with kernel weight (KW) at different developmental stages, were used to describe dynamic changes during kernel development. Phenotypic analysis revealed that final KW and KD were determined at DAP22 and KV at DAP29. Unconditional QTL mapping for KW, KV and KD uncovered 97 QTLs at different kernel development stages, of which qKW6b, qKW7a, qKW7b, qKW10b, qKW10c, qKV10a, qKV10b and qKV7 were identified under multiple kernel developmental stages and environments. Among the 26 QTLs detected by conditional QTL mapping, conqKW7a, conqKV7a, conqKV10a, conqKD2, conqKD7 and conqKD8a were conserved between the two mapping methodologies. Furthermore, most of these QTLs were consistent with QTLs and genes for kernel development/grain filling reported in previous studies. These QTLs probably contain major genes associated with the kernel development process, and can be used to improve grain yield and quality through marker-assisted selection.

  15. Kernel-based noise filtering of neutron detector signals

    International Nuclear Information System (INIS)

    Park, Moon Ghu; Shin, Ho Cheol; Lee, Eun Ki

    2007-01-01

    This paper describes recently developed techniques for effective filtering of neutron detector signal noise. In this paper, three kinds of noise filters are proposed and their performance is demonstrated for the estimation of reactivity. The tested filters are based on the unilateral kernel filter, unilateral kernel filter with adaptive bandwidth and bilateral filter to show their effectiveness in edge preservation. Filtering performance is compared with conventional low-pass and wavelet filters. The bilateral filter shows a remarkable improvement compared with unilateral kernel and wavelet filters. The effectiveness and simplicity of the unilateral kernel filter with adaptive bandwidth is also demonstrated by applying it to the reactivity measurement performed during reactor start-up physics tests
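
    The edge-preserving behaviour mentioned above comes from weighting neighbours by both temporal distance and amplitude difference. Below is a generic 1-D bilateral filter sketch applied to a synthetic step-plus-noise signal; it is not the authors' implementation and the bandwidths are arbitrary.

```python
import numpy as np

def bilateral_filter_1d(signal, sigma_t=5.0, sigma_v=0.5, radius=15):
    """1-D bilateral filter sketch: each sample is replaced by a weighted
    average whose weights combine closeness in time (domain kernel) and
    closeness in amplitude (range kernel), so sharp level changes survive."""
    out = np.empty_like(signal, dtype=float)
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        idx = np.arange(lo, hi)
        w = (np.exp(-0.5 * ((idx - i) / sigma_t) ** 2) *
             np.exp(-0.5 * ((signal[idx] - signal[i]) / sigma_v) ** 2))
        out[i] = np.sum(w * signal[idx]) / np.sum(w)
    return out

# Step change buried in noise: the edge is preserved while the noise is smoothed.
rng = np.random.default_rng(0)
x = np.concatenate([np.zeros(200), np.ones(200)]) + 0.2 * rng.normal(size=400)
print(np.round(bilateral_filter_1d(x)[195:205], 2))
```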

  16. Intersubject information mapping: revealing canonical representations of complex natural stimuli

    Directory of Open Access Journals (Sweden)

    Nikolaus Kriegeskorte

    2015-03-01

    Full Text Available Real-world time-continuous stimuli such as video promise greater naturalism for studies of brain function. However, modeling the stimulus variation is challenging and introduces a bias in favor of particular descriptive dimensions. Alternatively, we can look for brain regions whose signal is correlated between subjects, essentially using one subject to model another. Intersubject correlation mapping (ICM) allows us to find brain regions driven in a canonical manner across subjects by a complex natural stimulus. However, it requires a direct voxel-to-voxel match between the spatiotemporal activity patterns and is thus only sensitive to common activations sufficiently extended to match up in Talairach space (or in an alternative, e.g. cortical-surface-based, common brain space). Here we introduce the more general approach of intersubject information mapping (IIM). For each brain region, IIM determines how much information is shared between the subjects' local spatiotemporal activity patterns. We estimate the intersubject mutual information using canonical correlation analysis applied to voxels within a spherical searchlight centered on each voxel in turn. The intersubject information estimate is invariant to linear transforms including spatial rearrangement of the voxels within the searchlight. This invariance to local encoding will be crucial in exploring fine-grained brain representations, which cannot be matched up in a common space and, more fundamentally, might be unique to each individual – like fingerprints. IIM yields a continuous brain map, which reflects intersubject information in fine-grained patterns. Performed on data from functional magnetic resonance imaging (fMRI) of subjects viewing the same television show, IIM and ICM both highlighted sensory representations, including primary visual and auditory cortices. However, IIM revealed additional regions in higher association cortices, namely temporal pole and orbitofrontal cortex. These...
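
    The searchlight statistic can be approximated in a few lines: take the two subjects' local time-by-voxel patterns and compute canonical correlations between them. The sketch below sums the leading canonical correlations on simulated data as a crude stand-in for the mutual-information estimate described in the abstract.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def searchlight_cca_score(patch_a, patch_b, n_components=2):
    """Sum of the leading canonical correlations between two subjects' local
    (time x voxel) patterns; a crude stand-in for the intersubject
    mutual-information estimate described above."""
    cca = CCA(n_components=n_components).fit(patch_a, patch_b)
    U, V = cca.transform(patch_a, patch_b)
    return sum(np.corrcoef(U[:, k], V[:, k])[0, 1] for k in range(n_components))

rng = np.random.default_rng(0)
T, nvox = 300, 27                     # time points, voxels in one searchlight
shared = rng.normal(size=(T, 3))      # stimulus-driven signal common to both subjects
A = shared @ rng.normal(size=(3, nvox)) + rng.normal(size=(T, nvox))
B = shared @ rng.normal(size=(3, nvox)) + rng.normal(size=(T, nvox))  # different voxel weights
print("intersubject CCA score:", round(searchlight_cca_score(A, B), 2))
```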

  17. A trace ratio maximization approach to multiple kernel-based dimensionality reduction.

    Science.gov (United States)

    Jiang, Wenhao; Chung, Fu-lai

    2014-01-01

    Most dimensionality reduction techniques are based on one metric or one kernel, hence it is necessary to select an appropriate kernel for kernel-based dimensionality reduction. Multiple kernel learning for dimensionality reduction (MKL-DR) has been recently proposed to learn a kernel from a set of base kernels which are seen as different descriptions of data. As MKL-DR does not involve regularization, it might be ill-posed under some conditions and consequently its applications are hindered. This paper proposes a multiple kernel learning framework for dimensionality reduction based on regularized trace ratio, termed as MKL-TR. Our method aims at learning a transformation into a space of lower dimension and a corresponding kernel from the given base kernels among which some may not be suitable for the given data. The solutions for the proposed framework can be found based on trace ratio maximization. The experimental results demonstrate its effectiveness in benchmark datasets, which include text, image and sound datasets, for supervised, unsupervised as well as semi-supervised settings. Copyright © 2013 Elsevier Ltd. All rights reserved.

  18. Canonical pseudotensors, Sparling's form and Noether currents

    International Nuclear Information System (INIS)

    Szabados, L.B.

    1991-09-01

    The canonical energy-momentum and spin pseudotensors of the Einstein theory are studied in two ways. First they are studied in the framework of the Lagrangian formalism. It is shown that for a first-order Lagrangian and a rigid basis description the canonical energy-momentum, the canonical spin, and the Noether current are tensorial quantities, and the canonical energy-momentum and spin tensors satisfy the tensorial Belinfante-Rosenfeld equations. Then a differential geometric unification and reformulation of the previous, different pseudotensorial approaches is given. Finally, for any vector field on the spacetime an (m-1)-form, called the Noether form, is defined. (K.A.) 34 refs

  19. Predictive Model Equations for Palm Kernel (Elaeis guneensis J ...

    African Journals Online (AJOL)

    Estimated errors of ±0.18 and ±0.2 are envisaged when applying the models to predict palm kernel and sesame oil colours, respectively. Keywords: Palm kernel, Sesame, Oil Colour, Process Parameters, Model. Journal of Applied Science, Engineering and Technology Vol. 6 (1) 2006 pp. 34-38 ...

  20. Heat kernel analysis for Bessel operators on symmetric cones

    DEFF Research Database (Denmark)

    Möllers, Jan

    2014-01-01

    . The heat kernel is explicitly given in terms of a multivariable $I$-Bessel function on $Ω$. Its corresponding heat kernel transform defines a continuous linear operator between $L^p$-spaces. The unitary image of the $L^2$-space under the heat kernel transform is characterized as a weighted Bergmann space...

  1. A multi-scale kernel bundle for LDDMM

    DEFF Research Database (Denmark)

    Sommer, Stefan Horst; Nielsen, Mads; Lauze, Francois Bernard

    2011-01-01

    The Large Deformation Diffeomorphic Metric Mapping framework constitutes a widely used and mathematically well-founded setup for registration in medical imaging. At its heart lies the notion of the regularization kernel, and the choice of kernel greatly affects the results of registrations...

  2. The use of near infrared transmittance kernel sorting technology to salvage high quality grain from grain downgraded due to Fusarium damage

    Directory of Open Access Journals (Sweden)

    Michael E. Kautzman

    2015-03-01

    Full Text Available The mycotoxins associated with specific Fusarium fungal infections of grains are a threat to global food and feed security. These fungal infestations are referred to as Fusarium Head Blight (FHB) and lead to Fusarium Damaged Kernels (FDK). Incidence of FDK >0.25% will lower the grade, with a tolerance of 5% FDK for export feed grain. During infestation, the fungi can produce a variety of mycotoxins, the most common being deoxynivalenol (DON). Fusarium Damaged Kernels have been associated with reduced crude protein (CP), lowering nutritional, functional and grade value. New technology has been developed using Near Infrared Transmittance (NIT) spectra that estimate CP of individual kernels of wheat, barley and durum. Our objective is to evaluate the technology's capability to reduce FDK and DON of downgraded wheat and ability to salvage high quality safe kernels. In five FDK downgraded sources of wheat, the lowest 20% CP kernels had significantly increased FDK and DON, with the high CP fractions having decreased FDK and DON, thousand kernel weights (TKW) and bushel weight (Bu). Strong positive correlations were observed between FDK and DON (r = 0.90), FDK and grade (r = 0.62) and DON and grade (r = 0.62). Negative correlations were observed between FDK and DON with CP (r = −0.27 and −0.32), TKW (r = −0.45 and −0.54) and Bu (r = −0.79 and −0.74). Results show improved quality and value of Fusarium downgraded grain using this technology.

  3. Training Lp norm multiple kernel learning in the primal.

    Science.gov (United States)

    Liang, Zhizheng; Xia, Shixiong; Zhou, Yong; Zhang, Lei

    2013-10-01

    Some multiple kernel learning (MKL) models are usually solved by utilizing the alternating optimization method where one alternately solves SVMs in the dual and updates kernel weights. Since the dual and primal optimization can achieve the same aim, it is valuable to explore how to perform Lp norm MKL in the primal. In this paper, we propose an Lp norm multiple kernel learning algorithm in the primal where we resort to the alternating optimization method: one cycle for solving SVMs in the primal by using the preconditioned conjugate gradient method and the other cycle for learning the kernel weights. It is interesting to note that the kernel weights in our method can obtain analytical solutions. Most importantly, the proposed method is well suited for the manifold regularization framework in the primal since solving LapSVMs in the primal is much more effective than solving LapSVMs in the dual. In addition, we also carry out theoretical analysis for multiple kernel learning in the primal in terms of the empirical Rademacher complexity. It is found that optimizing the empirical Rademacher complexity may yield a particular type of kernel weights. The experiments on some datasets are carried out to demonstrate the feasibility and effectiveness of the proposed method. Copyright © 2013 Elsevier Ltd. All rights reserved.
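
    The abstract notes that the kernel weights admit analytical solutions. A commonly used closed form for Lp-norm MKL weights is sketched below for illustration; it is a generic textbook update and may differ in detail from the paper's derivation:

```python
# Closed-form kernel-weight update used in many Lp-norm MKL solvers
# (generic form shown for illustration; the paper's primal SVM/LapSVM
# solver is not reproduced here).
import numpy as np

def lp_mkl_weights(w_norms, p):
    """Given per-kernel weight-vector norms ||w_m||, return beta with sum(beta**p) == 1."""
    w_norms = np.asarray(w_norms, dtype=float)
    num = w_norms ** (2.0 / (p + 1.0))
    den = np.sum(w_norms ** (2.0 * p / (p + 1.0))) ** (1.0 / p)
    return num / den

beta = lp_mkl_weights([0.5, 1.2, 0.1], p=2.0)
print(beta, (beta ** 2).sum())   # the constraint sum(beta^p) = 1 holds
```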

  4. Coupling individual kernel-filling processes with source-sink interactions into GREENLAB-Maize.

    Science.gov (United States)

    Ma, Yuntao; Chen, Youjia; Zhu, Jinyu; Meng, Lei; Guo, Yan; Li, Baoguo; Hoogenboom, Gerrit

    2018-02-13

    Failure to account for the variation of kernel growth in a cereal crop simulation model may cause serious deviations in the estimates of crop yield. The goal of this research was to revise the GREENLAB-Maize model to incorporate source- and sink-limited allocation approaches to simulate the dry matter accumulation of individual kernels of an ear (GREENLAB-Maize-Kernel). The model used potential individual kernel growth rates to characterize the individual potential sink demand. The remobilization of non-structural carbohydrates from reserve organs to kernels was also incorporated. Two years of field experiments were conducted to determine the model parameter values and to evaluate the model using two maize hybrids with different plant densities and pollination treatments. Detailed observations were made on the dimensions and dry weights of individual kernels and other above-ground plant organs throughout the seasons. Three basic traits characterizing an individual kernel were compared on simulated and measured individual kernels: (1) final kernel size; (2) kernel growth rate; and (3) duration of kernel filling. Simulations of individual kernel growth closely corresponded to experimental data. The model was able to reproduce the observed dry weight of plant organs well. Then, the source-sink dynamics and the remobilization of carbohydrates for kernel growth were quantified to show that remobilization processes accompanied source-sink dynamics during the kernel-filling process. We conclude that the model may be used to explore options for optimizing plant kernel yield by matching maize management to the environment, taking into account responses at the level of individual kernels. © The Author(s) 2018. Published by Oxford University Press on behalf of the Annals of Botany Company. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  5. Stochastic subset selection for learning with kernel machines.

    Science.gov (United States)

    Rhinelander, Jason; Liu, Xiaoping P

    2012-06-01

    Kernel machines have gained much popularity in applications of machine learning. Support vector machines (SVMs) are a subset of kernel machines and generalize well for classification, regression, and anomaly detection tasks. The training procedure for traditional SVMs involves solving a quadratic programming (QP) problem. The QP problem scales super linearly in computational effort with the number of training samples and is often used for the offline batch processing of data. Kernel machines operate by retaining a subset of observed data during training. The data vectors contained within this subset are referred to as support vectors (SVs). The work presented in this paper introduces a subset selection method for the use of kernel machines in online, changing environments. Our algorithm works by using a stochastic indexing technique when selecting a subset of SVs when computing the kernel expansion. The work described here is novel because it separates the selection of kernel basis functions from the training algorithm used. The subset selection algorithm presented here can be used in conjunction with any online training technique. It is important for online kernel machines to be computationally efficient due to the real-time requirements of online environments. Our algorithm is an important contribution because it scales linearly with the number of training samples and is compatible with current training techniques. Our algorithm outperforms standard techniques in terms of computational efficiency and provides increased recognition accuracy in our experiments. We provide results from experiments using both simulated and real-world data sets to verify our algorithm.

  6. RTOS kernel in portable electrocardiograph

    Science.gov (United States)

    Centeno, C. A.; Voos, J. A.; Riva, G. G.; Zerbini, C.; Gonzalez, E. A.

    2011-12-01

    This paper presents the use of a Real Time Operating System (RTOS) on a portable electrocardiograph based on a microcontroller platform. All medical device digital functions are performed by the microcontroller. The electrocardiograph CPU is based on the 18F4550 microcontroller, in which an uCOS-II RTOS can be embedded. The decision associated with the kernel use is based on its benefits, the license for educational use and its intrinsic time control and peripherals management. The feasibility of its use on the electrocardiograph is evaluated based on the minimum memory requirements due to the kernel structure. The kernel's own tools were used for time estimation and evaluation of resources used by each process. After this feasibility analysis, the migration from cyclic code to a structure based on separate processes or tasks able to synchronize events is used; resulting in an electrocardiograph running on one Central Processing Unit (CPU) based on RTOS.

  7. RTOS kernel in portable electrocardiograph

    International Nuclear Information System (INIS)

    Centeno, C A; Voos, J A; Riva, G G; Zerbini, C; Gonzalez, E A

    2011-01-01

    This paper presents the use of a Real Time Operating System (RTOS) on a portable electrocardiograph based on a microcontroller platform. All medical device digital functions are performed by the microcontroller. The electrocardiograph CPU is based on the 18F4550 microcontroller, in which an uCOS-II RTOS can be embedded. The decision associated with the kernel use is based on its benefits, the license for educational use and its intrinsic time control and peripherals management. The feasibility of its use on the electrocardiograph is evaluated based on the minimum memory requirements due to the kernel structure. The kernel's own tools were used for time estimation and evaluation of resources used by each process. After this feasibility analysis, the migration from cyclic code to a structure based on separate processes or tasks able to synchronize events is used; resulting in an electrocardiograph running on one Central Processing Unit (CPU) based on RTOS.

  8. RKRD: Runtime Kernel Rootkit Detection

    Science.gov (United States)

    Grover, Satyajit; Khosravi, Hormuzd; Kolar, Divya; Moffat, Samuel; Kounavis, Michael E.

    In this paper we address the problem of protecting computer systems against stealth malware. The problem is important because the number of known types of stealth malware increases exponentially. Existing approaches have some advantages for ensuring system integrity but sophisticated techniques utilized by stealthy malware can thwart them. We propose Runtime Kernel Rootkit Detection (RKRD), a hardware-based, event-driven, secure and inclusionary approach to kernel integrity that addresses some of the limitations of the state of the art. Our solution is based on the principles of using virtualization hardware for isolation, verifying signatures coming from trusted code as opposed to malware for scalability and performing system checks driven by events. Our RKRD implementation is guided by our goals of strong isolation, no modifications to target guest OS kernels, easy deployment, minimal infrastructure impact, and minimal performance overhead. We developed a system prototype and conducted a number of experiments which show that the performance impact of our solution is negligible.

  9. Denoising by semi-supervised kernel PCA preimaging

    DEFF Research Database (Denmark)

    Hansen, Toke Jansen; Abrahamsen, Trine Julie; Hansen, Lars Kai

    2014-01-01

    Kernel Principal Component Analysis (PCA) has proven a powerful tool for nonlinear feature extraction, and is often applied as a pre-processing step for classification algorithms. In denoising applications Kernel PCA provides the basis for dimensionality reduction, prior to the so-called pre-imag...

  10. Sentiment classification with interpolated information diffusion kernels

    NARCIS (Netherlands)

    Raaijmakers, S.

    2007-01-01

    Information diffusion kernels - similarity metrics in non-Euclidean information spaces - have been found to produce state of the art results for document classification. In this paper, we present a novel approach to global sentiment classification using these kernels. We carry out a large array of

  11. Sequence detection analysis based on canonical correlation for steady-state visual evoked potential brain computer interfaces.

    Science.gov (United States)

    Cao, Lei; Ju, Zhengyu; Li, Jie; Jian, Rongjun; Jiang, Changjun

    2015-09-30

    Steady-state visual evoked potentials (SSVEP) have been widely applied to develop brain computer interface (BCI) systems. The essence of SSVEP recognition is to identify the frequency component of the target stimulus attended by a subject that is significantly present in the EEG spectrum. In this paper, a novel statistical approach based on sequence detection (SD) is proposed for improving the performance of SSVEP recognition. The method uses canonical correlation analysis (CCA) coefficients to observe the SSVEP signal sequence, and a threshold strategy is then utilized for recognition. The results showed that classification accuracy increased with longer time windows for most subjects, while the average time cost per trial remained lower than the predefined recognition time, implying that the approach can improve the speed of a BCI system relative to other methods. In comparison with other effective algorithms, the experimental accuracy of the SD approach was better than that of a widely used CCA-based method and of two newly proposed algorithms, the least absolute shrinkage and selection operator (LASSO) recognition model and the multivariate synchronization index (MSI) method. Furthermore, the information transfer rate (ITR) obtained by the SD approach was higher than that of the other three methods for most participants. These results demonstrate that the proposed method is promising for a high-speed online BCI. Copyright © 2015 Elsevier B.V. All rights reserved.
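
    For readers unfamiliar with CCA-based SSVEP scoring, the following minimal sketch shows the standard construction of sinusoidal reference signals and the correlation score that a sequence-detection scheme could threshold. Frequencies, the threshold and the data are placeholders, not values from the paper:

```python
# CCA-based SSVEP frequency scoring, the building block that a sequence-
# detection (SD) approach thresholds over time. Illustrative values only.
import numpy as np
from sklearn.cross_decomposition import CCA

def cca_score(eeg, freq, fs, n_harmonics=2):
    """Max canonical correlation between an EEG segment and sin/cos references."""
    n = eeg.shape[0]                       # eeg: (n_samples, n_channels)
    t = np.arange(n) / fs
    refs = np.column_stack([f(2 * np.pi * h * freq * t)
                            for h in range(1, n_harmonics + 1)
                            for f in (np.sin, np.cos)])
    cca = CCA(n_components=1)
    u, v = cca.fit_transform(eeg, refs)
    return np.corrcoef(u[:, 0], v[:, 0])[0, 1]

fs, stim_freqs = 250, [8.0, 10.0, 12.0]
eeg = np.random.randn(2 * fs, 8)           # 2 s of 8-channel noise as a stand-in
scores = [cca_score(eeg, f, fs) for f in stim_freqs]
best = int(np.argmax(scores))
threshold = 0.3                            # hypothetical decision threshold
print("detected" if scores[best] > threshold else "undecided", stim_freqs[best])
```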

  12. Free Fermions and the Classical Compact Groups

    Science.gov (United States)

    Cunden, Fabio Deelan; Mezzadri, Francesco; O'Connell, Neil

    2018-06-01

    There is a close connection between the ground state of non-interacting fermions in a box with classical (absorbing, reflecting, and periodic) boundary conditions and the eigenvalue statistics of the classical compact groups. The associated determinantal point processes can be extended in two natural directions: (i) we consider the full family of admissible quantum boundary conditions (i.e., self-adjoint extensions) for the Laplacian on a bounded interval, and the corresponding projection correlation kernels; (ii) we construct the grand canonical extensions at finite temperature of the projection kernels, interpolating from Poisson to random matrix eigenvalue statistics. The scaling limits in the bulk and at the edges are studied in a unified framework, and the question of universality is addressed. Whether the finite temperature determinantal processes correspond to the eigenvalue statistics of some matrix models is, a priori, not obvious. We complete the picture by constructing a finite temperature extension of the Haar measure on the classical compact groups. The eigenvalue statistics of the resulting grand canonical matrix models (of random size) corresponds exactly to the grand canonical measure of free fermions with classical boundary conditions.

  13. Free Fermions and the Classical Compact Groups

    Science.gov (United States)

    Cunden, Fabio Deelan; Mezzadri, Francesco; O'Connell, Neil

    2018-04-01

    There is a close connection between the ground state of non-interacting fermions in a box with classical (absorbing, reflecting, and periodic) boundary conditions and the eigenvalue statistics of the classical compact groups. The associated determinantal point processes can be extended in two natural directions: (i) we consider the full family of admissible quantum boundary conditions (i.e., self-adjoint extensions) for the Laplacian on a bounded interval, and the corresponding projection correlation kernels; (ii) we construct the grand canonical extensions at finite temperature of the projection kernels, interpolating from Poisson to random matrix eigenvalue statistics. The scaling limits in the bulk and at the edges are studied in a unified framework, and the question of universality is addressed. Whether the finite temperature determinantal processes correspond to the eigenvalue statistics of some matrix models is, a priori, not obvious. We complete the picture by constructing a finite temperature extension of the Haar measure on the classical compact groups. The eigenvalue statistics of the resulting grand canonical matrix models (of random size) corresponds exactly to the grand canonical measure of free fermions with classical boundary conditions.

  14. Canonical symmetry of a constrained Hamiltonian system and canonical Ward identity

    International Nuclear Information System (INIS)

    Li, Zi-ping

    1995-01-01

    An algorithm for the construction of the generators of the gauge transformation of a constrained Hamiltonian system is given. The relationships among the coefficients connecting the first-class constraints in the generator are made clear. Starting from the phase space generating function of the Green function, the Ward identity in canonical formalism is deduced. We point out that the quantum equations of motion in canonical form for a system with a singular Lagrangian differ from the classical ones whether Dirac's conjecture holds true or not. Applications of the present formulation to the Abelian and non-Abelian gauge theories are given. The expressions for PCAC and generalized PCAC of the AVV vertex are derived exactly from another point of view. A new form of the Ward identity for gauge-ghost proper vertices is obtained which differs from the usual Ward-Takahashi identity arising from the BRS invariance.

  15. Canonical transformations in problems of quantum statistical mechanics

    International Nuclear Information System (INIS)

    Sankovich, D.P.

    1985-01-01

    The problem of general canonical transformations in quantum systems possessing a classical analog is considered. The main role is played by the Weyl representation of the dynamical variables of the quantum system. A general scheme of canonical transformations in the quantum case is constructed, and a method is developed for reducing a given operator to its simplest canonical form. The procedure is analogous to the Poincaré-Birkhoff normalization based on Lie series theory.

  16. The Resurrection of Jesus: do extra-canonical sources change the landscape?

    Directory of Open Access Journals (Sweden)

    F P Viljoen

    2005-10-01

    Full Text Available The resurrection of Jesus is assumed by the New Testament to be a historical event. Some scholars argue, however, that there was no empty tomb, but that the New Testament accounts are midrashic or mythological stories about Jesus. In this article extra-canonical writings are investigated to find out what light they may throw on intra-canonical tradition. Many extra-canonical texts seemingly have no knowledge of the passion and resurrection, and such traditions may be earlier than the intra-canonical traditions. Was the resurrection a later invention? Are intra-canonical texts developments of extra-canonical tradition, or vice versa? This article demonstrates that extra-canonical texts do not materially alter the landscape of enquiry.

  17. Pair natural orbital and canonical coupled cluster reaction enthalpies involving light to heavy alkali and alkaline earth metals: the importance of sub-valence correlation.

    Science.gov (United States)

    Minenkov, Yury; Bistoni, Giovanni; Riplinger, Christoph; Auer, Alexander A; Neese, Frank; Cavallo, Luigi

    2017-04-05

    In this work, we tested canonical and domain based pair natural orbital coupled cluster methods (CCSD(T) and DLPNO-CCSD(T), respectively) for a set of 32 ligand exchange and association/dissociation reaction enthalpies involving ionic complexes of Li, Be, Na, Mg, Ca, Sr, Ba and Pb(ii). Two strategies were investigated: in the former, only valence electrons were included in the correlation treatment, giving rise to the computationally very efficient FC (frozen core) approach; in the latter, all non-ECP electrons were included in the correlation treatment, giving rise to the AE (all electron) approach. Apart from reactions involving Li and Be, the FC approach resulted in non-homogeneous performance: it leads to very small errors for some reactions but to large errors for others, owing to the neglect of sub-valence correlation effects. These large errors are reduced to a few kcal mol⁻¹ if the AE approach is used or the sub-valence orbitals of the metals are included in the correlation treatment. On the technical side, the CCSD(T) and DLPNO-CCSD(T) results differ by a fraction of a kcal mol⁻¹, indicating the latter method as the perfect choice when CPU efficiency is essential. For completely black-box applications, as requested in catalysis or thermochemical calculations, we recommend the DLPNO-CCSD(T) method with all electrons that are not covered by effective core potentials included in the correlation treatment and correlation-consistent polarized core valence basis sets of cc-pwCVQZ(-PP) quality.

  18. Linear and kernel methods for multivariate change detection

    DEFF Research Database (Denmark)

    Canty, Morton J.; Nielsen, Allan Aasbjerg

    2012-01-01

    Maximum autocorrelation factor (MAF) and minimum noise fraction (MNF) analyses of IR-MAD images, both linear and kernel-based (nonlinear), may further enhance change signals relative to no-change background. IDL (Interactive Data Language) implementations of IR-MAD, automatic radiometric normalization, and kernel PCA/MAF/MNF transformations are presented that function as transparent and fully integrated extensions of the ENVI remote sensing image analysis environment. The train/test approach to kernel PCA is evaluated against a Hebbian learning procedure. Matlab code is also available that allows fast data exploration and experimentation with smaller datasets. New, multiresolution versions of IR-MAD that accelerate convergence and that further reduce no-change background noise are introduced. Computationally expensive matrix diagonalization and kernel image projections are programmed...

  19. Panel data specifications in nonparametric kernel regression

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard; Henningsen, Arne

    parametric panel data estimators to analyse the production technology of Polish crop farms. The results of our nonparametric kernel regressions generally differ from the estimates of the parametric models but they only slightly depend on the choice of the kernel functions. Based on economic reasoning, we...

  20. Scuba: scalable kernel-based gene prioritization.

    Science.gov (United States)

    Zampieri, Guido; Tran, Dinh Van; Donini, Michele; Navarin, Nicolò; Aiolli, Fabio; Sperduti, Alessandro; Valle, Giorgio

    2018-01-25

    The uncovering of genes linked to human diseases is a pressing challenge in molecular biology and precision medicine. This task is often hindered by the large number of candidate genes and by the heterogeneity of the available information. Computational methods for the prioritization of candidate genes can help to cope with these problems. In particular, kernel-based methods are a powerful resource for the integration of heterogeneous biological knowledge, however, their practical implementation is often precluded by their limited scalability. We propose Scuba, a scalable kernel-based method for gene prioritization. It implements a novel multiple kernel learning approach, based on a semi-supervised perspective and on the optimization of the margin distribution. Scuba is optimized to cope with strongly unbalanced settings where known disease genes are few and large scale predictions are required. Importantly, it is able to efficiently deal both with a large amount of candidate genes and with an arbitrary number of data sources. As a direct consequence of scalability, Scuba integrates also a new efficient strategy to select optimal kernel parameters for each data source. We performed cross-validation experiments and simulated a realistic usage setting, showing that Scuba outperforms a wide range of state-of-the-art methods. Scuba achieves state-of-the-art performance and has enhanced scalability compared to existing kernel-based approaches for genomic data. This method can be useful to prioritize candidate genes, particularly when their number is large or when input data is highly heterogeneous. The code is freely available at https://github.com/gzampieri/Scuba .

  1. Extracting drug mechanism and pharmacodynamic information from clinical electroencephalographic data using generalised semi-linear canonical correlation analysis

    International Nuclear Information System (INIS)

    Brain, P; Strimenopoulou, F; Ivarsson, M; Wilson, F J; Diukova, A; Wise, R G; Berry, E; Jolly, A; Hall, J E

    2014-01-01

    Conventional analysis of clinical resting electroencephalography (EEG) recordings typically involves assessment of spectral power in pre-defined frequency bands at specific electrodes. EEG is a potentially useful technique in drug development for measuring the pharmacodynamic (PD) effects of a centrally acting compound and hence to assess the likelihood of success of a novel drug based on pharmacokinetic–pharmacodynamic (PK–PD) principles. However, the need to define the electrodes and spectral bands to be analysed a priori is limiting where the nature of the drug-induced EEG effects is initially not known. We describe the extension to human EEG data of a generalised semi-linear canonical correlation analysis (GSLCCA), developed for small animal data. GSLCCA uses data from the whole spectrum, the entire recording duration and multiple electrodes. It provides interpretable information on the mechanism of drug action and a PD measure suitable for use in PK–PD modelling. Data from a study with low (analgesic) doses of the μ-opioid agonist, remifentanil, in 12 healthy subjects were analysed using conventional spectral edge analysis and GSLCCA. At this low dose, the conventional analysis was unsuccessful but plausible results consistent with previous observations were obtained using GSLCCA, confirming that GSLCCA can be successfully applied to clinical EEG data. (paper)

  2. MULTITASKER, Multitasking Kernel for C and FORTRAN Under UNIX

    International Nuclear Information System (INIS)

    Brooks, E.D. III

    1988-01-01

    1 - Description of program or function: MULTITASKER implements a multitasking kernel for the C and FORTRAN programming languages that runs under UNIX. The kernel provides a multitasking environment which serves two purposes. The first is to provide an efficient portable environment for the development, debugging, and execution of production multiprocessor programs. The second is to provide a means of evaluating the performance of a multitasking program on model multiprocessor hardware. The performance evaluation features require no changes in the application program source and are implemented as a set of compile- and run-time options in the kernel. 2 - Method of solution: The FORTRAN interface to the kernel is identical in function to the CRI multitasking package provided for the Cray XMP. This provides a migration path to high speed (but small N) multiprocessors once the application has been coded and debugged. With use of the UNIX m4 macro preprocessor, source compatibility can be achieved between the UNIX code development system and the target Cray multiprocessor. The kernel also provides a means of evaluating a program's performance on model multiprocessors. Execution traces may be obtained which allow the user to determine kernel overhead, memory conflicts between various tasks, and the average concurrency being exploited. The kernel may also be made to switch tasks every cpu instruction with a random execution ordering. This allows the user to look for unprotected critical regions in the program. These features, implemented as a set of compile- and run-time options, cause extra execution overhead which is not present in the standard production version of the kernel

  3. Multiple kernel boosting framework based on information measure for classification

    International Nuclear Information System (INIS)

    Qi, Chengming; Wang, Yuping; Tian, Wenjie; Wang, Qun

    2016-01-01

    The performance of kernel-based methods, such as the support vector machine (SVM), is greatly affected by the choice of kernel function. Multiple kernel learning (MKL) is a promising family of machine learning algorithms and has attracted much attention in recent years. MKL combines multiple sub-kernels to seek better results than single kernel learning. In order to improve the efficiency of SVM and MKL, in this paper the Kullback–Leibler kernel function is derived to develop SVM. The proposed method employs an improved ensemble learning framework, named KLMKB, which applies Adaboost to learning multiple kernel-based classifiers. In the experiment on hyperspectral remote sensing image classification, we employ features selected through the Optimum Index Factor (OIF) to classify the satellite image. We extensively examine the performance of our approach in comparison to some relevant and state-of-the-art algorithms on a number of benchmark classification data sets and a hyperspectral remote sensing image data set. Experimental results show that our method has stable behavior and noticeable accuracy for different data sets.

  4. Kernel Methods for Mining Instance Data in Ontologies

    Science.gov (United States)

    Bloehdorn, Stephan; Sure, York

    The amount of ontologies and meta data available on the Web is constantly growing. The successful application of machine learning techniques for learning ontologies from textual data, i.e. mining for the Semantic Web, contributes to this trend. However, no principled approaches exist so far for mining from the Semantic Web. We investigate how machine learning algorithms can be made amenable for directly taking advantage of the rich knowledge expressed in ontologies and associated instance data. Kernel methods have been successfully employed in various learning tasks and provide a clean framework for interfacing between non-vectorial data and machine learning algorithms. In this spirit, we express the problem of mining instances in ontologies as the problem of defining valid corresponding kernels. We present a principled framework for designing such kernels by means of decomposing the kernel computation into specialized kernels for selected characteristics of an ontology which can be flexibly assembled and tuned. Initial experiments on real-world Semantic Web data show promising results and the usefulness of our approach.

  5. Genomic-Enabled Prediction Kernel Models with Random Intercepts for Multi-environment Trials

    Science.gov (United States)

    Cuevas, Jaime; Granato, Italo; Fritsche-Neto, Roberto; Montesinos-Lopez, Osval A.; Burgueño, Juan; Bandeira e Sousa, Massaine; Crossa, José

    2018-01-01

    In this study, we compared the prediction accuracy of the main genotypic effect model (MM) without G×E interactions, the multi-environment single variance G×E deviation model (MDs), and the multi-environment environment-specific variance G×E deviation model (MDe) where the random genetic effects of the lines are modeled with the markers (or pedigree). With the objective of further modeling the genetic residual of the lines, we incorporated the random intercepts of the lines (l) and generated another three models. Each of these 6 models was fitted with a linear kernel method (Genomic Best Linear Unbiased Predictor, GB) and a Gaussian Kernel (GK) method. We compared these 12 model-method combinations with another two multi-environment G×E interactions models with unstructured variance-covariances (MUC) using GB and GK kernels (4 model-method). Thus, we compared the genomic-enabled prediction accuracy of a total of 16 model-method combinations on two maize data sets with positive phenotypic correlations among environments, and on two wheat data sets with complex G×E that includes some negative and close to zero phenotypic correlations among environments. The two models (MDs and MDe with the random intercept of the lines and the GK method) were computationally efficient and gave high prediction accuracy in the two maize data sets. Regarding the more complex G×E wheat data sets, the prediction accuracy of the model-method combination with G×E, MDs and MDe, including the random intercepts of the lines with the GK method had important savings in computing time as compared with the G×E interaction multi-environment models with unstructured variance-covariances but with lower genomic prediction accuracy. PMID:29476023
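
    For illustration, a hedged sketch of the two kernels compared in such genomic prediction studies, built from a marker matrix with common (assumed) centering and bandwidth choices rather than the exact settings of the paper:

```python
# Sketch of a linear GBLUP-type kernel (GB) and a Gaussian kernel (GK) built
# from a marker matrix (lines x markers). Centering and bandwidth heuristic
# are common choices, assumed here for illustration.
import numpy as np
from scipy.spatial.distance import squareform, pdist

def gb_kernel(M):
    """Linear (GBLUP-like) kernel from a centered marker matrix."""
    X = M - M.mean(axis=0)
    return X @ X.T / X.shape[1]

def gk_kernel(M, bandwidth=1.0):
    """Gaussian kernel with squared distances scaled by their median."""
    d2 = squareform(pdist(M, metric="sqeuclidean"))
    return np.exp(-bandwidth * d2 / np.median(d2[d2 > 0]))

M = np.random.binomial(2, 0.3, size=(50, 500)).astype(float)  # toy genotypes (0/1/2)
print(gb_kernel(M).shape, gk_kernel(M).shape)
```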

  6. Biasing anisotropic scattering kernels for deep-penetration Monte Carlo calculations

    International Nuclear Information System (INIS)

    Carter, L.L.; Hendricks, J.S.

    1983-01-01

    The exponential transform is often used to improve the efficiency of deep-penetration Monte Carlo calculations. This technique is usually implemented by biasing the distance-to-collision kernel of the transport equation, but leaving the scattering kernel unchanged. Dwivedi obtained significant improvements in efficiency by biasing an isotropic scattering kernel as well as the distance-to-collision kernel. This idea is extended to anisotropic scattering, particularly the highly forward Klein-Nishina scattering of gamma rays
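
    As a toy illustration of the exponential transform mentioned above, the sketch below biases only the distance-to-collision kernel in one dimension and corrects the particle weight accordingly; biasing the angular scattering kernel, which is the subject of the record, would be handled analogously and is not shown. All parameter values are illustrative.

```python
# 1-D illustration of the exponential transform: sample flight distances from a
# stretched (biased) exponential and correct the particle weight by the ratio of
# true to biased densities.
import numpy as np

rng = np.random.default_rng(1)
sigma_t = 1.0                   # true total macroscopic cross section (1/cm)
p = 0.7                         # transform parameter, 0 <= p < 1, stretches paths
sigma_b = sigma_t * (1.0 - p)   # biased cross section used for sampling

def sample_flight(n):
    """Sample biased flight distances and their weight corrections."""
    s = rng.exponential(1.0 / sigma_b, size=n)
    # weight = true pdf / biased pdf = (sigma_t/sigma_b) * exp(-(sigma_t - sigma_b) * s)
    w = (sigma_t / sigma_b) * np.exp(-(sigma_t - sigma_b) * s)
    return s, w

s, w = sample_flight(100_000)
# Biased mean free path, weighted (unbiased) estimate, and the true value 1/sigma_t.
print(s.mean(), np.sum(s * w) / len(s), 1.0 / sigma_t)
```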

  7. Dielectric properties of Zea mays kernels - studies for microwave power processing applications

    Energy Technology Data Exchange (ETDEWEB)

    Surducan, Emanoil; Neamtu, Camelia; Surducan, Vasile, E-mail: emanoil.surducan@itim-cj.r [National Institute for Research and Development of Isotopic and Molecular Technologies, 65-103 Donath, 400293 Cluj-Napoca (Romania)

    2009-08-01

    Microwave absorption in biological samples can be predicted from their specific dielectric properties. In this paper, the dielectric properties (ε′ and ε″) of corn (Zea mays) kernels in the 500 MHz - 20 GHz frequency range are presented. A short analysis of the microwave absorption process is also presented, in correlation with the specific thermal properties of the samples measured by the simultaneous TGA-DSC method.

  8. The integral first collision kernel method for gamma-ray skyshine analysis

    Energy Technology Data Exchange (ETDEWEB)

    Sheu, R.-D.; Chui, C.-S.; Jiang, S.-H. E-mail: shjiang@mx.nthu.edu.tw

    2003-12-01

    A simplified method, based on the integral of the first collision kernel, is presented for performing gamma-ray skyshine calculations for collimated sources. The first collision kernels were calculated in air for a reference air density by use of the EGS4 Monte Carlo code. These kernels can be applied to other air densities by applying density corrections. The integral first collision kernel (IFCK) method has been used to calculate two of the ANSI/ANS skyshine benchmark problems and the results were compared with those of a number of other commonly used codes. Our results were generally in good agreement with the others while requiring only a small fraction of the computation time of the Monte Carlo calculations. A scheme for applying the IFCK method to a variety of source collimation geometries is also presented in this study.

  9. Modern Canonical Quantum General Relativity

    Science.gov (United States)

    Thiemann, Thomas

    2008-11-01

    Preface; Notation and conventions; Introduction; Part I. Classical Foundations, Interpretation and the Canonical Quantisation Programme: 1. Classical Hamiltonian formulation of general relativity; 2. The problem of time, locality and the interpretation of quantum mechanics; 3. The programme of canonical quantisation; 4. The new canonical variables of Ashtekar for general relativity; Part II. Foundations of Modern Canonical Quantum General Relativity: 5. Introduction; 6. Step I: the holonomy-flux algebra [P]; 7. Step II: quantum *-algebra; 8. Step III: representation theory of [A]; 9. Step IV: 1. Implementation and solution of the kinematical constraints; 10. Step V: 2. Implementation and solution of the Hamiltonian constraint; 11. Step VI: semiclassical analysis; Part III. Physical Applications: 12. Extension to standard matter; 13. Kinematical geometrical operators; 14. Spin foam models; 15. Quantum black hole physics; 16. Applications to particle physics and quantum cosmology; 17. Loop quantum gravity phenomenology; Part IV. Mathematical Tools and their Connection to Physics: 18. Tools from general topology; 19. Differential, Riemannian, symplectic and complex geometry; 20. Semianalytical category; 21. Elements of fibre bundle theory; 22. Holonomies on non-trivial fibre bundles; 23. Geometric quantisation; 24. The Dirac algorithm for field theories with constraints; 25. Tools from measure theory; 26. Elementary introduction to Gel'fand theory for Abelian C* algebras; 27. Bohr compactification of the real line; 28. Operator *-algebras and spectral theorem; 29. Refined algebraic quantisation (RAQ) and direct integral decomposition (DID); 30. Basics of harmonic analysis on compact Lie groups; 31. Spin network functions for SU(2); 32. Functional analytical description of classical connection dynamics; Bibliography; Index.

  10. Decoding the auditory brain with canonical component analysis.

    Science.gov (United States)

    de Cheveigné, Alain; Wong, Daniel D E; Di Liberto, Giovanni M; Hjortkjær, Jens; Slaney, Malcolm; Lalor, Edmund

    2018-05-15

    The relation between a stimulus and the evoked brain response can shed light on perceptual processes within the brain. Signals derived from this relation can also be harnessed to control external devices for Brain Computer Interface (BCI) applications. While the classic event-related potential (ERP) is appropriate for isolated stimuli, more sophisticated "decoding" strategies are needed to address continuous stimuli such as speech, music or environmental sounds. Here we describe an approach based on Canonical Correlation Analysis (CCA) that finds the optimal transform to apply to both the stimulus and the response to reveal correlations between the two. Compared to prior methods based on forward or backward models for stimulus-response mapping, CCA finds significantly higher correlation scores, thus providing increased sensitivity to relatively small effects, and supports classifier schemes that yield higher classification scores. CCA strips the brain response of variance unrelated to the stimulus, and the stimulus representation of variance that does not affect the response, and thus improves observations of the relation between stimulus and response. Copyright © 2018 The Authors. Published by Elsevier Inc. All rights reserved.
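
    A minimal sketch of the underlying idea, finding maximally correlated linear transforms of a continuous stimulus feature and a multichannel response, using generic CCA; the lag structure, dimensions and data below are placeholders rather than the authors' pipeline:

```python
# Generic CCA between a time-lagged stimulus feature and a multichannel response.
import numpy as np
from sklearn.cross_decomposition import CCA

def lagged(x, n_lags):
    """Stack time-lagged copies of a 1-D stimulus feature (circular lags for simplicity)."""
    return np.column_stack([np.roll(x, k) for k in range(n_lags)])

fs, dur = 64, 60
n = fs * dur
stimulus = np.random.randn(n)                    # e.g. a speech envelope stand-in
response = np.random.randn(n, 32)                # e.g. 32-channel EEG stand-in

X = lagged(stimulus, n_lags=16)                  # stimulus side: lags up to 250 ms
cca = CCA(n_components=3)
sx, sy = cca.fit_transform(X, response)
corrs = [np.corrcoef(sx[:, i], sy[:, i])[0, 1] for i in range(3)]
print("canonical correlations:", np.round(corrs, 3))
```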

  11. A kernel adaptive algorithm for quaternion-valued inputs.

    Science.gov (United States)

    Paul, Thomas K; Ogunfunmi, Tokunbo

    2015-10-01

    The use of quaternion data can provide benefit in applications like robotics and image recognition, and particularly for performing transforms in 3-D space. Here, we describe a kernel adaptive algorithm for quaternions. A least mean square (LMS)-based method was used, resulting in the derivation of the quaternion kernel LMS (Quat-KLMS) algorithm. Deriving this algorithm required describing the idea of a quaternion reproducing kernel Hilbert space (RKHS), as well as kernel functions suitable with quaternions. A modified HR calculus for Hilbert spaces was used to find the gradient of cost functions defined on a quaternion RKHS. In addition, the use of widely linear (or augmented) filtering is proposed to improve performance. The benefit of the Quat-KLMS and widely linear forms in learning nonlinear transformations of quaternion data are illustrated with simulations.

  12. Improving the Bandwidth Selection in Kernel Equating

    Science.gov (United States)

    Andersson, Björn; von Davier, Alina A.

    2014-01-01

    We investigate the current bandwidth selection methods in kernel equating and propose a method based on Silverman's rule of thumb for selecting the bandwidth parameters. In kernel equating, the bandwidth parameters have previously been obtained by minimizing a penalty function. This minimization process has been criticized by practitioners…
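
    For reference, Silverman's rule of thumb in its generic form is shown below; the exact adaptation to kernel equating of score distributions may differ from this plug-in formula.

```python
# Silverman's rule of thumb for a Gaussian kernel bandwidth.
import numpy as np

def silverman_bandwidth(x):
    x = np.asarray(x, dtype=float)
    n = x.size
    iqr = np.subtract(*np.percentile(x, [75, 25]))       # interquartile range
    sigma = min(x.std(ddof=1), iqr / 1.349)               # robust spread estimate
    return 0.9 * sigma * n ** (-0.2)

scores = np.random.binomial(40, 0.6, size=1000)            # toy test-score sample
print("h =", round(silverman_bandwidth(scores), 4))
```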

  13. Point kernels and superposition methods for scatter dose calculations in brachytherapy

    International Nuclear Information System (INIS)

    Carlsson, A.K.

    2000-01-01

    Point kernels have been generated and applied for calculation of scatter dose distributions around monoenergetic point sources for photon energies ranging from 28 to 662 keV. Three different approaches for dose calculations have been compared: a single-kernel superposition method, a single-kernel superposition method where the point kernels are approximated as isotropic and a novel 'successive-scattering' superposition method for improved modelling of the dose from multiply scattered photons. An extended version of the EGS4 Monte Carlo code was used for generating the kernels and for benchmarking the absorbed dose distributions calculated with the superposition methods. It is shown that dose calculation by superposition at and below 100 keV can be simplified by using isotropic point kernels. Compared to the assumption of full in-scattering made by algorithms currently in clinical use, the single-kernel superposition method improves dose calculations in a half-phantom consisting of air and water. Further improvements are obtained using the successive-scattering superposition method, which reduces the overestimates of dose close to the phantom surface usually associated with kernel superposition methods at brachytherapy photon energies. It is also shown that scatter dose point kernels can be parametrized to biexponential functions, making them suitable for use with an effective implementation of the collapsed cone superposition algorithm. (author)
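
    In a homogeneous medium, single-kernel superposition reduces to a convolution of the primary interaction density with the point kernel. The sketch below illustrates only that operation, with a crude analytic stand-in kernel; it does not reproduce the successive-scattering method or the Monte Carlo-generated, parametrized kernels described in the record.

```python
# Homogeneous-medium sketch of single-kernel superposition by 3-D convolution.
import numpy as np
from scipy.signal import fftconvolve

shape, voxel = (41, 41, 41), 0.2                     # grid of 2 mm voxels (cm units)
primary = np.zeros(shape)
primary[20, 20, 20] = 1.0                            # energy released per voxel (a.u.)

# Radially symmetric toy scatter kernel k(r) ~ exp(-mu*r) / r^2
z, y, x = np.indices(shape)
r = voxel * np.sqrt((x - 20) ** 2 + (y - 20) ** 2 + (z - 20) ** 2)
r[20, 20, 20] = voxel / 2                            # avoid the singularity at r = 0
kernel = np.exp(-0.5 * r) / r ** 2
kernel /= kernel.sum()

scatter_dose = fftconvolve(primary, kernel, mode="same")
print("peak scatter dose voxel:", np.unravel_index(scatter_dose.argmax(), shape))
```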

  14. Online learning control using adaptive critic designs with sparse kernel machines.

    Science.gov (United States)

    Xu, Xin; Hou, Zhongsheng; Lian, Chuanqiang; He, Haibo

    2013-05-01

    In the past decade, adaptive critic designs (ACDs), including heuristic dynamic programming (HDP), dual heuristic programming (DHP), and their action-dependent ones, have been widely studied to realize online learning control of dynamical systems. However, because neural networks with manually designed features are commonly used to deal with continuous state and action spaces, the generalization capability and learning efficiency of previous ACDs still need to be improved. In this paper, a novel framework of ACDs with sparse kernel machines is presented by integrating kernel methods into the critic of ACDs. To improve the generalization capability as well as the computational efficiency of kernel machines, a sparsification method based on the approximately linear dependence analysis is used. Using the sparse kernel machines, two kernel-based ACD algorithms, that is, kernel HDP (KHDP) and kernel DHP (KDHP), are proposed and their performance is analyzed both theoretically and empirically. Because of the representation learning and generalization capability of sparse kernel machines, KHDP and KDHP can obtain much better performance than previous HDP and DHP with manually designed neural networks. Simulation and experimental results of two nonlinear control problems, that is, a continuous-action inverted pendulum problem and a ball and plate control problem, demonstrate the effectiveness of the proposed kernel ACD methods.
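
    The approximate linear dependence (ALD) criterion mentioned in the abstract admits a compact generic implementation; the Gaussian kernel, the threshold and the full matrix re-inversion below are simplifications for illustration (production code would use a rank-one inverse update):

```python
# ALD-based dictionary construction for sparse online kernel machines.
import numpy as np

def rbf(a, b, gamma=0.5):
    return np.exp(-gamma * np.sum((a - b) ** 2))

def ald_dictionary(samples, nu=1e-2):
    """Return indices of samples admitted to the dictionary by the ALD criterion."""
    dict_idx = [0]
    K_inv = np.array([[1.0 / rbf(samples[0], samples[0])]])
    for t in range(1, len(samples)):
        x = samples[t]
        k_vec = np.array([rbf(samples[i], x) for i in dict_idx])
        a = K_inv @ k_vec                       # best coefficients in feature space
        delta = rbf(x, x) - k_vec @ a           # squared residual of the projection
        if delta > nu:                          # not well represented -> admit
            dict_idx.append(t)
            K = np.array([[rbf(samples[i], samples[j]) for j in dict_idx]
                          for i in dict_idx])
            K_inv = np.linalg.inv(K + 1e-10 * np.eye(len(dict_idx)))
    return dict_idx

X = np.random.default_rng(0).normal(size=(200, 3))
print("dictionary size:", len(ald_dictionary(X)))
```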

  15. Wheat kernel dimensions: how do they contribute to kernel weight at ...

    Indian Academy of Sciences (India)

    2011-12-02

    Kernel weight, one of the yield components, is greatly influenced by kernel dimensions (KD). The linkage map used contained six gaps and covered 3010.70 cM of the whole genome.

  16. Contextuality in canonical systems of random variables

    Science.gov (United States)

    Dzhafarov, Ehtibar N.; Cervantes, Víctor H.; Kujala, Janne V.

    2017-10-01

    Random variables representing measurements, broadly understood to include any responses to any inputs, form a system in which each of them is uniquely identified by its content (that which it measures) and its context (the conditions under which it is recorded). Two random variables are jointly distributed if and only if they share a context. In a canonical representation of a system, all random variables are binary, and every content-sharing pair of random variables has a unique maximal coupling (the joint distribution imposed on them so that they coincide with maximal possible probability). The system is contextual if these maximal couplings are incompatible with the joint distributions of the context-sharing random variables. We propose to represent any system of measurements in a canonical form and to consider the system contextual if and only if its canonical representation is contextual. As an illustration, we establish a criterion for contextuality of the canonical system consisting of all dichotomizations of a single pair of content-sharing categorical random variables. This article is part of the themed issue `Second quantum revolution: foundational questions'.

  17. THE TOPOLOGY OF CANONICAL FLUX TUBES IN FLARED JET GEOMETRY

    Energy Technology Data Exchange (ETDEWEB)

    Lavine, Eric Sander; You, Setthivoine, E-mail: Slavine2@uw.edu, E-mail: syou@aa.washington.edu [University of Washington, 4000 15th Street, NE Aeronautics and Astronautics 211 Guggenheim Hall, Box 352400, Seattle, WA 98195 (United States)

    2017-01-20

    Magnetized plasma jets are generally modeled as magnetic flux tubes filled with flowing plasma governed by magnetohydrodynamics (MHD). We outline here a more fundamental approach based on flux tubes of canonical vorticity, where canonical vorticity is defined as the circulation of the species’ canonical momentum. This approach extends the concept of magnetic flux tube evolution to include the effects of finite particle momentum and enables visualization of the topology of plasma jets in regimes beyond MHD. A flared, current-carrying magnetic flux tube in an ion-electron plasma with finite ion momentum is thus equivalent to either a pair of electron and ion flow flux tubes, a pair of electron and ion canonical momentum flux tubes, or a pair of electron and ion canonical vorticity flux tubes. We examine the morphology of all these flux tubes for increasing electrical currents, different radial current profiles, different electron Mach numbers, and a fixed, flared, axisymmetric magnetic geometry. Calculations of gauge-invariant relative canonical helicities track the evolution of magnetic, cross, and kinetic helicities in the system, and show that ion flow fields can unwind to compensate for an increasing magnetic twist. The results demonstrate that including a species’ finite momentum can result in a very long collimated canonical vorticity flux tube even if the magnetic flux tube is flared. With finite momentum, particle density gradients must be normal to canonical vorticities, not to magnetic fields, so observations of collimated astrophysical jets could be images of canonical vorticity flux tubes instead of magnetic flux tubes.

  18. A multi-label learning based kernel automatic recommendation method for support vector machine.

    Science.gov (United States)

    Zhang, Xueying; Song, Qinbao

    2015-01-01

    Choosing an appropriate kernel is very important and critical when classifying a new problem with Support Vector Machine. So far, more attention has been paid on constructing new kernels and choosing suitable parameter values for a specific kernel function, but less on kernel selection. Furthermore, most of current kernel selection methods focus on seeking a best kernel with the highest classification accuracy via cross-validation, they are time consuming and ignore the differences among the number of support vectors and the CPU time of SVM with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select those appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on the data characteristics. For each data set, the meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge data base with the multi-label classification method. Finally, the appropriate kernel functions are recommended to a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with the existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance.

  19. Non-canonical autophagy: an exception or an underestimated form of autophagy?

    Science.gov (United States)

    Scarlatti, Francesca; Maffei, Roberta; Beau, Isabelle; Ghidoni, Riccardo; Codogno, Patrice

    2008-11-01

    Macroautophagy (hereafter called autophagy) is a dynamic and evolutionarily conserved process used to sequester and degrade cytoplasm and entire organelles in a sequestering vesicle with a double membrane, known as the autophagosome, which ultimately fuses with a lysosome to degrade its autophagic cargo. Recently, we have unraveled two distinct forms of autophagy in cancer cells, which we term canonical and non-canonical autophagy. In contrast to classical or canonical autophagy, non-canonical autophagy is a process that does not require the entire set of autophagy-related (Atg) proteins in particular Beclin 1, to form the autophagosome. Non-canonical autophagy is therefore not blocked by the knockdown of Beclin 1 or of its binding partner hVps34. Moreover overexpression of Bcl-2, which is known to block canonical starvation-induced autophagy by binding to Beclin 1, is unable to reverse the non-canonical autophagy triggered by the polyphenol resveratrol in the breast cancer MCF-7 cell line. In MCF-7 cells, at least, non-canonical autophagy is involved in the caspase-independent cell death induced by resveratrol.

  20. Symmetric minimally entangled typical thermal states, grand-canonical ensembles, and the influence of the collapse bases

    Science.gov (United States)

    Binder, Moritz; Barthel, Thomas

    Based on DMRG, strongly correlated quantum many-body systems at finite temperatures can be simulated by sampling over a certain class of pure matrix product states (MPS) called minimally entangled typical thermal states (METTS). Here, we show how symmetries of the system can be exploited to considerably reduce computation costs in the METTS algorithm. While this is straightforward for the canonical ensemble, we introduce a modification of the algorithm to efficiently simulate the grand-canonical ensemble under utilization of symmetries. In addition, we construct novel symmetry-conserving collapse bases for the transitions in the Markov chain of METTS that improve the speed of convergence of the algorithm by reducing autocorrelations.

  1. Using the Intel Math Kernel Library on Peregrine | High-Performance Computing | NREL

    Science.gov (United States)

    Learn how to use the Intel Math Kernel Library (MKL) with Peregrine system software. Core math functions in MKL include BLAS, LAPACK, ScaLAPACK, sparse solvers, and fast Fourier transforms.

  2. The Current Canon in British Romantics Studies.

    Science.gov (United States)

    Linkin, Harriet Kramer

    1991-01-01

    Describes and reports on a survey of 164 U.S. universities to ascertain what is taught as the current canon of British Romantic literature. Asserts that the canon may now include Mary Shelley with the former standard six major male Romantic poets, indicating a significant emergence of a feminist perspective on British Romanticism in the classroom.…

  3. Moving Targets: Constructing Canons, 2013–2014

    OpenAIRE

    Hirsch, BD

    2015-01-01

    This review essay considers early modern dramatic authorship and canons in the context of two recent publications: an anthology of plays -- William Shakespeare and Others: Collaborative Plays (2013), edited by Jonathan Bate and Eric Rasmussen as a companion volume to the RSC Complete Works -- and a monograph study -- Jeremy Lopez's Constructing the Canon of Early Modern Drama (2014).

  4. Long-distance wind-dispersal of spores in a fungal plant pathogen: estimation of anisotropic dispersal kernels from an extensive field experiment.

    Directory of Open Access Journals (Sweden)

    Adrien Rieux

    Full Text Available Given its biological significance, determining the dispersal kernel (i.e., the distribution of dispersal distances) of spore-producing pathogens is essential. Here, we report two field experiments designed to measure disease gradients caused by sexually- and asexually-produced spores of the wind-dispersed banana plant fungus Mycosphaerella fijiensis. Gradients were measured during a single generation and over 272 traps installed up to 1000 m along eight directions radiating from a traceable source of inoculum composed of fungicide-resistant strains. We adjusted several kernels differing in the shape of their tail and tested for two types of anisotropy. Contrasting dispersal kernels were observed between the two types of spores. For sexual spores (ascospores), we characterized both a steep gradient in the first few metres in all directions and rare long-distance dispersal (LDD) events up to 1000 m from the source in two directions. A heavy-tailed kernel best fitted the disease gradient. Although ascospores distributed evenly in all directions, average dispersal distance was greater in two different directions without obvious correlation with wind patterns. For asexual spores (conidia), few dispersal events occurred outside of the source plot. A gradient up to 12.5 m from the source was observed in one direction only. Accordingly, a thin-tailed kernel best fitted the disease gradient, and anisotropy in both density and distance was correlated with averaged daily wind gust. We discuss the validity of our results as well as their implications in terms of disease diffusion and management strategy.

  5. Protein fold recognition using geometric kernel data fusion.

    Science.gov (United States)

    Zakeri, Pooya; Jeuris, Ben; Vandebril, Raf; Moreau, Yves

    2014-07-01

    Various approaches based on features extracted from protein sequences and often machine learning methods have been used in the prediction of protein folds. Finding an efficient technique for integrating these different protein features has received increasing attention. In particular, kernel methods are an interesting class of techniques for integrating heterogeneous data. Various methods have been proposed to fuse multiple kernels. Most techniques for multiple kernel learning focus on learning a convex linear combination of base kernels. In addition to the limitation of linear combinations, working with such approaches could cause a loss of potentially useful information. We design several techniques to combine kernel matrices by taking more involved, geometry inspired means of these matrices instead of convex linear combinations. We consider various sequence-based protein features including information extracted directly from position-specific scoring matrices and local sequence alignment. We evaluate our methods for classification on the SCOP PDB-40D benchmark dataset for protein fold recognition. The best overall accuracy on the protein fold recognition test set obtained by our methods is ∼ 86.7%. This is an improvement over the results of the best existing approach. Moreover, our computational model has been developed by incorporating the functional domain composition of proteins through a hybridization model. It is observed that by using our proposed hybridization model, the protein fold recognition accuracy is further improved to 89.30%. Furthermore, we investigate the performance of our approach on the protein remote homology detection problem by fusing multiple string kernels. The MATLAB code used for our proposed geometric kernel fusion frameworks is publicly available at http://people.cs.kuleuven.be/∼raf.vandebril/homepage/software/geomean.php?menu=5/. © The Author 2014. Published by Oxford University Press.
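
    One concrete example of a geometry-inspired matrix mean, the log-Euclidean mean, is sketched below; the paper considers several such means, and this particular choice together with the toy kernels is illustrative only:

```python
# Log-Euclidean mean of kernel matrices: expm(average(logm(K_i))).
import numpy as np
from scipy.linalg import expm, logm

def log_euclidean_mean(kernels, jitter=1e-8):
    logs = []
    for K in kernels:
        K = 0.5 * (K + K.T) + jitter * np.eye(K.shape[0])   # enforce symmetric PD
        logs.append(logm(K))
    return np.real(expm(np.mean(logs, axis=0)))

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 40))                                # 20 samples, 40 features
K_lin = X @ X.T / X.shape[1]                                 # linear kernel
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_rbf = np.exp(-d2 / d2.mean())                              # Gaussian kernel
K_fused = log_euclidean_mean([K_lin, K_rbf])
print(K_fused.shape, np.allclose(K_fused, K_fused.T))
```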

  6. Unsupervised multiple kernel learning for heterogeneous data integration.

    Science.gov (United States)

    Mariette, Jérôme; Villa-Vialaneix, Nathalie

    2018-03-15

    Recent high-throughput sequencing advances have expanded the breadth of available omics datasets, and the integrated analysis of multiple datasets obtained on the same samples has allowed important insights to be gained in a wide range of applications. However, the integration of various sources of information remains a challenge for systems biology, since the produced datasets are often of heterogeneous types, requiring generic methods that take their different specificities into account. We propose a multiple kernel framework that allows multiple datasets of various types to be integrated into a single exploratory analysis. Several solutions are provided to learn either a consensus meta-kernel or a meta-kernel that preserves the original topology of the datasets. We applied our framework to analyse two public multi-omics datasets. First, the multiple metagenomic datasets collected during the TARA Oceans expedition were explored to demonstrate that our method is able to retrieve previous findings in a single kernel PCA as well as to provide a new image of the sample structures when a larger number of datasets are included in the analysis. To perform this analysis, a generic procedure is also proposed to improve the interpretability of the kernel PCA with regard to the original data. Second, the multi-omics breast cancer datasets provided by The Cancer Genome Atlas are analysed using kernel Self-Organizing Maps with both single- and multi-omics strategies. The comparison of these two approaches demonstrates the benefit of our integration method in improving the representation of the studied biological system. The proposed methods are available in the R package mixKernel, released on CRAN. It is fully compatible with the mixOmics package and a tutorial describing the approach can be found on the mixOmics web site http://mixomics.org/mixkernel/. jerome.mariette@inra.fr or nathalie.villa-vialaneix@inra.fr. Supplementary data are available at Bioinformatics online.
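
    As a rough illustration of the consensus meta-kernel idea (not the mixKernel implementation), the following sketch averages two cosine-normalised kernels computed on the same hypothetical samples and feeds the result to a kernel PCA with a precomputed kernel; all data and kernel choices are assumptions for demonstration.

```python
# Minimal consensus meta-kernel sketch: normalise per-dataset kernels, average
# them, and run kernel PCA on the precomputed meta-kernel (toy data only).
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.metrics.pairwise import rbf_kernel, linear_kernel

rng = np.random.default_rng(1)
n = 60
omics_a = rng.normal(size=(n, 30))                       # e.g. continuous abundances (hypothetical)
omics_b = rng.poisson(3.0, size=(n, 15)).astype(float)   # e.g. count data (hypothetical)

def cosine_normalise(K):
    d = np.sqrt(np.diag(K))
    return K / np.outer(d, d)

K_a = cosine_normalise(rbf_kernel(omics_a))
K_b = cosine_normalise(linear_kernel(omics_b))

meta_kernel = (K_a + K_b) / 2.0        # simple consensus; mixKernel offers richer options

embedding = KernelPCA(n_components=2, kernel="precomputed").fit_transform(meta_kernel)
print(embedding.shape)                 # (60, 2) sample coordinates for exploratory analysis
```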

  7. Kernel bundle EPDiff

    DEFF Research Database (Denmark)

    Sommer, Stefan Horst; Lauze, Francois Bernard; Nielsen, Mads

    2011-01-01

    In the LDDMM framework, optimal warps for image registration are found as end-points of critical paths for an energy functional, and the EPDiff equations describe the evolution along such paths. The Large Deformation Diffeomorphic Kernel Bundle Mapping (LDDKBM) extension of LDDMM allows scale space...

  8. Proteome analysis of the almond kernel (Prunus dulcis).

    Science.gov (United States)

    Li, Shugang; Geng, Fang; Wang, Ping; Lu, Jiankang; Ma, Meihu

    2016-08-01

    Almond (Prunus dulcis) is a popular tree nut worldwide and offers many benefits to human health. However, the importance of almond kernel proteins in nutrition and human health requires further evaluation. The present study presents a systematic evaluation of the proteins in the almond kernel using proteomic analysis. The nutrient and amino acid content in almond kernels from Xinjiang is similar to that of American varieties; however, Xinjiang varieties have a higher protein content. Two-dimensional electrophoresis analysis demonstrated a wide distribution of molecular weights and isoelectric points of almond kernel proteins. A total of 434 proteins were identified by LC-MS/MS, and most were proteins that were experimentally confirmed for the first time. Gene ontology (GO) analysis of the 434 proteins indicated that the main biological processes involved were metabolic processes (67.5%), cellular processes (54.1%), and single-organism processes (43.4%); the main molecular functions of almond kernel proteins were catalytic activity (48.0%), binding (45.4%) and structural molecule activity (11.9%); and the proteins were primarily distributed in the cell (59.9%), organelle (44.9%), and membrane (22.8%). The almond kernel is a source of a wide variety of proteins. This study provides important information contributing to the screening and identification of almond proteins, the understanding of almond protein function, and the development of almond protein products. © 2015 Society of Chemical Industry.

  9. Control Transfer in Operating System Kernels

    Science.gov (United States)

    1994-05-13

    …microkernel system that runs less code in the kernel address space. To realize the performance benefit of allocating stacks in unmapped kseg0 memory, the… review how I modified the Mach 3.0 kernel to use continuations. Because of Mach's message-passing microkernel structure, interprocess communication was… critical control transfer paths; deeply-nested call chains are undesirable in any case because of the function call overhead.

  10. Bivariate discrete beta Kernel graduation of mortality data.

    Science.gov (United States)

    Mazza, Angelo; Punzo, Antonio

    2015-07-01

    Various parametric and nonparametric techniques have been proposed in the literature to graduate mortality data as a function of age. Nonparametric approaches, such as kernel smoothing regression, are often preferred because they do not assume any particular mortality law. Among the existing kernel smoothing approaches, the recently proposed (univariate) discrete beta kernel smoother has been shown to provide some benefits. Bivariate graduation, over age and calendar years or durations, is common practice in demography and actuarial sciences. In this paper, we generalize the discrete beta kernel smoother to the bivariate case, and we introduce an adaptive bandwidth variant that may provide additional benefits when data on exposures to the risk of death are available; furthermore, we outline a cross-validation procedure for bandwidth selection. Using simulation studies, we compare the bivariate approach proposed here with its corresponding univariate formulation and with two popular nonparametric bivariate graduation techniques, based on Epanechnikov kernels and on P-splines. To make the simulations realistic, a bivariate dataset, based on probabilities of dying recorded for US males, is used. The simulations confirm the gain in performance of the new bivariate approach with respect to both the univariate and the bivariate competitors.
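
    For orientation, the sketch below shows a generic kernel graduation of crude mortality rates on a bounded discrete age grid using beta-shaped weights, which is the basic idea that motivates discrete beta kernels (no weight leaks outside the age range). The parameterisation, bandwidth and simulated mortality curve are simplifying assumptions and do not reproduce the discrete beta kernel smoother of the paper.

```python
# Generic beta-weight graduation of crude mortality rates on a bounded age grid
# (weights never fall outside the observed age range); simplified illustration.
import numpy as np
from scipy.stats import beta

ages = np.arange(0, 101)
rng = np.random.default_rng(8)
true_qx = 0.0005 + 0.00003 * np.exp(0.09 * ages)               # Gompertz-like curve
crude_qx = true_qx * rng.lognormal(0.0, 0.15, size=ages.size)  # noisy crude rates

def beta_weight_graduation(ages, rates, h=0.005):
    u = (ages - ages.min()) / (ages.max() - ages.min())         # rescale ages to [0, 1]
    graduated = np.empty_like(rates)
    for i, ui in enumerate(u):
        w = beta.pdf(u, ui / h + 1.0, (1.0 - ui) / h + 1.0)      # weights peak at age i
        graduated[i] = np.sum(w * rates) / np.sum(w)
    return graduated

smooth_qx = beta_weight_graduation(ages, crude_qx)
print("RMSE crude:    ", np.sqrt(np.mean((crude_qx - true_qx) ** 2)))
print("RMSE graduated:", np.sqrt(np.mean((smooth_qx - true_qx) ** 2)))
```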

  11. A framework for optimal kernel-based manifold embedding of medical image data.

    Science.gov (United States)

    Zimmer, Veronika A; Lekadir, Karim; Hoogendoorn, Corné; Frangi, Alejandro F; Piella, Gemma

    2015-04-01

    Kernel-based dimensionality reduction is a widely used technique in medical image analysis. To fully unravel the underlying nonlinear manifold, the selection of an adequate kernel function and of its free parameters is critical. In practice, however, the kernel function is generally chosen as Gaussian or polynomial, and such standard kernels might not always be optimal for a given image dataset or application. In this paper, we present a study on the effect of the kernel functions in nonlinear manifold embedding of medical image data. To this end, we first carry out a literature review of existing advanced kernels developed in the statistics, machine learning, and signal processing communities. In addition, we implement kernel-based formulations of well-known nonlinear dimensionality reduction techniques such as Isomap and Locally Linear Embedding, thus obtaining a unified framework for manifold embedding using kernels. Subsequently, we present a method to automatically choose a kernel function and its associated parameters from a pool of kernel candidates, with the aim of generating optimal manifold embeddings. Furthermore, we show how the calculated selection measures can be extended to take into account the spatial relationships in images, or used to combine several kernels to further improve the embedding results. Experiments are then carried out on various synthetic and phantom datasets for numerical assessment of the methods. Furthermore, the workflow is applied to real data that include brain manifolds and multispectral images to demonstrate the importance of the kernel selection in the analysis of high-dimensional medical images. Copyright © 2014 Elsevier Ltd. All rights reserved.
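
    The following sketch illustrates the general workflow of choosing a kernel from a pool of candidates before a kernel PCA embedding. The selection measure used here (variance captured by a two-dimensional embedding) is only a naive stand-in chosen for brevity; the paper's selection measures, datasets and kernel pool are different.

```python
# Toy kernel-selection loop: score each candidate kernel by how much variance a
# 2-D kernel PCA embedding retains, then keep the best (illustrative proxy only).
import numpy as np
from sklearn.datasets import make_swiss_roll
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel, laplacian_kernel
from sklearn.preprocessing import KernelCenterer

X, _ = make_swiss_roll(n_samples=300, random_state=0)

candidates = {
    "rbf(gamma=0.05)": rbf_kernel(X, gamma=0.05),
    "rbf(gamma=0.5)": rbf_kernel(X, gamma=0.5),
    "laplacian(gamma=0.1)": laplacian_kernel(X, gamma=0.1),
    "poly(degree=3)": polynomial_kernel(X, degree=3),
}

def low_dim_variance_ratio(K, n_components=2):
    # Fraction of (centred) kernel "variance" captured by the leading components.
    Kc = KernelCenterer().fit_transform(K)
    eigvals = np.clip(np.sort(np.linalg.eigvalsh(Kc))[::-1], 0.0, None)
    return eigvals[:n_components].sum() / eigvals.sum()

scores = {name: low_dim_variance_ratio(K) for name, K in candidates.items()}
best = max(scores, key=scores.get)   # note: this naive proxy tends to favour very smooth kernels
print(scores, "-> selected:", best)
```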

  12. Measurement of Weight of Kernels in a Simulated Cylindrical Fuel Compact for HTGR

    International Nuclear Information System (INIS)

    Kim, Woong Ki; Lee, Young Woo; Kim, Young Min; Kim, Yeon Ku; Eom, Sung Ho; Jeong, Kyung Chai; Cho, Moon Sung; Cho, Hyo Jin; Kim, Joo Hee

    2011-01-01

    The TRISO-coated fuel particle for the high temperature gas-cooled reactor (HTGR) is composed of a nuclear fuel kernel and outer coating layers. The coated particles are mixed with a graphite matrix to make an HTGR fuel element. The weight of fuel kernels in an element is generally measured by chemical analysis or with a gamma-ray spectrometer. Although it is accurate to measure the weight of kernels by chemical analysis, the samples used in the analysis cannot be returned to the fabrication process. Furthermore, radioactive waste is generated during the inspection procedure. The gamma-ray spectrometer requires an elaborate reference sample to reduce measurement errors induced by the difference in geometric shape between the test sample and the reference sample. X-ray computed tomography (CT) is an alternative for measuring the weight of kernels in a compact nondestructively. In this study, X-ray CT is applied to measure the weight of kernels in a cylindrical compact containing simulated TRISO-coated particles with ZrO2 kernels. The volume of kernels as well as the number of kernels in the simulated compact is measured from the 3-D density information. The weight of kernels was calculated from the volume of kernels or from the number of kernels. The weight of kernels was also measured by extracting the kernels from a compact to verify the result of the X-ray CT application.

  13. 3-D waveform tomography sensitivity kernels for anisotropic media

    KAUST Repository

    Djebbi, Ramzi

    2014-01-01

    The complications in anisotropic multi-parameter inversion lie in the trade-off between the different anisotropy parameters. We compute the tomographic waveform sensitivity kernels for a VTI acoustic medium perturbation as a tool to investigate this ambiguity between the different parameters. We use dynamic ray tracing to efficiently handle the expensive computational cost for 3-D anisotropic models. Ray tracing also provides the ray direction information necessary for conditioning the sensitivity kernels to handle anisotropy. The NMO velocity and η parameter kernels showed maximum sensitivity for diving waves, which makes those parameters a relevant choice in wave-equation tomography. The δ parameter kernel showed zero sensitivity; therefore, it can serve as a secondary parameter to fit the amplitude in acoustic anisotropic inversion. Considering the limited penetration depth of diving waves, migration velocity analysis based kernels are introduced to fix the depth ambiguity with reflections and to compute sensitivity maps in the deeper parts of the model.

  14. A Fourier-series-based kernel-independent fast multipole method

    International Nuclear Information System (INIS)

    Zhang Bo; Huang Jingfang; Pitsianis, Nikos P.; Sun Xiaobai

    2011-01-01

    We present in this paper a new kernel-independent fast multipole method (FMM), named FKI-FMM, for pairwise particle interactions with translation-invariant kernel functions. FKI-FMM creates, using numerical techniques, sufficiently accurate and compressive representations of a given kernel function over multi-scale interaction regions in the form of a truncated Fourier series. It also provides economical operators for the multipole-to-multipole, multipole-to-local, and local-to-local translations that are typical and essential in FMM algorithms. The multipole-to-local translation operator, in particular, is readily diagonal and does not dominate the arithmetic cost. FKI-FMM provides an alternative and competitive option, among other kernel-independent FMM algorithms, for an efficient application of the FMM, especially for applications where the kernel function consists of multi-physics and multi-scale components such as those arising in recent studies of biological systems. We present the complexity analysis and demonstrate with experimental results the FKI-FMM performance in accuracy and efficiency.
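
    To make the role of the truncated Fourier series concrete, here is a one-dimensional toy sketch: a translation-invariant kernel is expanded in a truncated Fourier series, which factorises the source and target dependence so that a pairwise interaction sum can be accumulated from source moments. The kernel, interval, truncation order and data are arbitrary assumptions, and this is not the FKI-FMM code.

```python
# 1-D toy: approximate a translation-invariant kernel by a truncated Fourier
# series so that source and target dependence factorise (the core idea behind a
# Fourier-series based kernel-independent scheme); not the FKI-FMM implementation.
import numpy as np

def fourier_coefficients(kernel, L, n_terms):
    # Coefficients c_m of k(d) ~ sum_m c_m exp(i*pi*m*d/(2L)) for d in [-2L, 2L).
    n = 4 * n_terms
    d = np.linspace(-2 * L, 2 * L, n, endpoint=False)
    c = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(kernel(d)))) / n
    m = np.arange(-n // 2, n // 2)
    keep = np.abs(m) <= n_terms
    return m[keep], c[keep]

L = 1.0
kernel = lambda d: np.exp(-np.abs(d))              # example translation-invariant kernel
m, c = fourier_coefficients(kernel, L, n_terms=64)

rng = np.random.default_rng(9)
x = rng.uniform(-L, L, 300)                        # target points
y = rng.uniform(-L, L, 400)                        # source points
q = rng.normal(size=400)                           # source strengths

direct = kernel(x[:, None] - y[None, :]) @ q                   # O(N*M) reference sum
moments = np.exp(-1j * np.pi * np.outer(m, y) / (2 * L)) @ q   # source expansion
fast = np.real(np.exp(1j * np.pi * np.outer(x, m) / (2 * L)) @ (c * moments))

print("max relative error:", np.abs(fast - direct).max() / np.abs(direct).max())
```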

  15. Extending the random-phase approximation for electronic correlation energies: the renormalized adiabatic local density approximation

    DEFF Research Database (Denmark)

    Olsen, Thomas; Thygesen, Kristian S.

    2012-01-01

    The adiabatic connection fluctuation-dissipation theorem with the random phase approximation (RPA) has recently been applied with success to obtain correlation energies of a variety of chemical and solid state systems. The main merit of this approach is the improved description of dispersive forces...... while chemical bond strengths and absolute correlation energies are systematically underestimated. In this work we extend the RPA by including a parameter-free renormalized version of the adiabatic local-density (ALDA) exchange-correlation kernel. The renormalization consists of a (local) truncation...... of the ALDA kernel for wave vectors q > 2kF, which is found to yield excellent results for the homogeneous electron gas. In addition, the kernel significantly improves both the absolute correlation energies and atomization energies of small molecules over RPA and ALDA. The renormalization can...

  16. Resummed memory kernels in generalized system-bath master equations

    International Nuclear Information System (INIS)

    Mavros, Michael G.; Van Voorhis, Troy

    2014-01-01

    Generalized master equations provide a concise formalism for studying reduced population dynamics. Usually, these master equations require a perturbative expansion of the memory kernels governing the dynamics; in order to prevent divergences, these expansions must be resummed. Resummation techniques of perturbation series are ubiquitous in physics, but they have not been readily studied for the time-dependent memory kernels used in generalized master equations. In this paper, we present a comparison of different resummation techniques for such memory kernels up to fourth order. We study specifically the spin-boson Hamiltonian as a model system-bath Hamiltonian, treating the diabatic coupling between the two states as a perturbation. A novel derivation of the fourth-order memory kernel for the spin-boson problem is presented; then, the second- and fourth-order kernels are evaluated numerically for a variety of spin-boson parameter regimes. We find that resumming the kernels through fourth order using a Padé approximant results in divergent populations in the strong electronic coupling regime due to a singularity introduced by the nature of the resummation, and we thus recommend a non-divergent exponential resummation (the “Landau-Zener resummation” of previous work). The inclusion of fourth-order effects in a Landau-Zener-resummed kernel is shown to improve both the dephasing rate and the obedience of detailed balance over simpler prescriptions like the non-interacting blip approximation, showing a relatively quick convergence on the exact answer. The results suggest that including higher-order contributions to the memory kernel of a generalized master equation and performing an appropriate resummation can provide a numerically exact solution to system-bath dynamics for a general spectral density, opening the way to a new class of methods for treating system-bath dynamics.
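
    As a small numerical aside on why resummation matters, the sketch below compares a naively truncated fourth-order Taylor series of exp(-x) with its [2/2] Padé approximant built from the same coefficients; the memory kernels discussed in the paper are far more involved, but the divergence-versus-resummation behaviour is analogous.

```python
# Compare naive truncation of a 4th-order series for exp(-x) with a [2/2] Pade
# resummation built from the same five coefficients (toy illustration only).
import numpy as np
from scipy.interpolate import pade

coeffs = [1.0, -1.0, 0.5, -1.0 / 6.0, 1.0 / 24.0]   # Taylor coefficients of exp(-x)

p, q = pade(coeffs, 2)                 # numerator/denominator of the [2/2] approximant
taylor = np.poly1d(coeffs[::-1])       # poly1d wants the highest-order coefficient first

for x in (1.0, 3.0, 6.0):
    exact = np.exp(-x)
    print(f"x={x}: exact={exact:.4f}  truncated={taylor(x):.4f}  pade={p(x) / q(x):.4f}")
```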

  17. The dipole form of the gluon part of the BFKL kernel

    International Nuclear Information System (INIS)

    Fadin, V.S.; Fiore, R.; Grabovsky, A.V.; Papa, A.

    2007-01-01

    The dipole form of the gluon part of the color singlet BFKL kernel in the next-to-leading order (NLO) is obtained in the coordinate representation by direct transfer from the momentum representation, where the kernel was calculated before. With this paper the transformation of the NLO BFKL kernel to the dipole form, started a few months ago with the quark part of the kernel, is completed

  18. Image re-sampling detection through a novel interpolation kernel.

    Science.gov (United States)

    Hilal, Alaa

    2018-06-01

    Image re-sampling involved in re-size and rotation transformations is an essential building block in typical digital image alteration. Fortunately, traces left by such processes are detectable, proving that the image has undergone a re-sampling transformation. Within this context, we present in this paper two original contributions. First, we propose a new re-sampling interpolation kernel. It depends on five independent parameters that control its amplitude, angular frequency, standard deviation, and duration. Then, we demonstrate its capacity to imitate the behavior of the most frequently used interpolation kernels in digital image re-sampling applications. Second, the proposed model is used to characterize and detect the correlation coefficients involved in re-sampling transformations. The detection process involves minimizing an error function using the gradient method. The proposed method is assessed over a large database of 11,000 re-sampled images. Additionally, it is implemented within an algorithm in order to assess images that have undergone complex transformations. The obtained results demonstrate better performance and reduced processing time when compared to a reference method, validating the suitability of the proposed approaches. Copyright © 2018 Elsevier B.V. All rights reserved.

  19. Improving prediction of heterodimeric protein complexes using combination with pairwise kernel.

    Science.gov (United States)

    Ruan, Peiying; Hayashida, Morihiro; Akutsu, Tatsuya; Vert, Jean-Philippe

    2018-02-19

    Since many proteins become functional only after they interact with their partner proteins and form protein complexes, it is essential to identify the sets of proteins that form complexes. Therefore, several computational methods have been proposed to predict complexes from the topology and structure of experimental protein-protein interaction (PPI) networks. These methods work well to predict complexes involving at least three proteins, but generally fail at identifying complexes involving only two different proteins, called heterodimeric complexes or heterodimers. There is however an urgent need for efficient methods to predict heterodimers, since the majority of known protein complexes are precisely heterodimers. In this paper, we use three promising kernel functions: the Min kernel and two pairwise kernels, the Metric Learning Pairwise Kernel (MLPK) and the Tensor Product Pairwise Kernel (TPPK). We also consider normalized forms of the Min kernel. Then, we combine the Min kernel, or its normalized form, with one of the pairwise kernels by plugging it in as the protein-level base kernel. We applied kernels based on PPI, domain, phylogenetic profile, and subcellular localization properties to predict heterodimers. Then, we evaluate our method by employing C-Support Vector Classification (C-SVC), carrying out 10-fold cross-validation, and calculating the average F-measures. The results suggest that the combination of the normalized Min kernel and MLPK leads to the best F-measure and improves on the performance of our previous work, which had been the best existing method so far. We propose new methods to predict heterodimers using a machine learning-based approach. We train a support vector machine (SVM) to discriminate interacting vs non-interacting protein pairs, based on information extracted from PPI, domain, phylogenetic profiles and subcellular localization. We evaluate in detail new kernel functions to encode these data, and report prediction performance that outperforms the state-of-the-art.
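
    The three kernels named in the abstract have compact closed forms, sketched below for a pair of hypothetical protein feature vectors; the actual feature construction, normalisation variants and SVM training of the paper are omitted.

```python
# Min kernel plus the two pairwise kernels (TPPK, MLPK) written for protein pairs
# (a, b) and (c, d); feature vectors here are random stand-ins for illustration.
import numpy as np

def min_kernel(x, y):
    # Histogram-intersection / Min kernel for non-negative feature vectors.
    return np.minimum(x, y).sum()

def tppk(k, a, b, c, d):
    # Tensor Product Pairwise Kernel built from a protein-level base kernel k.
    return k(a, c) * k(b, d) + k(a, d) * k(b, c)

def mlpk(k, a, b, c, d):
    # Metric Learning Pairwise Kernel built from a protein-level base kernel k.
    return (k(a, c) - k(a, d) - k(b, c) + k(b, d)) ** 2

rng = np.random.default_rng(2)
a, b, c, d = rng.random((4, 10))   # stand-ins for per-protein feature vectors
print("TPPK:", tppk(min_kernel, a, b, c, d))
print("MLPK:", mlpk(min_kernel, a, b, c, d))
```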

  20. The canonical ensemble redefined - 1: Formalism

    International Nuclear Information System (INIS)

    Venkataraman, R.

    1984-12-01

    For studying the thermodynamic properties of systems we propose an ensemble that lies in between the familiar canonical and microcanonical ensembles. We point out the transition from the canonical to microcanonical ensemble and prove from a comparative study that all these ensembles do not yield the same results even in the thermodynamic limit. An investigation of the coupling between two or more systems with these ensembles suggests that the state of thermodynamical equilibrium is a special case of statistical equilibrium. (author)

  1. Comparison of correlation analysis techniques for irregularly sampled time series

    Directory of Open Access Journals (Sweden)

    K. Rehfeld

    2011-06-01

    Full Text Available Geoscientific measurements often provide time series with irregular time sampling, requiring either data reconstruction (interpolation) or sophisticated methods to handle irregular sampling. We compare the linear interpolation technique and different approaches for analyzing the correlation functions and persistence of irregularly sampled time series, such as the Lomb-Scargle Fourier transformation and kernel-based methods. In a thorough benchmark test we investigate the performance of these techniques.

    All methods have comparable root mean square errors (RMSEs) for low skewness of the inter-observation time distribution. For high skewness, i.e. very irregular data, interpolation bias and RMSE increase strongly. We find a 40% lower RMSE for the lag-1 autocorrelation function (ACF) for the Gaussian kernel method vs. the linear interpolation scheme in the analysis of highly irregular time series. For the cross correlation function (CCF) the RMSE is then lower by 60%. The application of the Lomb-Scargle technique gave results comparable to the kernel methods for the univariate case, but poorer results in the bivariate case. Especially the high-frequency components of the signal, where classical methods show a strong bias in ACF and CCF magnitude, are preserved when using the kernel methods.

    We illustrate the performance of interpolation vs. the Gaussian kernel method by applying both to paleo-data from four locations, reflecting late Holocene Asian monsoon variability as derived from speleothem δ18O measurements. Cross correlation results are similar for both methods, which we attribute to the long time scales of the common variability. The persistence time (memory) is strongly overestimated when using the standard, interpolation-based, approach. Hence, the Gaussian kernel is a reliable and more robust estimator with significant advantages compared to other techniques and is suitable for large-scale application to paleo-data.
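
    A minimal sketch of the Gaussian kernel idea for irregular sampling is given below: observation pairs are weighted by how close their time difference is to the requested lag, so no interpolation is needed. The series, bandwidth and standardisation are illustrative assumptions, not the benchmark setup of the study.

```python
# Gaussian-kernel cross-correlation estimator for irregularly sampled series:
# every pair of observations contributes, weighted by exp(-(dt - lag)^2 / 2h^2).
import numpy as np

def gaussian_kernel_xcf(tx, x, ty, y, lags, h):
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    dt = ty[None, :] - tx[:, None]          # all pairwise time differences
    xy = x[:, None] * y[None, :]            # all pairwise products
    xcf = []
    for lag in lags:
        w = np.exp(-0.5 * ((dt - lag) / h) ** 2)
        xcf.append((w * xy).sum() / w.sum())
    return np.array(xcf)

rng = np.random.default_rng(3)
tx = np.sort(rng.uniform(0, 100, 200)); x = np.sin(0.5 * tx) + 0.3 * rng.normal(size=200)
ty = np.sort(rng.uniform(0, 100, 180)); y = np.sin(0.5 * (ty - 2.0)) + 0.3 * rng.normal(size=180)

lags = np.linspace(-10, 10, 41)
xcf = gaussian_kernel_xcf(tx, x, ty, y, lags, h=1.0)
print("lag with maximum correlation:", lags[np.argmax(xcf)])   # expected near +2
```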

  2. Genetic variability of the phloem sap metabolite content of maize (Zea mays L.) during the kernel-filling period.

    Science.gov (United States)

    Yesbergenova-Cuny, Zhazira; Dinant, Sylvie; Martin-Magniette, Marie-Laure; Quilleré, Isabelle; Armengaud, Patrick; Monfalet, Priscilla; Lea, Peter J; Hirel, Bertrand

    2016-11-01

    Using a metabolomic approach, we have quantified the metabolite composition of the phloem sap exudate of seventeen European and American lines of maize that had been previously classified into five main groups on the basis of molecular marker polymorphisms. In addition to sucrose, glutamate and aspartate, which are abundant in the phloem sap of many plant species, large quantities of aconitate and alanine were also found in the phloem sap exudates of maize. Genetic variability of the phloem sap composition was observed in the different maize lines, although there was no obvious relationship between the phloem sap composition and the five previously classified groups. However, following hierarchical clustering analysis, there was a clear relationship between two of the subclusters of lines defined on the basis of the composition of the phloem sap exudate and the earliness of silking date. A comparison between the metabolite contents of the ear leaves and the phloem sap exudates of each genotype revealed that the relative content of most of the carbon- and nitrogen-containing metabolites was similar. Correlation studies performed between the metabolite content of the phloem sap exudates and yield-related traits also revealed that for some carbohydrates, such as arabitol and sucrose, there was a negative or positive correlation with kernel yield and kernel weight, respectively. A positive correlation was also found between kernel number and soluble histidine. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  3. A new discrete dipole kernel for quantitative susceptibility mapping.

    Science.gov (United States)

    Milovic, Carlos; Acosta-Cabronero, Julio; Pinto, José Miguel; Mattern, Hendrik; Andia, Marcelo; Uribe, Sergio; Tejos, Cristian

    2018-09-01

    Most approaches for quantitative susceptibility mapping (QSM) are based on a forward model approximation that employs a continuous Fourier transform operator to solve a differential equation system. Such a formulation, however, is prone to high-frequency aliasing. The aim of this study was to reduce such errors using an alternative dipole kernel formulation based on the discrete Fourier transform and discrete operators. The impact of such an approach on forward model calculation and susceptibility inversion was evaluated in contrast to the continuous formulation, both with synthetic phantoms and with in vivo MRI data. The discrete kernel demonstrated systematically better fits to analytic field solutions, and showed fewer over-oscillations and aliasing artifacts while preserving low- and medium-frequency responses relative to those obtained with the continuous kernel. In the context of QSM estimation, the use of the proposed discrete kernel resulted in error reduction and increased sharpness. This proof-of-concept study demonstrated that discretizing the dipole kernel is advantageous for QSM. The impact on small or narrow structures such as the venous vasculature might be particularly relevant to high-resolution QSM applications with ultra-high field MRI, a topic for future investigations. The proposed dipole kernel has a straightforward implementation into existing QSM routines. Copyright © 2018 Elsevier Inc. All rights reserved.
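
    For context, the continuous Fourier-domain dipole kernel that standard QSM forward models rely on can be written down in a few lines, as in the hedged sketch below; the discrete kernel proposed in the paper is a different construction and is not reproduced here.

```python
# Continuous (Fourier-domain) dipole kernel D(k) = 1/3 - kz^2 / |k|^2 and a toy
# forward simulation of the field perturbation from a point susceptibility source.
import numpy as np

def continuous_dipole_kernel(shape, voxel_size=(1.0, 1.0, 1.0)):
    # B0 is assumed along the z axis; the k = 0 term is set to zero by convention.
    kx, ky, kz = [np.fft.fftfreq(n, d=d) for n, d in zip(shape, voxel_size)]
    KX, KY, KZ = np.meshgrid(kx, ky, kz, indexing="ij")
    k2 = KX ** 2 + KY ** 2 + KZ ** 2
    with np.errstate(invalid="ignore", divide="ignore"):
        D = 1.0 / 3.0 - KZ ** 2 / k2
    D[k2 == 0] = 0.0
    return D

D = continuous_dipole_kernel((64, 64, 64))
chi = np.zeros((64, 64, 64)); chi[32, 32, 32] = 1.0        # point susceptibility source
field = np.real(np.fft.ifftn(D * np.fft.fftn(chi)))        # forward model: field perturbation
print(field.shape, field.max())
```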

  4. Genetic Analysis of Kernel Traits in Maize-Teosinte Introgression Populations

    Directory of Open Access Journals (Sweden)

    Zhengbin Liu

    2016-08-01

    Full Text Available Seed traits have been targeted by human selection during the domestication of crop species as a way to increase the caloric and nutritional content of food during the transition from hunter-gatherer to early farming societies. The primary seed trait under selection was likely seed size/weight, as it is most directly related to overall grain yield. Additional seed traits involved in seed shape may have also contributed to larger grain. Maize (Zea mays ssp. mays) kernel weight has increased more than 10-fold in the 9000 years since domestication from its wild ancestor, teosinte (Z. mays ssp. parviglumis). In order to study how size and shape affect kernel weight, we analyzed kernel morphometric traits in a set of 10 maize-teosinte introgression populations using digital imaging software. We identified quantitative trait loci (QTL) for kernel area and length with moderate allelic effects that colocalize with kernel weight QTL. Several genomic regions with strong effects during maize domestication were detected, and a genetic framework for kernel traits was characterized by complex pleiotropic interactions. Our results both confirm prior reports of kernel domestication loci and identify previously uncharacterized QTL with a range of allelic effects, enabling future research into the genetic basis of these traits.

  5. SU-E-T-154: Calculation of Tissue Dose Point Kernels Using GATE Monte Carlo Simulation Toolkit to Compare with Water Dose Point Kernel

    Energy Technology Data Exchange (ETDEWEB)

    Khazaee, M [shahid beheshti university, Tehran, Tehran (Iran, Islamic Republic of); Asl, A Kamali [Shahid Beheshti University, Tehran, Iran., Tehran, Tehran (Iran, Islamic Republic of); Geramifar, P [Shariati Hospital, Tehran, Iran., Tehran, Tehran (Iran, Islamic Republic of)

    2015-06-15

    Purpose: The objective of this study was to assess the use of the water dose point kernel (DPK) instead of tissue dose point kernels in convolution algorithms. To the best of our knowledge, in providing a 3D distribution of absorbed dose from a 3D distribution of the activity, the human body is considered equivalent to water; as a result, tissue variations are not considered in patient-specific dosimetry. Methods: In this study GATE v7.0 was used to calculate tissue dose point kernels. The beta-emitter radionuclides taken into consideration in this simulation include Y-90, Lu-177 and P-32, which are commonly used in nuclear medicine. The comparison was performed for dose point kernels of adipose, bone, breast, heart, intestine, kidney, liver, lung and spleen versus the water dose point kernel. Results: In order to validate the simulation, the results for the 90Y DPK in water were compared with the published results of Papadimitroulas et al (Med. Phys., 2012). The results showed that the mean differences between the water DPK and the other soft-tissue DPKs range between 0.6% and 1.96% for 90Y, except for lung and bone, where the observed discrepancies are 6.3% and 12.19%, respectively. The range of DPK differences for 32P is between 1.74% for breast and 18.85% for bone. For 177Lu, the highest difference belongs to bone, at 16.91%. For the other soft tissues the smallest discrepancy is observed in kidney, at 1.68%. Conclusion: In all tissues except for lung and bone, the GATE results for the dose point kernel were comparable to the water dose point kernel, which demonstrates the appropriateness of applying the water dose point kernel instead of soft-tissue kernels in the field of nuclear medicine.

  6. SU-E-T-154: Calculation of Tissue Dose Point Kernels Using GATE Monte Carlo Simulation Toolkit to Compare with Water Dose Point Kernel

    International Nuclear Information System (INIS)

    Khazaee, M; Asl, A Kamali; Geramifar, P

    2015-01-01

    Purpose: The objective of this study was to assess the use of the water dose point kernel (DPK) instead of tissue dose point kernels in convolution algorithms. To the best of our knowledge, in providing a 3D distribution of absorbed dose from a 3D distribution of the activity, the human body is considered equivalent to water; as a result, tissue variations are not considered in patient-specific dosimetry. Methods: In this study GATE v7.0 was used to calculate tissue dose point kernels. The beta-emitter radionuclides taken into consideration in this simulation include Y-90, Lu-177 and P-32, which are commonly used in nuclear medicine. The comparison was performed for dose point kernels of adipose, bone, breast, heart, intestine, kidney, liver, lung and spleen versus the water dose point kernel. Results: In order to validate the simulation, the results for the 90Y DPK in water were compared with the published results of Papadimitroulas et al (Med. Phys., 2012). The results showed that the mean differences between the water DPK and the other soft-tissue DPKs range between 0.6% and 1.96% for 90Y, except for lung and bone, where the observed discrepancies are 6.3% and 12.19%, respectively. The range of DPK differences for 32P is between 1.74% for breast and 18.85% for bone. For 177Lu, the highest difference belongs to bone, at 16.91%. For the other soft tissues the smallest discrepancy is observed in kidney, at 1.68%. Conclusion: In all tissues except for lung and bone, the GATE results for the dose point kernel were comparable to the water dose point kernel, which demonstrates the appropriateness of applying the water dose point kernel instead of soft-tissue kernels in the field of nuclear medicine.

  7. Scientific opinion on the acute health risks related to the presence of cyanogenic glycosides in raw apricot kernels and products derived from raw apricot kernels

    DEFF Research Database (Denmark)

    Petersen, Annette

    of kernels promoted (10 and 60 kernels/day for the general population and cancer patients, respectively), exposures exceeded the ARfD 17–413 and 3–71 times in toddlers and adults, respectively. The estimated maximum quantity of apricot kernels (or raw apricot material) that can be consumed without exceeding...

  8. Kernel Function Tuning for Single-Layer Neural Networks

    Czech Academy of Sciences Publication Activity Database

    Vidnerová, Petra; Neruda, Roman

    -, accepted 28.11. 2017 (2018) ISSN 2278-0149 R&D Projects: GA ČR GA15-18108S Institutional support: RVO:67985807 Keywords : single-layer neural networks * kernel methods * kernel function * optimisation Subject RIV: IN - Informatics, Computer Science http://www.ijmerr.com/

  9. Generalización del método de correlaciones canónicas aplicado a un problema de economía (Generalization of the canonical correlation method applied to an economy problem)

    Directory of Open Access Journals (Sweden)

    Jorge Luis Azor Hernández

    2018-03-01

    Full Text Available Canonical Correlation Analysis (CCA), developed by Hotelling between 1935 and 1936, is a method used to study the relationships between two sets of variables. In 1968, Carroll introduced a generalization of this method that allows more than two datasets to be analyzed simultaneously. Another extension of CCA is Functional Canonical Correlation Analysis, which studies the relationships between two sets of variables when they evolve over time. In this article, these methods are used to analyze the relationship between sales and the number of employees in a problem from the Mexican economy.

  10. Local coding based matching kernel method for image classification.

    Directory of Open Access Journals (Sweden)

    Yan Song

    Full Text Available This paper mainly focuses on how to effectively and efficiently measure visual similarity for local feature based representation. Among existing methods, metrics based on Bag of Visual Word (BoV techniques are efficient and conceptually simple, at the expense of effectiveness. By contrast, kernel based metrics are more effective, but at the cost of greater computational complexity and increased storage requirements. We show that a unified visual matching framework can be developed to encompass both BoV and kernel based metrics, in which local kernel plays an important role between feature pairs or between features and their reconstruction. Generally, local kernels are defined using Euclidean distance or its derivatives, based either explicitly or implicitly on an assumption of Gaussian noise. However, local features such as SIFT and HoG often follow a heavy-tailed distribution which tends to undermine the motivation behind Euclidean metrics. Motivated by recent advances in feature coding techniques, a novel efficient local coding based matching kernel (LCMK method is proposed. This exploits the manifold structures in Hilbert space derived from local kernels. The proposed method combines advantages of both BoV and kernel based metrics, and achieves a linear computational complexity. This enables efficient and scalable visual matching to be performed on large scale image sets. To evaluate the effectiveness of the proposed LCMK method, we conduct extensive experiments with widely used benchmark datasets, including 15-Scenes, Caltech101/256, PASCAL VOC 2007 and 2011 datasets. Experimental results confirm the effectiveness of the relatively efficient LCMK method.

  11. Non‐Canonical Replication Initiation: You’re Fired!

    Directory of Open Access Journals (Sweden)

    Bazilė Ravoitytė

    2017-01-01

    Full Text Available The division of prokaryotic and eukaryotic cells produces two cells that inherit a perfect copy of the genetic material originally derived from the mother cell. The initiation of canonical DNA replication must be coordinated with the cell cycle to ensure the accuracy of genome duplication. Controlled replication initiation depends on a complex interplay of cis-acting DNA sequences, the so-called origins of replication (ori), with trans-acting factors involved in the onset of DNA synthesis. The interplay of cis-acting elements and trans-acting factors ensures that cells initiate replication at sequence-specific sites only once, and in a timely order, to avoid chromosomal endoreplication. However, chromosome breakage and excessive RNA:DNA hybrid formation can cause break-induced replication (BIR) or transcription-initiated replication (TIR), respectively. These non-canonical replication events are expected to affect eukaryotic genome function and maintenance, and could be important for genome evolution and disease development. In this review, we describe the difference between canonical and non-canonical DNA replication, and focus on mechanistic differences and common features between BIR and TIR. Finally, we discuss open issues on the factors and molecular mechanisms involved in TIR.

  12. Multivariate realised kernels

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole; Hansen, Peter Reinhard; Lunde, Asger

    We propose a multivariate realised kernel to estimate the ex-post covariation of log-prices. We show this new consistent estimator is guaranteed to be positive semi-definite and is robust to measurement noise of certain types and can also handle non-synchronous trading. It is the first estimator...

  13. Process for producing metal oxide kernels and kernels so obtained

    International Nuclear Information System (INIS)

    Lelievre, Bernard; Feugier, Andre.

    1974-01-01

    The process described is for producing fissile or fertile metal oxide kernels used in the fabrication of fuels for high temperature nuclear reactors. This process consists of adding, to an aqueous solution of at least one metallic salt (particularly actinide nitrates), at least one chemical compound capable of releasing ammonia, and then dispersing the solution thus obtained drop by drop into a hot organic phase to gel the drops and transform them into solid particles. These particles are then washed, dried and treated to turn them into oxide kernels. The organic phase used for the gelling reaction is a mixture of two organic liquids, one acting as a solvent and the other being a product capable of extracting the anions from the metallic salt of the drop at the time of gelling. Preferably an amine is used as the extracting product. Additionally, an alcohol that causes a partial dehydration of the drops can be employed as the solvent, thus helping to increase the resistance of the particles. [fr]

  14. Ideal Gas Resonance Scattering Kernel Routine for the NJOY Code

    International Nuclear Information System (INIS)

    Rothenstein, W.

    1999-01-01

    In a recent publication an expression for the temperature-dependent double-differential ideal gas scattering kernel is derived for the case of scattering cross sections that are energy dependent. Some tabulations and graphical representations of the characteristics of these kernels are presented in Ref. 2. They demonstrate the increased probability that neutron scattering by a heavy nuclide near one of its pronounced resonances will bring the neutron energy nearer to the resonance peak. This enhances upscattering when a neutron with energy just below that of the resonance peak collides with such a nuclide. A routine for using the new kernel has now been introduced into the NJOY code. Here, its principal features are described, followed by comparisons between scattering data obtained by the new kernel and by the standard ideal gas kernel, when such comparisons are meaningful (i.e., for constant values of the scattering cross section at 0 K). The new ideal gas kernel for a variable 0 K cross section σ_s^0(E) leads to the correct Doppler-broadened cross section σ_s^T(E) at temperature T.

  15. metaCCA: summary statistics-based multivariate meta-analysis of genome-wide association studies using canonical correlation analysis.

    Science.gov (United States)

    Cichonska, Anna; Rousu, Juho; Marttinen, Pekka; Kangas, Antti J; Soininen, Pasi; Lehtimäki, Terho; Raitakari, Olli T; Järvelin, Marjo-Riitta; Salomaa, Veikko; Ala-Korpela, Mika; Ripatti, Samuli; Pirinen, Matti

    2016-07-01

    A dominant approach to genetic association studies is to perform univariate tests between genotype-phenotype pairs. However, analyzing related traits together increases statistical power, and certain complex associations become detectable only when several variants are tested jointly. Currently, modest sample sizes of individual cohorts and restricted availability of individual-level genotype-phenotype data across the cohorts limit the conducting of multivariate tests. We introduce metaCCA, a computational framework for summary statistics-based analysis of a single study or multiple studies that allows multivariate representation of both genotype and phenotype. It extends the statistical technique of canonical correlation analysis to the setting where original individual-level records are not available, and employs a covariance shrinkage algorithm to achieve robustness. Multivariate meta-analysis of two Finnish studies of nuclear magnetic resonance metabolomics by metaCCA, using standard univariate output from the program SNPTEST, shows an excellent agreement with the pooled individual-level analysis of the original data. Motivated by strong multivariate signals in the lipid genes tested, we envision that multivariate association testing using metaCCA has great potential to provide novel insights from already published summary statistics from high-throughput phenotyping technologies. Code is available at https://github.com/aalto-ics-kepaco. Contact: anna.cichonska@helsinki.fi or matti.pirinen@helsinki.fi. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press.
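
    The core linear-algebra step behind covariance-based CCA, which summary-statistics approaches such as metaCCA build on, is sketched below: canonical correlations are the singular values of the whitened cross-covariance. The shrinkage and summary-statistic machinery of metaCCA itself is not shown, and the simulated data are assumptions used only to build the covariance matrix.

```python
# Canonical correlations computed from covariance matrices only; no individual-
# level data enter the final step (they are simulated here just to build S).
import numpy as np

def inv_sqrt_spd(A):
    # Inverse square root of a symmetric positive-definite matrix.
    w, V = np.linalg.eigh(A)
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

rng = np.random.default_rng(4)
n, p, q = 500, 5, 4
Z = rng.normal(size=(n, 2))                       # two shared latent factors
X = Z @ rng.normal(size=(2, p)) + 0.5 * rng.normal(size=(n, p))
Y = Z @ rng.normal(size=(2, q)) + 0.5 * rng.normal(size=(n, q))

S = np.cov(np.hstack([X, Y]), rowvar=False)       # joint covariance "summary"
Sxx, Syy, Sxy = S[:p, :p], S[p:, p:], S[:p, p:]

# Canonical correlations = singular values of Sxx^(-1/2) Sxy Syy^(-1/2).
M = inv_sqrt_spd(Sxx) @ Sxy @ inv_sqrt_spd(Syy)
print(np.round(np.linalg.svd(M, compute_uv=False), 3))   # two large values expected
```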

  16. Geodesic exponential kernels: When Curvature and Linearity Conflict

    DEFF Research Database (Denmark)

    Feragen, Aase; Lauze, François; Hauberg, Søren

    2015-01-01

    manifold, the geodesic Gaussian kernel is only positive definite if the Riemannian manifold is Euclidean. This implies that any attempt to design geodesic Gaussian kernels on curved Riemannian manifolds is futile. However, we show that for spaces with conditionally negative definite distances the geodesic...

  17. Real time kernel performance monitoring with SystemTap

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    SystemTap is a dynamic method of monitoring and tracing the operation of a running Linux kernel. In this talk I will present a few practical use cases where SystemTap allowed me to turn otherwise complex userland monitoring tasks into simple kernel probes.

  18. Comparative Analysis of Kernel Methods for Statistical Shape Learning

    National Research Council Canada - National Science Library

    Rathi, Yogesh; Dambreville, Samuel; Tannenbaum, Allen

    2006-01-01

    .... In this work, we perform a comparative analysis of shape learning techniques such as linear PCA, kernel PCA, locally linear embedding and propose a new method, kernelized locally linear embedding...

  19. Genome-wide association study identifies candidate genes for starch content regulation in maize kernels

    Directory of Open Access Journals (Sweden)

    Na Liu

    2016-07-01

    Full Text Available Kernel starch content is an important trait in maize (Zea mays L.), as it accounts for 65% to 75% of the dry kernel weight and positively correlates with seed yield. A number of starch synthesis-related genes have been identified in maize in recent years. However, many loci underlying variation in starch content among maize inbred lines still remain to be identified. The current study is a genome-wide association study that used a set of 263 maize inbred lines. In this panel, the average kernel starch content was 66.99%, ranging from 60.60% to 71.58% over the three study years. These inbred lines were genotyped with the SNP50 BeadChip maize array, which comprises 56,110 evenly spaced, random SNPs. Population structure was controlled by a mixed linear model (MLM) as implemented in the software package TASSEL. After the statistical analyses, four SNPs were identified as significantly associated with starch content (P ≤ 0.0001), one each located on chromosomes 1 and 5 and two on chromosome 2. Furthermore, 77 candidate genes associated with starch synthesis were found within the 100-kb intervals containing these four QTLs, and four highly associated genes were within 20-kb intervals of the associated SNPs. Among the four genes, Glucose-1-phosphate adenylyltransferase (APS1; Gene ID GRMZM2G163437) is known as an important regulator of kernel starch content. The identified SNPs, QTLs, and candidate genes may not only be readily used for germplasm improvement by marker-assisted selection in breeding, but can also elucidate the genetic basis of starch content. Further studies on these identified candidate genes may help determine the molecular mechanisms regulating kernel starch content in maize and other important cereal crops.

  20. Pressure Prediction of Coal Slurry Transportation Pipeline Based on Particle Swarm Optimization Kernel Function Extreme Learning Machine

    Directory of Open Access Journals (Sweden)

    Xue-cun Yang

    2015-01-01

    Full Text Available For the coal slurry pipeline blockage prediction problem, analysis of the actual operating scenario shows that pressure prediction at each measuring point is the prerequisite for pipeline blockage prediction. The kernel function of the support vector machine is introduced into the extreme learning machine, the parameters are optimized by a particle swarm algorithm, and a blockage prediction method based on a particle swarm optimized kernel function extreme learning machine (PSOKELM) is put forward. Actual test data from the HuangLing coal gangue power plant are used for simulation experiments and compared with a support vector machine prediction model optimized by the particle swarm algorithm (PSOSVM) and a kernel function extreme learning machine prediction model (KELM). The results prove that the mean square error (MSE) for the prediction model based on PSOKELM is 0.0038 and the correlation coefficient is 0.9955, which is superior to the PSOSVM-based model in speed and accuracy and superior to the KELM model in accuracy.
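
    A minimal kernel extreme learning machine regressor in the spirit of the abstract is sketched below with an RBF kernel; for brevity the particle swarm optimisation of the kernel and regularisation parameters is replaced by a plain grid search, and the data are synthetic assumptions rather than pipeline pressure measurements.

```python
# Kernel extreme learning machine (KELM) regression sketch: solve
# beta = (I/C + K)^-1 y on the training kernel, predict with k(x, X_train) @ beta.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

class KELMRegressor:
    def __init__(self, gamma=1.0, C=100.0):
        self.gamma, self.C = gamma, C

    def fit(self, X, y):
        self.X_train = X
        K = rbf_kernel(X, X, gamma=self.gamma)
        self.beta = np.linalg.solve(np.eye(len(X)) / self.C + K, y)
        return self

    def predict(self, X):
        return rbf_kernel(X, self.X_train, gamma=self.gamma) @ self.beta

rng = np.random.default_rng(5)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)

# Plain grid search over (gamma, C) on a hold-out split (PSO would tune these instead).
best = min(((g, c) for g in (0.1, 1.0, 10.0) for c in (1.0, 100.0)),
           key=lambda gc: np.mean((KELMRegressor(*gc).fit(X[:150], y[:150])
                                   .predict(X[150:]) - y[150:]) ** 2))
model = KELMRegressor(*best).fit(X, y)
print("selected (gamma, C):", best, " training MSE:", np.mean((model.predict(X) - y) ** 2))
```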

  1. Semi-supervised learning for ordinal Kernel Discriminant Analysis.

    Science.gov (United States)

    Pérez-Ortiz, M; Gutiérrez, P A; Carbonero-Ruz, M; Hervás-Martínez, C

    2016-12-01

    Ordinal classification considers those classification problems where the labels of the variable to predict follow a given order. Naturally, labelled data are scarce or difficult to obtain in this type of problem because, in many cases, ordinal labels are given by a user or expert (e.g. in recommendation systems). Firstly, this paper develops a new strategy for ordinal classification where both labelled and unlabelled data are used in the model construction step (a scheme referred to as semi-supervised learning). More specifically, the ordinal version of kernel discriminant learning is extended for this setting by considering the neighbourhood information of unlabelled data, which is proposed to be computed in the feature space induced by the kernel function. Secondly, a new method for semi-supervised kernel learning is devised in the context of ordinal classification, which is combined with our developed classification strategy to optimise the kernel parameters. The experiments conducted compare 6 different approaches for semi-supervised learning in the context of ordinal classification on a battery of 30 datasets, showing (1) the good synergy of the ordinal version of discriminant analysis and the use of unlabelled data and (2) the advantage of computing distances in the feature space induced by the kernel function. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Interrelations between different canonical descriptions of dissipative systems

    International Nuclear Information System (INIS)

    Schuch, D; Guerrero, J; López-Ruiz, F F; Aldaya, V

    2015-01-01

    There are many approaches for the description of dissipative systems coupled to some kind of environment. This environment can be described in different ways; only effective models are being considered here. In the Bateman model, the environment is represented by one additional degree of freedom and the corresponding momentum. In two other canonical approaches, no environmental degree of freedom appears explicitly, but the canonical variables are connected with the physical ones via non-canonical transformations. The link between the Bateman approach and those without additional variables is achieved via comparison with a canonical approach using expanding coordinates, as, in this case, both Hamiltonians are constants of motion. This leads to constraints that allow for the elimination of the additional degree of freedom in the Bateman approach. These constraints are not unique. Several choices are studied explicitly, and the consequences for the physical interpretation of the additional variable in the Bateman model are discussed. (paper)

  3. Interrelations between different canonical descriptions of dissipative systems

    Science.gov (United States)

    Schuch, D.; Guerrero, J.; López-Ruiz, F. F.; Aldaya, V.

    2015-04-01

    There are many approaches for the description of dissipative systems coupled to some kind of environment. This environment can be described in different ways; only effective models are being considered here. In the Bateman model, the environment is represented by one additional degree of freedom and the corresponding momentum. In two other canonical approaches, no environmental degree of freedom appears explicitly, but the canonical variables are connected with the physical ones via non-canonical transformations. The link between the Bateman approach and those without additional variables is achieved via comparison with a canonical approach using expanding coordinates, as, in this case, both Hamiltonians are constants of motion. This leads to constraints that allow for the elimination of the additional degree of freedom in the Bateman approach. These constraints are not unique. Several choices are studied explicitly, and the consequences for the physical interpretation of the additional variable in the Bateman model are discussed.

  4. Bayesian Genomic Prediction with Genotype × Environment Interaction Kernel Models

    Science.gov (United States)

    Cuevas, Jaime; Crossa, José; Montesinos-López, Osval A.; Burgueño, Juan; Pérez-Rodríguez, Paulino; de los Campos, Gustavo

    2016-01-01

    The phenomenon of genotype × environment (G × E) interaction in plant breeding decreases selection accuracy, thereby negatively affecting genetic gains. Several genomic prediction models incorporating G × E have been recently developed and used in genomic selection of plant breeding programs. Genomic prediction models for assessing multi-environment G × E interaction are extensions of a single-environment model, and have advantages and limitations. In this study, we propose two multi-environment Bayesian genomic models: the first model considers genetic effects (u) that can be assessed by the Kronecker product of variance–covariance matrices of genetic correlations between environments and genomic kernels through markers under two kernel methods, linear (genomic best linear unbiased predictors, GBLUP) and Gaussian (Gaussian kernel, GK). The other model has the same genetic component as the first model (u) plus an extra component, f, that captures random effects between environments that were not captured by the random effects u. We used five CIMMYT data sets (one maize and four wheat) that were previously used in different studies. Results show that models with G × E always have better prediction ability than single-environment models, and the higher prediction ability of multi-environment models with u and f over the multi-environment model with only u occurred 85% of the time with GBLUP and 45% of the time with GK across the five data sets. The latter result indicated that including the random effect f is still beneficial for increasing prediction ability after adjusting by the random effect u. PMID:27793970
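
    A hedged sketch of the covariance structure described for the genetic effects u is given below: the Kronecker product of an assumed environment correlation matrix with a genomic kernel built from markers, using either a linear (GBLUP-style) or a Gaussian kernel. The marker matrix, environment correlations and bandwidth choice are illustrative assumptions, not the CIMMYT data or the Bayesian fitting of the paper.

```python
# Build genomic kernels from markers and form the Kronecker-product covariance of
# the genetic effects u across environments (structure only; no model fitting).
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(6)
n_lines, n_markers, n_envs = 100, 500, 3
M = rng.integers(0, 3, size=(n_lines, n_markers)).astype(float)   # marker matrix (0/1/2)

Mc = M - M.mean(axis=0)
K_gblup = Mc @ Mc.T / n_markers                 # linear (GBLUP) genomic kernel

D = squareform(pdist(M, "sqeuclidean"))
K_gauss = np.exp(-D / np.median(D))             # Gaussian genomic kernel (median-distance heuristic)

E = np.full((n_envs, n_envs), 0.5) + 0.5 * np.eye(n_envs)   # assumed environment correlations

cov_u_linear = np.kron(E, K_gblup)              # covariance of u over all env-line combinations
cov_u_gauss = np.kron(E, K_gauss)
print(cov_u_linear.shape)                       # (300, 300)
```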

  5. Bayesian Genomic Prediction with Genotype × Environment Interaction Kernel Models

    Directory of Open Access Journals (Sweden)

    Jaime Cuevas

    2017-01-01

    Full Text Available The phenomenon of genotype × environment (G × E) interaction in plant breeding decreases selection accuracy, thereby negatively affecting genetic gains. Several genomic prediction models incorporating G × E have been recently developed and used in genomic selection of plant breeding programs. Genomic prediction models for assessing multi-environment G × E interaction are extensions of a single-environment model, and have advantages and limitations. In this study, we propose two multi-environment Bayesian genomic models: the first model considers genetic effects (u) that can be assessed by the Kronecker product of variance–covariance matrices of genetic correlations between environments and genomic kernels through markers under two kernel methods, linear (genomic best linear unbiased predictors, GBLUP) and Gaussian (Gaussian kernel, GK). The other model has the same genetic component as the first model (u) plus an extra component, f, that captures random effects between environments that were not captured by the random effects u. We used five CIMMYT data sets (one maize and four wheat) that were previously used in different studies. Results show that models with G × E always have better prediction ability than single-environment models, and the higher prediction ability of multi-environment models with u and f over the multi-environment model with only u occurred 85% of the time with GBLUP and 45% of the time with GK across the five data sets. The latter result indicated that including the random effect f is still beneficial for increasing prediction ability after adjusting by the random effect u.

  6. Assessment of vertebral artery stents using 16-slice multi-detector row CT angiography in vivo evaluation: Comparison of a medium-smooth kernel and a sharp kernel

    International Nuclear Information System (INIS)

    Yoo, Won Jong; Lim, Yeon Soo; Ahn, Kook Jin; Choi, Byung Gil; Kim, Ji Young; Kim, Sung Hoon

    2009-01-01

    Objectives: To assess the lumen visibility of extracranial vertebral artery stents examined with 16-slice multi-detector row computed tomography (MDCT) angiography in vivo using a medium-smooth kernel (B30s) and a sharp kernel (B60s), and to compare these with digital subtraction angiography (DSA) after stent placement. Methods: Twenty stents from 20 patients (14 men, 6 women; mean age, 62.7 ± 10.1 years) who underwent CT angiography (CTA) with 16-slice MDCT were retrospectively analyzed. In CT angiograms reconstructed with the B30s and the B60s kernels, the lumen diameters and CT attenuations of the stented vessels were measured three times by three observers, and artificial luminal narrowing (ALN) was calculated. To assess measurement reliability on CT angiograms, the intraclass correlation coefficient (ICC) was used. DSA served as the reference standard for the in-stent luminal measurements on CT angiography. The median interval between CT angiography and DSA was 1 day (range 1-10). Results: For interobserver reliability, the ICCs for the lumen diameters on CT angiograms with the B30s and the B60s were 0.90 and 0.96, respectively. The lumen diameters on CT angiograms using the B30s were consistently smaller than those on CT angiograms using the B60s (p < 0.01). The mean ALN was 37 ± 7% on CT angiograms using the B30s and 25 ± 9% on CT angiograms using the B60s. The mean CT attenuation of the in-stent lumen was 347 ± 55 HU on CT angiograms using the B30s and 295 ± 46 HU on CT angiograms using the B60s. The differences in ALN and CT attenuation within the stented vessels between CT angiograms using the B30s and the B60s were significant (p < 0.01). Conclusions: 16-slice MDCT using a sharp kernel allows good visualization of the stented vessels and is useful in the assessment of vertebral artery stent patency after stent placement.

  7. Ideal gas scattering kernel for energy dependent cross-sections

    International Nuclear Information System (INIS)

    Rothenstein, W.; Dagan, R.

    1998-01-01

    A third, and final, paper on the calculation of the joint kernel for neutron scattering by an ideal gas in thermal agitation is presented, for the case in which the scattering cross-section is energy dependent. The kernel is a function of the neutron energy after scattering and of the cosine of the scattering angle, as in the case of the ideal gas kernel for a constant bound-atom scattering cross-section. The final expression is suitable for numerical calculations.

  8. Canonic FFT flow graphs for real-valued even/odd symmetric inputs

    Science.gov (United States)

    Lao, Yingjie; Parhi, Keshab K.

    2017-12-01

    Canonic real-valued fast Fourier transform (RFFT) structures have been proposed to reduce arithmetic complexity by eliminating redundancies. In a canonic N-point RFFT, the number of signal values at each stage is canonic with respect to the number of input samples, i.e., N. The major advantage of canonic RFFTs is that they require the fewest butterfly operations and only real datapaths when mapped to architectures. In this paper, we consider FFT computations whose inputs are not only real but also even/odd symmetric, which lead to the well-known discrete cosine and sine transforms (DCTs and DSTs). Novel algorithms for generating the flow graphs of canonic RFFTs with even/odd symmetric inputs are proposed. It is shown that the proposed algorithms lead to canonic structures with N/2 + 1 signal values at each stage for an N-point real even symmetric FFT (REFFT) and N/2 - 1 signal values at each stage for an N-point real odd symmetric FFT (ROFFT). In order to remove butterfly operations, several twiddle factor transformations are proposed in this paper. We also discuss the design of canonic REFFTs for any composite length. The performance of the canonic REFFT/ROFFT is also discussed. It is shown that the flow graph of the canonic REFFT/ROFFT has fewer interconnections, fewer butterfly operations, and fewer twiddle factor operations than prior works.
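    A quick numerical check (not from the paper) of the property the canonic REFFT exploits: the DFT of a real, even-symmetric sequence is purely real and collapses to a cosine sum over roughly N/2 + 1 unique samples, which is why so few signal values need to be carried per stage. The sequence length and random data below are arbitrary.

```python
import numpy as np

N = 16
rng = np.random.default_rng(1)

# build a real, even-symmetric sequence: x[n] == x[(N - n) % N]
half = rng.standard_normal(N // 2 + 1)
x = np.empty(N)
x[: N // 2 + 1] = half
x[N // 2 + 1:] = half[1: N // 2][::-1]

X = np.fft.fft(x)
assert np.allclose(X.imag, 0.0, atol=1e-12)   # spectrum is purely real

# the same spectrum written as a cosine sum over the N/2 + 1 unique samples
k = np.arange(N)
X_cos = (x[0]
         + ((-1.0) ** k) * x[N // 2]
         + 2.0 * sum(x[n] * np.cos(2 * np.pi * k * n / N)
                     for n in range(1, N // 2)))
assert np.allclose(X.real, X_cos)
print("even-symmetric input -> real spectrum, reproduced by a cosine sum")
```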

  9. Parameter optimization in the regularized kernel minimum noise fraction transformation

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Vestergaard, Jacob Schack

    2012-01-01

    Based on the original, linear minimum noise fraction (MNF) transformation and kernel principal component analysis, a kernel version of the MNF transformation was recently introduced. Inspired by this, we here give a simple method for finding optimal parameters in a regularized version of kernel MNF analysis. We consider the model signal-to-noise ratio (SNR) as a function of the kernel parameters and the regularization parameter. In 2-4 steps of increasingly refined grid searches we find the parameters that maximize the model SNR. An example based on data from the DLR 3K camera system is given.
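    A minimal sketch of the coarse-to-fine grid search described above, assuming a two-parameter search over a kernel scale and a regularization parameter. The model_snr surface below is a hypothetical stand-in for the kernel-MNF model SNR, not the authors' implementation.

```python
import numpy as np

def refine_grid_search(objective, log_ranges, steps=3, points=7, zoom=0.25):
    """Coarse-to-fine grid search: evaluate `objective` on a log-spaced grid,
    then repeatedly re-grid around the best point with a narrower span."""
    ranges = [tuple(r) for r in log_ranges]          # (log10_lo, log10_hi) per parameter
    best, best_val = None, -np.inf
    for _ in range(steps):
        axes = [np.logspace(lo, hi, points) for lo, hi in ranges]
        for params in np.stack(np.meshgrid(*axes), axis=-1).reshape(-1, len(axes)):
            val = objective(*params)
            if val > best_val:
                best, best_val = params, val
        # shrink each range around the current optimum for the next pass
        ranges = [(np.log10(p) - zoom * (hi - lo), np.log10(p) + zoom * (hi - lo))
                  for p, (lo, hi) in zip(best, ranges)]
    return best, best_val

# `model_snr` is a stand-in for the kernel-MNF model SNR; here a dummy surface
def model_snr(sigma, reg):
    return -(np.log10(sigma) - 0.5) ** 2 - (np.log10(reg) + 3.0) ** 2

params, snr = refine_grid_search(model_snr, [(-2, 3), (-6, 0)], steps=3)
print("best (sigma, reg):", params, "model SNR:", snr)
```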

  10. On flame kernel formation and propagation in premixed gases

    Energy Technology Data Exchange (ETDEWEB)

    Eisazadeh-Far, Kian; Metghalchi, Hameed [Northeastern University, Mechanical and Industrial Engineering Department, Boston, MA 02115 (United States); Parsinejad, Farzan [Chevron Oronite Company LLC, Richmond, CA 94801 (United States); Keck, James C. [Massachusetts Institute of Technology, Cambridge, MA 02139 (United States)

    2010-12-15

    Flame kernel formation and propagation in premixed gases have been studied experimentally and theoretically. The experiments have been carried out at constant pressure and temperature in a constant volume vessel located in a high speed shadowgraph system. The formation and propagation of the hot plasma kernel has been simulated for inert gas mixtures using a thermodynamic model. The effects of various parameters, including the discharge energy, radiation losses, initial temperature and initial volume of the plasma, have been studied in detail. The experiments have been extended to flame kernel formation and propagation in methane/air mixtures. The effect of energy terms, including spark energy, chemical energy and energy losses, on flame kernel formation and propagation has been investigated. The inputs for this model are the initial conditions of the mixture and experimental data for flame radii. It is concluded that these are the most important parameters affecting plasma kernel growth. The results for laminar burning speeds have been compared with previously published results and are in good agreement. (author)

  11. Insights from Classifying Visual Concepts with Multiple Kernel Learning

    Science.gov (United States)

    Binder, Alexander; Nakajima, Shinichi; Kloft, Marius; Müller, Christina; Samek, Wojciech; Brefeld, Ulf; Müller, Klaus-Robert; Kawanabe, Motoaki

    2012-01-01

    Combining information from various image features has become a standard technique in concept recognition tasks. However, the optimal way of fusing the resulting kernel functions is usually unknown in practical applications. Multiple kernel learning (MKL) techniques allow one to determine an optimal linear combination of such similarity matrices. Classical approaches to MKL promote sparse mixtures. Unfortunately, 1-norm regularized MKL variants are often observed to be outperformed by an unweighted sum kernel. The main contributions of this paper are the following: we apply a recently developed non-sparse MKL variant to state-of-the-art concept recognition tasks from the application domain of computer vision. We provide insights into the benefits and limits of non-sparse MKL and compare it against its direct competitors, the sum-kernel SVM and sparse MKL. We report empirical results for the PASCAL VOC 2009 Classification and ImageCLEF2010 Photo Annotation challenge data sets. Data sets (kernel matrices) as well as further information are available at http://doc.ml.tu-berlin.de/image_mkl/ (Accessed 2012 Jun 25). PMID:22936970
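    As a small illustration of the baseline discussed above (the unweighted sum kernel fed to an SVM), not the paper's non-sparse MKL solver: several base kernels are averaged with fixed weights and passed to scikit-learn's SVC as a precomputed kernel. The synthetic data set and the three base kernels are arbitrary choices for the example.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import linear_kernel, polynomial_kernel, rbf_kernel
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# several base kernels playing the role of different image features
def base_kernels(A, B):
    return [linear_kernel(A, B), polynomial_kernel(A, B, degree=2), rbf_kernel(A, B, gamma=0.05)]

# unweighted sum kernel: the strong baseline the paper compares MKL against
weights = np.ones(3) / 3.0
K_tr = sum(w * K for w, K in zip(weights, base_kernels(X_tr, X_tr)))
K_te = sum(w * K for w, K in zip(weights, base_kernels(X_te, X_tr)))

clf = SVC(kernel="precomputed", C=1.0).fit(K_tr, y_tr)
print("sum-kernel SVM accuracy:", clf.score(K_te, y_te))
```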

  12. Dictionary-Based Tensor Canonical Polyadic Decomposition

    Science.gov (United States)

    Cohen, Jeremy Emile; Gillis, Nicolas

    2018-04-01

    To ensure the interpretability of extracted sources in tensor decomposition, we introduce in this paper a dictionary-based tensor canonical polyadic decomposition which enforces one factor to belong exactly to a known dictionary. A new formulation of sparse coding is proposed which enables dictionary-based canonical polyadic decomposition of high-dimensional tensors. The benefits of using a dictionary in tensor decomposition models are explored both in terms of parameter identifiability and estimation accuracy. The performance of the proposed algorithms is evaluated on the decomposition of simulated data and the unmixing of hyperspectral images.

  13. Covariant canonical quantization of fields and Bohmian mechanics

    International Nuclear Information System (INIS)

    Nikolic, H.

    2005-01-01

    We propose a manifestly covariant canonical method of field quantization based on the classical De Donder-Weyl covariant canonical formulation of field theory. Owing to covariance, the space and time arguments of fields are treated on an equal footing. To achieve both covariance and consistency with standard non-covariant canonical quantization of fields in Minkowski spacetime, it is necessary to adopt a covariant Bohmian formulation of quantum field theory. A preferred foliation of spacetime emerges dynamically owing to a purely quantum effect. The application to a simple time-reparametrization invariant system and quantum gravity is discussed and compared with the conventional non-covariant Wheeler-DeWitt approach. (orig.)

  14. Canonical formalism for relativistic dynamics

    International Nuclear Information System (INIS)

    Penafiel-Nava, V.M.

    1982-01-01

    The possibility of a canonical formalism appropriate for a dynamical theory of isolated relativistic multiparticle systems involving scalar interactions is studied. It is shown that a single time-parameter structure satisfying the requirements of Poincare invariance and simultaneity of the constituents (global transversality) cannot be derived from a homogeneous Lagrangian. The dynamics is deduced initially from a non-homogeneous but singular Lagrangian designed to accommodate the global transversality constraints with the equal-time plane associated with the total momentum of the system. An equivalent standard Lagrangian is used to generalize the parametrization procedure, which is referred to an arbitrary geodesic in Minkowski space. The equations of motion and the definition of the center of momentum are invariant with respect to the choice of geodesic, and the entire formalism becomes separable. In the original 8N-dimensional phase space, the symmetries of the Lagrangian give rise to a canonical realization of a fifteen-generator Lie algebra, which is projected onto the 6N-dimensional hypersurface of dynamical motions. The time-component of the total momentum is thus reduced to a neutral element and the canonical Hamiltonian survives as the only generator of time-translations, so that the no-interaction theorem becomes inapplicable.

  15. LCPT: a program for finding linear canonical transformations

    International Nuclear Information System (INIS)

    Char, B.W.; McNamara, B.

    1979-01-01

    This article describes a MACSYMA program to compute symbolically a canonical linear transformation between coordinate systems. The difficulties encountered in implementing this small canonical physics problem are also discussed, along with the implications such difficulties have for widespread MACSYMA usage by the community of computational/theoretical physicists.

  16. A method for manufacturing kernels of metallic oxides and the thus obtained kernels

    International Nuclear Information System (INIS)

    Lelievre Bernard; Feugier, Andre.

    1973-01-01

    A method is described for manufacturing fissile or fertile metal oxide kernels, consisting in adding at least one chemical compound capable of releasing ammonia to an aqueous solution of actinide nitrates, dispersing the solution thus obtained dropwise in a hot organic phase so as to gelify the drops and transform them into solid particles, then washing, drying and treating said particles so as to transform them into oxide kernels. Such a method is characterized in that the organic phase used in the gel-forming reactions comprises a mixture of two organic liquids, one of which acts as a solvent, whereas the other is a product capable of extracting the metal-salt anions from the drops while the gel-forming reaction is taking place. This can be applied to the so-called high-temperature nuclear reactors [fr

  17. New Fukui, dual and hyper-dual kernels as bond reactivity descriptors.

    Science.gov (United States)

    Franco-Pérez, Marco; Polanco-Ramírez, Carlos-A; Ayers, Paul W; Gázquez, José L; Vela, Alberto

    2017-06-21

    We define three new linear response indices with promising applications for bond reactivity using the mathematical framework of τ-CRT (finite temperature chemical reactivity theory). The τ-Fukui kernel is defined as the ratio between the fluctuations of the average electron density at two different points in space and the fluctuations in the average electron number, and is designed to integrate to the finite-temperature definition of the electronic Fukui function. When this kernel is condensed, it can be interpreted as a site-reactivity descriptor of the boundary region between two atoms. The τ-dual kernel corresponds to the first-order response of the Fukui kernel and is designed to integrate to the finite-temperature definition of the dual descriptor; it indicates the ambiphilic reactivity of a specific bond and enriches the traditional dual descriptor by allowing one to distinguish between the electron-accepting and electron-donating processes. Finally, the τ-hyper-dual kernel is defined as the second-order derivative of the Fukui kernel and is proposed as a measure of the strength of ambiphilic bonding interactions. Although these quantities have not been proposed before, our results for the τ-Fukui kernel and the τ-dual kernel can be derived in the zero-temperature formulation of chemical reactivity theory with, among other things, the widely used parabolic interpolation model.

  18. Optimal kernel shape and bandwidth for atomistic support of continuum stress

    International Nuclear Information System (INIS)

    Ulz, Manfred H; Moran, Sean J

    2013-01-01

    The treatment of atomistic scale interactions via molecular dynamics simulations has recently found favour for multiscale modelling within engineering. The estimation of stress at a continuum point on the atomistic scale requires a pre-defined kernel function. This kernel function derives the stress at a continuum point by averaging the contribution from atoms within a region surrounding the continuum point. This averaging volume, and therefore the associated stress at a continuum point, is highly dependent on the bandwidth and shape of the kernel. In this paper we propose an effective and entirely data-driven strategy for simultaneously computing the optimal shape and bandwidth for the kernel. We thoroughly evaluate our proposed approach on copper using three classical elasticity problems. Our evaluation yields three key findings: firstly, our technique can provide a physically meaningful estimation of kernel bandwidth; secondly, we show that a uniform kernel is preferred, thereby justifying the default selection of this kernel shape in future work; and thirdly, we can reliably estimate both of these attributes in a data-driven manner, obtaining values that lead to an accurate estimation of the stress at a continuum point. (paper)
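    A minimal sketch of the kernel averaging described above, assuming a uniform (top-hat) kernel and per-atom virial stresses already normalized by atomic volume. The function name, the toy snapshot, and the bandwidth value are illustrative and do not reproduce the authors' data-driven selection procedure.

```python
import numpy as np

def continuum_stress(point, positions, atom_stresses, bandwidth):
    """Estimate the continuum stress at `point` by averaging per-atom stress
    tensors with a uniform (top-hat) kernel of radius `bandwidth`.

    positions:     (n_atoms, 3) coordinates
    atom_stresses: (n_atoms, 3, 3) per-atom virial stress contributions
                   (already divided by the per-atom volume)
    """
    r = np.linalg.norm(positions - point, axis=1)
    inside = r <= bandwidth
    if not inside.any():
        raise ValueError("no atoms inside the kernel support")
    # uniform kernel: every atom inside the support gets equal weight
    return atom_stresses[inside].mean(axis=0)

# toy data standing in for a molecular-dynamics snapshot
rng = np.random.default_rng(2)
pos = rng.uniform(0.0, 20.0, size=(5000, 3))
sig = rng.standard_normal((5000, 3, 3)) * 0.01 + np.eye(3)  # roughly hydrostatic
print(continuum_stress(np.array([10.0, 10.0, 10.0]), pos, sig, bandwidth=3.0))
```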

  19. Dickkopf-1 induced apoptosis in human placental choriocarcinoma is independent of canonical Wnt signaling

    International Nuclear Information System (INIS)

    Peng Sha; Miao Chenglin; Li Jing; Fan Xiujun; Cao Yujing; Duan Enkui

    2006-01-01

    Placental choriocarcinoma, a reproductive system carcinoma in women, has an occurrence frequency of about 0.81% in China and a lethality of over 90%, owing to its indistinct pathogenesis and the absence of efficient therapeutic treatment. In the present study, using immunostaining and reverse transcription PCR, we report that Dickkopf-1 (Dkk-1) is prominently expressed in human cytotrophoblast (CTB) cells but absent in the human placental choriocarcinoma cell lines JAR and JEG3, implicating an unknown correlation between Dkk-1 and the carcinogenesis of placental choriocarcinoma. Further, through exogenous introduction of Dkk-1, we found repressed proliferation in JAR and JEG3 and induced apoptosis in JAR, revealing a significant tumor-suppression effect of Dkk-1 in placental choriocarcinoma. Moreover, we found that this function of Dkk-1 is achieved through c-Jun N-terminal kinase (JNK), whereas the canonical Wnt pathway may not play a major role. This finding is not in harmony with the previous functional understanding of Dkk-1 as a canonical Wnt signaling antagonist. Together, our data indicate a possible correlation between Dkk-1 and human placental choriocarcinoma and suggest potential applications of Dkk-1 in the treatment of human placental choriocarcinomas.

  20. A multi-resolution approach to heat kernels on discrete surfaces

    KAUST Repository

    Vaxman, Amir

    2010-07-26

    Studying the behavior of the heat diffusion process on a manifold is emerging as an important tool for analyzing the geometry of the manifold. Unfortunately, the high complexity of the computation of the heat kernel - the key to the diffusion process - limits this type of analysis to 3D models of modest resolution. We show how to use the unique properties of the heat kernel of a discrete two dimensional manifold to overcome these limitations. Combining a multi-resolution approach with a novel approximation method for the heat kernel at short times results in an efficient and robust algorithm for computing the heat kernels of detailed models. We show experimentally that our method can achieve good approximations in a fraction of the time required by traditional algorithms. Finally, we demonstrate how these heat kernels can be used to improve a diffusion-based feature extraction algorithm. © 2010 ACM.
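    For orientation only (this is the conventional approach whose cost the paper's multi-resolution method reduces): the heat kernel of a discrete Laplacian can be approximated from a truncated eigendecomposition, H_t ≈ Φ exp(−tΛ) Φᵀ. The path-graph Laplacian below is a stand-in for a mesh Laplacian.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

def heat_kernel(L, t, k=50):
    """Approximate the heat kernel H_t = exp(-t L) of a sparse graph/mesh
    Laplacian L using its k smallest eigenpairs: H_t ~= Phi exp(-t Lambda) Phi^T."""
    # shift-invert around a small negative sigma to get the smallest eigenvalues
    vals, vecs = eigsh(L, k=k, sigma=-0.01, which="LM")
    return (vecs * np.exp(-t * vals)) @ vecs.T

# toy "mesh": a path graph with n vertices (combinatorial Laplacian)
n = 200
main = np.full(n, 2.0); main[0] = main[-1] = 1.0
L = sp.diags([main, -np.ones(n - 1), -np.ones(n - 1)], [0, -1, 1], format="csc")

H = heat_kernel(L, t=0.5, k=40)
print(H.shape, H[100, 95:106].round(4))   # heat spread around vertex 100
```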

  1. Compactly Supported Basis Functions as Support Vector Kernels for Classification.

    Science.gov (United States)

    Wittek, Peter; Tan, Chew Lim

    2011-10-01

    Wavelet kernels have been introduced for both support vector regression and classification. Most of these wavelet kernels do not use the inner product of the embedding space, but use wavelets in a similar fashion to radial basis function kernels. Wavelet analysis is typically carried out on data with a temporal or spatial relation between consecutive data points. We argue that it is possible to order the features of a general data set so that consecutive features are statistically related to each other, thus enabling us to interpret the vector representation of an object as a series of equally or randomly spaced observations of a hypothetical continuous signal. By approximating the signal with compactly supported basis functions and employing the inner product of the embedding L2 space, we gain a new family of wavelet kernels. Empirical results show a clear advantage in favor of these kernels.
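    A hedged sketch of the idea described above: treat each feature vector as coefficients of compactly supported basis functions and use the L2 inner product of the reconstructed signals as the SVM kernel. Piecewise-linear hat functions on a uniform grid are used here purely for illustration; the paper's basis functions and data sets differ.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def hat_gram(n_features, spacing=1.0, n_quad=2001):
    """L2 Gram matrix of compactly supported 'hat' basis functions
    phi_i(t) = max(0, 1 - |t - i*spacing| / spacing), computed by quadrature."""
    t = np.linspace(-1.0, n_features * spacing + 1.0, n_quad)
    centers = np.arange(n_features) * spacing
    Phi = np.maximum(0.0, 1.0 - np.abs(t[None, :] - centers[:, None]) / spacing)
    dt = t[1] - t[0]
    return Phi @ Phi.T * dt          # G_ij = integral of phi_i(t) phi_j(t) dt

X, y = make_classification(n_samples=400, n_features=30, random_state=0)
G = hat_gram(X.shape[1])

def l2_basis_kernel(A, B):
    # interpret feature vectors as expansion coefficients; the kernel is the
    # L2 inner product of the reconstructed signals: k(a, b) = a^T G b
    return A @ G @ B.T

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel=l2_basis_kernel, C=1.0).fit(X_tr, y_tr)
print("accuracy:", clf.score(X_te, y_te))
```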

  2. Genome-wide Association Analysis of Kernel Weight in Hard Winter Wheat

    Science.gov (United States)

    Wheat kernel weight is an important and heritable component of wheat grain yield and a key predictor of flour extraction. Genome-wide association analysis was conducted to identify genomic regions associated with kernel weight and kernel weight environmental response in 8 trials of 299 hard winter ...

  3. A Heterogeneous Multi-core Architecture with a Hardware Kernel for Control Systems

    DEFF Research Database (Denmark)

    Li, Gang; Guan, Wei; Sierszecki, Krzysztof

    2012-01-01

    Rapid industrialisation has resulted in a demand for improved embedded control systems with features such as predictability, high processing performance and low power consumption. It is becoming difficult for a software kernel implementation on a single processor to satisfy those constraints. ... Second, a heterogeneous multi-core architecture is investigated, focusing on its performance in relation to hard real-time constraints and predictable behavior. Third, the hardware implementation of HARTEX is designated to support the heterogeneous multi-core architecture. This hardware kernel has several advantages over a similar kernel implemented in software: higher-speed processing capability, parallel computation, and separation between the kernel itself and the applications being run. A microbenchmark has been used to compare the hardware kernel with the software kernel ...

  4. Taylor-expansion Monte Carlo simulations of classical fluids in the canonical and grand canonical ensemble

    International Nuclear Information System (INIS)

    Schoen, M.

    1995-01-01

    In this article the Taylor-expansion method is introduced, by which Monte Carlo (MC) simulations in the canonical ensemble can be speeded up significantly. Substantial gains in computational speed of 20-40% over conventional implementations of the MC technique are obtained over a wide range of densities in homogeneous bulk phases. The basic philosophy behind the Taylor-expansion method is a division of the neighborhood of each atom (or molecule) into three different spatial zones. Interactions between atoms belonging to each zone are treated at different levels of computational sophistication. For example, only interactions between atoms belonging to the primary zone immediately surrounding an atom are treated explicitly before and after displacement. The change in the configurational energy contribution from secondary-zone interactions is obtained from the first-order term of a Taylor expansion of the configurational energy in terms of the displacement vector d. Interactions with atoms in the tertiary zone adjacent to the secondary zone are neglected throughout. The Taylor-expansion method is not restricted to the canonical ensemble but may be employed to enhance the computational efficiency of MC simulations in other ensembles as well. This is demonstrated for grand canonical ensemble MC simulations of an inhomogeneous fluid, which can be performed essentially on a modern personal computer.
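    For context, a bare-bones canonical-ensemble Metropolis displacement move for a Lennard-Jones fluid is sketched below; it omits the zone partitioning and Taylor-expansion update, which is precisely the refinement that yields the reported 20-40% speedup. All parameter values are illustrative reduced units.

```python
import numpy as np

rng = np.random.default_rng(3)
N, rho, T, d_max = 128, 0.6, 1.2, 0.15     # reduced LJ units (illustrative)
L = (N / rho) ** (1.0 / 3.0)
pos = rng.uniform(0.0, L, size=(N, 3))

def pair_energy(i, r_i, pos):
    """LJ energy of atom i at position r_i with all other atoms (minimum image)."""
    dr = pos - r_i
    dr -= L * np.rint(dr / L)                     # periodic boundary conditions
    r2 = (dr ** 2).sum(axis=1)
    r2[i] = np.inf                                # exclude self-interaction
    inv6 = 1.0 / r2 ** 3
    return 4.0 * np.sum(inv6 * (inv6 - 1.0))

accepted = 0
for step in range(5000):
    i = rng.integers(N)
    trial = pos[i] + rng.uniform(-d_max, d_max, size=3)
    dE = pair_energy(i, trial, pos) - pair_energy(i, pos[i], pos)
    if dE <= 0.0 or rng.random() < np.exp(-dE / T):   # Metropolis criterion, NVT
        pos[i] = trial % L
        accepted += 1
print("acceptance ratio:", accepted / 5000)
```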

  5. Canonical simulations with worldlines: An exploratory study in ϕ₂⁴ lattice field theory

    Science.gov (United States)

    Orasch, Oliver; Gattringer, Christof

    2018-01-01

    In this paper, we explore the perspectives for canonical simulations in the worldline formulation of a lattice field theory. Using the charged ϕ4 field in two dimensions as an example, we present the details of the canonical formulation based on worldlines and outline the algorithmic strategies for canonical worldline simulations. We discuss the steps for converting the data from the canonical approach to the grand canonical picture which we use for cross-checking our results. The canonical approach presented here can easily be generalized to other lattice field theories with a worldline representation.

  6. Defining the Relationship of Student Achievement Between STEM Subjects Through Canonical Correlation Analysis of 2011 Trends in International Mathematics and Science Study (TIMSS) Data

    Science.gov (United States)

    O'Neal, Melissa Jean

    Canonical correlation analysis was used to analyze data from the Trends in International Mathematics and Science Study (TIMSS) 2011 achievement databases, encompassing information from the fourth and eighth grades. Student achievement in life science/biology was correlated with achievement in mathematics and other sciences across three analytical areas: mathematics and science student performance, achievement in cognitive domains, and achievement in content domains. Strong correlations between student achievement in life science/biology and achievement in mathematics and overall science occurred for both high- and low-performing education systems. Hence, partial emphases on the inter-subject connections did not always lead to a better student learning outcome in STEM education. In addition, student achievement in life science/biology was positively correlated with achievement in mathematics and science cognitive domains; these patterns held true for correlations of life science/biology with mathematics as well as other sciences. The importance of linking student learning experiences between and within STEM domains to support high performance on TIMSS assessments was indicated by correlations of moderate strength (57 ...) between achievement in life science/biology, mathematics, and other sciences. At the eighth grade level, students who built increasing levels of cognitive complexity upon firm foundations were prepared for successful learning throughout their educational careers. The results from this investigation promote a holistic design of school learning opportunities to improve student achievement in life science/biology and other science, technology, engineering, and mathematics (STEM) subjects at the elementary and middle school levels. While the curriculum can vary from combined STEM subjects to separated mathematics or science courses, both professional learning communities (PLC) for teachers and problem-based learning (PBL) for learners can be ...
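    A minimal example of canonical correlation analysis of two blocks of scores, in the spirit of the analysis described above but on synthetic data (the TIMSS databases are not reproduced here); the block sizes and the shared latent factors are arbitrary.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

# synthetic stand-in for two blocks of achievement scores:
# X ~ life science/biology items, Y ~ mathematics/other-science items
rng = np.random.default_rng(0)
latent = rng.standard_normal((500, 2))                 # shared "ability" factors
X = latent @ rng.standard_normal((2, 6)) + 0.5 * rng.standard_normal((500, 6))
Y = latent @ rng.standard_normal((2, 8)) + 0.5 * rng.standard_normal((500, 8))

cca = CCA(n_components=2).fit(X, Y)
U, V = cca.transform(X, Y)                             # canonical variates
canonical_corrs = [np.corrcoef(U[:, i], V[:, i])[0, 1] for i in range(2)]
print("canonical correlations:", np.round(canonical_corrs, 3))
```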

  7. Generalized synthetic kernel approximation for elastic moderation of fast neutrons

    International Nuclear Information System (INIS)

    Yamamoto, Koji; Sekiya, Tamotsu; Yamamura, Yasunori.

    1975-01-01

    A method of synthetic kernel approximation is examined in some detail with a view to simplifying the treatment of the elastic moderation of fast neutrons. A sequence of unified kernels (f_N) is introduced, which is then divided into two subsequences (W_n) and (G_n) according to whether N is odd (W_n = f_(2n-1), n = 1, 2, ...) or even (G_n = f_(2n), n = 0, 1, ...). The W_1 and G_1 kernels correspond to the usual Wigner and GG kernels, respectively, and the W_n and G_n kernels for n >= 2 represent generalizations thereof. It is shown that the W_n kernel solution with a relatively small n (>= 2) is superior on the whole to the G_n kernel solution for the same index n, while both converge to the exact values with increasing n. To evaluate the collision density numerically and rapidly, a simple recurrence formula is derived. In the asymptotic region (except near resonances), this recurrence formula allows calculation with a relatively coarse mesh width whenever h_a <= 0.05 at least. For calculations in the transient lethargy region, a mesh width of order ε/10 is small enough to evaluate the approximate collision density ψ_N with an accuracy comparable to that obtained analytically. It is shown that, with the present method, an order of approximation of about n = 7 should yield a practically correct solution deviating by not more than 1% in collision density. (auth.)

  8. Canonical and non-canonical barriers facing antimiR cancer therapeutics.

    Science.gov (United States)

    Cheng, Christopher J; Saltzman, W Mark; Slack, Frank J

    2013-01-01

    Once considered genetic "oddities", microRNAs (miRNAs) are now recognized as key epigenetic regulators of numerous biological processes, including some with a causal link to the pathogenesis, maintenance, and treatment of cancer. The crux of small RNA-based therapeutics lies in the antagonism of potent cellular targets; the main shortcoming of the field in general lies in ineffective delivery. Inhibition of oncogenic miRNAs is a relatively nascent therapeutic concept, but as with predecessor RNA-based therapies, success hinges on delivery efficacy. This review describes the canonical (e.g. pharmacokinetics and clearance, cellular uptake, endosome escape, etc.) and non-canonical (e.g. spatial localization and accessibility of miRNA, technical limitations of miRNA inhibition, off-target impacts, etc.) challenges to the delivery of antisense-based anti-miRNA therapeutics (i.e. antimiRs) for the treatment of cancer. Emphasis will be placed on how the current leading antimiR platforms, ranging from naked chemically modified oligonucleotides to nanoscale delivery vehicles, are affected by and overcome these barriers. The perplexity of antimiR delivery presents both engineering and biological hurdles that must be overcome in order to capitalize on the extensive pharmacological benefits of antagonizing tumor-associated miRNAs.

  9. Effect of Palm Kernel Cake Replacement and Enzyme ...

    African Journals Online (AJOL)

    A feeding trial which lasted for twelve weeks was conducted to study the performance of finisher pigs fed five different levels of palm kernel cake replacement for maize (0%, 40%, 40%, 60%, 60%) in a maize-palm kernel cake based ration with or without enzyme supplementation. It was a completely randomized design ...

  10. Independent genetic control of maize (Zea mays L.) kernel weight determination and its phenotypic plasticity.

    Science.gov (United States)

    Alvarez Prado, Santiago; Sadras, Víctor O; Borrás, Lucas

    2014-08-01

    Maize kernel weight (KW) is associated with the duration of the grain-filling period (GFD) and the rate of kernel biomass accumulation (KGR). It is also related to the dynamics of water and hence is physiologically linked to the maximum kernel water content (MWC), kernel desiccation rate (KDR), and moisture concentration at physiological maturity (MCPM). This work proposed that principles of phenotypic plasticity can help to consolidate the understanding of the environmental modulation and genetic control of these traits. For that purpose, a maize population of 245 recombinant inbred lines (RILs) was grown under different environmental conditions. Trait plasticity was calculated as the ratio of the variance of each RIL to the overall phenotypic variance of the population of RILs. This work found a hierarchy of plasticities: KDR ≈ GFD > MCPM > KGR > KW > MWC. There was no phenotypic or genetic correlation between traits per se and trait plasticities. MWC, the trait with the lowest plasticity, was the exception because common quantitative trait loci were found for the trait and its plasticity. Independent genetic control of a trait per se and genetic control of its plasticity is a condition for the independent evolution of traits and their plasticities. This allows breeders potentially to select for high or low plasticity in combination with high or low values of economically relevant traits. © The Author 2014. Published by Oxford University Press on behalf of the Society for Experimental Biology. All rights reserved. For permissions, please email: journals.permissions@oup.com.
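    A small sketch of the plasticity computation described above: the variance of each RIL across environments divided by the overall phenotypic variance of the RIL population. The data frame, column names, and trait values below are synthetic placeholders, not the study's data.

```python
import numpy as np
import pandas as pd

# toy long-format data standing in for the RIL trials; column names are illustrative
rng = np.random.default_rng(4)
df = pd.DataFrame({
    "RIL": np.repeat([f"RIL{i:03d}" for i in range(245)], 6),
    "env": np.tile([f"E{j}" for j in range(6)], 245),
    "KW":  rng.normal(280, 30, 245 * 6),        # kernel weight, mg
})

# plasticity of a trait for each RIL = variance across environments
# divided by the overall phenotypic variance of the whole RIL population
overall_var = df["KW"].var(ddof=1)
plasticity = df.groupby("RIL")["KW"].var(ddof=1) / overall_var
print(plasticity.describe())
```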

  11. Escort entropies and divergences and related canonical distribution

    International Nuclear Information System (INIS)

    Bercher, J.-F.

    2011-01-01

    We discuss two families of two-parameter entropies and divergences, derived from the standard Renyi and Tsallis entropies and divergences. These divergences and entropies are found as divergences or entropies of escort distributions. Exploiting the nonnegativity of the divergences, we derive the expression of the canonical distribution associated with the new entropies and an observable given as an escort-mean value. We show that this canonical distribution extends, and smoothly connects, the results obtained in nonextensive thermodynamics for the standard and generalized mean value constraints. -- Highlights: → Two-parameter entropies are derived from q-entropies and escort distributions. → The related canonical distribution is derived. → This connects and extends known results in nonextensive statistics.

  12. Canonical quantization of gravity and a problem of scattering

    International Nuclear Information System (INIS)

    Rubakov, V.A.

    1980-01-01

    Linearized theory of gravity is quantized both in a naive way and as a proper limit of the Dirac-Wheeler-De Witt approach to the quantization of the full theory. The equivalence between the two approaches is established. The problem of scattering in the canonically quantized theory of gravitation is investigated. The concept of the background metric naturally appears in the canonical formalism for this case. The equivalence between canonical and path-integral approaches is established for the problem of scattering. Some kinetical properties of functionals in Wheeler superspace are studied in an appendix. (author)

  13. Efficient Online Subspace Learning With an Indefinite Kernel for Visual Tracking and Recognition

    NARCIS (Netherlands)

    Liwicki, Stephan; Zafeiriou, Stefanos; Tzimiropoulos, Georgios; Pantic, Maja

    2012-01-01

    We propose an exact framework for online learning with a family of indefinite (not positive) kernels. As we study the case of nonpositive kernels, we first show how to extend kernel principal component analysis (KPCA) from a reproducing kernel Hilbert space to Krein space. We then formulate an ...

  14. Flour quality and kernel hardness connection in winter wheat

    Directory of Open Access Journals (Sweden)

    Szabó B. P.

    2016-12-01

    Full Text Available Kernel hardness is controlled by friabilin protein and it depends on the relation between the protein matrix and starch granules. Friabilin is present in high concentration in soft grain varieties and in low concentration in hard grain varieties. The high-gluten, hard wheat flour generally contains about 12.0–13.0% crude protein under Mid-European conditions. The relationship between wheat protein content and kernel texture is usually positive and kernel texture influences the power consumption during milling. Hard-textured wheat grains require more grinding energy than soft-textured grains.

  15. Deep kernel learning method for SAR image target recognition

    Science.gov (United States)

    Chen, Xiuyuan; Peng, Xiyuan; Duan, Ran; Li, Junbao

    2017-10-01

    With the development of deep learning, research on image target recognition has made great progress in recent years. Remote sensing detection urgently requires target recognition for military, geographic, and other scientific research. This paper aims to solve the synthetic aperture radar image target recognition problem by combining deep and kernel learning. The model, which has a multilayer multiple kernel structure, is optimized layer by layer with the parameters of Support Vector Machine and a gradient descent algorithm. This new deep kernel learning method improves accuracy and achieves competitive recognition results compared with other learning methods.

  16. Influence of differently processed mango seed kernel meal on ...

    African Journals Online (AJOL)

    Influence of differently processed mango seed kernel meal on performance response of West African ... and TD (consisted of spear grass and parboiled mango seed kernel meal with a concentrate diet in a ratio of 35:30:35). ...

  17. Quasiperiodic canonical-cell tiling with pseudo icosahedral symmetry

    Science.gov (United States)

    Fujita, Nobuhisa

    2017-10-01

    Icosahedral quasicrystals and their approximants are generally described as packings of icosahedral clusters. Experimental studies show that clusters in various approximants are orderly arranged, such that their centers are located at the nodes (or vertices) of a periodic tiling composed of four basic polyhedra called the canonical cells. This so-called canonical-cell geometry is likely to serve as a common framework for modeling how clusters are arranged in approximants, while its applicability seems to extend naturally to icosahedral quasicrystals. To date, however, it has not been proved whether the canonical cells can tile space quasiperiodically, though we usually believe that clusters in icosahedral quasicrystals are arranged such that quasiperiodic long-range order as well as icosahedral point symmetry is maintained. In this paper, we report for the first time an iterative geometrical transformation of the canonical cells defining a so-called substitution rule, which we can use to generate a class of quasiperiodic canonical-cell tilings. Every single step of the transformation proceeds as follows: each cell is first enlarged by a magnification ratio of τ³ (τ = golden mean) and then subdivided into cells of the original size. Here, cells with an identical shape can be subdivided in several distinct manners depending on how their adjacent neighbors are arranged, and sixteen types of cells are identified in terms of unique subdivision. This class of quasiperiodic canonical-cell tilings presents the first realization of three-dimensional quasiperiodic tilings with fractal atomic surfaces. There are four distinct atomic surfaces associated with four submodules of the primitive icosahedral module, where a representative of the four submodules corresponds to the Σ = 4 coincidence site module of the icosahedral module. It follows that the present quasiperiodic tilings involve a kind of superlattice ordering that manifests itself in satellite peaks in the ...

  18. Towards smart energy systems: application of kernel machine regression for medium term electricity load forecasting.

    Science.gov (United States)

    Alamaniotis, Miltiadis; Bargiotas, Dimitrios; Tsoukalas, Lefteri H

    2016-01-01

    Integration of energy systems with information technologies has facilitated the realization of smart energy systems that utilize information to optimize system operation. To that end, crucial in optimizing energy system operation is the accurate, ahead-of-time forecasting of load demand. In particular, load forecasting allows planning of system expansion and decision making for enhancing system safety and reliability. In this paper, the application of two types of kernel machines for medium term load forecasting (MTLF) is presented and their performance is recorded based on a set of historical electricity load demand data. The two kernel machine models, namely Gaussian process regression (GPR) and relevance vector regression (RVR), are utilized for making predictions of future load demand. Both models, i.e., GPR and RVR, are equipped with a Gaussian kernel and are tested on daily predictions for a 30-day-ahead horizon taken from the New England Area. Furthermore, their performance is compared to the ARMA(2,2) model with respect to mean average percentage error and squared correlation coefficient. Results demonstrate the superiority of RVR over the other forecasting models in performing MTLF.
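    A minimal GPR example with a Gaussian (RBF) kernel on a synthetic daily load series, in the spirit of the MTLF setup described above; relevance vector regression is not part of scikit-learn, so only the GPR half is sketched, and the series, kernel settings, and horizon are illustrative.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel, ConstantKernel

# synthetic daily load series standing in for historical demand (MW)
rng = np.random.default_rng(5)
days = np.arange(365)
load = (1000 + 150 * np.sin(2 * np.pi * days / 7)
        + 80 * np.sin(2 * np.pi * days / 365)
        + rng.normal(0, 30, days.size))

X_train, y_train = days[:-30, None].astype(float), load[:-30]
X_test, y_test = days[-30:, None].astype(float), load[-30:]

# Gaussian (RBF) kernel plus a white-noise term for the observation noise
kernel = ConstantKernel(1.0) * RBF(length_scale=10.0) + WhiteKernel(noise_level=1.0)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X_train, y_train)

y_pred, y_std = gpr.predict(X_test, return_std=True)
mape = np.mean(np.abs((y_test - y_pred) / y_test)) * 100
print(f"30-day-ahead MAPE: {mape:.1f}%")
```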

  19. CANONICAL RELATIONS BETWEEN BASIC AND MOTOR - SITUATIONAL-MOTOR SKILLS IN SPORT GAMES

    Directory of Open Access Journals (Sweden)

    Bećir Šabotić

    2013-07-01

    Full Text Available The aim of this study was to establish the correlation between basic motor skills (predictors) and situational-motor skills in sports games. Measurements covering 12 basic motor variables and 6 situational-motor tests in volleyball and basketball were carried out on a sample of 62 first-year high-school students. Based on the results of the canonical correlation analysis, it can be concluded that there is a significant relationship between the set of predictor variables and the set of criterion variables, the situational-motor tests in basketball and volleyball. These results are logical given the structure of movements in basketball and volleyball, which require a high level of coordination and speed.

  20. A framework for dense triangular matrix kernels on various manycore architectures

    KAUST Repository

    Charara, Ali

    2017-06-06

    We present a new high-performance framework for dense triangular Basic Linear Algebra Subroutines (BLAS) kernels, i.e., triangular matrix-matrix multiplication (TRMM) and triangular solve (TRSM), on various manycore architectures. This is an extension of a previous work on a single GPU by the same authors, presented at the EuroPar'16 conference, in which we demonstrated the effectiveness of recursive formulations in enhancing the performance of these kernels. In this paper, the performance of triangular BLAS kernels on a single GPU is further enhanced by implementing customized in-place CUDA kernels for TRMM and TRSM, which are called at the bottom of the recursion. In addition, a multi-GPU implementation of TRMM and TRSM is proposed and we show almost linear performance scaling as the number of GPUs increases. Finally, the algorithmic recursive formulation of these triangular BLAS kernels is in fact oblivious to the targeted hardware architecture. We, therefore, port these recursive kernels to homogeneous x86 hardware architectures by relying on the vendor-optimized BLAS implementations. Results reported on various hardware architectures highlight a significant performance improvement against state-of-the-art implementations. These new kernels are freely available in the KAUST BLAS (KBLAS) open-source library at https://github.com/ecrc/kblas.
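    A toy sketch of the recursive TRMM formulation mentioned above (left side, lower triangular, no transpose): diagonal blocks recurse while the off-diagonal block becomes a general matrix-matrix multiply. This is plain NumPy for illustration only, not the KBLAS CUDA kernels.

```python
import numpy as np

def trmm_recursive(A, B, block=64):
    """Compute B := A @ B in place, where A is lower triangular, by recursive
    splitting: diagonal blocks recurse, the off-diagonal block becomes a GEMM."""
    n = A.shape[0]
    if n <= block:
        B[:] = np.tril(A) @ B          # base case: small triangular multiply
        return
    m = n // 2
    A11, A21, A22 = A[:m, :m], A[m:, :m], A[m:, m:]
    B1, B2 = B[:m], B[m:]
    trmm_recursive(A22, B2, block)     # B2 := A22 @ B2   (uses old B2)
    B2 += A21 @ B1                     # B2 += A21 @ B1   (GEMM, uses old B1)
    trmm_recursive(A11, B1, block)     # B1 := A11 @ B1
    # ordering matters: B1 is overwritten last because B2's update reads it

rng = np.random.default_rng(6)
n, k = 512, 100
A = np.tril(rng.standard_normal((n, n)))
B = rng.standard_normal((n, k))
B_ref = A @ B
trmm_recursive(A, B)
print("max abs error:", np.abs(B - B_ref).max())
```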