Regularization techniques for PSF-matching kernels - I. Choice of kernel basis
Becker, A. C.; Homrighausen, D.; Connolly, A. J.; Genovese, C. R.; Owen, R.; Bickerton, S. J.; Lupton, R. H.
2012-09-01
We review current methods for building point spread function (PSF)-matching kernels for the purposes of image subtraction or co-addition. Such methods use a linear decomposition of the kernel on a series of basis functions. The correct choice of these basis functions is fundamental to the efficiency and effectiveness of the matching - the chosen bases should represent the underlying signal using a reasonably small number of shapes, and/or have a minimum number of user-adjustable tuning parameters. We examine methods whose bases comprise multiple Gauss-Hermite polynomials, as well as a form-free basis composed of delta-functions. Kernels derived from delta-functions are unsurprisingly shown to be more expressive; they are able to take more general shapes and perform better in situations where sum-of-Gaussian methods are known to fail. However, due to its many degrees of freedom (the maximum number allowed by the kernel size) this basis tends to overfit the problem and yields noisy kernels having large variance. We introduce a new technique to regularize these delta-function kernel solutions, which bridges the gap between the generality of delta-function kernels and the compactness of sum-of-Gaussian kernels. Through this regularization we are able to create general kernel solutions that represent the intrinsic shape of the PSF-matching kernel with only one degree of freedom, the strength of the regularization λ. The role of λ is effectively to exchange variance in the resulting difference image with variance in the kernel itself. We examine considerations in choosing the value of λ, including statistical risk estimators and the ability of the solution to predict solutions for adjacent areas. Both of these suggest moderate strengths of λ between 0.1 and 1.0, although this optimization is likely data set dependent. This model allows for flexible representations of the convolution kernel that have significant predictive ability and will prove useful in implementing
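The regularized delta-function solution described above can be sketched as Tikhonov-regularized least squares. The toy below is a 1-D stand-in with wholly hypothetical choices (template, kernel width, noise level, and a second-difference smoothness penalty are not the authors' setup): the science image is the template convolved with an unknown kernel, the kernel is expanded in a delta-function basis, and the strength λ trades variance in the kernel against variance in the fit.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-D toy problem: the science image is the template convolved with an
# unknown narrow Gaussian kernel, plus noise (all choices hypothetical)
n, m = 200, 21                          # image length, kernel size (odd)
x = np.arange(n)
template = (np.exp(-0.5 * ((x - 60) / 4.0) ** 2)
            + np.exp(-0.5 * ((x - 140) / 6.0) ** 2))
kx = np.arange(m) - m // 2
k_true = np.exp(-0.5 * (kx / 2.0) ** 2)
k_true /= k_true.sum()
science = (np.convolve(template, k_true, mode="same")
           + 0.001 * rng.standard_normal(n))

# Delta-function basis: column j of A is the template shifted by kx[j]
A = np.stack([np.roll(template, s) for s in kx], axis=1)

# Second-difference operator: penalizes curvature (roughness) of the kernel
L = np.diff(np.eye(m), n=2, axis=0)

def solve_kernel(lam):
    """Tikhonov-regularized normal equations: (A'A + lam L'L) k = A'b."""
    return np.linalg.solve(A.T @ A + lam * L.T @ L, A.T @ science)

k_noisy = solve_kernel(1e-6)   # nearly unregularized: overfits, noisy kernel
k_reg = solve_kernel(0.5)      # moderate lambda: smooth, close to k_true
```

Because the shifted-template columns are nearly collinear, the unregularized solution amplifies noise into large kernel oscillations; the curvature penalty suppresses them at the cost of a small bias.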
Directory of Open Access Journals (Sweden)
Muzhir Shaban Al-Ani
2011-05-01
Detecting faces across multiple views is more challenging than in a frontal view. To address this problem, an efficient approach is presented in this paper using a kernel machine based approach for learning such nonlinear mappings to provide an effective view-based representation for multi-view face detection. In this paper, kernel principal component analysis (KPCA) is used to project data into the view-subspaces, which are then computed as view-based features. Multi-view face detection is performed by classifying each input image into the face or non-face class using a two-class kernel support vector classifier (KSVC). Experimental results demonstrate successful face detection over a wide range of facial variation in color, illumination conditions, position, scale, orientation, 3D pose, and expression in images from several photo collections.
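As a rough illustration of the two-stage pipeline above (a KPCA projection followed by a two-class kernel SVM), here is a minimal sketch using scikit-learn; the dataset, kernel choices, and parameters are placeholders, not the paper's face data or settings.

```python
import numpy as np
from sklearn.datasets import make_circles
from sklearn.decomposition import KernelPCA
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for face / non-face patches: two classes that are
# not linearly separable in the input space
X, y = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Step 1: KPCA projects the data into a nonlinear feature subspace
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=2.0).fit(X_tr)
Z_tr, Z_te = kpca.transform(X_tr), kpca.transform(X_te)

# Step 2: a two-class kernel SVM classifies each projected sample
clf = SVC(kernel="rbf").fit(Z_tr, y_tr)
accuracy = clf.score(Z_te, y_te)
```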
2014-03-27
SALIENT FEATURE IDENTIFICATION AND ANALYSIS USING KERNEL-BASED CLASSIFICATION TECHNIQUES FOR SYNTHETIC APERTURE RADAR AUTOMATIC TARGET RECOGNITION
Seismic Hazard Analysis Using the Adaptive Kernel Density Estimation Technique for Chennai City
Ramanna, C. K.; Dodagoudar, G. R.
2012-01-01
The conventional method of probabilistic seismic hazard analysis (PSHA) using the Cornell-McGuire approach requires identification of homogeneous source zones as the first step. This criterion brings along many issues and, hence, several alternative methods of hazard estimation have emerged in the last few years. Alternatives such as zoneless (zone-free) methods and numerical modelling of the Earth's crust using finite element analysis have been proposed. Delineating a homogeneous source zone in regions of distributed and/or diffused seismicity is a difficult task. In this study, the zone-free method using the adaptive kernel technique for hazard estimation is explored for regions having distributed and diffused seismicity. Chennai city lies in such a region of low to moderate seismicity, so it has been used as a case study. The adaptive kernel technique is statistically superior to the fixed kernel technique primarily because the bandwidth of the kernel is varied spatially depending on the clustering or sparseness of the epicentres. Although the fixed kernel technique has proven to work well in general density estimation cases, it fails to perform well for multimodal and long-tailed distributions. In such situations, the adaptive kernel technique serves the purpose and is more relevant in earthquake engineering, as the activity rate probability density surface is multimodal in nature. The peak ground acceleration (PGA) obtained from all three approaches (the Cornell-McGuire approach and the fixed and adaptive kernel techniques) for a 10% probability of exceedance in 50 years is around 0.087 g. The uniform hazard spectra (UHS) are also provided for different structural periods.
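A minimal sketch of the adaptive (variable-bandwidth) kernel idea, using Abramson-style local bandwidth factors on synthetic 1-D "epicentre" data; the sample, pilot bandwidth, and Gaussian kernel are illustrative assumptions, not the study's catalogue or settings.

```python
import numpy as np

def fixed_kde(x_eval, data, h):
    """Fixed-bandwidth Gaussian kernel density estimate."""
    u = (x_eval[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

def adaptive_kde(x_eval, data, h0):
    """Abramson-style adaptive KDE: bandwidth shrinks where points cluster
    and grows where they are sparse."""
    pilot = fixed_kde(data, data, h0)                       # pilot density at the data
    lam = np.sqrt(np.exp(np.mean(np.log(pilot))) / pilot)   # local bandwidth factors
    h = h0 * lam                                            # per-point bandwidths
    u = (x_eval[:, None] - data[None, :]) / h[None, :]
    k = np.exp(-0.5 * u ** 2) / (h[None, :] * np.sqrt(2 * np.pi))
    return k.sum(axis=1) / len(data)

rng = np.random.default_rng(1)
# Bimodal 'epicentre' sample: a tight cluster plus a diffuse one
data = np.concatenate([rng.normal(0.0, 0.1, 300), rng.normal(3.0, 1.0, 100)])
grid = np.linspace(-2, 7, 500)
dens = adaptive_kde(grid, data, h0=0.3)
```

The tight cluster gets narrow kernels (sharp peak preserved) while the diffuse cluster gets wide ones (smooth tail), which is exactly the behaviour a fixed bandwidth cannot deliver for multimodal, long-tailed activity-rate surfaces.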
A novel noise optimization technique for inductively degenerated CMOS LNA
Institute of Scientific and Technical Information of China (English)
Geng Zhiqing; Wang Haiyong; Wu Nanjian
2009-01-01
This paper proposes a novel noise optimization technique. The technique gives analytical formulae for the noise performance of inductively degenerated CMOS low noise amplifier (LNA) circuits with an ideal gate inductor for a fixed bias voltage and nonideal gate inductor for a fixed power dissipation, respectively, by mathematical analysis and reasonable approximation methods. LNA circuits with required noise figure can be designed effectively and rapidly just by using hand calculations of the proposed formulae. We design a 1.8 GHz LNA in a TSMC 0.25 μm CMOS process. The measured results show a noise figure of 1.6 dB with a forward gain of 14.4 dB at a power consumption of 5 mW, demonstrating that the designed LNA circuits can achieve low noise figure levels at low power dissipation.
Kernel-based machine learning techniques for infrasound signal classification
Tuma, Matthias; Igel, Christian; Mialle, Pierrick
2014-05-01
Infrasound monitoring is one of four remote sensing technologies continuously employed by the CTBTO Preparatory Commission. The CTBTO's infrasound network is designed to monitor the Earth for potential evidence of atmospheric or shallow underground nuclear explosions. Upon completion, it will comprise 60 infrasound array stations distributed around the globe, of which 47 were certified in January 2014. Three stages can be identified in CTBTO infrasound data processing: automated processing at the level of single array stations, automated processing at the level of the overall global network, and interactive review by human analysts. At station level, the cross correlation-based PMCC algorithm is used for initial detection of coherent wavefronts. It produces estimates for trace velocity and azimuth of incoming wavefronts, as well as other descriptive features characterizing a signal. Detected arrivals are then categorized into potentially treaty-relevant versus noise-type signals by a rule-based expert system. This corresponds to a binary classification task at the level of station processing. In addition, incoming signals may be grouped according to their travel path in the atmosphere. The present work investigates automatic classification of infrasound arrivals by kernel-based pattern recognition methods. It aims to explore the potential of state-of-the-art machine learning methods vis-a-vis the current rule-based and task-tailored expert system. To this purpose, we first address the compilation of a representative, labeled reference benchmark dataset as a prerequisite for both classifier training and evaluation. Data representation is based on features extracted by the CTBTO's PMCC algorithm. As classifiers, we employ support vector machines (SVMs) in a supervised learning setting. Different SVM kernel functions are used and adapted through different hyperparameter optimization routines. The resulting performance is compared to several baseline classifiers. All
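A hedged sketch of the supervised-learning setup described above, an SVM whose hyperparameters are tuned over a small grid and compared against a trivial baseline, using synthetic stand-in features rather than real PMCC detections:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.dummy import DummyClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for PMCC-derived detection features
# (e.g. azimuth, trace velocity, frequency content)
X, y = make_classification(n_samples=600, n_features=8, n_informative=5,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Hyperparameter optimization over a small grid by cross-validation
grid = GridSearchCV(SVC(kernel="rbf"),
                    {"C": [0.1, 1, 10], "gamma": ["scale", 0.01, 0.1]},
                    cv=5)
grid.fit(X_tr, y_tr)

svm_acc = grid.score(X_te, y_te)
# Majority-class predictor as a baseline classifier
baseline_acc = DummyClassifier(strategy="most_frequent").fit(X_tr, y_tr).score(X_te, y_te)
```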
Directory of Open Access Journals (Sweden)
Jose M. Bernal-de-Lázaro
2016-05-01
This article summarizes the main contributions of the PhD thesis titled "Application of learning techniques based on kernel methods for fault diagnosis in industrial processes". The thesis focuses on the analysis and design of fault diagnosis systems (DDF) based on historical data. Specifically, it provides: (1) new criteria for adjusting the kernel methods used to select features with high discriminative capacity for fault diagnosis tasks; (2) a process-monitoring approach using multivariate statistical techniques that incorporates reinforced information on the dynamics of the Hotelling T2 and SPE statistics, whose combination with kernel methods improves the detection of small-magnitude faults; and (3) a robustness index for comparing the performance of diagnosis classifiers, taking into account their insensitivity to possible noise and disturbances in the historical data.
Kernel-Based Discriminant Techniques for Educational Placement
Lin, Miao-hsiang; Huang, Su-yun; Chang, Yuan-chin
2004-01-01
This article considers the problem of educational placement. Several discriminant techniques are applied to a data set from a survey project of science ability. A profile vector for each student consists of five science-educational indicators. The students are intended to be placed into three reference groups: advanced, regular, and remedial.…
Comparison of surgical techniques in the treatment of laryngeal polypoid degeneration.
Lumpkin, S M; Bishop, S G; Bennett, S
1987-01-01
Surgical excision has been the accepted treatment of laryngeal polypoid degeneration, or chronic polypoid corditis. We report on 29 women with polypoid degeneration who received one of three surgical treatments: vocal fold stripping, carbon dioxide laser obliteration, or the Hirano technique. The duration of postoperative dysphonia was longest with the laser removal and shortest with the Hirano technique. A combination of vocal hygiene management and the Hirano technique of removal provided the most efficacious treatment.
Miraliakbari, A.; Sok, S.; Ouma, Y. O.; Hahn, M.
2016-06-01
With the increasing demand for the digital survey and acquisition of road pavement conditions, there is a parallel growing need for automated techniques for analysing and evaluating actual road conditions. This is due in part to the large volumes of road pavement data captured through digital surveys, and also to the requirements for rapid data processing and evaluation. In this study, a Canon 5D Mark II RGB camera with a resolution of 21 megapixels is used for road pavement condition mapping. Even though many imaging and mapping sensors are available, the development of automated pavement distress detection, recognition and extraction systems is still a challenge. In order to detect and extract pavement cracks, a comparative evaluation of kernel-based segmentation methods comprising line filtering (LF), local binary pattern (LBP) and high-pass filtering (HPF) is carried out. While the LF and LBP methods are based on the principle of rotation invariance for pattern matching, the HPF applies the same principle for filtering, but with a rotationally invariant matrix. With respect to processing speed, HPF is fastest because it is based on a single kernel, whereas LF and LBP are based on several kernels. Experiments with 20 sample images containing linear, block and alligator cracks are carried out. On average, completeness of distress extraction values of 81.2%, 76.2% and 81.1% are found for LF, HPF and LBP, respectively.
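The single-kernel HPF idea can be illustrated with a small self-contained sketch: a rotationally symmetric high-pass (Laplacian-like) kernel convolved with a synthetic pavement patch, followed by thresholding and a completeness measure. The image, kernel, and threshold are illustrative assumptions, not the study's data or parameters.

```python
import numpy as np

def convolve2d(img, kernel):
    """Naive 'same' 2-D convolution with zero padding; since the kernel is
    symmetric, correlation and convolution coincide here."""
    kh, kw = kernel.shape
    pad = np.pad(img, ((kh // 2,), (kw // 2,)), mode="constant")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(pad[i:i + kh, j:j + kw] * kernel)
    return out

# Synthetic pavement patch: mid-grey asphalt with a dark linear crack
rng = np.random.default_rng(2)
img = 0.5 + 0.02 * rng.standard_normal((64, 64))
truth = np.zeros((64, 64), dtype=bool)
truth[20, 5:60] = True
img[truth] -= 0.3

# Rotationally symmetric high-pass (Laplacian-like) single kernel
hpf = np.array([[-1, -1, -1],
                [-1,  8, -1],
                [-1, -1, -1]], dtype=float)

response = np.abs(convolve2d(img, hpf))
detected = response > 0.8
completeness = (detected & truth).sum() / truth.sum()
```

Because only one kernel is applied, the whole detector is a single pass over the image, which is the speed advantage the abstract attributes to HPF over the multi-kernel LF and LBP approaches.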
DEFF Research Database (Denmark)
Chen, Tianshi; Andersen, Martin Skovgaard; Ljung, Lennart;
2014-01-01
Model estimation and structure detection with short data records are two issues that receive increasing interest in system identification. In this paper, a multiple kernel-based regularization method is proposed to handle those issues. Multiple kernels are conic combinations of fixed kernels...
Bobodzhanov, A. A.; Safonov, V. F.
2016-04-01
We consider an algorithm for constructing asymptotic solutions regularized in the sense of Lomov (see [1], [2]). We show that such problems can be reduced to integro-differential equations with inverse time. But in contrast to known papers devoted to this topic (see, for example, [3]), in this paper we study a fundamentally new case, which is characterized by the absence, in the differential part, of a linear operator that isolates, in the asymptotics of the solution, constituents described by boundary functions, and by the fact that the integral operator has a kernel with diagonal degeneration of high order. Furthermore, the spectrum of the regularization operator A(t) (see below) may contain purely imaginary eigenvalues, which causes difficulties in the application of the methods of construction of asymptotic solutions proposed in the monograph [3]. Based on an analysis of the principal term of the asymptotics, we isolate a class of inhomogeneities and initial data for which the exact solution of the original problem tends to the limit solution (as \varepsilon\to+0) on the entire time interval under consideration, including a boundary-layer zone (that is, we solve the so-called initialization problem). The paper is of a theoretical nature and is designed to lead to a greater understanding of the problems in the theory of singular perturbations. There may be applications in various applied areas where models described by integro-differential equations are used (for example, in elasticity theory, the theory of electrical circuits, and so on).
Kamal, Ahmed K
2007-01-01
The experimental procedure of lowering and raising a leg while the subject is in the supine position is used to stimulate and entrain the autonomic nervous system of fifteen untreated patients with Parkinson's disease and fifteen age- and sex-matched control subjects. The assessment of autonomic function for each group is achieved using an algorithm based on Volterra kernel estimation. By applying this algorithm and considering the process of lowering and raising a leg as the stimulus input and the heart rate variability (HRV) signal as the output for system identification, a mathematical model is expressed as integral equations. The integral equations are fixed in form for control subjects and Parkinson's disease patients, so that the identification method reduces to the determination of the values within the integrals, called kernels, resulting in integral equations whose input-output behavior is nearly identical to that of the system in both healthy subjects and Parkinson's disease patients. The model for each group contains a linear part (first-order kernel) and a quadratic part (second-order kernel). A difference equation model was employed to represent the system for both control subjects and patients with Parkinson's disease. The results show significant differences in the first-order kernel (impulse response) and second-order kernel (mesh diagram) for each group. Using the first- and second-order kernels, it is possible to assess autonomic function qualitatively and quantitatively in both groups.
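A discrete second-order Volterra model of the kind described, a linear first-order kernel plus a quadratic second-order kernel, can be sketched as follows; the kernels, memory length, and impulse stimulus are hypothetical placeholders, not estimates from HRV data.

```python
import numpy as np

def volterra_output(u, h1, h2):
    """Discrete second-order Volterra model:
    y[n] = sum_i h1[i] u[n-i] + sum_{i,j} h2[i,j] u[n-i] u[n-j]."""
    M = len(h1)
    y = np.zeros(len(u))
    for n in range(len(u)):
        # past input window u[n], u[n-1], ..., u[n-M+1] (zeros before start)
        w = np.array([u[n - i] if n - i >= 0 else 0.0 for i in range(M)])
        y[n] = h1 @ w + w @ h2 @ w
    return y

# Hypothetical kernels: a decaying first-order (impulse-response-like) part
# and a small symmetric second-order part capturing quadratic interactions
M = 8
h1 = 0.8 ** np.arange(M)
h2 = 0.05 * np.outer(0.5 ** np.arange(M), 0.5 ** np.arange(M))

u = np.zeros(32)
u[0] = 1.0                       # unit 'stimulus' impulse
y = volterra_output(u, h1, h2)
```

For a unit impulse, the response at lag n is h1[n] + h2[n, n], so the diagonal of the second-order kernel shows up directly in the impulse response, which is why the two kernels must be estimated jointly.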
Kan, Hirohito; Kasai, Harumasa; Arai, Nobuyuki; Kunitomo, Hiroshi; Hirose, Yasujiro; Shibamoto, Yuta
2016-09-01
An effective background field removal technique is desired for more accurate quantitative susceptibility mapping (QSM) prior to dipole inversion. The aim of this study was to evaluate the accuracy of the regularization-enabled sophisticated harmonic artifact reduction for phase data with varying spherical kernel sizes (REV-SHARP) method using a three-dimensional head phantom and human brain data. The proposed REV-SHARP method used the spherical mean value operation and Tikhonov regularization in the deconvolution process, with kernel sizes varying from 2 to 14 mm. The kernel sizes were gradually reduced, similar to the SHARP with varying spherical kernel (VSHARP) method. We determined the relative errors and relationships between the true local field and estimated local field in REV-SHARP, VSHARP, projection onto dipole fields (PDF), and regularization-enabled SHARP (RESHARP). Human experiments were also conducted using REV-SHARP, VSHARP, PDF, and RESHARP. The relative errors in the numerical phantom study were 0.386, 0.448, 0.838, and 0.452 for REV-SHARP, VSHARP, PDF, and RESHARP, respectively. The REV-SHARP result exhibited the highest correlation between the true and estimated local fields. The linear regression slopes were 1.005, 1.124, 0.988, and 0.536 for REV-SHARP, VSHARP, PDF, and RESHARP in regions of interest on the three-dimensional head phantom. In human experiments, no obvious artifact-related errors were present in REV-SHARP. The proposed REV-SHARP is a new method combining variable spherical kernel sizes with Tikhonov regularization. This technique may enable more accurate background field removal and help achieve better QSM accuracy.
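The Tikhonov-regularized deconvolution step can be illustrated in one dimension: a blurring filter (a generic moving average standing in for the spherical mean value operation) is inverted in Fourier space with a small regularization term that prevents blow-up at frequencies where the filter response vanishes. All signals and parameters below are illustrative.

```python
import numpy as np

def tikhonov_deconv(data, kernel_ft, lam):
    """Tikhonov-regularized deconvolution in Fourier space:
    x = F^{-1}[ conj(H) F(y) / (|H|^2 + lam) ]."""
    Y = np.fft.fft(data)
    H = kernel_ft
    return np.real(np.fft.ifft(np.conj(H) * Y / (np.abs(H) ** 2 + lam)))

n = 256
x = np.zeros(n)
x[100:130] = 1.0                               # 'true local field' block

# Centered 9-point moving average standing in for the spherical-mean filter
k = np.zeros(n)
k[:9] = 1 / 9.0
H = np.fft.fft(np.roll(k, -4))

y = np.real(np.fft.ifft(H * np.fft.fft(x)))    # forward model: convolution
x_rec = tikhonov_deconv(y, H, lam=1e-3)
```

Without the `lam` term the division would explode at the filter's spectral zeros; with it, those frequencies are simply attenuated, trading a small bias for stability, which is the role Tikhonov regularization plays in the deconvolution described above.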
Energy Technology Data Exchange (ETDEWEB)
Hao, Ming; Wang, Yanli, E-mail: ywang@ncbi.nlm.nih.gov; Bryant, Stephen H., E-mail: bryant@ncbi.nlm.nih.gov
2016-02-25
Identification of drug-target interactions (DTI) is a central task in drug discovery processes. In this work, a simple but effective regularized least squares algorithm integrating nonlinear kernel fusion (RLS-KF) is proposed to perform DTI predictions. Using benchmark DTI datasets, the proposed algorithm achieves state-of-the-art results with areas under the precision-recall curve (AUPR) of 0.915, 0.925, 0.853 and 0.909 for enzymes, ion channels (IC), G protein-coupled receptors (GPCR) and nuclear receptors (NR) based on 10-fold cross-validation. The performance can be further improved by using a recalculated kernel matrix, especially for the small set of nuclear receptors, with an AUPR of 0.945. Importantly, most of the top-ranked interaction predictions can be validated by experimental data reported in the literature, bioassay results in the PubChem BioAssay database, as well as other previous studies. Our analysis suggests that the proposed RLS-KF is helpful for studying DTI, drug repositioning and polypharmacology, and may help to accelerate drug discovery by identifying novel drug targets.
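A hedged sketch of the regularized-least-squares core with a simple nonlinear kernel fusion; the elementwise geometric mean of two RBF kernels is used here as one plausible fusion rule (it remains a valid positive-definite kernel), not the published RLS-KF fusion. All data and parameters are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 60
X1 = rng.standard_normal((n, 5))       # e.g. chemical-structure features
X2 = rng.standard_normal((n, 3))       # e.g. target-sequence features
w1, w2 = rng.standard_normal(5), rng.standard_normal(3)
y = np.sign(X1 @ w1 + X2 @ w2)         # synthetic interaction labels (+1/-1)

def rbf(X, gamma):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

# Nonlinear fusion of two similarity kernels: elementwise geometric mean,
# which equals an RBF kernel on the concatenated scaled features (still PD)
K = np.sqrt(rbf(X1, 0.5) * rbf(X2, 0.5))

lam = 0.1                              # regularization strength
alpha = np.linalg.solve(K + lam * np.eye(n), y)   # regularized least squares
train_acc = float(np.mean(np.sign(K @ alpha) == y))
```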
Pattern Classification of Signals Using Fisher Kernels
Directory of Open Access Journals (Sweden)
Yashodhan Athavale
2012-01-01
The intention of this study is to gauge the performance of Fisher kernels for dimension simplification and classification of time-series signals. Our research has indicated that Fisher kernels yield substantial improvement in signal classification by enabling clearer pattern visualization in three-dimensional space. In this paper, we exhibit the performance of Fisher kernels for two domains: financial and biomedical. The financial domain study involves identifying the possibility of collapse or survival of a company trading in the stock market. For assessing the fate of each company, we collected financial time series composed of weekly closing stock prices in a common time frame, using Thomson Datastream software. The biomedical domain study involves knee signals collected using the vibration arthrometry technique. This study uses the severity of cartilage degeneration for classifying normal and abnormal knee joints. In both studies, we apply Fisher kernels incorporated with a Gaussian mixture model (GMM) for dimension transformation into a feature space, which is plotted in three dimensions for visualization and for further classification using support vector machines. From our experiments we observe that Fisher kernels fit well for both kinds of signals, with low classification error rates.
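A simplified Fisher-kernel feature extraction can be sketched as the gradient of a background GMM's log-likelihood with respect to its component means (full Fisher vectors also whiten by the Fisher information, and the paper classifies with SVMs; a nearest-centroid rule stands in here). All data and parameters are synthetic assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
# Two classes of short 1-D 'signals' (20 samples each) with shifted means
class0 = rng.normal(0.0, 1.0, (50, 20))
class1 = rng.normal(1.0, 1.0, (50, 20))
signals = np.vstack([class0, class1])
labels = np.array([0] * 50 + [1] * 50)

# Background GMM fitted on all samples pooled together
gmm = GaussianMixture(n_components=2, random_state=0).fit(signals.reshape(-1, 1))
mu = gmm.means_.ravel()
var = gmm.covariances_.ravel()

def fisher_vector(x):
    """Gradient of the GMM log-likelihood w.r.t. the component means:
    a simplified Fisher score."""
    resp = gmm.predict_proba(x.reshape(-1, 1))      # posteriors gamma_t(k)
    return (resp * (x[:, None] - mu) / var).sum(axis=0) / len(x)

F = np.array([fisher_vector(s) for s in signals])

# Nearest-centroid rule stands in for the SVM classification step
c0, c1 = F[:50].mean(axis=0), F[50:].mean(axis=0)
pred = (np.linalg.norm(F - c1, axis=1) < np.linalg.norm(F - c0, axis=1)).astype(int)
acc = float(np.mean(pred == labels))
```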
Suitability of point kernel dose calculation techniques in brachytherapy treatment planning
Directory of Open Access Journals (Sweden)
Lakshminarayanan Thilagam
2010-01-01
A brachytherapy treatment planning system (TPS) is necessary to estimate the dose to the target volume and organs at risk (OAR). A TPS is always recommended to account for the effects of the tissue, applicator and shielding material heterogeneities that exist in applicators. However, most brachytherapy TPS software packages estimate the absorbed dose at a point, taking care of only the contributions of individual sources and the source distribution, neglecting the dose perturbations arising from the applicator design and construction. There are some degrees of uncertainty in dose rate estimations under realistic clinical conditions. In this regard, an attempt is made to explore the suitability of point kernels for brachytherapy dose rate calculations and to develop a new interactive brachytherapy package, named BrachyTPS, to suit clinical conditions. BrachyTPS is an interactive point kernel code package developed to perform independent dose rate calculations by taking into account the effect of these heterogeneities, using two-region build-up factors proposed by Kalos. The primary aim of this study is to validate the developed point kernel code package, integrated with treatment planning computational systems, against Monte Carlo (MC) results. In the present work, three brachytherapy applicators commonly used in the treatment of uterine cervical carcinoma, namely (i) the Board of Radiation Isotope and Technology (BRIT) low dose rate (LDR) applicator, (ii) the Fletcher Green type LDR applicator and (iii) the Fletcher Williamson high dose rate (HDR) applicator, are studied to test the accuracy of the software. Dose rates computed using the developed code are compared with the relevant results of the MC simulations. Further, attempts are also made to study the dose rate distribution around the commercially available shielded vaginal applicator set (Nucletron). The percentage deviations of BrachyTPS-computed dose rate values from the MC results are observed to be within plus/minus 5
Kernel Affine Projection Algorithms
Directory of Open Access Journals (Sweden)
José C. Príncipe
2008-05-01
The combination of the famed kernel trick and affine projection algorithms (APAs) yields powerful nonlinear extensions, named collectively here KAPA. This paper is a follow-up study of the recently introduced kernel least-mean-square algorithm (KLMS). KAPA inherits the simplicity and online nature of KLMS while reducing its gradient noise, boosting performance. More interestingly, it provides a unifying model for several neural network techniques, including kernel least-mean-square algorithms, kernel adaline, sliding-window kernel recursive least squares (KRLS), and regularization networks. Therefore, many insights can be gained into the basic relations among them and the tradeoff between computational complexity and performance. Several simulations illustrate its wide applicability.
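The KLMS algorithm that KAPA builds on can be sketched compactly: each incoming sample becomes a new kernel centre whose coefficient is the step size times that sample's prediction error. The toy system, kernel width, and step size below are illustrative choices.

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    return np.exp(-gamma * np.sum((a - b) ** 2))

def klms(inputs, targets, step=0.5, gamma=1.0):
    """Kernel least-mean-square (KLMS): online learning in an RKHS.
    Each new sample becomes a kernel centre weighted by step * error."""
    centres, coeffs, errors = [], [], []
    for x, d in zip(inputs, targets):
        y = sum(c * rbf(x, ctr, gamma) for c, ctr in zip(coeffs, centres))
        e = d - y                      # a-priori prediction error
        errors.append(e)
        centres.append(x)              # grow the expansion by one unit
        coeffs.append(step * e)
    return np.array(errors)

rng = np.random.default_rng(6)
# Nonlinear system identification toy: d = f(u) + noise
u = rng.uniform(-1, 1, (500, 1))
d = np.sin(3 * u[:, 0]) + 0.05 * rng.standard_normal(500)
err = klms(u, d)
```

The learning curve (squared prediction error over time) decays toward the noise floor; KAPA reuses several past samples per update to reduce the gradient noise of this simple one-sample rule.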
Huang, Xianglei; Chen, Xiuhong; Soden, Brian; Liu, Xu
2015-04-01
Radiative feedback is normally discussed in terms of Watts per square meter per K, i.e., the change of broadband flux due to the change of a certain climate variable in response to a 1 K change in global-mean surface temperature. However, radiative feedback has an intrinsic spectral dimension, and spectral radiative feedback can be defined in terms of Watts per square meter per K per frequency (or per wavelength). A set of all-sky and clear-sky longwave (LW) spectral radiative kernels (SRK) are constructed using a recently developed spectral flux simulator based on the PCRTM (Principal-Component-based Radiative Transfer Model). The LW spectral radiative kernels are validated against the benchmark partial radiative perturbation method. The LW broadband feedbacks derived using this SRK method are consistent with the published results using the broadband radiative kernels. The SRK is then applied to 12 GCMs in the CMIP3 archives and 12 GCMs in the CMIP5 archives to derive the spectrally resolved Planck, lapse-rate, and LW water vapor feedbacks. The inter-model spreads of the spectral lapse-rate feedbacks among the CMIP3 models are noticeably different from those among the CMIP5 models. In contrast, the inter-model spread of spectral LW water vapor feedbacks changes little from the CMIP3 to CMIP5 simulations, when the specific humidity is used as the state variable. Spectrally, the far-IR band is more responsible than the window band for the changes in lapse-rate feedbacks from CMIP3 to CMIP5. When relative humidity (RH) is used as the state variable, virtually all GCMs have little broadband RH feedback, as shown in Held & Shell (2012). However, the RH feedbacks can be significantly non-zero over different LW spectral regions, and the spectral details of such RH feedbacks vary significantly from one GCM to another. Finally, an interpretation based on a one-layer atmospheric model is presented to illustrate under what statistical circumstances the linear technique can be applied
ks: Kernel Density Estimation and Kernel Discriminant Analysis for Multivariate Data in R
Directory of Open Access Journals (Sweden)
Tarn Duong
2007-09-01
Kernel smoothing is one of the most widely used non-parametric data smoothing techniques. We introduce a new R package ks for multivariate kernel smoothing. Currently it contains functionality for kernel density estimation and kernel discriminant analysis. It is a comprehensive package for bandwidth matrix selection, implementing a wide range of data-driven diagonal and unconstrained bandwidth selectors.
Contingent kernel density estimation.
Directory of Open Access Journals (Sweden)
Scott Fortmann-Roe
Kernel density estimation is a widely used method for estimating a distribution based on a sample of points drawn from that distribution. Generally, in practice some form of error contaminates the sample of observed points. Such error can be the result of imprecise measurements or observation bias. Often this error is negligible and may be disregarded in analysis. In cases where the error is non-negligible, estimation methods should be adjusted to reduce resulting bias. Several modifications of kernel density estimation have been developed to address specific forms of errors. One form of error that has not yet been addressed is the case where observations are nominally placed at the centers of areas from which the points are assumed to have been drawn, where these areas are of varying sizes. In this scenario, the bias arises because the size of the error can vary among points and some subset of points can be known to have smaller error than another subset or the form of the error may change among points. This paper proposes a "contingent kernel density estimation" technique to address this form of error. This new technique adjusts the standard kernel on a point-by-point basis in an adaptive response to changing structure and magnitude of error. In this paper, equations for our contingent kernel technique are derived, the technique is validated using numerical simulations, and an example using the geographic locations of social networking users is worked to demonstrate the utility of the method.
Institute of Scientific and Technical Information of China (English)
钟志威
2016-01-01
For the problem of CT image reconstruction from sparse-angle projection data, the TV-ART algorithm introduces the gradient-sparsity prior of the image into the algebraic reconstruction technique (ART) and achieves good reconstruction of piecewise-smooth images. However, the algorithm produces a staircase effect when reconstructing edges, degrading reconstruction quality. This paper therefore proposes a reconstruction algorithm combining an adaptive kernel regression function with the algebraic reconstruction technique (LAKR-ART), which does not produce the staircase effect at edges and reconstructs detailed textures better. Finally, simulation experiments on the standard Shepp-Logan CT phantom and an actual CT head image are carried out and compared with the ART and TV-ART algorithms. The experimental results show that the proposed algorithm is effective.
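The ART core referred to above is the Kaczmarz row-action iteration; a minimal sketch on a tiny synthetic system (a well-posed stand-in for a real sparse-angle CT projection matrix) is:

```python
import numpy as np

def art(A, b, n_sweeps=20, relax=1.0):
    """Algebraic reconstruction technique (Kaczmarz row-action method):
    cycle through the projection equations, correcting x along one row
    of A at a time; relax is the relaxation parameter."""
    x = np.zeros(A.shape[1])
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            a = A[i]
            x = x + relax * (b[i] - a @ x) / (a @ a) * a
    return x

rng = np.random.default_rng(7)
# Tiny, well-posed stand-in for a sparse-angle projection system
n = 16
x_true = rng.random(n)
A = rng.standard_normal((n, n)) + n * np.eye(n)   # diagonally dominant rows
b = A @ x_true
x_rec = art(A, b, n_sweeps=50)
```

TV-ART and the proposed LAKR-ART both interleave such sweeps with a regularization step applied to the intermediate image (total-variation shrinkage and adaptive kernel regression, respectively), which is where the edge behaviour discussed above comes from.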
Sato, S.; Kawamura, S.
2008-07-01
The alignment sensing and control scheme for the resonant sideband extraction interferometer is still an unsettled issue for next-generation gravitational wave antennas. The difficulty lies in extracting separate error signals for all 12 angular degrees of freedom, which arises mainly from the complexity of the optical system and cavity 'degeneracy'. We have suggested a new sensing scheme that gives reasonably separated signals and is fully compatible with the length sensing scheme. The key to this idea is to resolve the 'degeneracy' of the optical cavities: by choosing an appropriate Gouy phase for the degenerate cavities, alignment error signals with far smaller admixtures can be extracted.
Model selection in kernel ridge regression
DEFF Research Database (Denmark)
Exterkate, Peter
2013-01-01
Kernel ridge regression is a technique to perform ridge regression with a potentially infinite number of nonlinear transformations of the independent variables as regressors. This method is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts. … The influence of the choice of kernel and the setting of tuning parameters on forecast accuracy is investigated. Several popular kernels are reviewed, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. The latter two kernels are interpreted in terms of their smoothing properties …, and the tuning parameters associated with all these kernels are related to smoothness measures of the prediction function and to the signal-to-noise ratio. Based on these interpretations, guidelines are provided for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study …
Energy Technology Data Exchange (ETDEWEB)
Hoppe, Sven; Quirbach, Sebastian; Krause, Fabian G.; Benneker, Lorin M. [Inselspital, Berne University Hospital, Department of Orthopaedic Surgery, Berne (Switzerland); Mamisch, Tallal C. [Inselspital, Berne University Hospital, Department of Radiology, Berne (Switzerland); Werlen, Stefan [Clinic Sonnenhof, Department of Radiology, Berne (Switzerland)
2012-09-15
To demonstrate the potential benefits of biochemical axial T2* mapping of intervertebral discs (IVDs) for the detection and grading of early stages of degenerative disc disease, using 1.5-Tesla magnetic resonance imaging (MRI) in a clinical setting. Ninety-three patients suffering from lumbar spine problems were examined using standard MRI protocols including an axial T2* mapping protocol. All discs were classified morphologically and grouped as "healthy" or "abnormal". Differences between groups were analysed with respect to the specific T2* pattern at different regions of interest (ROIs). Healthy intervertebral discs revealed a distinct cross-sectional T2* value profile: T2* values were significantly lower in the annulus fibrosus than in the nucleus pulposus (P = 0.01). In abnormal IVDs, T2* values were significantly lower, especially towards the centre of the disc, reflecting the expected decreased water content of the nucleus (P = 0.01). In herniated discs, ROIs within the nucleus pulposus and ROIs covering the annulus fibrosus both showed decreased T2* values. Axial T2* mapping is effective in detecting early stages of degenerative disc disease, and offers potential benefit as a diagnostic tool allowing quantitative assessment of intervertebral disc degeneration.
Chen, Anhui; Wang, Yulong; Shao, Ying; Huang, Bo
2017-01-01
Cordyceps militaris has been used in traditional Chinese medicine for many years, but its frequent degeneration during continuous maintenance in culture can lead to substantial commercial losses. In this study, a degenerated strain of C. militaris was obtained by subculturing a wild-type strain through 10 successive subcultures. The relative abundance of the 2 mating types seems to be out of balance in the degenerated strain. By cross-mating 4 single-ascospore isolates (2 for MAT 1-1 and 2 for MAT 1-2) from the degenerated strain, we were able to restore fruiting body production to wild-type levels. The rejuvenated strain not only produced well-developed fruiting bodies but also accumulated more cordycepin and adenosine than either the original wild-type strain or the degenerated strain. These new characteristics remained stable after 4 successive transfers, which indicates that the method used to rejuvenate the degenerated strain in this study is an effective approach.
Nonlinear projection trick in kernel methods: an alternative to the kernel trick.
Kwak, Nojun
2013-12-01
In kernel methods such as kernel principal component analysis (PCA) and support vector machines, the so-called kernel trick is used to avoid direct calculations in a high (virtually infinite) dimensional kernel space. In this brief, based on the fact that the effective dimensionality of a kernel space is less than the number of training samples, we propose an alternative to the kernel trick that explicitly maps the input data into a reduced-dimensional kernel space. This is easily obtained by the eigenvalue decomposition of the kernel matrix. The proposed method is named the nonlinear projection trick, in contrast to the kernel trick. With this technique, the applicability of kernel methods is widened to arbitrary algorithms that do not use the dot product. The equivalence between the kernel trick and the nonlinear projection trick is shown for several conventional kernel methods. In addition, we extend PCA-L1, which uses the L1-norm instead of the L2-norm (or dot product), into a kernel version and show the effectiveness of the proposed approach.
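The trick itself is short enough to sketch: eigendecompose the kernel matrix K = UΛUᵀ and take Y = UΛ^{1/2} restricted to the positive eigenvalues, giving explicit coordinates whose dot products reproduce the kernel. This is a sketch of the stated idea, not the authors' code; the RBF kernel and data are illustrative:

```python
import numpy as np

def nonlinear_projection(K, tol=1e-10):
    """Explicit coordinates for the effective kernel space.

    Eigendecompose K = U diag(vals) U^T and keep the positive part, so the
    returned Y satisfies Y @ Y.T == K up to numerical precision.
    """
    vals, vecs = np.linalg.eigh(K)
    keep = vals > tol
    return vecs[:, keep] * np.sqrt(vals[keep])

rng = np.random.default_rng(1)
X = rng.standard_normal((20, 3))
K = np.exp(-((X[:, None] - X[None, :])**2).sum(-1))   # RBF kernel matrix
Y = nonlinear_projection(K)   # one explicit coordinate vector per sample
```

Any algorithm that operates on plain vectors, not only dot products, can now be run on the rows of Y.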
Energy Technology Data Exchange (ETDEWEB)
Oda, Seitaro, E-mail: seisei0430@nifty.com [Department of Diagnostic Radiology, Faculty of Life Sciences, Kumamoto University, 1-1-1 Honjyo, Kumamoto 860-8556 (Japan); Utsunomiya, Daisuke, E-mail: utsunomi@kumamoto-u.ac.jp [Department of Diagnostic Radiology, Faculty of Life Sciences, Kumamoto University, 1-1-1 Honjyo, Kumamoto 860-8556 (Japan); Funama, Yoshinori, E-mail: funama@kumamoto-u.ac.jp [Department of Medical Physics, Faculty of Life Sciences, Kumamoto University, 1-1-1 Honjyo, Kumamoto 860-8556 (Japan); Takaoka, Hiroko, E-mail: hiroko_takayoka@yahoo.co.jp [Department of Diagnostic Radiology, Kumamoto Chuo Hospital, 1-5-1 Tainoshima, Kumamoto 862-0965 (Japan); Katahira, Kazuhiro, E-mail: yy26kk@yahoo.co.jp [Department of Diagnostic Radiology, Kumamoto Chuo Hospital, 1-5-1 Tainoshima, Kumamoto 862-0965 (Japan); Honda, Keiichi, E-mail: k-book@osu.bbiq.jp [Department of Diagnostic Radiology, Kumamoto Chuo Hospital, 1-5-1 Tainoshima, Kumamoto 862-0965 (Japan); Noda, Katsuo, E-mail: k-noda@kumachu.gr.jp [Department of Cardiology, Kumamoto Chuo Hospital, 1-5-1 Tainoshima, Kumamoto 862-0965 (Japan); Oshima, Shuichi, E-mail: shuoshima@e-mail.jp [Department of Cardiology, Kumamoto Chuo Hospital, 1-5-1 Tainoshima, Kumamoto 862-0965 (Japan); Yamashita, Yasuyuki, E-mail: yama@kumamoto-u.ac.jp [Department of Diagnostic Radiology, Faculty of Life Sciences, Kumamoto University, 1-1-1 Honjyo, Kumamoto 860-8556 (Japan)
2013-02-15
Objectives: To investigate the diagnostic performance of 256-slice cardiac CT for the evaluation of the in-stent lumen by using a hybrid iterative reconstruction (HIR) algorithm combined with a high-resolution kernel. Methods: This study included 28 patients with 28 stents who underwent cardiac CT. Three different reconstruction images were obtained with: (1) a standard filtered back projection (FBP) algorithm with a standard cardiac kernel (CB), (2) an FBP algorithm with a high-resolution cardiac kernel (CD), and (3) an HIR algorithm with the CD kernel. We measured image noise and kurtosis and used receiver operating characteristics analysis to evaluate observer performance in the detection of in-stent stenosis. Results: Image noise with FBP plus the CD kernel (80.2 ± 15.5 HU) was significantly higher than with FBP plus the CB kernel (28.8 ± 4.6 HU) and HIR plus the CD kernel (36.1 ± 6.4 HU). There was no significant difference in the image noise between FBP plus the CB kernel and HIR plus the CD kernel. Kurtosis was significantly better with the CD- than the CB kernel. The kurtosis values obtained with the CD kernel were not significantly different between the FBP- and HIR reconstruction algorithms. The areas under the receiver operating characteristics curves with HIR plus the CD kernel were significantly higher than with FBP plus the CB- or the CD kernel. The difference between FBP plus the CB- or the CD kernel was not significant. The average sensitivity, specificity, and positive and negative predictive value for the detection of in-stent stenosis were 83.3, 50.0, 33.3, and 91.6% for FBP plus the CB kernel, 100, 29.6, 40.0, and 100% for FBP plus the CD kernel, and 100, 54.5, 40.0, and 100% for HIR plus the CD kernel. Conclusions: The HIR algorithm combined with the high-resolution kernel significantly improved diagnostic performance in the detection of in-stent stenosis.
On Degenerate Partial Differential Equations
Chen, Gui-Qiang G.
2010-01-01
Some recent developments, including results, ideas, techniques, and approaches, in the study of degenerate partial differential equations are surveyed and analyzed. Several examples of nonlinear degenerate, even mixed, partial differential equations are presented, which arise naturally in longstanding, fundamental problems in fluid mechanics and differential geometry. The solution of these fundamental problems requires a deep understanding of nonlinear degenerate partial differential equations.
Nonlinear Deep Kernel Learning for Image Annotation.
Jiu, Mingyuan; Sahbi, Hichem
2017-02-08
Multiple kernel learning (MKL) is a widely used technique for kernel design. Its principle consists in learning, for a given support vector classifier, the most suitable convex (or sparse) linear combination of standard elementary kernels. However, these combinations are shallow and often powerless to capture the actual similarity between highly semantic data, especially for challenging classification tasks such as image annotation. In this paper, we redefine multiple kernels using deep multi-layer networks. In this new contribution, a deep multiple kernel is recursively defined as a multi-layered combination of nonlinear activation functions, each of which involves a combination of several elementary or intermediate kernels, and results in a positive semi-definite deep kernel. We propose four different frameworks in order to learn the weights of these networks: supervised, unsupervised, kernel-based semi-supervised, and Laplacian-based semi-supervised. When plugged into support vector machines (SVMs), the resulting deep kernel networks show clear gains, compared to several shallow kernels, for the task of image annotation. Extensive experiments and analysis on the challenging ImageCLEF photo annotation benchmark, the COREL5k database, and the Banana dataset validate the effectiveness of the proposed method.
Extension of Wirtinger's Calculus in Reproducing Kernel Hilbert Spaces and the Complex Kernel LMS
Bouboulis, Pantelis
2010-01-01
Over the last decade, kernel methods for nonlinear processing have successfully been used in the machine learning community. The primary mathematical tool employed in these methods is the notion of the Reproducing Kernel Hilbert Space (RKHS). However, so far, the emphasis has been on batch techniques. It is only recently that online techniques have been considered in the context of adaptive signal processing tasks. Moreover, these efforts have focused only on real-valued data sequences. To the best of our knowledge, no kernel-based strategy has been developed, so far, that is able to deal with complex-valued signals. In this paper, we present a general framework to attack the problem of adaptive filtering of complex signals, using either real reproducing kernels, taking advantage of a technique called 'complexification' of real RKHSs, or complex reproducing kernels, highlighting the use of the complex Gaussian kernel. In order to derive gradients of operators that need to be defined on the associat...
DEFF Research Database (Denmark)
Barndorff-Nielsen, Ole Eiler; Hansen, Peter Reinhard; Lunde, Asger
2011-01-01
In a recent paper we have introduced the class of realised kernel estimators of the increments of quadratic variation in the presence of noise. We showed that this estimator is consistent and derived its limit distribution under various assumptions on the kernel weights. In this paper we extend our … analysis, looking at the class of subsampled realised kernels, and we derive the limit theory for this class of estimators. We find that subsampling is highly advantageous for estimators based on discontinuous kernels, such as the truncated kernel. For kinked kernels, such as the Bartlett kernel, we show … that subsampling is impotent, in the sense that subsampling has no effect on the asymptotic distribution. Perhaps surprisingly, for the efficient smooth kernels, such as the Parzen kernel, we show that subsampling is harmful as it increases the asymptotic variance. We also study the performance of subsampled …
A new Mercer sigmoid kernel for clinical data classification.
Carrington, André M; Fieguth, Paul W; Chen, Helen H
2014-01-01
In classification with Support Vector Machines, only Mercer kernels, i.e. valid kernels, such as the Gaussian RBF kernel, are widely accepted and thus suitable for clinical data. Practitioners would also like to use the sigmoid kernel, a non-Mercer kernel, but its range of validity is difficult to determine, and even within that range its validity is in dispute. Despite these shortcomings, the sigmoid kernel is used by some, and two kernels in the literature attempt to emulate and improve upon it. We propose the first Mercer sigmoid kernel, which is therefore trustworthy for the classification of clinical data. We show the similarity between the Mercer sigmoid kernel and the sigmoid kernel and, in the process, identify a normalization technique that improves the classification accuracy of the latter. The Mercer sigmoid kernel achieves the best mean accuracy on three clinical data sets, detecting melanoma in skin lesions better than the most popular kernels; on non-clinical data sets it shows no significant difference in median accuracy compared with the Gaussian RBF kernel. It consistently classifies some points correctly that the Gaussian RBF kernel does not, and vice versa.
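Why the classic sigmoid kernel fails the Mercer test is easy to see numerically: a Gram matrix with a negative diagonal entry can never be positive semi-definite. The sketch below uses illustrative parameters a = 1, c = -1 for the tanh kernel (this is the problematic classic kernel, not the proposed Mercer sigmoid kernel) and contrasts it with the Gaussian RBF kernel:

```python
import numpy as np

rng = np.random.default_rng(0)
# include the zero vector: then K(0, 0) = tanh(-1) < 0, and a PSD matrix
# cannot have a negative diagonal entry
X = np.vstack([np.zeros((1, 5)), rng.standard_normal((29, 5))])
G = X @ X.T

sigmoid_K = np.tanh(G - 1.0)   # classic tanh kernel, a = 1, c = -1 (assumed)
rbf_K = np.exp(-((X[:, None] - X[None, :])**2).sum(-1) / 2)

min_eig_sigmoid = float(np.linalg.eigvalsh(sigmoid_K).min())  # negative
min_eig_rbf = float(np.linalg.eigvalsh(rbf_K).min())          # >= 0 (numerically)
```

The RBF Gram matrix passes the eigenvalue check for every dataset, which is exactly what Mercer validity guarantees.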
Kernel methods and minimum contrast estimators for empirical deconvolution
Delaigle, Aurore
2010-01-01
We survey classical kernel methods for providing nonparametric solutions to problems involving measurement error. In particular we outline kernel-based methodology in this setting, and discuss its basic properties. Then we point to close connections that exist between kernel methods and much newer approaches based on minimum contrast techniques. The connections are through use of the sinc kernel for kernel-based inference. This 'infinite-order' kernel is not often used explicitly for kernel-based deconvolution, although it has received attention in more conventional problems where measurement error is not an issue. We show that in a comparison between kernel methods for density deconvolution, and their counterparts based on minimum contrast, the two approaches give identical results on a grid which becomes increasingly fine as the bandwidth decreases. In consequence, the main numerical differences between these two techniques are arguably the result of different approaches to choosing smoothing parameters.
Efficient classification for additive kernel SVMs.
Maji, Subhransu; Berg, Alexander C; Malik, Jitendra
2013-01-01
We show that a class of nonlinear kernel SVMs admits approximate classifiers with runtime and memory complexity that is independent of the number of support vectors. This class of kernels, which we refer to as additive kernels, includes widely used kernels for histogram-based image comparison like intersection and chi-squared kernels. Additive kernel SVMs can offer significant improvements in accuracy over linear SVMs on a wide variety of tasks while having the same runtime, making them practical for large-scale recognition or real-time detection tasks. We present experiments on a variety of datasets, including the INRIA person, Daimler-Chrysler pedestrians, UIUC Cars, Caltech-101, MNIST, and USPS digits, to demonstrate the effectiveness of our method for efficient evaluation of SVMs with additive kernels. Since its introduction, our method has become integral to various state-of-the-art systems for PASCAL VOC object detection/image classification, ImageNet Challenge, TRECVID, etc. The techniques we propose can also be applied to settings where evaluation of weighted additive kernels is required, which include kernelized versions of PCA, LDA, regression, k-means, as well as speeding up the inner loop of SVM classifier training algorithms.
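The additivity property is what enables the speed-up: the decision function sum_i alpha_i K(x, s_i) for the histogram intersection kernel splits into one 1-D function per dimension, which the fast classifier tabulates once so that evaluation cost no longer depends on the number of support vectors. A small numpy sketch of that exact identity (the random data and coefficients are illustrative; the paper's evaluator additionally approximates the tabulated functions):

```python
import numpy as np

def intersection_kernel(A, B):
    # K(a, b) = sum_d min(a_d, b_d), the histogram intersection kernel
    return np.minimum(A[:, None, :], B[None, :, :]).sum(-1)

rng = np.random.default_rng(0)
S = rng.random((50, 8))            # "support vectors" (histograms, assumed)
alpha = rng.standard_normal(50)    # signed dual coefficients (assumed)
x = rng.random(8)

# direct evaluation: one kernel value per support vector
f_direct = alpha @ intersection_kernel(S, x[None, :])[:, 0]

# additive evaluation: a sum of 1-D functions h_d(t) = sum_i alpha_i * min(t, S[i, d]);
# each h_d can be tabulated once, making per-query cost independent of len(S)
f_additive = sum(np.sum(alpha * np.minimum(x[d], S[:, d])) for d in range(8))
```

The two evaluations agree exactly; replacing each h_d by a lookup table gives the constant-time classifier described in the abstract.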
Kernel approximation for solving few-body integral equations
Christie, I.; Eyre, D.
1986-06-01
This paper investigates an approximate method for solving integral equations that arise in few-body problems. The method is to replace the kernel by a degenerate kernel defined on a finite dimensional subspace of piecewise Lagrange polynomials. Numerical accuracy of the method is tested by solving the two-body Lippmann-Schwinger equation with non-separable potentials, and the three-body Amado-Lovelace equation with separable two-body potentials.
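A hedged sketch of the degenerate-kernel idea on a toy Fredholm equation of the second kind. Piecewise-linear hat functions stand in for the paper's piecewise Lagrange polynomials, and the separable test kernel and trapezoid quadrature are illustrative assumptions, not the few-body equations themselves:

```python
import numpy as np

def solve_degenerate(kernel, f, lam, nodes, quad_x):
    """Degenerate-kernel method for u(x) = f(x) + lam * int K(x,t) u(t) dt.

    The kernel is replaced by the separable (finite-rank) approximation
    K_n(x, t) = sum_j L_j(x) K(nodes[j], t), with L_j piecewise-linear hat
    functions, which reduces the integral equation to a small linear system.
    """
    n, h = len(nodes), quad_x[1] - quad_x[0]
    w = np.full(len(quad_x), h); w[0] = w[-1] = h / 2   # trapezoid weights
    L = np.array([np.interp(quad_x, nodes, np.eye(n)[j]) for j in range(n)])
    Kq = kernel(nodes[:, None], quad_x[None, :])        # K(nodes_i, t)
    A = (Kq * w) @ L.T        # A_ij = int K(t_i, t) L_j(t) dt
    b = (Kq * w) @ f(quad_x)  # b_i  = int K(t_i, t) f(t) dt
    c = np.linalg.solve(np.eye(n) - lam * A, b)
    return lambda x: f(x) + lam * np.interp(x, nodes, c)

# toy equation u(x) = x + int_0^1 x*t*u(t) dt, whose exact solution is 1.5*x
sol = solve_degenerate(lambda x, t: x * t, lambda x: x, 1.0,
                       np.linspace(0, 1, 6), np.linspace(0, 1, 2001))
```

For this rank-one kernel the degenerate approximation is exact, so the numerical solution matches 1.5x up to quadrature error.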
Alam, Md. Ashad; Fukumizu, Kenji; Wang, Yu-Ping
2016-01-01
To the best of our knowledge, there are no general, well-founded robust methods for statistical unsupervised learning. Most unsupervised methods explicitly or implicitly depend on the kernel covariance operator (kernel CO) or the kernel cross-covariance operator (kernel CCO). These are sensitive to contaminated data, even when bounded positive definite kernels are used. First, we propose a robust kernel covariance operator (robust kernel CO) and a robust kernel cross-covariance operator (robust kern...
A kernel-based approach for biomedical named entity recognition.
Patra, Rakesh; Saha, Sujan Kumar
2013-01-01
Support vector machine (SVM) is one of the popular machine learning techniques used in various text processing tasks, including named entity recognition (NER). The performance of the SVM classifier largely depends on the appropriateness of the kernel function. In the last few years a number of task-specific kernel functions have been proposed and used in various text processing tasks, for example, the string kernel, graph kernel, tree kernel and so on. So far very few efforts have been devoted to the development of an NER-specific kernel. In the literature we found that the tree kernel has been used in the NER task only for entity boundary detection or reannotation. The conventional tree kernel is unable to execute the complete NER task on its own. In this paper we propose a kernel function, motivated by the tree kernel, which is able to perform the complete NER task. To examine the effectiveness of the proposed kernel, we have applied the kernel function to the openly available JNLPBA 2004 data. Our kernel executes the complete NER task and achieves reasonable accuracy.
2008-08-01
[Berg et al., 1984] has been used in a machine learning context by Cuturi and Vert [2005]. Definition 26: Let (X, +) be a semigroup. A function ϕ : X → R is called pd (in the semigroup sense) if k : X × X → R, defined as k(x, y) = ϕ(x + y), is a pd kernel. Likewise, ϕ is called nd if k is a nd kernel. Accordingly, these are called semigroup kernels. 7.3 Jensen-Shannon and Tsallis kernels. The basic result that allows deriving pd kernels based on
Approximate kernel competitive learning.
Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang
2015-03-01
Kernel competitive learning (KCL) has been successfully used to achieve robust clustering. However, KCL is not scalable for large-scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be computed and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large-scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL) method, which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis of why the proposed approximation works for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) based on a set-based kernel competitive learning strategy, which overcomes the obstacle to using parallel programming in kernel competitive learning and significantly accelerates approximate kernel competitive learning for large-scale clustering. Empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. The proposed methods also achieve more effective clustering performance, in terms of clustering precision, than related approximate clustering approaches.
Nowicki, Dimitri; Siegelmann, Hava
2010-06-11
This paper introduces a new model of associative memory, capable of both binary and continuous-valued inputs. Based on kernel theory, the memory model is on one hand a generalization of Radial Basis Function networks and, on the other, is, in feature space, analogous to a Hopfield network. Attractors can be added, deleted, and updated on-line simply, without harming existing memories, and the number of attractors is independent of input dimension. Input vectors do not have to adhere to a fixed or bounded dimensionality; they can increase and decrease it without relearning previous memories. A memory consolidation process enables the network to generalize concepts and form clusters of input data, which outperforms many unsupervised clustering techniques; this process is demonstrated on handwritten digits from MNIST. Another process, reminiscent of memory reconsolidation, is introduced, in which existing memories are refreshed and tuned with new inputs; this process is demonstrated on a series of morphed faces.
Optimized Kernel Entropy Components.
Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau
2016-02-25
This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to a sorting of the importance of kernel eigenvectors by entropy instead of variance, as in kernel principal components analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by compacting the information into very few features (often just one or two). The proposed method produces features with higher expressive power. In particular, it is based on the independent component analysis framework, and introduces an extra rotation to the eigendecomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both methods is the selection of the kernel parameter, since it strongly affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. The results of both methods are illustrated on different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, that the most successful rule for estimating the kernel parameter is based on maximum likelihood, and that OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
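The core of the KECA ordering can be sketched directly: each eigenpair (λ_i, e_i) of the kernel matrix contributes λ_i (1ᵀe_i)² to the estimated information potential, and KECA sorts by this contribution rather than by eigenvalue. An illustrative numpy sketch of the sorting step only, not the OKECA optimization:

```python
import numpy as np

def keca_order(K):
    """Sort kernel eigenpairs by entropy contribution, not by eigenvalue.

    The contribution of eigenpair (vals[i], vecs[:, i]) to the
    information-potential estimate is vals[i] * (ones @ vecs[:, i])**2;
    these contributions sum to K.sum().
    """
    vals, vecs = np.linalg.eigh(K)
    contrib = vals * vecs.sum(axis=0)**2
    order = np.argsort(contrib)[::-1]
    return contrib[order], vals[order], vecs[:, order]

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 2))
K = np.exp(-((X[:, None] - X[None, :])**2).sum(-1) / 2)   # Gaussian kernel
contrib, vals, vecs = keca_order(K)
```

An eigenvector with a large eigenvalue but near-zero mean carries almost no entropy contribution, which is why the two orderings can differ.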
Regularization in kernel learning
Mendelson, Shahar; 10.1214/09-AOS728
2010-01-01
Under mild assumptions on the kernel, we obtain the best known error rates in a regularized learning scenario taking place in the corresponding reproducing kernel Hilbert space (RKHS). The main novelty in the analysis is a proof that one can use a regularization term that grows significantly slower than the standard quadratic growth in the RKHS norm.
Energy Technology Data Exchange (ETDEWEB)
Duff, I.
1994-12-31
This workshop focuses on kernels for iterative software packages. Specifically, the three speakers discuss various aspects of sparse BLAS kernels. Their topics are: 'Current status of user level sparse BLAS'; 'Current status of the sparse BLAS toolkit'; and 'Adding matrix-matrix and matrix-matrix-matrix multiply to the sparse BLAS toolkit'.
Directory of Open Access Journals (Sweden)
R. Lakshmi
2014-06-01
A kernel $J$ of a digraph $D$ is an independent set of vertices of $D$ such that for every vertex $w \in V(D) \setminus J$ there exists an arc from $w$ to a vertex in $J$. In this paper, among other results, a characterization of $2$-regular circulant digraphs having a kernel is obtained. This characterization is a partial solution to the following problem: characterize circulant digraphs which have kernels. The problem appeared in the book Digraphs - Theory, Algorithms and Applications, Second Edition, Springer-Verlag, 2009, by J. Bang-Jensen and G. Gutin.
Gärtner, Thomas
2009-01-01
This book provides a unique treatment of an important area of machine learning and answers the question of how kernel methods can be applied to structured data. Kernel methods are a class of state-of-the-art learning algorithms that exhibit excellent learning results in several application domains. Originally, kernel methods were developed with data in mind that can easily be embedded in a Euclidean vector space. Much real-world data does not have this property but is inherently structured. An example of such data, often consulted in the book, is the (2D) graph structure of molecules formed by
Locally linear approximation for Kernel methods : the Railway Kernel
Muñoz, Alberto; González, Javier
2008-01-01
In this paper we present a new kernel, the Railway Kernel, that works properly for general (nonlinear) classification problems, with the interesting property that it acts locally as a linear kernel. In this way, we avoid potential problems due to the use of a general-purpose kernel, like the RBF kernel, such as the high dimension of the induced feature space. As a consequence, following our methodology the number of support vectors is much lower and, therefore, the generalization capability of the pr...
Kroah-Hartman, Greg
2009-01-01
Linux Kernel in a Nutshell covers the entire range of kernel tasks, starting with downloading the source and making sure that the kernel is in sync with the versions of the tools you need. In addition to configuration and installation steps, the book offers reference material and discussions of related topics such as control of kernel options at runtime.
Motai, Yuichi
2015-01-01
Describes and discusses the variants of kernel analysis methods for data types that have been intensely studied in recent years. This book covers kernel analysis topics ranging from the fundamental theory of kernel functions to its applications. The book surveys the current status, popular trends, and developments in kernel analysis studies. The author discusses multiple kernel learning algorithms and how to choose the appropriate kernels during the learning phase. Data-Variant Kernel Analysis is a new pattern analysis framework for different types of data configurations. The chapters include
Mixture Density Mercer Kernels
National Aeronautics and Space Administration — We present a method of generating Mercer Kernels from an ensemble of probabilistic mixture models, where each mixture model is generated from a Bayesian mixture...
Analog forecasting with dynamics-adapted kernels
Zhao, Zhizhen; Giannakis, Dimitrios
2016-09-01
Analog forecasting is a nonparametric technique introduced by Lorenz in 1969 which predicts the evolution of states of a dynamical system (or observables defined on the states) by following the evolution of the sample in a historical record of observations which most closely resembles the current initial data. Here, we introduce a suite of forecasting methods which improve traditional analog forecasting by combining ideas from kernel methods developed in harmonic analysis and machine learning and state-space reconstruction for dynamical systems. A key ingredient of our approach is to replace single-analog forecasting with weighted ensembles of analogs constructed using local similarity kernels. The kernels used here employ a number of dynamics-dependent features designed to improve forecast skill, including Takens’ delay-coordinate maps (to recover information in the initial data lost through partial observations) and a directional dependence on the dynamical vector field generating the data. Mathematically, our approach is closely related to kernel methods for out-of-sample extension of functions, and we discuss alternative strategies based on the Nyström method and the multiscale Laplacian pyramids technique. We illustrate these techniques in applications to forecasting in a low-order deterministic model for atmospheric dynamics with chaotic metastability, and interannual-scale forecasting in the North Pacific sector of a comprehensive climate model. We find that forecasts based on kernel-weighted ensembles have significantly higher skill than the conventional approach following a single analog.
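The weighted-ensemble step can be sketched in a few lines: build Takens delay vectors from a historical record, weight every analog by a similarity kernel on its delay vector, and average the analogs' futures. This is a toy 1-D illustration with an assumed Gaussian kernel, bandwidth, and delay count, not the paper's dynamics-adapted kernels:

```python
import numpy as np

def kernel_analog_forecast(history, lead, query, bandwidth, delays=3):
    """Kernel-weighted ensemble of analog futures.

    Initial data are compared through Takens delay vectors; each admissible
    historical point votes for its own future `lead` steps ahead, weighted
    by a Gaussian similarity kernel (bandwidth assumed, not dynamics-adapted).
    """
    h = np.asarray(history, dtype=float)
    idx = np.arange(delays - 1, len(h) - lead)
    D = np.stack([h[i - delays + 1:i + 1] for i in idx])   # delay vectors
    w = np.exp(-((D - query)**2).sum(axis=1) / (2 * bandwidth**2))
    w /= w.sum()
    return w @ h[idx + lead]     # weighted ensemble, not a single analog

t = np.arange(2000)
series = np.sin(0.1 * t)
query = series[498:501]          # delay vector ending at time 500
pred = kernel_analog_forecast(series, lead=5, query=query, bandwidth=0.1)
```

The delay coordinates disambiguate rising from falling phases of the signal, which a single-value comparison could not do.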
Johno, Hisashi; Nakamoto, Kazunori; Saigo, Tatsuhiko
2015-01-01
Kernel Bayes' rule has been proposed as a nonparametric kernel-based method to realize Bayesian inference in reproducing kernel Hilbert spaces. However, we demonstrate both theoretically and experimentally that the prediction result by kernel Bayes' rule is in some cases unnatural. We consider that this phenomenon is in part due to the fact that the assumptions in kernel Bayes' rule do not hold in general.
Linearized Kernel Dictionary Learning
Golts, Alona; Elad, Michael
2016-06-01
In this paper we present a new approach to incorporating kernels into dictionary learning. The kernel K-SVD algorithm (KKSVD), which has been introduced recently, shows an improvement in classification performance relative to its linear counterpart K-SVD. However, this algorithm requires the storage and handling of a very large kernel matrix, which leads to high computational cost, while also limiting its use to setups with a small number of training examples. We address these problems by combining two ideas: first, we approximate the kernel matrix using a cleverly sampled subset of its columns using the Nyström method; secondly, as we wish to avoid using this matrix altogether, we decompose it by SVD to form new "virtual samples," on which any linear dictionary learning can be employed. Our method, termed "Linearized Kernel Dictionary Learning" (LKDL), can be seamlessly applied as a pre-processing stage on top of any efficient off-the-shelf dictionary learning scheme, effectively "kernelizing" it. We demonstrate the effectiveness of our method on several tasks of both supervised and unsupervised classification and show the efficiency of the proposed scheme, its easy integration and performance-boosting properties.
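The two-step preprocessing (sample kernel columns Nyström-style, then factorize to obtain explicit features) can be sketched as follows. Here an eigendecomposition of the sampled sub-matrix plays the role of the factorization step, and the kernel, sample size, and landmark count are illustrative assumptions rather than the LKDL implementation:

```python
import numpy as np

def virtual_samples(K_cols, K_sub, tol=1e-10):
    """Nystrom "virtual samples": F such that F.T @ F approximates K.

    K_cols: n x m sampled columns of the full kernel matrix;
    K_sub:  m x m kernel among the m sampled (landmark) points.
    Any linear dictionary-learning routine can then run on the columns
    of F instead of touching the full kernel matrix.
    """
    vals, vecs = np.linalg.eigh(K_sub)
    keep = vals > tol
    return (vecs[:, keep] / np.sqrt(vals[keep])).T @ K_cols.T

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 4))
sub = rng.choice(100, size=30, replace=False)   # landmark subset (assumed)
K_full = np.exp(-((X[:, None] - X[None, :])**2).sum(-1) / 4)

F = virtual_samples(K_full[:, sub], K_full[np.ix_(sub, sub)])
approx = F.T @ F     # the Nystrom approximation C W^+ C^T of K_full
```

On the landmark block the approximation is exact; elsewhere its quality depends on how well the landmarks cover the data.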
Convolution kernel design and efficient algorithm for sampling density correction.
Johnson, Kenneth O; Pipe, James G
2009-02-01
Sampling density compensation is an important step in non-Cartesian image reconstruction. One of the common techniques to determine weights that compensate for differences in sampling density involves a convolution. A new convolution kernel is designed for sampling density compensation, with the aim of minimizing the error in the fully reconstructed image. The resulting weights obtained using this new kernel are compared with those from various previous methods, showing a reduction in reconstruction error. A computationally efficient algorithm is also presented that facilitates the calculation of the convolution of finite kernels. Both the kernel and the algorithm are extended to 3D. Copyright 2009 Wiley-Liss, Inc.
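A toy 1-D version of convolution-based density compensation shows the mechanism (this uses a simple triangular kernel and the familiar iterative rescaling w ← w / (w ⊛ K), purely as a stand-in for the optimized kernel designed in the paper):

```python
def tri_kernel(d, r=1.5):
    # simple triangular convolution kernel of radius r (illustrative stand-in)
    return max(0.0, 1.0 - abs(d) / r)

def density_weights(points, iters=50):
    # rescale weights until (w convolved with the kernel) is ~1 at every
    # sample, so densely sampled regions receive small weights
    w = [1.0] * len(points)
    for _ in range(iters):
        conv = [sum(wj * tri_kernel(pi - pj) for pj, wj in zip(points, w))
                for pi in points]
        w = [wi / ci for wi, ci in zip(w, conv)]
    return w

# nonuniform 1-D sampling: a cluster near 0, then sparse samples
pts = [0.0, 0.1, 0.2, 0.3, 2.0, 4.0]
w = density_weights(pts)
```

At convergence the weighted samples convolved with the kernel are flat, which is the criterion a density-compensation kernel is designed around.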
Kernels for Vector-Valued Functions: a Review
Alvarez, Mauricio A; Lawrence, Neil D
2011-01-01
Kernel methods are among the most popular techniques in machine learning. From a frequentist/discriminative perspective they play a central role in regularization theory as they provide a natural choice for the hypothesis space and the regularization functional through the notion of reproducing kernel Hilbert spaces. From a Bayesian/generative perspective they are key in the context of Gaussian processes, where the kernel function is also known as the covariance function. Traditionally, kernel methods have been used in supervised learning problems with scalar outputs, and indeed there has been a considerable amount of work devoted to designing and learning kernels. More recently there has been an increasing interest in methods that deal with multiple outputs, motivated partly by frameworks like multitask learning. In this paper, we review different methods to design or learn valid kernel functions for multiple outputs, paying particular attention to the connection between probabilistic and functional method...
Directory of Open Access Journals (Sweden)
A. Zakery
2005-03-01
Full Text Available Chalcogenide glasses such as arsenic sulfide (As2S3) have attracted attention for applications such as all-optical switching in high-speed communication, due to their high nonlinear refractive index. The Z-scan and degenerate four wave mixing (DFWM) techniques can be used to measure the nonlinear refractive index n2 and the two-photon absorption coefficient β. A simultaneous closed-aperture and open-aperture Z-scan experimental setup was used to obtain the experimental results, which were then fitted to a theoretical formula. Values of n2 = 3×10^-17 m^2/W and β = 0.29 cm/GW have been obtained. DFWM measurements were made on arsenic sulfide films, using a Box-cars forward geometry. Experimental results based on non-phase-matched signals were again fitted to a theoretical formula and a value of n2 = 3.9×10^-17 m^2/W was obtained.
A Kernel-Based Nonlinear Representor with Application to Eigenface Classification
Institute of Scientific and Technical Information of China (English)
ZHANG Jing; LIU Ben-yong; TAN Hao
2004-01-01
This paper presents a classifier named kernel-based nonlinear representor (KNR) for optimal representation of pattern features. Adopting the Gaussian kernel, with the kernel width adaptively estimated by a simple technique, it is applied to eigenface classification. Experimental results on the ORL face database show that it improves the classification rate by around 6 percentage points over the Euclidean distance classifier.
Covariant derivative expansion of the heat kernel
Energy Technology Data Exchange (ETDEWEB)
Salcedo, L.L. [Universidad de Granada, Departamento de Fisica Moderna, Granada (Spain)
2004-11-01
Using the technique of labeled operators, compact explicit expressions are given for all traced heat kernel coefficients containing zero, two, four and six covariant derivatives, and for diagonal coefficients with zero, two and four derivatives. The results apply to boundaryless flat space-times and arbitrary non-Abelian scalar and gauge background fields. (orig.)
Multidimensional kernel estimation
Milosevic, Vukasin
2015-01-01
Kernel estimation is one of the non-parametric methods used for estimation of a probability density function. Its first ROOT implementation, as part of the RooFit package, has one major issue: its evaluation time is extremely slow, making it almost unusable. The goal of this project was to create a new class (TKNDTree) which follows the original idea of kernel estimation, greatly improves the evaluation time (using the TKTree class for storing the data and creating different user-controlled modes of evaluation) and adds an interpolation option, for the 2D case, with the help of the new Delaunay2D class.
Learning Rates for -Regularized Kernel Classifiers
Directory of Open Access Journals (Sweden)
Hongzhi Tong
2013-01-01
Full Text Available We consider a family of classification algorithms generated from a regularization kernel scheme associated with -regularizer and convex loss function. Our main purpose is to provide an explicit convergence rate for the excess misclassification error of the produced classifiers. The error decomposition includes approximation error, hypothesis error, and sample error. We apply some novel techniques to estimate the hypothesis error and sample error. Learning rates are eventually derived under some assumptions on the kernel, the input space, the marginal distribution, and the approximation error.
for palm kernel oil extraction
African Journals Online (AJOL)
user
OEE), ... designed (CRD) experimental approach with 4 factor levels and 2 replications was used to determine the effect of kernel .... palm kernels in either a continuous or batch mode ... are fed through the hopper; the screw conveys, crushes,.
DEFF Research Database (Denmark)
Sommer, Stefan Horst; Lauze, Francois Bernard; Nielsen, Mads
2011-01-01
In the LDDMM framework, optimal warps for image registration are found as end-points of critical paths for an energy functional, and the EPDiff equations describe the evolution along such paths. The Large Deformation Diffeomorphic Kernel Bundle Mapping (LDDKBM) extension of LDDMM allows scale space...
DEFF Research Database (Denmark)
Barndorff-Nielsen, Ole Eiler; Hansen, Peter Reinhard; Lunde, Asger
2011-01-01
We propose a multivariate realised kernel to estimate the ex-post covariation of log-prices. We show this new consistent estimator is guaranteed to be positive semi-definite, is robust to measurement error of certain types, and can also handle non-synchronous trading. It is the first estimator...
Adaptive metric kernel regression
DEFF Research Database (Denmark)
Goutte, Cyril; Larsen, Jan
2000-01-01
regression by minimising a cross-validation estimate of the generalisation error. This allows to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms...
Adaptive Metric Kernel Regression
DEFF Research Database (Denmark)
Goutte, Cyril; Larsen, Jan
1998-01-01
by minimising a cross-validation estimate of the generalisation error. This allows one to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms the standard...
Institute of Scientific and Technical Information of China (English)
丁立军; 王喜明; 郝一男; 陈刚
2013-01-01
To optimize the production of biodiesel from Xanthoceras sorbifolia kernels, a process completing the kernel oil extraction and the biodiesel synthesis in a single step was adopted, based on a central composite experimental design. Optimization experiments were carried out with biodiesel yield as the response value and extraction/reaction temperature, petroleum ether amount, methanol amount and NaOH amount as independent variables; a mathematical model fitted to the experimental data can accurately predict the yield of the one-step synthesis. The results show the optimized conditions to be: extraction/reaction temperature 77 °C, petroleum ether 6:1 (volume/mass ratio), methanol 12% of the kernel mass (volume/mass ratio) and NaOH 0.3% of the kernel mass (mass ratio), giving a biodiesel yield of 65.44%. [English abstract:] Conventionally the biodiesel is synthesized after oil extraction and pretreatment, which makes production and separation relatively complex and the separation cost very large; this cost is greatly reduced when the biodiesel is synthesized directly from the crude oil. The single-step technique for the synthesis of biodiesel from Xanthoceras sorbifolia kernels was studied here, using petroleum ether as the extraction agent and methanol as the synthesis agent. The oil extraction and ester exchange reaction were conducted under water-bath heating and magnetic stirring, with sodium hydroxide as the catalyst. The single-step technique accomplishing oil extraction and biodiesel synthesis from Xanthoceras sorbifolia Bunge kernels was investigated using a central composite design. A predictive polynomial quadratic model was established with Design Expert software; in the model, temperature, petroleum ether amount, methanol amount and sodium hydroxide amount were independent variables and biodiesel yield was the response value. The results showed that the order of influence of the four factors on biodiesel yield was petroleum ether amount > methanol amount > extraction/reaction temperature > sodium hydroxide amount. The influencing degree of the interaction between the factors
Viscosity kernel of molecular fluids
DEFF Research Database (Denmark)
Puscasu, Ruslan; Todd, Billy; Daivis, Peter
2010-01-01
, temperature, and chain length dependencies of the reciprocal and real-space viscosity kernels are presented. We find that the density has a major effect on the shape of the kernel. The temperature range and chain lengths considered here have by contrast less impact on the overall normalized shape. Functional...... forms that fit the wave-vector-dependent kernel data over a large density and wave-vector range have also been tested. Finally, a structural normalization of the kernels in physical space is considered. Overall, the real-space viscosity kernel has a width of roughly 3–6 atomic diameters, which means...
Reduced multiple empirical kernel learning machine.
Wang, Zhe; Lu, MingZhe; Gao, Daqi
2015-02-01
Multiple kernel learning (MKL) has been demonstrated to be flexible and effective in depicting heterogeneous data sources, since MKL can introduce multiple kernels rather than a single fixed kernel into applications. However, MKL incurs high time and space complexity in contrast to single kernel learning, which is not acceptable in real-world applications. Meanwhile, it is known that the kernel mappings of MKL generally take two forms, implicit kernel mapping and empirical kernel mapping (EKM), the latter having attracted less attention. In this paper, we focus on MKL with the EKM, and propose a reduced multiple empirical kernel learning machine, named RMEKLM for short. To the best of our knowledge, it is the first to reduce both the time and space complexity of MKL with EKM. Different from existing MKL, the proposed RMEKLM adopts the Gauss elimination technique to extract a set of feature vectors, and it is validated that doing so does not lose much information of the original feature space. RMEKLM then uses the extracted feature vectors to span a reduced orthonormal subspace of the feature space, which is visualized in terms of its geometric structure. It can be demonstrated that the spanned subspace is isomorphic to the original feature space, which means that the dot product of two vectors in the original feature space is equal to that of the two corresponding vectors in the generated orthonormal subspace. More importantly, the proposed RMEKLM brings simpler computation and needs less storage space, especially during testing. Finally, the experimental results show that RMEKLM achieves efficient and effective performance in terms of both complexity and classification. The contributions of this paper can be given as follows: (1) by mapping the input space into an orthonormal subspace, the geometry of the generated subspace is visualized; (2) this paper first reduces both the time and space complexity of the EKM-based MKL; (3
Multiple Kernel Point Set Registration.
Nguyen, Thanh Minh; Wu, Q M Jonathan
2016-06-01
The finite Gaussian mixture model with kernel correlation is a flexible tool that has recently received attention for point set registration. While there are many algorithms for point set registration presented in the literature, an important issue arising from these studies concerns the mapping of data with nonlinear relationships and the ability to select a suitable kernel. Kernel selection is crucial for effective point set registration. We focus here on multiple kernel point set registration. We make several contributions in this paper. First, each observation is modeled using the Student's t-distribution, which is heavy-tailed and more robust than the Gaussian distribution. Second, by automatically adjusting the kernel weights, the proposed method allows us to prune the ineffective kernels; after parameter learning, the kernel saliencies of the irrelevant kernels go to zero. This makes the choice of kernels less crucial, and it is easy to include other kinds of kernels. Finally, we show empirically that our model outperforms state-of-the-art methods recently proposed in the literature.
A framework for optimal kernel-based manifold embedding of medical image data.
Zimmer, Veronika A; Lekadir, Karim; Hoogendoorn, Corné; Frangi, Alejandro F; Piella, Gemma
2015-04-01
Kernel-based dimensionality reduction is a widely used technique in medical image analysis. To fully unravel the underlying nonlinear manifold, the selection of an adequate kernel function and of its free parameters is critical. In practice, however, the kernel function is generally chosen as Gaussian or polynomial, and such standard kernels might not always be optimal for a given image dataset or application. In this paper, we present a study on the effect of the kernel functions in nonlinear manifold embedding of medical image data. To this end, we first carry out a literature review on existing advanced kernels developed in the statistics, machine learning, and signal processing communities. In addition, we implement kernel-based formulations of well-known nonlinear dimensional reduction techniques such as Isomap and Locally Linear Embedding, thus obtaining a unified framework for manifold embedding using kernels. Subsequently, we present a method to automatically choose a kernel function and its associated parameters from a pool of kernel candidates, with the aim of generating the most suitable manifold embeddings. Furthermore, we show how the calculated selection measures can be extended to take into account the spatial relationships in images, or used to combine several kernels to further improve the embedding results. Experiments are then carried out on various synthetic and phantom datasets for numerical assessment of the methods. Furthermore, the workflow is applied to real data that include brain manifolds and multispectral images to demonstrate the importance of the kernel selection in the analysis of high-dimensional medical images.
On summable form of Poisson-Mehler kernel for big q-Hermite and Al-Salam-Chihara polynomials
Szabłowski, Paweł J
2010-01-01
Using a special technique of expanding the ratio of densities, we obtain simple closed forms for certain kernels analogous to the Poisson-Mehler kernel (including asymmetric ones) for certain values of the parameters, proving positivity of those kernels.
[Hepatolenticular degeneration].
Zudenigo, D; Relja, M
1990-01-01
Hepatolenticular degeneration (Wilson's disease) is a hereditary disease in which a metabolic disorder of copper leads to its accumulation in the liver, brain, cornea and kidneys with consequent pathologic changes in those organs. The hereditary mechanism of the disease is autosomal recessive, with a prevalence of 30-100 per 1,000,000 inhabitants. The etiology of this disease is not yet explained. There are two hypotheses. The first is a disorder of ceruloplasmin metabolism caused by insufficient synthesis of normal ceruloplasmin, or synthesis of functionally abnormal ceruloplasmin. The second is a block of biliary copper excretion resulting from a functional defect of liver lysosomes. The pathogenetic mechanism of the disease is first a long-term accumulation of copper in the liver and later, when the hepatic depot is full, its release into the circulation and accumulation in the brain, cornea, kidneys and bones, causing corresponding pathologic changes. The toxic activity of copper is a consequence of its action on enzymes, particularly those with an -SH group. There are two basic clinical forms of the disease: liver disease or neurologic disease. Before puberty liver damage is more frequent, while in adolescents and young adults the neurologic form of the disease is usual. The liver disease is nonspecific and characterized by symptoms of cirrhosis and chronic aggressive hepatitis. The only specific feature is hemolytic anemia which, in combination with the previous symptoms, is important for diagnosis of the disease. Neurologic symptoms are most frequently a consequence of pathologic changes in the basal ganglia. In our patients the most frequent symptoms were tremor (63%); dysarthria, choreoathetosis and rigor (38%); ataxia and mental disorders (31%); dysphagia and dystonia (12%); diplopia, hypersalivation, nystagmus and Babinski's sign (6%). Among pathologic changes in other tissues and organs the most important is the finding of Kayser-Fleischer ring in the
Integrating the Gradient of the Thin Wire Kernel
Champagne, Nathan J.; Wilton, Donald R.
2008-01-01
A formulation for integrating the gradient of the thin wire kernel is presented. This approach employs a new expression for the gradient of the thin wire kernel derived from a recent technique for numerically evaluating the exact thin wire kernel. The approach should provide essentially arbitrary accuracy and may be used with higher-order elements and basis functions using the procedure described in [4]. When the source and observation points are close, the potential integrals over wire segments involving the wire kernel are split into parts to handle the singular behavior of the integrand [1]. The singularity characteristics of the gradient of the wire kernel are different from those of the wire kernel itself, and the axial and radial components have different singularities. The characteristics of the gradient of the wire kernel are discussed in [2]. To evaluate the near electric and magnetic fields of a wire, the integral of the gradient of the wire kernel needs to be calculated over the source wire. Since the vector bases for current have constant direction on linear wire segments, these integrals reduce to integrals of the form
Wilson Dslash Kernel From Lattice QCD Optimization
Energy Technology Data Exchange (ETDEWEB)
Joo, Balint [Jefferson Lab, Newport News, VA; Smelyanskiy, Mikhail [Parallel Computing Lab, Intel Corporation, California, USA; Kalamkar, Dhiraj D. [Parallel Computing Lab, Intel Corporation, India; Vaidyanathan, Karthikeyan [Parallel Computing Lab, Intel Corporation, India
2015-07-01
Lattice Quantum Chromodynamics (LQCD) is a numerical technique used for calculations in theoretical nuclear and high energy physics. LQCD is traditionally one of the first applications ported to many new high performance computing architectures, and indeed LQCD practitioners have been known to design and build custom LQCD computers. Lattice QCD kernels are frequently used as benchmarks (e.g. 168.wupwise in the SPEC suite) and are generally well understood, and as such are ideal to illustrate several optimization techniques. In this chapter we detail our work in optimizing the Wilson-Dslash kernels for the Intel Xeon Phi; however, as we will show, the technique gives excellent performance on the regular Xeon architecture as well.
Kernel Density Estimation, Kernel Methods, and Fast Learning in Large Data Sets.
Wang, Shitong; Wang, Jun; Chung, Fu-lai
2014-01-01
Kernel methods such as the standard support vector machine and support vector regression trainings take O(N^3) time and O(N^2) space complexities in their naïve implementations, where N is the training set size. It is thus computationally infeasible to apply them to large data sets, and a replacement of the naïve method for finding the quadratic programming (QP) solutions is highly desirable. By observing that many kernel methods can be linked up with the kernel density estimate (KDE), which can be efficiently implemented by some approximation techniques, a new learning method called fast KDE (FastKDE) is proposed to scale up kernel methods. It is based on establishing a connection between KDE and the QP problems formulated for kernel methods using an entropy-based integrated-squared-error criterion. As a result, FastKDE approximation methods can be applied to solve these QP problems. In this paper, the latest advance in fast data reduction via KDE is exploited. With just a simple sampling strategy, the resulting FastKDE method can be used to scale up various kernel methods with a theoretical guarantee that their performance does not degrade much. It has a time complexity of O(m^3), where m is the number of data points sampled from the training set. Experiments on different benchmarking data sets demonstrate that the proposed method has comparable performance with the state-of-the-art method and is effective for a wide range of kernel methods to achieve fast learning in large data sets.
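The core observation, that a KDE computed from a small sample of the training set approximates the full-set KDE, can be seen in a few lines (a pure-Python sketch with a Gaussian kernel and naive subsampling; FastKDE itself uses a more principled reduction):

```python
import math
import random

def kde(data, x, h):
    # Gaussian kernel density estimate at point x with bandwidth h
    c = 1.0 / (len(data) * h * math.sqrt(2 * math.pi))
    return c * sum(math.exp(-0.5 * ((x - d) / h) ** 2) for d in data)

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(5000)]   # full training set
sample = random.sample(data, 500)                      # reduced set (m << N)

full = kde(data, 0.0, 0.3)    # cost grows with N
fast = kde(sample, 0.0, 0.3)  # cost grows with m only
```

Each evaluation over the reduced set costs a factor N/m less while staying close to the full estimate, which is the trade the paper exploits to scale QP-based kernel methods.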
Testing Monotonicity of Pricing Kernels
Timofeev, Roman
2007-01-01
In this master thesis a mechanism to test monotonicity of empirical pricing kernels (EPK) is presented. By testing monotonicity of the pricing kernel we can determine whether the utility function is concave or not. A strictly decreasing pricing kernel corresponds to a concave utility function, while a non-decreasing EPK means that the utility function contains some non-concave regions. Risk averse behavior is usually described by a concave utility function and considered to be a cornerstone of classical behavioral ...
7 CFR 51.1415 - Inedible kernels.
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Inedible kernels. 51.1415 Section 51.1415 Agriculture... Standards for Grades of Pecans in the Shell 1 Definitions § 51.1415 Inedible kernels. Inedible kernels means that the kernel or pieces of kernels are rancid, moldy, decayed, injured by insects or...
7 CFR 981.8 - Inedible kernel.
2010-01-01
... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.8 Section 981.8 Agriculture... Regulating Handling Definitions § 981.8 Inedible kernel. Inedible kernel means a kernel, piece, or particle of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel,...
7 CFR 981.7 - Edible kernel.
2010-01-01
... 7 Agriculture 8 2010-01-01 2010-01-01 false Edible kernel. 981.7 Section 981.7 Agriculture... Regulating Handling Definitions § 981.7 Edible kernel. Edible kernel means a kernel, piece, or particle of almond kernel that is not inedible....
7 CFR 981.408 - Inedible kernel.
2010-01-01
... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.408 Section 981.408 Agriculture... Administrative Rules and Regulations § 981.408 Inedible kernel. Pursuant to § 981.8, the definition of inedible kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored...
GPU Acceleration of Image Convolution using Spatially-varying Kernel
Hartung, Steven; Shukla, Hemant; Miller, J. Patrick; Pennypacker, Carlton
2012-01-01
Image subtraction in astronomy is a tool for transient object discovery such as asteroids, extra-solar planets and supernovae. To match point spread functions (PSFs) between images of the same field taken at different times, a convolution technique is used. Particularly suitable for large-scale images is a computationally intensive spatially-varying kernel. The underlying algorithm is inherently massively parallel due to unique kernel generation at every pixel location. The spatially-varying k...
Clustering via Kernel Decomposition
DEFF Research Database (Denmark)
Have, Anna Szynkowiak; Girolami, Mark A.; Larsen, Jan
2006-01-01
Methods for spectral clustering have been proposed recently which rely on the eigenvalue decomposition of an affinity matrix. In this work it is proposed that the affinity matrix is created based on the elements of a non-parametric density estimator. This matrix is then decomposed to obtain...... posterior probabilities of class membership using an appropriate form of nonnegative matrix factorization. The troublesome selection of hyperparameters such as kernel width and number of clusters can be obtained using standard cross-validation methods as is demonstrated on a number of diverse data sets....
Kernel Phase and Kernel Amplitude in Fizeau Imaging
Pope, Benjamin J S
2016-01-01
Kernel phase interferometry is an approach to high angular resolution imaging which enhances the performance of speckle imaging with adaptive optics. Kernel phases are self-calibrating observables that generalize the idea of closure phases from non-redundant arrays to telescopes with arbitrarily shaped pupils, by considering a matrix-based approximation to the diffraction problem. In this paper I discuss the recent history of kernel phase, in particular in the matrix-based study of sparse arrays, and propose an analogous generalization of the closure amplitude to kernel amplitudes. This new approach can self-calibrate throughput and scintillation errors in optical imaging, which extends the power of kernel phase-like methods to symmetric targets where amplitude and not phase calibration can be a significant limitation, and will enable further developments in high angular resolution astronomy.
Global Polynomial Kernel Hazard Estimation
DEFF Research Database (Denmark)
Hiabu, Munir; Miranda, Maria Dolores Martínez; Nielsen, Jens Perch
2015-01-01
This paper introduces a new bias reducing method for kernel hazard estimation. The method is called global polynomial adjustment (GPA). It is a global correction which is applicable to any kernel hazard estimator. The estimator works well from a theoretical point of view as it asymptotically redu...
Graph kernels between point clouds
Bach, Francis
2007-01-01
Point clouds are sets of points in two or three dimensions. Most kernel methods for learning on sets of points have not yet dealt with the specific geometrical invariances and practical constraints associated with point clouds in computer vision and graphics. In this paper, we present extensions of graph kernels for point clouds, which allow kernel methods to be used for such objects as shapes, line drawings, or any three-dimensional point clouds. In order to design rich and numerically efficient kernels with as few free parameters as possible, we use kernels between covariance matrices and their factorizations on graphical models. We derive polynomial time dynamic programming recursions and present applications to recognition of handwritten digits and Chinese characters from few training examples.
Kernel Generalized Noise Clustering Algorithm
Institute of Scientific and Technical Information of China (English)
WU Xiao-hong; ZHOU Jian-jiang
2007-01-01
To deal with the nonlinearly separable problem, the generalized noise clustering (GNC) algorithm is extended to a kernel generalized noise clustering (KGNC) model. Different from the fuzzy c-means (FCM) model and the GNC model, which are based on Euclidean distance, the presented model is based on kernel-induced distance obtained by the kernel method. With the kernel method the input data are nonlinearly and implicitly mapped into a high-dimensional feature space, where the nonlinear pattern appears linear and the GNC algorithm is performed. It is unnecessary to calculate in the high-dimensional feature space because the kernel function can do it just in the input space. The effectiveness of the proposed algorithm is verified by experiments on three data sets. It is concluded that the KGNC algorithm has better clustering accuracy than FCM and GNC in clustering data sets containing noisy data.
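The kernel-induced distance used by such methods never forms the high-dimensional mapping explicitly, since ||φ(x) − φ(v)||^2 = K(x,x) − 2K(x,v) + K(v,v); a minimal sketch with an RBF kernel as an illustrative choice:

```python
import math

def rbf(x, y, gamma=0.5):
    # Gaussian (RBF) kernel on scalars
    return math.exp(-gamma * (x - y) ** 2)

def kernel_dist2(x, v, k=rbf):
    # squared distance in the implicit feature space, via the kernel trick
    return k(x, x) - 2.0 * k(x, v) + k(v, v)
```

For the RBF kernel this distance is bounded by 2 and saturates for far-apart points, which is one reason kernel-based clustering is more tolerant of outliers than plain Euclidean FCM.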
Palmprint Recognition by Applying Wavelet-Based Kernel PCA
Institute of Scientific and Technical Information of China (English)
Murat Ekinci; Murat Aykut
2008-01-01
This paper presents a wavelet-based kernel Principal Component Analysis (PCA) method by integrating the Daubechies wavelet representation of palm images and the kernel PCA method for palmprint recognition. Kernel PCA is a technique for nonlinear dimension reduction of data with an underlying nonlinear spatial structure. The intensity values of the palmprint image are first normalized by using mean and standard deviation. The palmprint is then transformed into the wavelet domain to decompose palm images, and the lowest resolution subband coefficients are chosen for palm representation. The kernel PCA method is then applied to extract non-linear features from the subband coefficients. Finally, similarity measurement is accomplished by using a weighted Euclidean linear distance-based nearest neighbor classifier. Experimental results on PolyU Palmprint Databases demonstrate that the proposed approach achieves highly competitive performance with respect to the published palmprint recognition approaches.
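A bare-bones kernel PCA (Gaussian kernel, double-centered Gram matrix, leading eigenvector by power iteration) can be sketched in pure Python; this illustrates the generic technique only, not the wavelet-based pipeline of the paper, and all parameter values are illustrative:

```python
import math
import random

def rbf(x, y, gamma=0.5):
    return math.exp(-gamma * (x - y) ** 2)

def kpca_first_component(data, gamma=0.5, iters=200):
    # Gram matrix, double-centered as required by kernel PCA
    n = len(data)
    K = [[rbf(a, b, gamma) for b in data] for a in data]
    rm = [sum(row) / n for row in K]   # row means
    tm = sum(rm) / n                   # grand mean
    Kc = [[K[i][j] - rm[i] - rm[j] + tm for j in range(n)] for i in range(n)]
    # power iteration for the leading eigenvector of the centered Gram matrix
    random.seed(1)
    v = [random.random() for _ in range(n)]
    for _ in range(iters):
        w = [sum(Kc[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    # entries of v are the training points' scores on the first
    # kernel principal component (up to scale and sign)
    return v

# two well-separated 1-D clusters: the first kernel PC separates them
data = [0.0, 0.1, 0.2, 0.3, 0.4, 10.0, 10.1, 10.2, 10.3, 10.4]
scores = kpca_first_component(data)
```

In the paper the inputs are wavelet subband coefficients of palm images rather than scalars, but the kernel PCA step is structurally the same.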
A kernel version of spatial factor analysis
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg
2009-01-01
of PCA and related techniques. An interesting dilemma in reduction of dimensionality of data is the desire to obtain simplicity for better understanding, visualization and interpretation of the data on the one hand, and the desire to retain sufficient detail for adequate representation on the other hand......Based on work by Pearson in 1901, Hotelling in 1933 introduced principal component analysis (PCA). PCA is often used for general feature generation and linear orthogonalization or compression by dimensionality reduction of correlated multivariate data, see Jolliffe for a comprehensive description...... version of PCA handles nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. In this paper we shall apply kernel versions of PCA, maximum autocorrelation factor (MAF) analysis...
Analog Forecasting with Dynamics-Adapted Kernels
Zhao, Zhizhen
2014-01-01
Analog forecasting is a non-parametric technique introduced by Lorenz in 1969 which predicts the evolution of states of a dynamical system (or observables defined on the states) by following the evolution of the sample in a historical record of observations which most closely resembles the current initial data. Here, we introduce a suite of forecasting methods which improve traditional analog forecasting by combining ideas from state-space reconstruction for dynamical systems and kernel methods developed in harmonic analysis and machine learning. The first improvement is to augment the dimension of the initial data using Takens' delay-coordinate maps to recover information in the initial data lost through partial observations. Then, instead of using Euclidean distances between the states, weighted ensembles of analogs are constructed according to similarity kernels in delay-coordinate space, featuring an explicit dependence on the dynamical vector field generating the data. The eigenvalues and eigenfunctions ...
Multiple Crop Classification Using Various Support Vector Machine Kernel Functions
Directory of Open Access Journals (Sweden)
Rupali R. Surase
2015-01-01
Full Text Available This study was carried out using Remote Sensing (RS) techniques for crop discrimination and area estimation with a single-date approach. Several kernel functions are employed and compared for mapping the input space, including the linear, sigmoid, polynomial and Radial Basis Function (RBF) kernels. The present study highlights the advantages of Remote Sensing (RS) and Geographic Information System (GIS) techniques for land use/land cover mapping of the Aurangabad region of Maharashtra, India. Single-date, cloud-free IRS-Resourcesat-1 LISS-III data were used for supervised classification on the training set. ENVI 4.4 was used for image analysis and interpretation. The experimental tests show that the system achieved 94.82% accuracy using SVM with the polynomial kernel function, compared with the Radial Basis Function, sigmoid and linear kernels; the Overall Accuracy (OA) improves by up to 5.17% in comparison to the sigmoid kernel function, and by up to 3.45% in comparison to a 3rd-degree polynomial kernel function and RBF with 200 as the penalty parameter.
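As an illustration of comparing SVM kernel choices, the following sketch trains a kernel perceptron (a simple stand-in for a full SVM solver, not the study's actual pipeline) with linear and RBF kernels on synthetic, linearly inseparable two-class data; all data and parameter values are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data (inner core vs outer ring): not linearly
# separable, standing in for spectrally overlapping crop classes.
n = 100
radius = np.concatenate([rng.uniform(0.0, 1.0, n), rng.uniform(2.0, 3.0, n)])
angle = rng.uniform(0.0, 2.0 * np.pi, 2 * n)
X = np.c_[radius * np.cos(angle), radius * np.sin(angle)]
y = np.concatenate([-np.ones(n), np.ones(n)])

def linear_kernel(A, B):
    return A @ B.T

def rbf_kernel(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_perceptron(K, y, epochs=30):
    """Dual perceptron: prediction is sign(sum_j alpha_j y_j K(x_j, x))."""
    alpha = np.zeros(len(y))
    for _ in range(epochs):
        for i in range(len(y)):
            if np.sign(K[i] @ (alpha * y)) != y[i]:
                alpha[i] += 1.0
    return alpha

accuracy = {}
for name, K in [("linear", linear_kernel(X, X)), ("rbf", rbf_kernel(X, X))]:
    alpha = kernel_perceptron(K, y)
    accuracy[name] = float(np.mean(np.sign(K @ (alpha * y)) == y))
print(accuracy)
```

The RBF kernel separates the ring from the core, while the linear kernel cannot, mirroring the kind of kernel-dependent accuracy gap reported in the study.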
Bruemmer, David J.
2009-11-17
A robot platform includes perceptors, locomotors, and a system controller. The system controller executes a robot intelligence kernel (RIK) that includes a multi-level architecture and a dynamic autonomy structure. The multi-level architecture includes a robot behavior level for defining robot behaviors that incorporate robot attributes, and a cognitive level for defining conduct modules that blend an adaptive interaction between predefined decision functions and the robot behaviors. The dynamic autonomy structure is configured for modifying a transaction capacity between operator intervention and robot initiative, and may include multiple levels, with at least a teleoperation mode configured to maximize operator intervention and minimize robot initiative, and an autonomous mode configured to minimize operator intervention and maximize robot initiative. Within the RIK, at least the cognitive level includes the dynamic autonomy structure.
Modified gravitational instability of degenerate and non-degenerate dusty plasma
Jain, Shweta; Sharma, Prerana
2016-09-01
The gravitational instability of strongly coupled dusty plasma (SCDP) is studied considering degenerate and non-degenerate dusty plasma situations. The SCDP system is assumed to be composed of electrons, ions, neutrals, and strongly coupled dust grains. First, in the high density regime, due to the small interparticle distance, the electrons are considered degenerate, whereas the neutrals, dust grains, and ions are treated as non-degenerate. In this case, the dynamics of the inertialess electrons are governed by Fermi pressure and the Bohm potential, while the inertialess ions are governed only by thermal pressure. Second, in the non-degenerate regime, both the electrons and ions are governed by thermal pressure. The generalized hydrodynamic model and the normal mode analysis technique are employed to examine the low frequency waves and gravitational instability in both degenerate and non-degenerate cases. The general dispersion relation is discussed for a characteristic timescale which provides two regimes of frequency, i.e., the hydrodynamic regime and the kinetic regime. Analytical solutions reveal that collisions reduce the growth rate and have a strong impact on structure formation in both degenerate and non-degenerate circumstances. Numerical estimation on the basis of observed parameters for the degenerate and non-degenerate cases is presented to show the effects of dust-neutral collisions and dust effective velocity in the presence of the polarization force. The values of the Jeans length and Jeans mass have been estimated for degenerate white dwarfs as L_J = 1.3 × 10^5 cm and M_J = 0.75 × 10^-3 M⊙, and for non-degenerate laboratory plasma as L_J = 6.86 × 10^16 cm and M_J = 0.68 × 10^10 M⊙. The stability of the SCDP system is discussed using the Routh-Hurwitz criterion.
Nonlinear stochastic system identification of skin using volterra kernels.
Chen, Yi; Hunter, Ian W
2013-04-01
Volterra kernel stochastic system identification is a technique that can be used to capture and model nonlinear dynamics in biological systems, including the nonlinear properties of skin during indentation. A high bandwidth and high stroke Lorentz force linear actuator system was developed and used to test the mechanical properties of bulk skin and underlying tissue in vivo, using a non-white input force and measuring an output position. These short tests (5 s) were conducted in an indentation configuration normal to the skin surface and in an extension configuration tangent to the skin surface. Volterra kernel solution methods were used, including a fast least squares procedure and an orthogonalization solution method. The practical modifications, such as frequency domain filtering, necessary for working with low-pass filtered inputs are also described. A simple linear stochastic system identification technique had a variance accounted for (VAF) of less than 75%. Representations using the first and second Volterra kernels had a much higher VAF (90-97%) as well as a lower Akaike information criterion (AICc), indicating that the Volterra kernel models were more efficient. The experimental second Volterra kernel matches well with results from a dynamic-parameter nonlinearity model with fixed mass as a function of depth, as well as stiffness and damping that increase with depth into the skin. A study with 16 subjects showed that the kernel peak values have mean coefficients of variation (CV) that ranged from 3 to 8% and showed that the kernel principal components were correlated with location on the body, subject mass, body mass index (BMI), and gender. These fast and robust methods for Volterra kernel stochastic system identification can be applied to the characterization of biological tissues, diagnosis of skin diseases, and determination of consumer product efficacy.
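The least-squares route to Volterra kernels can be sketched on a synthetic single-input system (this is not the skin-indentation data; the 3-sample memory and the kernel values are assumptions made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 3, 2000  # assumed kernel memory and record length (illustrative)

u = rng.normal(size=N)
# Ground-truth first- and second-order Volterra kernels (invented values).
h1 = np.array([1.0, 0.5, -0.2])
h2 = np.array([[0.3, 0.1, 0.0],
               [0.1, -0.2, 0.0],
               [0.0, 0.0, 0.05]])

# Lagged input matrix: U[t, k] = u[t - k] (wrap-around rows sliced off).
U = np.column_stack([np.roll(u, k) for k in range(M)])[M:]
lin = U @ h1
quad = np.einsum('ti,ij,tj->t', U, h2, U)
y = lin + quad + 0.05 * rng.normal(size=len(U))

# Least-squares identification: regressors are the lags and lag products.
iu, ju = np.triu_indices(M)
Phi = np.hstack([U, U[:, iu] * U[:, ju]])
theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)

# Variance accounted for, as in the abstract's VAF figure of merit.
y_hat = Phi @ theta
vaf = 1.0 - np.var(y - y_hat) / np.var(y)
print(f"VAF = {vaf:.3f}")
```

Including the second-order (quadratic) regressors is what lifts the VAF well above what a purely linear fit could achieve on this system.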
Fixed kernel regression for voltammogram feature extraction
Acevedo Rodriguez, F. J.; López-Sastre, R. J.; Gil-Jiménez, P.; Ruiz-Reyes, N.; Maldonado Bascón, S.
2009-12-01
Cyclic voltammetry is an electroanalytical technique for obtaining information about substances under analysis without the need for complex flow systems. However, classifying the information in voltammograms obtained using this technique is difficult. In this paper, we propose the use of fixed kernel regression as a method for extracting features from these voltammograms, reducing the information to a few coefficients. The proposed approach has been applied to a wine classification problem with accuracy rates of over 98%. Although the method is described here for extracting voltammogram information, it can be used for other types of signals.
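The feature-extraction idea, reducing a long signal to a few kernel coefficients, can be sketched with fixed Gaussian bumps fitted by least squares (the synthetic peaks stand in for a voltammogram; the centre count and kernel width are arbitrary assumptions, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 500)
# Stand-in for a voltammogram: two smooth peaks plus measurement noise.
signal = np.exp(-((t - 0.3) / 0.05) ** 2) + 0.6 * np.exp(-((t - 0.7) / 0.08) ** 2)
noisy = signal + 0.02 * rng.normal(size=t.size)

# Fixed kernel regression: Gaussian kernels at preset centres; the fitted
# coefficients form a compact feature vector describing the whole curve.
centres = np.linspace(0.0, 1.0, 15)
width = 0.06
Phi = np.exp(-((t[:, None] - centres[None, :]) / width) ** 2)
coeffs, *_ = np.linalg.lstsq(Phi, noisy, rcond=None)
recon = Phi @ coeffs

rel_err = np.linalg.norm(recon - signal) / np.linalg.norm(signal)
print(coeffs.size, f"{rel_err:.3f}")  # 500 samples reduced to 15 features
```

The 15 coefficients could then feed a downstream classifier, which is the spirit of the voltammogram application described above.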
Mixture Density Mercer Kernels: A Method to Learn Kernels
National Aeronautics and Space Administration — This paper presents a method of generating Mercer Kernels from an ensemble of probabilistic mixture models, where each mixture model is generated from a Bayesian...
2010-01-01
... 7 CFR § 981.9 (2010-01-01): Kernel weight. Agriculture Regulations of the Department of Agriculture (Continued), Agricultural Marketing Service (Marketing Agreements...), Regulating Handling, Definitions. Kernel weight means the weight of kernels, ...
(Pre)kernel catchers for cooperative games
Chang, Chih; Driessen, Theo
1995-01-01
The paper provides a new (pre)kernel catcher in that the relevant set always contains the (pre)kernel. This new (pre)kernel catcher gives rise to a better lower bound ɛ*** such that the kernel is included in strong ɛ-cores for all real numbers ɛ not smaller than the relevant bound ɛ***.
2010-01-01
... 7 CFR § 51.2295 (2010-01-01): Half kernel. Agriculture... Standards for Shelled English Walnuts (Juglans regia), Definitions. Half kernel means the separated half of a kernel with not more than one-eighth broken off. ...
Kernel CMAC: an Efficient Neural Network for Classification and Regression
Directory of Open Access Journals (Sweden)
Gábor Horváth
2006-01-01
Full Text Available Kernel methods in learning machines have been developed in the last decade as new techniques for solving classification and regression problems. Kernel methods have many advantageous properties regarding their learning and generalization capabilities, but obtaining the solution usually requires computationally complex quadratic programming. To reduce computational complexity, many different versions have been developed. These versions apply different kernel functions, utilize the training data in different ways, or apply different criterion functions. This paper deals with a special kernel network, which is based on the CMAC neural network. The Cerebellar Model Articulation Controller (CMAC) has some attractive features: fast learning capability and the possibility of efficient digital hardware implementation. Besides these attractive features, the modelling and generalization capabilities of a CMAC may be rather limited. The paper shows that kernel CMAC, an extended version of the classical CMAC network implemented in kernel form, improves the properties of the classical version significantly. Both the modelling and the generalization capabilities are improved while the limited computational complexity is maintained. The paper shows the architecture of this network and presents the relation between the classical CMAC and kernel networks. The operation of the proposed architecture is illustrated using some common benchmark problems.
A kernel version of spatial factor analysis
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg
2009-01-01
... Schölkopf et al. introduce kernel PCA. Shawe-Taylor and Cristianini is an excellent reference for kernel methods in general. Bishop and Press et al. describe kernel methods among many other subjects. Nielsen and Canty use kernel PCA to detect change in univariate airborne digital camera images. The kernel version of PCA handles nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. In this paper we shall apply kernel versions of PCA, maximum autocorrelation factor (MAF) analysis...
Pope, Benjamin; Hinkley, Sasha; Ireland, Michael J; Greenbaum, Alexandra; Latyshev, Alexey; Monnier, John D; Martinache, Frantz
2015-01-01
At present, the principal limitation on the resolution and contrast of astronomical imaging instruments comes from aberrations in the optical path, which may be imposed by the Earth's turbulent atmosphere or by variations in the alignment and shape of the telescope optics. These errors can be corrected physically, with active and adaptive optics, and in post-processing of the resulting image. A recently-developed adaptive optics post-processing technique, called kernel phase interferometry, uses linear combinations of phases that are self-calibrating with respect to small errors, with the goal of constructing observables that are robust against the residual optical aberrations in otherwise well-corrected imaging systems. Here we present a direct comparison between kernel phase and the more established competing techniques, aperture masking interferometry, point spread function (PSF) fitting and bispectral analysis. We resolve the alpha Ophiuchi binary system near periastron, using the Palomar 200-Inch Telesco...
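The self-calibrating combinations described here are, in essence, the left null space of the linear operator mapping pupil-plane errors to observed phases. A minimal sketch with a small random toy matrix (a real kernel-phase pipeline builds this matrix from the pupil model; the sizes here are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)
# Toy phase-transfer matrix A: maps 4 aberration modes to 10 observed
# phases (both dimensions invented for illustration).
A = rng.normal(size=(10, 4))

# Kernel phases: combinations of observed phases insensitive to the
# aberrations, i.e. rows spanning the left null space of A, via SVD.
U, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))
Kmat = U[:, rank:].T            # (10 - rank) self-calibrating combinations

residual = float(np.abs(Kmat @ A).max())
print(Kmat.shape, f"{residual:.2e}")
```

Each row of `Kmat` yields an observable that is unchanged, to first order, by any aberration in the column space of `A`, which is the sense in which kernel phases are robust to residual optical errors.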
Institute of Scientific and Technical Information of China (English)
[Anonymous]
2002-01-01
The methods for computing the kernel consistency-based diagnoses and the kernel abductive diagnoses are only suited to the situation where part of the fault behavioral modes of the components are known. The characterization of kernel model-based diagnosis based on the general causal theory is proposed, which can break through the limitation of the above methods when all behavioral modes of each component are known. Using this method, when observation subsets deduced logically are respectively assigned to the empty or the whole observation set, the kernel consistency-based diagnoses and the kernel abductive diagnoses can deal with all situations. The direct relationship between this diagnostic procedure and the prime implicants/implicates is proved, thus linking the theoretical result with implementation.
DEFF Research Database (Denmark)
Barndorff-Nielsen, Ole E.
The density function of the gamma distribution is used as shift kernel in Brownian semistationary processes modelling the timewise behaviour of the velocity in turbulent regimes. This report presents exact and asymptotic properties of the second order structure function under such a model, and relates these to results of von Kármán and Howarth. But first it is shown that the gamma kernel is interpretable as a Green's function.
Kernel Rootkits Implement and Detection
Institute of Scientific and Technical Information of China (English)
LI Xianghe; ZHANG Liancheng; LI Shuo
2006-01-01
Rootkits, which unnoticeably reside in your computer, stealthily carry on remote control and software eavesdropping, and are a great threat to network and computer security. It is time to acquaint ourselves with their implementation and detection. This article pays more attention to kernel rootkits, because they are more difficult to write and to identify than userland rootkits. The latest technologies used to write and detect kernel rootkits, along with their advantages and disadvantages, are presented in this article.
Employment of kernel methods on wind turbine power performance assessment
DEFF Research Database (Denmark)
Skrimpas, Georgios Alexandros; Sweeney, Christian Walsted; Marhadi, Kun S.
2015-01-01
A power performance assessment technique is developed for the detection of power production discrepancies in wind turbines. The method employs a widely used nonparametric pattern recognition technique, the kernel methods. The evaluation is based on the trending of a feature extracted from the kernel matrix, called the similarity index, which is introduced by the authors for the first time. The operation of the turbine, and consequently the computation of the similarity indexes, is classified into five power bins, offering better resolution and thus more consistent root cause analysis. The accurate...
Degenerate Euler zeta function
Kim, Taekyun
2015-01-01
Recently, T. Kim considered Euler zeta function which interpolates Euler polynomials at negative integer (see [3]). In this paper, we study degenerate Euler zeta function which is holomorphic function on complex s-plane associated with degenerate Euler polynomials at negative integers.
Local coding based matching kernel method for image classification.
Directory of Open Access Journals (Sweden)
Yan Song
Full Text Available This paper mainly focuses on how to effectively and efficiently measure visual similarity for local feature based representation. Among existing methods, metrics based on Bag of Visual Word (BoV techniques are efficient and conceptually simple, at the expense of effectiveness. By contrast, kernel based metrics are more effective, but at the cost of greater computational complexity and increased storage requirements. We show that a unified visual matching framework can be developed to encompass both BoV and kernel based metrics, in which local kernel plays an important role between feature pairs or between features and their reconstruction. Generally, local kernels are defined using Euclidean distance or its derivatives, based either explicitly or implicitly on an assumption of Gaussian noise. However, local features such as SIFT and HoG often follow a heavy-tailed distribution which tends to undermine the motivation behind Euclidean metrics. Motivated by recent advances in feature coding techniques, a novel efficient local coding based matching kernel (LCMK method is proposed. This exploits the manifold structures in Hilbert space derived from local kernels. The proposed method combines advantages of both BoV and kernel based metrics, and achieves a linear computational complexity. This enables efficient and scalable visual matching to be performed on large scale image sets. To evaluate the effectiveness of the proposed LCMK method, we conduct extensive experiments with widely used benchmark datasets, including 15-Scenes, Caltech101/256, PASCAL VOC 2007 and 2011 datasets. Experimental results confirm the effectiveness of the relatively efficient LCMK method.
Protein Analysis Meets Visual Word Recognition: A Case for String Kernels in the Brain
Hannagan, Thomas; Grainger, Jonathan
2012-01-01
It has been recently argued that some machine learning techniques known as Kernel methods could be relevant for capturing cognitive and neural mechanisms (Jakel, Scholkopf, & Wichmann, 2009). We point out that "String kernels," initially designed for protein function prediction and spam detection, are virtually identical to one contending proposal…
Kernel versions of some orthogonal transformations
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg
Kernel versions of orthogonal transformations such as principal components are based on a dual formulation, also termed Q-mode analysis, in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products of the original data are replaced by inner products between nonlinear mappings into higher dimensional feature space. Via kernel substitution, also known as the kernel trick, these inner products between the mappings are in turn replaced by a kernel function, and all quantities needed in the analysis are expressed in terms of this kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel principal component analysis (PCA) and kernel minimum noise fraction (MNF) analyses handle nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function...
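A minimal numpy sketch of the dual (Q-mode) formulation described above: with a linear kernel, eigendecomposing the doubly centred Gram matrix reproduces ordinary PCA scores, and any Mercer kernel could be substituted for the inner product (toy data; all sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(50, 4)) @ rng.normal(size=(4, 4))  # correlated toy data

# Q-mode / kernel analysis: everything enters via inner products.
Xc = X - X.mean(0)
K = Xc @ Xc.T                      # linear kernel; swap in any Mercer kernel
m = len(K)
J = np.eye(m) - np.ones((m, m)) / m
Kc = J @ K @ J                     # double centring = centring the mappings
vals, vecs = np.linalg.eigh(Kc)    # eigh returns ascending eigenvalues
kpca_scores = vecs[:, -1] * np.sqrt(vals[-1])   # leading component scores

# Ordinary (primal) PCA for comparison.
w = np.linalg.eigh(Xc.T @ Xc)[1][:, -1]
pca_scores = Xc @ w

# With the linear kernel, dual and primal analyses agree up to sign.
agree = min(np.linalg.norm(kpca_scores - pca_scores),
            np.linalg.norm(kpca_scores + pca_scores))
print(f"{agree:.2e}")
```

Replacing `K` with, say, a Gaussian kernel matrix turns the same eigendecomposition into nonlinear kernel PCA, without ever forming the feature-space mappings explicitly.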
An Approximate Approach to Automatic Kernel Selection.
Ding, Lizhong; Liao, Shizhong
2016-02-02
Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.
Model Selection in Kernel Ridge Regression
DEFF Research Database (Denmark)
Exterkate, Peter
Kernel ridge regression is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts. This paper investigates the influence of the choice of kernel and the setting of tuning parameters on forecast accuracy. We review several popular kernels, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. We interpret the latter two kernels in terms of their smoothing properties, and we relate the tuning parameters associated to all these kernels to smoothness measures of the prediction function and to the signal-to-noise ratio. Based on these interpretations, we provide guidelines for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study confirms the practical usefulness of these rules of thumb. Finally, the flexible and smooth functional forms provided by the Gaussian and Sinc kernels make them widely...
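The small-grid cross-validation guideline can be sketched on a 1-D toy problem with a Gaussian kernel (the grid values, data, and fold count are illustrative assumptions, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(4)
x = np.sort(rng.uniform(-3.0, 3.0, 120))
y = np.sin(x) + 0.1 * rng.normal(size=x.size)

def gauss_K(a, b, scale):
    """Gaussian kernel matrix between two sets of 1-D inputs."""
    return np.exp(-((a[:, None] - b[None, :]) / scale) ** 2)

def krr_predict(xtr, ytr, xte, scale, lam):
    """Kernel ridge regression: solve (K + lam I) alpha = y, then predict."""
    K = gauss_K(xtr, xtr, scale)
    alpha = np.linalg.solve(K + lam * np.eye(len(xtr)), ytr)
    return gauss_K(xte, xtr, scale) @ alpha

# Select kernel scale and ridge penalty from a small grid by 5-fold CV.
folds = np.arange(x.size) % 5
best_err, best_pair = np.inf, None
for scale in (0.3, 1.0, 3.0):
    for lam in (1e-3, 1e-1, 1e1):
        err = 0.0
        for f in range(5):
            tr, te = folds != f, folds == f
            pred = krr_predict(x[tr], y[tr], x[te], scale, lam)
            err += np.mean((pred - y[te]) ** 2)
        if err < best_err:
            best_err, best_pair = err, (scale, lam)

fit = krr_predict(x, y, x, *best_pair)
mse_true = np.mean((fit - np.sin(x)) ** 2)
print(best_pair, f"{mse_true:.4f}")
```

The selected (scale, penalty) pair recovers the underlying sine function closely, illustrating why a coarse grid plus cross-validation is often sufficient in practice.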
Degenerate primer design for highly variable genomes.
Li, Kelvin; Shrivastava, Susmita; Stockwell, Timothy B
2015-01-01
The application of degenerate PCR primers towards target amplification and sequencing is a useful technique when a population of organisms under investigation is evolving rapidly, or is highly diverse. Degenerate bases in these primers are specified with ambiguity codes that represent alternative nucleotide configurations. Degenerate PCR primers allow the simultaneous amplification of a heterogeneous population by providing a mixture of PCR primers each of which anneal to an alternative genotype found in the isolated sample. However, as the number of degenerate bases specified in a pair of primers rises, the likelihood of amplifying unwanted alternative products also increases. These alternative products may confound downstream data analyses if their levels begin to obfuscate the desired PCR products. This chapter describes a set of computational methodologies that may be used to minimize the degeneracy of designed primers, while still maximizing the proportion of genotypes assayed in the targeted population.
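The degeneracy bookkeeping described above follows directly from the IUPAC ambiguity codes: the degeneracy of a primer is the product of the number of bases each code represents. A minimal illustration (real primer-design tools additionally weigh degeneracy against population coverage):

```python
# IUPAC nucleotide ambiguity codes and the number of bases each represents.
IUPAC = {"A": 1, "C": 1, "G": 1, "T": 1,
         "R": 2, "Y": 2, "S": 2, "W": 2, "K": 2, "M": 2,
         "B": 3, "D": 3, "H": 3, "V": 3, "N": 4}

def degeneracy(primer: str) -> int:
    """Number of distinct plain-base primers the degenerate primer encodes."""
    d = 1
    for base in primer.upper():
        d *= IUPAC[base]
    return d

print(degeneracy("ATGR"))    # R = A/G, so 2 encoded sequences
print(degeneracy("ATGNN"))   # two Ns, so 4 * 4 = 16 encoded sequences
```

Minimizing this product, while still covering the targeted genotypes, is exactly the trade-off the chapter's methodologies address.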
Integral equations with contrasting kernels
Directory of Open Access Journals (Sweden)
Theodore Burton
2008-01-01
Full Text Available In this paper we study integral equations of the form $x(t)=a(t)-\int^t_0 C(t,s)x(s)\,ds$ with sharply contrasting kernels typified by $C^*(t,s)=\ln(e+(t-s))$ and $D^*(t,s)=[1+(t-s)]^{-1}$. The kernel assigns a weight to $x(s)$ and these kernels have exactly opposite effects of weighting. Each type is well represented in the literature. Our first project is to show that for $a\in L^2[0,\infty)$, solutions are largely indistinguishable regardless of which kernel is used. This is a surprise and it leads us to study the essential differences. In fact, those differences become large as the magnitude of $a(t)$ increases. The form of the kernel alone projects necessary conditions concerning the magnitude of $a(t)$ which could result in bounded solutions. Thus, the next project is to determine how close we can come to proving that the necessary conditions are also sufficient. The third project is to show that solutions will be bounded for given conditions on $C$ regardless of whether $a$ is chosen large or small; this is important in real-world problems since we would like to have $a(t)$ as the sum of a bounded, but badly behaved function, and a large well behaved function.
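The equation can be explored numerically. A trapezoidal-rule sketch for both kernels, assuming $a(t)=e^{-t}$ (which lies in $L^2[0,\infty)$); the step size and horizon are arbitrary choices:

```python
import numpy as np

def solve_volterra(a, C, T=10.0, n=400):
    """Trapezoidal-rule solution of x(t) = a(t) - int_0^t C(t,s) x(s) ds."""
    t = np.linspace(0.0, T, n)
    h = t[1] - t[0]
    x = np.zeros(n)
    x[0] = a(t[0])                       # integral term vanishes at t = 0
    for i in range(1, n):
        w = np.full(i + 1, h)
        w[0] = w[-1] = h / 2.0           # trapezoid weights on [0, t_i]
        known = np.dot(w[:-1], C(t[i], t[:i]) * x[:i])
        x[i] = (a(t[i]) - known) / (1.0 + w[-1] * C(t[i], t[i]))
    return t, x

Cstar = lambda t, s: np.log(np.e + (t - s))   # growing kernel
Dstar = lambda t, s: 1.0 / (1.0 + (t - s))    # decaying kernel
a = lambda t: np.exp(-t)

t, x1 = solve_volterra(a, Cstar)
_, x2 = solve_volterra(a, Dstar)
print(f"max|x_C - x_D| = {np.max(np.abs(x1 - x2)):.3f}")
```

For this small, square-integrable forcing both solutions remain bounded, consistent with the paper's theme that the two very different kernels produce solutions that are hard to tell apart when $a$ is small.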
Model selection for Gaussian kernel PCA denoising
DEFF Research Database (Denmark)
Jørgensen, Kasper Winther; Hansen, Lars Kai
2012-01-01
We propose kernel Parallel Analysis (kPA) for automatic kernel scale and model order selection in Gaussian kernel PCA. Parallel Analysis [1] is based on a permutation test for covariance and has previously been applied for model order selection in linear PCA; we here augment the procedure to also tune the Gaussian kernel scale of radial basis function based kernel PCA. We evaluate kPA for denoising of simulated data and the US Postal data set of handwritten digits. We find that kPA outperforms other heuristics to choose the model order and kernel scale in terms of signal-to-noise ratio (SNR)...
Kernel learning algorithms for face recognition
Li, Jun-Bao; Pan, Jeng-Shyang
2013-01-01
Kernel Learning Algorithms for Face Recognition covers the framework of kernel based face recognition. This book discusses the advanced kernel learning algorithms and its application on face recognition. This book also focuses on the theoretical deviation, the system framework and experiments involving kernel based face recognition. Included within are algorithms of kernel based face recognition, and also the feasibility of the kernel based face recognition method. This book provides researchers in pattern recognition and machine learning area with advanced face recognition methods and its new
Disc degeneration: current surgical options
Directory of Open Access Journals (Sweden)
C Schizas
2010-10-01
Full Text Available Chronic low back pain attributed to lumbar disc degeneration poses a serious challenge to physicians. Surgery may be indicated in selected cases following failure of appropriate conservative treatment. For decades, the only surgical option has been spinal fusion, but its results have been inconsistent. Some prospective trials show superiority over usual conservative measures while others fail to demonstrate its advantages. In an effort to improve results of fusion and to decrease the incidence of adjacent segment degeneration, total disc replacement techniques have been introduced and studied extensively. Short-term results have shown superiority over some fusion techniques. Mid-term results however tend to show that this approach yields results equivalent to those of spinal fusion. Nucleus replacement has gained some popularity initially, but evidence on its efficacy is scarce. Dynamic stabilisation, a technique involving less rigid implants than in spinal fusion and performed without the need for bone grafting, represents another surgical option. Evidence again is lacking on its superiority over other surgical strategies and conservative measures. Insertion of interspinous devices posteriorly, aiming at redistributing loads and relieving pain, has been used as an adjunct to disc removal surgery for disc herniation. To date however, there is no clear evidence on their efficacy. Minimally invasive intradiscal thermocoagulation techniques have also been tried, but evidence of their effectiveness is questioned. Surgery using novel biological solutions may be the future of discogenic pain treatment. Collaboration between clinicians and basic scientists in this multidisciplinary field will undoubtedly shape the future of treating symptomatic disc degeneration.
A trace ratio maximization approach to multiple kernel-based dimensionality reduction.
Jiang, Wenhao; Chung, Fu-lai
2014-01-01
Most dimensionality reduction techniques are based on one metric or one kernel, hence it is necessary to select an appropriate kernel for kernel-based dimensionality reduction. Multiple kernel learning for dimensionality reduction (MKL-DR) has been recently proposed to learn a kernel from a set of base kernels which are seen as different descriptions of data. As MKL-DR does not involve regularization, it might be ill-posed under some conditions and consequently its applications are hindered. This paper proposes a multiple kernel learning framework for dimensionality reduction based on regularized trace ratio, termed as MKL-TR. Our method aims at learning a transformation into a space of lower dimension and a corresponding kernel from the given base kernels among which some may not be suitable for the given data. The solutions for the proposed framework can be found based on trace ratio maximization. The experimental results demonstrate its effectiveness in benchmark datasets, which include text, image and sound datasets, for supervised, unsupervised as well as semi-supervised settings.
Corruption clubs: empirical evidence from kernel density estimates
Herzfeld, T.; Weiss, Ch.
2007-01-01
A common finding of many analytical models is the existence of multiple equilibria of corruption. Countries characterized by the same economic, social and cultural background do not necessarily experience the same levels of corruption. In this article, we use Kernel Density Estimation techniques to
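The clubs-as-multiple-modes idea can be sketched with a Gaussian kernel density estimate on synthetic bimodal scores, using Silverman's rule-of-thumb bandwidth (illustrative data, not actual corruption indices):

```python
import numpy as np

rng = np.random.default_rng(5)
# Synthetic "corruption scores": two clubs of countries (invented data).
scores = np.concatenate([rng.normal(2.0, 0.4, 150), rng.normal(6.0, 0.5, 150)])

def kde(grid, data, h):
    """Gaussian kernel density estimate evaluated on a grid."""
    u = (grid[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(1) / (len(data) * h * np.sqrt(2 * np.pi))

grid = np.linspace(0.0, 8.0, 400)
h = 1.06 * scores.std() * len(scores) ** (-1 / 5)   # Silverman's rule of thumb
dens = kde(grid, scores, h)

# Count modes: interior grid points higher than both neighbours.
modes = int(np.sum((dens[1:-1] > dens[:-2]) & (dens[1:-1] > dens[2:])))
print(modes)
```

Two modes in the estimated density correspond to the two "corruption clubs" the article's multiple-equilibria argument predicts.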
DEFF Research Database (Denmark)
Walder, Christian; Henao, Ricardo; Mørup, Morten
We present three generalisations of Kernel Principal Components Analysis (KPCA) which incorporate knowledge of the class labels of a subset of the data points. The first, MV-KPCA, penalises within-class variances similar to Fisher discriminant analysis. The second, LSKPCA, is a hybrid of least squares regression and kernel PCA. The final, LR-KPCA, is an iteratively reweighted version of the previous which achieves a sigmoid loss function on the labeled points. We provide a theoretical risk bound as well as illustrative experiments on real and toy data sets.
Congruence Kernels of Orthoimplication Algebras
Directory of Open Access Journals (Sweden)
I. Chajda
2007-10-01
Full Text Available Abstracting from certain properties of the implication operation in Boolean algebras leads to so-called orthoimplication algebras. These are in a natural one-to-one correspondence with families of compatible orthomodular lattices. It is proved that congruence kernels of orthoimplication algebras are in a natural one-to-one correspondence with families of compatible p-filters on the corresponding orthomodular lattices. Finally, it is proved that the lattice of all congruence kernels of an orthoimplication algebra is relatively pseudocomplemented and a simple description of the relative pseudocomplement is given.
Institute of Scientific and Technical Information of China (English)
范成礼; 邢清华; 付强; 范学渊
2013-01-01
A kernel-based intuitionistic fuzzy clustering algorithm named IFKCM is proposed on the basis of analyzing the deficiencies of existing clustering algorithms. A Gaussian kernel is introduced, the normalization condition is improved, and the dynamic clustering property of the intuitionistic fuzzy c-means (IFCM) algorithm is exploited. Tests on real data prove the algorithm's feasibility and effectiveness. Subsequently, according to the requirements of target recognition in the ballistic midcourse phase and the characteristics of ballistic target recognition, a simulation system named intuitionistic fuzzy kernel c-means-target recognition in ballistic midcourse (IFKCM-TRBM) is designed and implemented. Simulation experiments and comparative analysis show that the prototype system is robust and feasible, offering a new reference and approach for ballistic midcourse target recognition.
A Novel Kernel for Least Squares Support Vector Machine
Institute of Scientific and Technical Information of China (English)
FENG Wei; ZHAO Yong-ping; DU Zhong-hua; LI De-cai; WANG Li-feng
2012-01-01
Extreme learning machine (ELM) has attracted much attention in recent years due to its fast convergence and good performance. Merging ELM and the support vector machine is an important trend, yielding an ELM kernel. ELM kernel based methods are able to solve nonlinear problems by inducing an explicit mapping, in contrast with commonly-used kernels such as the Gaussian kernel. In this paper, the ELM kernel is extended to least squares support vector regression (LSSVR), and ELM-LSSVR is proposed. ELM-LSSVR can reduce the training and test time simultaneously without extra techniques such as sequential minimal optimization and pruning mechanisms. Moreover, the memory space required for training and testing is reduced. To confirm the efficacy and feasibility of the proposed ELM-LSSVR, experiments are reported demonstrating that ELM-LSSVR improves training and test time with accuracy comparable to other algorithms.
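A sketch of the idea, assuming a random tanh hidden layer as the explicit ELM feature map and solving the standard LSSVR dual system (the layer size, data, and γ value are all illustrative assumptions, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(6)
x = np.linspace(0.0, 2.0 * np.pi, 80)
y = np.sin(x) + 0.05 * rng.normal(size=x.size)

# ELM-style explicit random feature map: the kernel is the inner product
# of random-hidden-layer activations, so there is no kernel width to tune.
L = 200                                   # hidden nodes (illustrative)
W = rng.normal(size=(1, L))
b = rng.normal(size=L)

def elm_map(u):
    return np.tanh(u[:, None] @ W + b)    # sigmoid-type activations

H = elm_map(x)
K = H @ H.T                               # ELM kernel matrix

# LSSVR dual system: [[0, 1^T], [1, K + I/gamma]] [b0; alpha] = [0; y]
gamma = 10.0
n = x.size
A = np.zeros((n + 1, n + 1))
A[0, 1:] = 1.0
A[1:, 0] = 1.0
A[1:, 1:] = K + np.eye(n) / gamma
sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
b0, alpha = sol[0], sol[1:]

pred = K @ alpha + b0
mse = np.mean((pred - np.sin(x)) ** 2)
print(f"{mse:.4f}")
```

The fit is obtained from a single linear solve, which is the source of the training-time advantage the abstract claims over quadratic-programming-based SVR.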
Kernel Methods for Mining Instance Data in Ontologies
Bloehdorn, Stephan; Sure, York
The amount of ontologies and metadata available on the Web is constantly growing. The successful application of machine learning techniques for learning ontologies from textual data, i.e. mining for the Semantic Web, contributes to this trend. However, no principled approaches exist so far for mining from the Semantic Web. We investigate how machine learning algorithms can be made amenable to directly taking advantage of the rich knowledge expressed in ontologies and associated instance data. Kernel methods have been successfully employed in various learning tasks and provide a clean framework for interfacing between non-vectorial data and machine learning algorithms. In this spirit, we express the problem of mining instances in ontologies as the problem of defining valid corresponding kernels. We present a principled framework for designing such kernels by decomposing the kernel computation into specialized kernels for selected characteristics of an ontology, which can be flexibly assembled and tuned. Initial experiments on real world Semantic Web data show promising results and demonstrate the usefulness of our approach.
Bergman kernel on generalized exceptional Hua domain
Institute of Scientific and Technical Information of China (English)
YIN Weiping (殷慰萍); ZHAO Zhengang (赵振刚)
2002-01-01
We have computed the Bergman kernel functions explicitly for two types of generalized exceptional Hua domains, and also studied the asymptotic behavior of the Bergman kernel function of exceptional Hua domain near boundary points, based on Appell's multivariable hypergeometric function.
Regularized Embedded Multiple Kernel Dimensionality Reduction for Mine Signal Processing.
Li, Shuang; Liu, Bing; Zhang, Chen
2016-01-01
Traditional multiple kernel dimensionality reduction models are generally based on graph embedding and manifold assumption. But such assumption might be invalid for some high-dimensional or sparse data due to the curse of dimensionality, which has a negative influence on the performance of multiple kernel learning. In addition, some models might be ill-posed if the rank of matrices in their objective functions was not high enough. To address these issues, we extend the traditional graph embedding framework and propose a novel regularized embedded multiple kernel dimensionality reduction method. Different from the conventional convex relaxation technique, the proposed algorithm directly takes advantage of a binary search and an alternative optimization scheme to obtain optimal solutions efficiently. The experimental results demonstrate the effectiveness of the proposed method for supervised, unsupervised, and semisupervised scenarios.
A kernel version of multivariate alteration detection
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg; Vestergaard, Jacob Schack
2013-01-01
Based on the established methods kernel canonical correlation analysis and multivariate alteration detection we introduce a kernel version of multivariate alteration detection. A case study with SPOT HRV data shows that the kMAD variates focus on extreme change observations.
Random Feature Maps for Dot Product Kernels
Kar, Purushottam; Karnick, Harish
2012-01-01
Approximating non-linear kernels using feature maps has gained a lot of interest in recent years due to applications in reducing training and testing times of SVM classifiers and other kernel based learning algorithms. We extend this line of work and present low distortion embeddings for dot product kernels into linear Euclidean spaces. We base our results on a classical result in harmonic analysis characterizing all dot product kernels and use it to define randomized feature maps into explicit low dimensional Euclidean spaces in which the native dot product provides an approximation to the dot product kernel with high confidence.
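The construction above can be illustrated with its simplest non-random special case: a hypothetical explicit feature map for the degree-2 homogeneous polynomial kernel. The paper's randomized maps generalize this idea to arbitrary dot product kernels; this sketch only shows how a dot product kernel can be linearized exactly.

```python
import numpy as np

# Illustrative only: an exact (non-randomized) feature map for the
# degree-2 polynomial kernel k(x, y) = (x . y)^2.
def phi(x):
    # All pairwise products x_i * x_j; then <phi(x), phi(y)> = (x . y)^2.
    return np.outer(x, x).ravel()

x = np.array([1.0, 2.0])
y = np.array([3.0, 1.0])
exact = np.dot(x, y) ** 2        # direct kernel evaluation: 25.0
linear = np.dot(phi(x), phi(y))  # same value via the explicit map
```

The randomized maps of the paper trade this exact but high-dimensional embedding for a low-dimensional one that approximates the kernel with high probability.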
On the Diamond Bessel Heat Kernel
Directory of Open Access Journals (Sweden)
Wanchak Satsanit
2011-01-01
We study the heat equation in n dimensions with the Diamond Bessel operator. We find the solution by the method of convolution and the Fourier transform in distribution theory, and also obtain an interesting kernel related to the spectrum, called the Bessel heat kernel.
Local Observed-Score Kernel Equating
Wiberg, Marie; van der Linden, Wim J.; von Davier, Alina A.
2014-01-01
Three local observed-score kernel equating methods that integrate methods from the local equating and kernel equating frameworks are proposed. The new methods were compared with their earlier counterparts with respect to such measures as bias--as defined by Lord's criterion of equity--and percent relative error. The local kernel item response…
Computations of Bergman Kernels on Hua Domains
Institute of Scientific and Technical Information of China (English)
殷慰萍; 王安; 赵振刚; 赵晓霞; 管冰辛
2001-01-01
The Bergman kernel function plays an important role in several complex variables. The Bergman kernel function exists on any bounded domain in Cn, but explicit formulas are known only for a few types of domains, for example the bounded homogeneous domains and the egg domain in some cases.
Veto-Consensus Multiple Kernel Learning
Y. Zhou; N. Hu; C.J. Spanos
2016-01-01
We propose Veto-Consensus Multiple Kernel Learning (VCMKL), a novel way of combining multiple kernels such that one class of samples is described by the logical intersection (consensus) of base kernelized decision rules, whereas the other classes are described by the union (veto) of their complements.
Accelerating the Original Profile Kernel.
Directory of Open Access Journals (Sweden)
Tobias Hamp
One of the most accurate multi-class protein classification systems continues to be the profile-based SVM kernel introduced by the Leslie group. Unfortunately, its CPU requirements render it too slow for practical large-scale classification tasks. Here, we introduce several software improvements that enable significant acceleration. Using various non-redundant data sets, we demonstrate that our new implementation reaches a speed-up as high as 14-fold for calculating the same kernel matrix. Some predictions are over 200 times faster, making the kernel possibly the top contender in terms of its speed/performance ratio. Additionally, we explain how to parallelize various computations and provide an integrative program that reduces creating a production-quality classifier to a single program call. The new implementation is available as a Debian package under a free academic license and does not depend on commercial software. For non-Debian based distributions, the source package ships with a traditional Makefile-based installer. Download and installation instructions can be found at https://rostlab.org/owiki/index.php/Fast_Profile_Kernel. Bugs and other issues may be reported at https://rostlab.org/bugzilla3/enter_bug.cgi?product=fastprofkernel.
Adaptive wiener image restoration kernel
Yuan, Ding
2007-06-05
A method and device for restoration of electro-optical image data using an adaptive Wiener filter begins by constructing the imaging system's optical transfer function and the Fourier transforms of the noise and the image. A spatial representation of the imaged object is restored by spatial convolution of the image with a Wiener restoration kernel.
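A minimal frequency-domain sketch of this kind of restoration, assuming a known optical transfer function `H` and scalar estimates of the noise and signal power (the function names and the scalar-power simplification are ours, not the patent's):

```python
import numpy as np

# Sketch of Wiener restoration in the Fourier domain.
def wiener_kernel(H, noise_power, signal_power):
    # W = H* / (|H|^2 + N/S): an inverse filter tempered by the noise level.
    return np.conj(H) / (np.abs(H) ** 2 + noise_power / signal_power)

def restore(blurred, H, noise_power, signal_power):
    # Multiply the image spectrum by the Wiener kernel, transform back.
    W = wiener_kernel(H, noise_power, signal_power)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))
```

As the noise power goes to zero this reduces to plain inverse filtering; the N/S term is what keeps the restoration stable where the OTF is small.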
Directory of Open Access Journals (Sweden)
Senyue Zhang
2016-01-01
Given the strong correlation between the kernel function of an extreme learning machine (ELM) and its performance, a novel extreme learning machine based on a generalized triangle Hermitian kernel function is proposed in this paper. First, the generalized triangle Hermitian kernel function is constructed as the product of the triangular kernel and the generalized Hermite Dirichlet kernel, and the proposed kernel is proved to be a valid kernel function for extreme learning machines. Then, the learning methodology of the extreme learning machine based on the proposed kernel is presented. The biggest advantage of the proposed kernel is that its kernel parameter takes values only in the natural numbers, which can greatly shorten the computational time of parameter optimization and retain more of the sample data's structural information. Experiments were performed on a number of binary classification, multiclassification, and regression datasets from the UCI benchmark repository. The results demonstrate that the robustness and generalization performance of the proposed method outperform those of extreme learning machines with other kernels. Furthermore, the learning speed of the proposed method is faster than support vector machine (SVM) methods.
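The basic ELM recipe that such kernel variants build on can be sketched as follows. This is a generic sigmoid hidden layer solved by least squares, not the triangle Hermitian kernel of the abstract; all names here are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=20):
    # Random input weights and biases; only output weights are trained.
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # sigmoid hidden activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # least-squares readout
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta
```

Because only the linear readout is solved for, training reduces to one least-squares problem, which is the source of ELM's speed advantage mentioned in the abstract.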
Degenerate density perturbation theory
Palenik, Mark C.; Dunlap, Brett I.
2016-09-01
Fractional occupation numbers can be used in density functional theory to create a symmetric Kohn-Sham potential, resulting in orbitals with degenerate eigenvalues. We develop the corresponding perturbation theory and apply it to a system of Nd degenerate electrons in a harmonic oscillator potential. The order-by-order expansions of both the fractional occupation numbers and unitary transformations within the degenerate subspace are determined by the requirement that a differentiable map exists connecting the initial and perturbed states. Using the X α exchange-correlation (XC) functional, we find an analytic solution for the first-order density and first- through third-order energies as a function of α , with and without a self-interaction correction. The fact that the XC Hessian is not positive definite plays an important role in the behavior of the occupation numbers.
Testing Infrastructure for Operating System Kernel Development
DEFF Research Database (Denmark)
Walter, Maxwell; Karlsson, Sven
2014-01-01
Testing is an important part of system development, and to test effectively we require knowledge of the internal state of the system under test. Testing an operating system kernel is a challenge as it is the operating system that typically provides access to this internal state information. Multi-core kernels pose an even greater challenge due to concurrency and their shared kernel state. In this paper, we present a testing framework that addresses these challenges by running the operating system in a virtual machine, and using virtual machine introspection to both communicate with the kernel and obtain information about the system. We have also developed an in-kernel testing API that we can use to develop a suite of unit tests in the kernel. We are using our framework for the development of our own multi-core research kernel.
Speech Enhancement Using Kernel and Normalized Kernel Affine Projection Algorithm
Directory of Open Access Journals (Sweden)
Bolimera Ravi
2013-08-01
The goal of this paper is to investigate speech signal enhancement using the Kernel Affine Projection Algorithm (KAPA) and Normalized KAPA. The removal of background noise is very important in many applications such as speech recognition, telephone conversations, hearing aids, and forensics. Kernel adaptive filters have shown good performance for noise removal. If the background noise evolves more slowly than the speech, i.e., the noise signal is more stationary than the speech, we can easily estimate the noise during pauses in speech. Otherwise it is more difficult to estimate the noise, which results in degradation of the speech. In order to improve the quality and intelligibility of speech, instead of the time and frequency domains we can process the signal in a new domain, a Reproducing Kernel Hilbert Space (RKHS), to obtain more powerful nonlinear extensions. For experiments, we used the noisy speech corpus (NOIZEUS). From the results, we observed that noise removal in RKHS achieves better signal-to-noise ratio values in comparison with conventional adaptive filters.
Online multiple kernel similarity learning for visual search.
Xia, Hao; Hoi, Steven C H; Jin, Rong; Zhao, Peilin
2014-03-01
Recent years have witnessed a number of studies on distance metric learning to improve visual similarity search in content-based image retrieval (CBIR). Despite their successes, most existing methods on distance metric learning are limited in two aspects. First, they usually assume the target proximity function follows the family of Mahalanobis distances, which limits their capacity of measuring similarity of complex patterns in real applications. Second, they often cannot effectively handle the similarity measure of multimodal data that may originate from multiple resources. To overcome these limitations, this paper investigates an online kernel similarity learning framework for learning kernel-based proximity functions which goes beyond the conventional linear distance metric learning approaches. Based on the framework, we propose a novel online multiple kernel similarity (OMKS) learning method which learns a flexible nonlinear proximity function with multiple kernels to improve visual similarity search in CBIR. We evaluate the proposed technique for CBIR on a variety of image data sets in which encouraging results show that OMKS outperforms the state-of-the-art techniques significantly.
Regularized Kernel Forms of Minimum Squared Error Method
Institute of Scientific and Technical Information of China (English)
XU Jian-hua; ZHANG Xue-gong; LI Yan-da
2006-01-01
The minimum squared error (MSE) algorithm is one of the classical pattern recognition and regression analysis methods, whose objective is to minimize the summed squared error between the output of a linear function and the desired output. In this paper, the MSE algorithm is modified by using kernel functions satisfying the Mercer condition together with a regularization technique; the resulting nonlinear MSE algorithms based on kernels and a regularization term, that is, the regularized kernel forms of the MSE algorithm, are proposed. Their objective functions comprise the summed squared error between the output of a nonlinear kernel-based function and the desired output, plus a proper regularization term. The regularization technique can handle ill-posed problems, reduce the solution space, and control generalization. Three squared regularization terms are utilized in this paper. In accordance with the probabilistic interpretation of regularization terms, the differences among the three regularization terms are given in detail. Synthetic and real data are used to analyze the algorithm performance.
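A minimal sketch of a regularized kernel MSE solution of this kind, assuming a Gaussian kernel and a plain squared-norm regularizer (one of several choices the paper studies): the dual weights solve (K + λI)α = y.

```python
import numpy as np

def gaussian_kernel(A, B, gamma=1.0):
    # Pairwise squared distances between rows of A and B, then exp(-gamma*d^2).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def fit(X, y, lam=0.1, gamma=1.0):
    # Dual weights of the regularized kernel MSE (kernel ridge) solution.
    K = gaussian_kernel(X, X, gamma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def predict(X_train, alpha, X_new, gamma=1.0):
    return gaussian_kernel(X_new, X_train, gamma) @ alpha
```

The λI term is exactly the regularization discussed above: it makes the linear system well-posed even when K is near-singular and shrinks the solution toward smoother functions.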
Vortices as degenerate metrics
Baptista, J M
2012-01-01
We note that the Bogomolny equation for abelian vortices is precisely the condition for invariance of the Hermitian-Einstein equation under a degenerate conformal transformation. This leads to a natural interpretation of vortices as degenerate Hermitian metrics that satisfy a certain curvature equation. Using this viewpoint, we rephrase standard results about vortices and make some new observations. We note the existence of a conceptually simple, non-linear rule for superposing vortex solutions, and we describe the natural behaviour of the L^2-metric on the moduli space upon certain restrictions.
Kraepelin and degeneration theory.
Hoff, Paul
2008-06-01
Emil Kraepelin's contribution to the clinical and scientific field of psychiatry is recognized world-wide. In recent years, however, there have been a number of critical remarks on his acceptance of degeneration theory in particular and on his political opinions in general, which were said to carry "overtones of proto-fascism" by Michael Shepherd [28]. The present paper discusses the theoretical cornerstones of Kraepelinian psychiatry with regard to their relevance for Kraepelin's attitude towards degeneration theory. This theory had gained wide influence not only in scientific, but also in philosophical and political circles in the last decades of the nineteenth century. There is no doubt that Kraepelin, on the one hand, accepted and implemented degeneration theory in the debate on the etiology and pathogenesis of mental disorders. On the other hand, it is not appropriate to draw a simple and direct line from early versions of degeneration theory to the crimes of psychiatrists and politicians during the rule of National Socialism. What we need is a differentiated view, since that will be the only scientific one. Much research remains to be done here, and such research will surely have a significant impact not only on the historical field, but also on the continuing debate about psychiatry, neuroscience and neurophilosophy.
X-82 to Treat Age-related Macular Degeneration
2017-01-12
Age-Related Macular Degeneration (AMD); Macular Degeneration; Exudative Age-related Macular Degeneration; AMD; Macular Degeneration, Age-related, 10; Eye Diseases; Retinal Degeneration; Retinal Diseases
Online Learning of Noisy Data with Kernels
Cesa-Bianchi, Nicolò; Shamir, Ohad
2010-01-01
We study online learning when individual instances are corrupted by random noise. We assume the noise distribution is unknown, and may change over time with no restriction other than having zero mean and bounded variance. Our technique relies on a family of unbiased estimators for non-linear functions, which may be of independent interest. We show that a variant of online gradient descent can learn functions in any dot-product (e.g., polynomial) or Gaussian kernel space with any analytic convex loss function. Our variant uses randomized estimates that need to query a random number of noisy copies of each instance, where with high probability this number is upper bounded by a constant. Allowing such multiple queries cannot be avoided: Indeed, we show that online learning is in general impossible when only one noisy copy of each instance can be accessed.
Optimal O(1) Bilateral Filter with Arbitrary Spatial and Range Kernels Using Sparse Approximation
Directory of Open Access Journals (Sweden)
Shengdong Pan
2014-01-01
A number of acceleration schemes for speeding up the time-consuming bilateral filter have been proposed in the literature. Among these techniques, the histogram-based bilateral filter trades flexibility for achieving O(1) computational complexity using a box spatial kernel. A recent study shows that this technique can be leveraged for an O(1) bilateral filter with arbitrary spatial and range kernels by linearly combining the results of multiple box bilateral filters. However, this method requires many box bilateral filters to obtain sufficient accuracy when approximating the bilateral filter with a large spatial kernel. In this paper, we propose approximating an arbitrary spatial kernel using a fixed number of boxes. It turns out that the multiple-box spatial kernel can be applied in many O(1) acceleration schemes in addition to the histogram-based one. Experiments on the application to histogram-based acceleration are presented in this paper. Results show that the proposed method has better accuracy in approximating the bilateral filter with a Gaussian spatial kernel, compared with previous histogram-based methods. Furthermore, the performance of the proposed histogram-based bilateral filter is robust with respect to the parameters of the filter kernel.
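For reference, the brute-force bilateral filter that such schemes accelerate can be sketched in 1D. This is the slow baseline whose cost grows with the kernel radius, not the paper's O(1) method; parameter names are ours:

```python
import numpy as np

def bilateral_1d(signal, sigma_s=2.0, sigma_r=0.5, radius=4):
    # Each output sample is a weighted average of its neighbors, weighted
    # by spatial closeness (sigma_s) AND intensity similarity (sigma_r).
    out = np.empty(len(signal))
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        idx = np.arange(lo, hi)
        spatial = np.exp(-((idx - i) ** 2) / (2.0 * sigma_s ** 2))
        rangew = np.exp(-((signal[idx] - signal[i]) ** 2) / (2.0 * sigma_r ** 2))
        w = spatial * rangew
        out[i] = np.dot(w, signal[idx]) / w.sum()
    return out
```

The range kernel is what makes the filter edge-preserving: samples on the far side of a large intensity step receive near-zero weight, so edges are not blurred.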
Eckhard, Timo; Valero, Eva M; Hernández-Andrés, Javier; Heikkinen, Ville
2014-03-01
In this work, we evaluate the conditionally positive definite logarithmic kernel in kernel-based estimation of reflectance spectra. Reflectance spectra are estimated from the responses of a 12-channel multispectral imaging system. We demonstrate the performance of the logarithmic kernel in comparison with the linear and Gaussian kernels using simulated and measured camera responses for the Pantone and HKS color charts. In particular, we focus on estimation model evaluations in which the selection of model parameters is optimized using a cross-validation technique. In the experiments, the Gaussian and logarithmic kernels outperformed the linear kernel in almost all evaluation cases (training set size, response channel number) for both sets. Furthermore, the spectral and color estimation accuracies of the Gaussian and logarithmic kernels were found to be similar in several evaluation cases for real and simulated responses. However, the results suggest that for a relatively small training set size, the accuracy of the logarithmic kernel can be markedly lower than that of the Gaussian kernel. Further, we found that the parameter of the logarithmic kernel could be fixed, which simplifies the use of this kernel compared with the Gaussian kernel.
Directory of Open Access Journals (Sweden)
Banan Maayah
2014-01-01
A new algorithm called the multistep reproducing kernel Hilbert space method is presented to solve nonlinear oscillator models. The proposed scheme is a modification of the reproducing kernel Hilbert space method which increases the intervals of convergence for the series solution. The numerical results demonstrate the validity and applicability of the new technique. Very good agreement was found between the results obtained using the presented algorithm and the Runge-Kutta method, which shows that the multistep reproducing kernel Hilbert space method is very efficient and convenient for solving nonlinear oscillator models.
Kernel-Based Least Squares Temporal Difference With Gradient Correction.
Song, Tianheng; Li, Dazi; Cao, Liulin; Hirasawa, Kotaro
2016-04-01
A least squares temporal difference with gradient correction (LS-TDC) algorithm and its kernel-based version, kernel-based LS-TDC (KLS-TDC), are proposed as policy evaluation algorithms for reinforcement learning (RL). LS-TDC is derived from the TDC algorithm. Because TDC is derived by minimizing the mean-square projected Bellman error, LS-TDC has better convergence performance. The least squares technique is used to avoid the step-size tuning of the original TDC and enhance robustness. For KLS-TDC, since the kernel method is used, feature vectors can be selected automatically. Approximate linear dependence analysis is performed to realize kernel sparsification. In addition, a policy iteration strategy motivated by KLS-TDC is constructed to solve control learning problems. The convergence and parameter sensitivities of both LS-TDC and KLS-TDC are tested through on-policy learning, off-policy learning, and control learning problems. Experimental results, as compared with a series of corresponding RL algorithms, demonstrate that both LS-TDC and KLS-TDC have better approximation and convergence performance, higher efficiency of sample usage, a smaller burden of parameter tuning, and less sensitivity to parameters.
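Plain LSTD (without the gradient correction of the paper) illustrates the least-squares idea: accumulate A = Σ φ(φ − γφ′)ᵀ and b = Σ rφ over transitions, then solve for the value-function weights. The small ridge term is our addition for numerical safety, not part of the classical algorithm:

```python
import numpy as np

def lstd(phis, next_phis, rewards, gamma=0.9, reg=1e-6):
    # A = sum_t phi_t (phi_t - gamma*phi_{t+1})^T,  b = sum_t r_t phi_t.
    d = len(phis[0])
    A = sum(np.outer(p, p - gamma * q) for p, q in zip(phis, next_phis))
    b = sum(r * p for p, r in zip(phis, rewards))
    return np.linalg.solve(A + reg * np.eye(d), b)
```

For a single state with a self-loop, reward 1 and γ = 0.5, the solution recovers the true value 1/(1 − γ) = 2, which is a quick sanity check on the construction.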
Tensorial Kernel Principal Component Analysis for Action Recognition
Directory of Open Access Journals (Sweden)
Cong Liu
2013-01-01
We propose Tensorial Kernel Principal Component Analysis (TKPCA) for dimensionality reduction and feature extraction from tensor objects, which extends conventional Principal Component Analysis (PCA) in two respects: working directly with multidimensional data (tensors) in their native state, and generalizing an existing linear technique to its nonlinear version by applying the kernel trick. Our method aims to remedy the shortcomings of recently developed multilinear subspace learning (tensorial PCA) in modelling the nonlinear manifold of tensor objects, and brings together the desirable properties of kernel methods and tensor decompositions for significant performance gain when the data are multidimensional and nonlinear dependencies do exist. Our approach begins by formulating TKPCA as an optimization problem. Then, we develop a kernel function based on the Grassmann manifold that can directly take tensorial representations as parameters instead of the traditional vectorized representation. Furthermore, TKPCA-based tensor object recognition is also proposed for application to action recognition. Experiments with real action datasets show that the proposed method is insensitive to both noise and occlusion and performs well compared with state-of-the-art algorithms.
Theory of reproducing kernels and applications
Saitoh, Saburou
2016-01-01
This book provides a large extension of the general theory of reproducing kernels published by N. Aronszajn in 1950, with many concrete applications. In Chapter 1, many concrete reproducing kernels are first introduced with detailed information. Chapter 2 presents a general and global theory of reproducing kernels with basic applications in a self-contained way. Many fundamental operations among reproducing kernel Hilbert spaces are dealt with. Chapter 2 is the heart of this book. Chapter 3 is devoted to the Tikhonov regularization using the theory of reproducing kernels with applications to numerical and practical solutions of bounded linear operator equations. In Chapter 4, the numerical real inversion formulas of the Laplace transform are presented by applying the Tikhonov regularization, where the reproducing kernels play a key role in the results. Chapter 5 deals with ordinary differential equations; Chapter 6 includes many concrete results for various fundamental partial differential equations. In Chapt...
Filters, reproducing kernel, and adaptive meshfree method
You, Y.; Chen, J.-S.; Lu, H.
Reproducing kernel, with its intrinsic feature of moving averaging, can be utilized as a low-pass filter with scale decomposition capability. The discrete convolution of two nth order reproducing kernels with arbitrary support size in each kernel results in a filtered reproducing kernel function that has the same reproducing order. This property is utilized to separate the numerical solution into an unfiltered lower order portion and a filtered higher order portion. As such, the corresponding high-pass filter of this reproducing kernel filter can be used to identify the locations of high gradient, and consequently serves as an operator for error indication in meshfree analysis. In conjunction with the naturally conforming property of the reproducing kernel approximation, a meshfree adaptivity method is also proposed.
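The low-/high-pass decomposition described above can be sketched with a simple normalized moving average standing in for the reproducing kernel filter (an assumption for illustration; the actual RK filter carries reproducing conditions this sketch omits):

```python
import numpy as np

def lowpass(u, width=5):
    # Normalized moving average: a discrete low-pass convolution filter.
    k = np.ones(width) / width
    return np.convolve(u, k, mode="same")

u = np.sin(np.linspace(0.0, 2.0 * np.pi, 64))
low = lowpass(u)   # filtered, lower-order portion of the solution
high = u - low     # high-pass residual: large where gradients are high
```

The residual `high` plays the role of the error indicator mentioned above: regions where it is large are candidates for adaptive refinement.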
Prospectives for gene therapy of retinal degenerations.
Thumann, Gabriele
2012-08-01
growth factors (VEGF), have been used to prevent the neovascularization that accompanies AMD and DR, resulting in the amelioration of vision in a significant number of patients. In animal models it has been shown that transfection of RPE cells with the gene for PEDF and other growth factors can prevent or slow degeneration. A limited number of studies in humans have also shown that transfection of RPE cells in vivo with the gene for PEDF is effective in preventing degeneration and restoring vision. Most of these studies have used virally mediated gene delivery, with all its accompanying side effects, and have not been widely adopted. New techniques using non-viral protocols that allow efficient delivery and permanent integration of the transgene into the host cell genome offer novel opportunities for effective treatment of retinal degenerations.
Kernel principal component analysis for change detection
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg; Morton, J.C.
2008-01-01
region acquired at two different time points. If change over time does not dominate the scene, the projection of the original two bands onto the second eigenvector will show change over time. In this paper a kernel version of PCA is used to carry out the analysis. Unlike ordinary PCA, kernel PCA with a Gaussian kernel successfully finds the change observations in a case where nonlinearities are introduced artificially.
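A minimal kernel PCA sketch with a Gaussian kernel (the bandwidth `gamma` is an assumed tuning parameter): double-center the kernel matrix, then project the training samples onto its leading eigenvectors.

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    # Gaussian kernel matrix on the training samples.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    K = np.exp(-gamma * d2)
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                       # double-center the kernel matrix
    vals, vecs = np.linalg.eigh(Kc)      # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:n_components]
    # Scores of the training samples on the leading components.
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))
```

Because the kernel matrix is centered, the resulting component scores have zero mean, mirroring ordinary PCA on mean-subtracted data.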
Tame Kernels of Pure Cubic Fields
Institute of Scientific and Technical Information of China (English)
Xiao Yun CHENG
2012-01-01
In this paper, we study the p-rank of the tame kernels of pure cubic fields. In particular, we prove that for a fixed positive integer m, there exist infinitely many pure cubic fields whose tame kernel has 3-rank equal to m. As an application, we determine the 3-rank of the tame kernels of some special pure cubic fields.
Kernel Factor Analysis Algorithm with Varimax
Institute of Scientific and Technical Information of China (English)
Xia Guoen; Jin Weidong; Zhang Gexiang
2006-01-01
Kernel factor analysis (KFA) with varimax rotation is proposed, using a Mercer kernel function that maps the data in the original space to a high-dimensional feature space, and is compared with kernel principal component analysis (KPCA). The results show that the best error rate in handwritten digit recognition achieved by kernel factor analysis with varimax (4.2%) was superior to that of KPCA (4.4%). KFA with varimax thus recognized handwritten digits more accurately.
Convergence of barycentric coordinates to barycentric kernels
Kosinka, Jiří
2016-02-12
We investigate the close correspondence between barycentric coordinates and barycentric kernels from the point of view of the limit process when finer and finer polygons converge to a smooth convex domain. We show that any barycentric kernel is the limit of a set of barycentric coordinates and prove that the convergence rate is quadratic. Our convergence analysis extends naturally to barycentric interpolants and mappings induced by barycentric coordinates and kernels. We verify our theoretical convergence results numerically on several examples.
Adaptive Shape Kernel-Based Mean Shift Tracker in Robot Vision System
Directory of Open Access Journals (Sweden)
Chunmei Liu
2016-01-01
Full Text Available This paper proposes an adaptive shape kernel-based mean shift tracker using a single static camera for a robot vision system. The question we address is how to construct a kernel shape that is adaptive to the object shape. We apply a nonlinear manifold learning technique to obtain a low-dimensional shape space, trained on data sharing the same view as the tracking video. The proposed kernel searches the shape in the low-dimensional shape space and constructs the adaptive kernel shape in the high-dimensional shape space. This improves the mean shift tracker's ability to track object position and contour and to avoid background clutter. In the experimental part, we take a walking human as an example to validate that our method is accurate and robust in tracking human position and describing the human contour.
Laenderyggens degeneration og radiologi
DEFF Research Database (Denmark)
Jacobsen, Steffen; Gosvig, Kasper Kjaerulf; Sonne-Holm, Stig
2006-01-01
Low back pain (LBP) is one of the most common conditions, and at the same time one of the most complex nosological entities. The lifetime prevalence is approximately 80%, and radiological features of lumbar degeneration are almost universal in adults. The individual risk factors for LBP and significant relationships between radiological findings and subjective symptoms have both been notoriously difficult to identify. The lack of consensus on clinical criteria and radiological definitions has hampered the undertaking of properly executed epidemiological studies. The natural history of LBP...
Energy Technology Data Exchange (ETDEWEB)
Micheli, Fiorenza de [Centro de Estudios Cientificos, Arturo Prat 514, Valdivia (Chile); Instituto de Fisica, Pontificia Universidad Catolica de Valparaiso, Casilla 4059, Valparaiso (Chile); Zanelli, Jorge [Centro de Estudios Cientificos, Arturo Prat 514, Valdivia (Chile); Universidad Andres Bello, Av. Republica 440, Santiago (Chile)
2012-10-15
A degenerate dynamical system is characterized by a symplectic structure whose rank is not constant throughout phase space. Its phase space is divided into causally disconnected, nonoverlapping regions in each of which the rank of the symplectic matrix is constant, and there are no classical orbits connecting two different regions. Here the question of whether this classical disconnectedness survives quantization is addressed. Our conclusion is that in irreducible degenerate systems-in which the degeneracy cannot be eliminated by redefining variables in the action-the disconnectedness is maintained in the quantum theory: there is no quantum tunnelling across degeneracy surfaces. This shows that the degeneracy surfaces are boundaries separating distinct physical systems, not only classically, but in the quantum realm as well. The relevance of this feature for gravitation and Chern-Simons theories in higher dimensions cannot be overstated.
Cataracts and macular degeneration.
Shoch, D
1979-09-01
The intraocular lens restores general vision and some degree of independence and mobility to patients with dense cataracts and macular degeneration. The patient, however, must be repeatedly warned that fine central vision, particularly reading, will not be possible after the surgery. An aphakic spectacle leaves such patients a narrow band of vision when superimposed over the macular lesion, and contact lenses are too small for the patient to manage insertion without help.
Reproducing Kernel Method for Fractional Riccati Differential Equations
Directory of Open Access Journals (Sweden)
X. Y. Li
2014-01-01
Full Text Available This paper is devoted to a new numerical method for fractional Riccati differential equations. The method combines the reproducing kernel method and the quasilinearization technique. Its main advantage is that it can produce good approximations in a larger interval, rather than a local vicinity of the initial position. Numerical results are compared with some existing methods to show the accuracy and effectiveness of the present method.
Density and hazard rate estimation for censored and α-mixing data using gamma kernels
2006-01-01
In this paper we consider nonparametric estimation of the density and hazard rate function for right-censored α-mixing survival time data using kernel smoothing techniques. Since survival times are positive with potentially a high concentration at zero, one has to take into account the bias problems that arise when the functions are estimated in the boundary region. In this paper, gamma kernel estimators of the density and the hazard rate function are proposed. The estimators use adaptive weights depe...
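The gamma kernel idea above can be sketched in a few lines: at each evaluation point x ≥ 0, average a Gamma density (shape x/b + 1, scale b) over the data, so the kernel never places mass below zero. This is a sketch of the standard gamma kernel density estimator, without the censoring weights the record mentions; the data and bandwidth `b` are illustrative.

```python
import math
import numpy as np

def gamma_kernel_density(x, data, b):
    """Gamma-kernel density estimate at x >= 0.

    Averages a Gamma(shape = x/b + 1, scale = b) density over the
    (positive) data, avoiding boundary bias on [0, inf).
    """
    shape = x / b + 1.0
    log_pdf = ((shape - 1.0) * np.log(data)
               - data / b
               - shape * math.log(b)
               - math.lgamma(shape))
    return float(np.mean(np.exp(log_pdf)))
```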
Molecular hydrodynamics from memory kernels
Lesnicki, Dominika; Carof, Antoine; Rotenberg, Benjamin
2016-01-01
The memory kernel for a tagged particle in a fluid, computed from molecular dynamics simulations, decays algebraically as $t^{-3/2}$. We show how the hydrodynamic Basset-Boussinesq force naturally emerges from this long-time tail and generalize the concept of hydrodynamic added mass. This mass term is negative in the present case of a molecular solute, at odds with incompressible hydrodynamics predictions. We finally discuss the various contributions to the friction, the associated time scales and the cross-over between the molecular and hydrodynamic regimes upon increasing the solute radius.
Hilbertian kernels and spline functions
Atteia, M
1992-01-01
In this monograph, which is an extensive study of Hilbertian approximation, the emphasis is placed on spline functions theory. The origin of the book was an effort to show that spline theory parallels Hilbertian Kernel theory, not only for splines derived from minimization of a quadratic functional but more generally for splines considered as piecewise functions type. Being as far as possible self-contained, the book may be used as a reference, with information about developments in linear approximation, convex optimization, mechanics and partial differential equations.
Zhong, Shangping; Chen, Tianshun; He, Fengying; Niu, Yuzhen
2014-09-01
For a practical pattern classification task solved by kernel methods, the computing time is mainly spent on kernel learning (or training). However, current kernel learning approaches are based on local optimization techniques, and it is hard for them to achieve good time performance, especially on large datasets; the existing algorithms therefore cannot easily be extended to large-scale tasks. In this paper, we present a fast Gaussian kernel learning method by solving a specially structured global optimization (SSGO) problem. We optimize the Gaussian kernel function using the formulated kernel target alignment criterion, which is a difference of increasing (d.i.) functions. Through a power-transformation based convexification method, the objective criterion can be represented as a difference of convex (d.c.) functions with a fixed power-transformation parameter. The objective programming problem can then be converted into a SSGO problem: globally minimizing a concave function over a convex set. The SSGO problem is classical and readily solvable. Thus, to find the global optimal solution efficiently, we can adopt the improved Hoffman's outer approximation method, which need not repeat the search procedure with different starting points to locate the best local minimum. The proposed method can also be proven to converge to the global solution for any classification task. We evaluate the proposed method on twenty benchmark datasets and compare it with four other Gaussian kernel learning methods. Experimental results show that the proposed method stably achieves both good time efficiency and good classification performance.
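The kernel target alignment criterion used above is easy to state: it is the cosine between the kernel matrix K and the ideal label kernel yy^T. The sketch below computes it and picks a Gaussian width by a plain grid search, which merely stands in for the paper's global SSGO solver; the toy data and sigma grid are assumptions.

```python
import numpy as np

def gaussian_K(X, sigma):
    """Gaussian kernel matrix for the rows of X."""
    d2 = np.sum((X[:, None, :] - X[None, :, :])**2, axis=-1)
    return np.exp(-d2 / (2.0 * sigma**2))

def kernel_target_alignment(K, y):
    """Alignment <K, yy^T>_F / (||K||_F ||yy^T||_F) for labels y in {-1, +1}."""
    Y = np.outer(y, y)
    return float(np.sum(K * Y) / (np.linalg.norm(K) * np.linalg.norm(Y)))

def best_sigma(X, y, sigmas):
    """Grid-search stand-in for the global optimization in the record."""
    return max(sigmas, key=lambda s: kernel_target_alignment(gaussian_K(X, s), y))
```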
Yu, Yinan; Diamantaras, Konstantinos I; McKelvey, Tomas; Kung, Sun-Yuan
2016-12-07
In kernel-based classification models, given limited computational power and storage capacity, operations over the full kernel matrix become prohibitive. In this paper, we propose a new supervised learning framework using kernel models for sequential data processing. The framework is based on two components that both aim at enhancing the classification capability with a subset selection scheme. The first part is a subspace projection technique in the reproducing kernel Hilbert space using a CLAss-specific Subspace Kernel representation for kernel approximation. In the second part, we propose a novel structural risk minimization algorithm called adaptive margin slack minimization, which iteratively improves the classification accuracy by adaptive data selection. We motivate each part separately, and then integrate them into learning frameworks for large-scale data. We propose two such frameworks: memory-efficient sequential processing for sequential data processing and parallelized sequential processing for distributed computing with sequential data acquisition. We test our methods on several benchmark data sets and compare them with state-of-the-art techniques to verify their validity.
Differentiable Kernels in Generalized Matrix Learning Vector Quantization
Kästner, M.; Nebel, D.; Riedel, M.; Biehl, M.; Villmann, T.
2013-01-01
In the present paper we investigate the application of differentiable kernels for generalized matrix learning vector quantization as an alternative kernel-based classifier, which additionally provides classification-dependent data visualization. We show that the concept of differentiable kernels allo
Kernel current source density method.
Potworowski, Jan; Jakuczun, Wit; Lȩski, Szymon; Wójcik, Daniel
2012-02-01
Local field potentials (LFP), the low-frequency part of extracellular electrical recordings, are a measure of the neural activity reflecting dendritic processing of synaptic inputs to neuronal populations. To localize synaptic dynamics, it is convenient, whenever possible, to estimate the density of transmembrane current sources (CSD) generating the LFP. In this work, we propose a new framework, the kernel current source density method (kCSD), for nonparametric estimation of CSD from LFP recorded from arbitrarily distributed electrodes using kernel methods. We test specific implementations of this framework on model data measured with one-, two-, and three-dimensional multielectrode setups. We compare these methods with the traditional approach through numerical approximation of the Laplacian and with the recently developed inverse current source density methods (iCSD). We show that iCSD is a special case of kCSD. The proposed method opens up new experimental possibilities for CSD analysis from existing or new recordings on arbitrarily distributed electrodes (not necessarily on a grid), which can be obtained in extracellular recordings of single unit activity with multiple electrodes.
Filtering algorithms using shiftable kernels
Chaudhury, Kunal Narayan
2011-01-01
It was recently demonstrated in [4][arxiv:1105.4204] that the non-linear bilateral filter [Tomasi] can be efficiently implemented using an O(1) or constant-time algorithm. At the heart of this algorithm was the idea of approximating the Gaussian range kernel of the bilateral filter using trigonometric functions. In this letter, we explain how the idea in [4] can be extended to few other linear and non-linear filters [18,21,2]. While some of these filters have received a lot of attention in recent years, they are known to be computationally intensive. To extend the idea in [4], we identify a central property of trigonometric functions, called shiftability, that allows us to exploit the redundancy inherent in the filtering operations. In particular, using shiftable kernels, we show how certain complex filtering can be reduced to simply that of computing the moving sum of a stack of images. Each image in the stack is obtained through an elementary pointwise transform of the input image. Thi...
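The trigonometric approximation at the heart of the record above rests on the limit cos(t)^N → exp(-N t²/2): with t = x/(σ√N), a raised power of a cosine mimics the Gaussian range kernel while remaining shiftable. The sketch below only checks this numerical approximation; the values of `sigma` and `N` are illustrative, not the letter's.

```python
import numpy as np

def raised_cosine_kernel(x, sigma, N):
    """Shiftable raised-cosine approximation of the Gaussian range kernel.

    cos(t)^N -> exp(-N t^2 / 2) as N grows, so with t = x / (sigma * sqrt(N))
    this tends to exp(-x^2 / (2 sigma^2)) on |t| <= pi/2 (zero outside).
    """
    t = x / (sigma * np.sqrt(N))
    return np.where(np.abs(t) <= np.pi / 2, np.cos(t) ** N, 0.0)
```

Because cos^N expands into a finite sum of cosines, convolving with it reduces to moving sums over a small stack of pointwise-transformed images, which is what makes the O(1) algorithm possible.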
Geometric phases for non-degenerate and degenerate mixed states
Singh, K; Basu, K; Chen, J L; Du Jiang Feng
2003-01-01
This paper focuses on the geometric phase of general mixed states under unitary evolution. Here we analyze both non-degenerate as well as degenerate states. Starting with the non-degenerate case, we show that the usual procedure of subtracting the dynamical phase from the total phase to yield the geometric phase for pure states, does not hold for mixed states. To this end, we furnish an expression for the geometric phase that is gauge invariant. The parallelity conditions are shown to be easily derivable from this expression. We also extend our formalism to states that exhibit degeneracies. Here with the holonomy taking on a non-abelian character, we provide an expression for the geometric phase that is manifestly gauge invariant. As in the case of the non-degenerate case, the form also displays the parallelity conditions clearly. Finally, we furnish explicit examples of the geometric phases for both the non-degenerate as well as degenerate mixed states.
Kernel parameter dependence in spatial factor analysis
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg
2010-01-01
feature space via the kernel function and then performing a linear analysis in that space. In this paper we shall apply a kernel version of maximum autocorrelation factor (MAF) [7, 8] analysis to irregularly sampled stream sediment geochemistry data from South Greenland and illustrate the dependence...
Improving the Bandwidth Selection in Kernel Equating
Andersson, Björn; von Davier, Alina A.
2014-01-01
We investigate the current bandwidth selection methods in kernel equating and propose a method based on Silverman's rule of thumb for selecting the bandwidth parameters. In kernel equating, the bandwidth parameters have previously been obtained by minimizing a penalty function. This minimization process has been criticized by practitioners…
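The classical Silverman rule of thumb the record builds on sets the bandwidth from the sample size and a robust spread estimate. The sketch below is the textbook rule for a Gaussian kernel, not the equating-specific variant the authors propose; the data are illustrative.

```python
import numpy as np

def silverman_bandwidth(x):
    """Silverman's rule-of-thumb bandwidth for a Gaussian kernel:
    h = 0.9 * min(sd, IQR / 1.349) * n^(-1/5)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    iqr = np.subtract(*np.percentile(x, [75, 25]))  # p75 - p25
    spread = min(np.std(x, ddof=1), iqr / 1.349)
    return 0.9 * spread * n ** (-0.2)
```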
Ranking Support Vector Machine with Kernel Approximation.
Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi
2017-01-01
Learning to rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, and computational biology, among other fields. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been widely used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. The primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms.
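Of the two approximations named above, random Fourier features are the easier to sketch: sample frequencies from the kernel's spectral density so that inner products of the feature maps approximate the RBF kernel. The feature count `D`, `sigma`, and the toy data below are illustrative assumptions.

```python
import numpy as np

def random_fourier_features(X, D, sigma, rng):
    """Map X to D random Fourier features so that Z @ Z.T approximates
    the RBF kernel k(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    d = X.shape[1]
    W = rng.normal(scale=1.0 / sigma, size=(d, D))   # spectral samples
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)        # random phases
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)
```

After this map, the nonlinear ranking problem can be attacked with purely linear (primal) solvers, which is the speed-up the record reports.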
Generalized Derivative Based Kernelized Learning Vector Quantization
Schleif, Frank-Michael; Villmann, Thomas; Hammer, Barbara; Schneider, Petra; Biehl, Michael; Fyfe, Colin; Tino, Peter; Charles, Darryl; Garcia-Osoro, Cesar; Yin, Hujun
2010-01-01
We derive a novel derivative based version of kernelized Generalized Learning Vector Quantization (KGLVQ) as an effective, easy to interpret, prototype based and kernelized classifier. It is called D-KGLVQ and we provide generalization error bounds, experimental results on real world data, showing t
PALM KERNEL SHELL AS AGGREGATE FOR LIGHT
African Journals Online (AJOL)
of cement, sand, gravel and palm kernel shells respectively gave the highest compressive strength of ... Keywords: Aggregate, Cement, Concrete, Sand, Palm Kernel Shell. ... delivered to the job site in a plastic ... structures, breakwaters, piers and docks ... related to cement content at a ... sheet and the summary is shown.
Panel data specifications in nonparametric kernel regression
DEFF Research Database (Denmark)
Czekaj, Tomasz Gerard; Henningsen, Arne
parametric panel data estimators to analyse the production technology of Polish crop farms. The results of our nonparametric kernel regressions generally differ from the estimates of the parametric models but they only slightly depend on the choice of the kernel functions. Based on economic reasoning, we...
Institute of Scientific and Technical Information of China (English)
Anonymous
2006-01-01
A kernel-based discriminant analysis method called kernel direct discriminant analysis is employed, which combines the merit of direct linear discriminant analysis with that of the kernel trick. In order to demonstrate its better robustness to the complex and nonlinear variations of real face images, such as illumination, facial expression, scale and pose variations, experiments are carried out on the Olivetti Research Laboratory, Yale and self-built face databases. The results indicate that, in contrast to kernel principal component analysis and kernel linear discriminant analysis, the method achieves a lower error rate (7%) using only a very small set of features. Furthermore, a new corrected kernel model is proposed to improve the recognition performance. Experimental results confirm its superiority (by 1% in recognition rate) over other polynomial kernel models.
Parameter-Free Spectral Kernel Learning
Mao, Qi
2012-01-01
Due to the growing ubiquity of unlabeled data, learning with unlabeled data is attracting increasing attention in machine learning. In this paper, we propose a novel semi-supervised kernel learning method which can seamlessly combine manifold structure of unlabeled data and Regularized Least-Squares (RLS) to learn a new kernel. Interestingly, the new kernel matrix can be obtained analytically with the use of spectral decomposition of graph Laplacian matrix. Hence, the proposed algorithm does not require any numerical optimization solvers. Moreover, by maximizing kernel target alignment on labeled data, we can also learn model parameters automatically with a closed-form solution. For a given graph Laplacian matrix, our proposed method does not need to tune any model parameter including the tradeoff parameter in RLS and the balance parameter for unlabeled data. Extensive experiments on ten benchmark datasets show that our proposed two-stage parameter-free spectral kernel learning algorithm can obtain comparable...
kLog: A Language for Logical and Relational Learning with Kernels
Frasconi, Paolo; De Raedt, Luc; De Grave, Kurt
2012-01-01
kLog is a logical and relational language for kernel-based learning. It allows users to specify logical and relational learning problems at a high level in a declarative way. It builds on simple but powerful concepts: learning from interpretations, entity/relationship data modeling, logic programming and deductive databases (Prolog and Datalog), and graph kernels. kLog is a statistical relational learning system but unlike other statistical relational learning models, it does not represent a probability distribution directly. It is rather a kernel-based approach to learning that employs features derived from a grounded entity/relationship diagram. These features are derived using a novel technique called graphicalization: first, relational representations are transformed into graph based representations; subsequently, graph kernels are employed for defining feature spaces. kLog can use numerical and symbolic data, background knowledge in the form of Prolog or Datalog programs (as in inductive logic programmin...
Xu, Lin; Feng, Yanqiu; Liu, Xiaoyun; Kang, Lili; Chen, Wufan
2014-01-01
Accuracy of interpolation coefficients fitting to the auto-calibrating signal data is crucial for k-space-based parallel reconstruction. Both conventional generalized autocalibrating partially parallel acquisitions (GRAPPA) reconstruction that utilizes linear interpolation function and nonlinear GRAPPA (NLGRAPPA) reconstruction with polynomial kernel function are sensitive to interpolation window and often cannot consistently produce good results for overall acceleration factors. In this study, sparse multi-kernel learning is conducted within the framework of least squares support vector regression to fit interpolation coefficients as well as to reconstruct images robustly under different subsampling patterns and coil datasets. The kernel combination weights and interpolation coefficients are adaptively determined by efficient semi-infinite linear programming techniques. Experimental results on phantom and in vivo data indicate that the proposed method can automatically achieve an optimized compromise between noise suppression and residual artifacts for various sampling schemes. Compared with NLGRAPPA, our method is significantly less sensitive to the interpolation window and kernel parameters.
A substitute for the singular Green kernel in the Newtonian potential of celestial bodies
Huré, Jean-Marc
2012-01-01
The "point mass singularity" inherent in Newton's law for gravitation represents a major difficulty in accurately determining the potential and forces inside continuous bodies. Here we report a simple and efficient analytical method to bypass the singular Green kernel 1/|r-r'| inside the source without altering the nature of the interaction. We build an equivalent kernel made up of a "cool kernel", which is fully regular (and contains the long-range -GM/r asymptotic behavior), and the gradient of a "hyperkernel", which is also regular. Compared to the initial kernel, these two components are easily integrated over the source volume using standard numerical techniques. The demonstration is presented for three-dimensional distributions in cylindrical coordinates, which are well-suited to describing rotating bodies (stars, discs, asteroids, etc.) as commonly found in the Universe. An example of implementation is given. The case of axial symmetry is treated in detail, and the accuracy is checked by considering an...
DDoS detection based on wavelet kernel support vector machine
Institute of Scientific and Technical Information of China (English)
YANG Ming-hui; WANG Ru-chuan
2008-01-01
To enhance the detection accuracy and reduce the false-positive rate of distributed denial of service (DDoS) attack detection, a new machine learning method was proposed. Based on analysis of the support vector machine (SVM) and wavelet kernel function theory, an admissible support vector kernel, the wavelet kernel constructed in this article, combines the wavelet technique with the SVM. Then, the wavelet support vector machine (WSVM) is applied to DDoS attack detection and used as a classifier to test the validity of the wavelet kernel function. Simulation experiments show that, under the same conditions, the predictive ability of WSVM is improved and the computational burden is alleviated. The detection accuracy of WSVM is higher than that of the traditional SVM by about 4%, while its false-positive rate is lower. Thus, for DDoS detection, WSVM shows better detection performance and is more adaptive to the changing network environment.
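The record does not give the kernel's exact form; a commonly used admissible wavelet kernel in the SVM literature is the translation-invariant product kernel built from the Morlet-type mother wavelet h(u) = cos(1.75u) exp(-u²/2), sketched below as an assumed stand-in. The dilation `a` is a free parameter.

```python
import numpy as np

def wavelet_kernel(x, xp, a=1.0):
    """Translation-invariant wavelet kernel K(x, x') = prod_i h((x_i - x'_i)/a)
    with the Morlet-type mother wavelet h(u) = cos(1.75 u) * exp(-u^2 / 2)."""
    u = (np.asarray(x, dtype=float) - np.asarray(xp, dtype=float)) / a
    return float(np.prod(np.cos(1.75 * u) * np.exp(-u**2 / 2.0)))
```

Such a kernel can be dropped into any standard SVM solver in place of the RBF kernel, which is how a WSVM of the kind described would be assembled.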
Pyridoxine neuropathy in rats: specific degeneration of sensory axons.
Windebank, A J; Low, P A; Blexrud, M D; Schmelzer, J D; Schaumburg, H H
1985-11-01
When rats received pyridoxine in doses large enough to cause neuropathy in humans, the animals developed gait ataxia that subsided after the toxin was withdrawn. By using quantitative histologic techniques, we found axonal degeneration of sensory system fibers and that the fibers derived from the ventral root were spared. Although the degeneration approached the dorsal root ganglion, neurons in the ganglion did not degenerate. We found no early decrease in oxygen consumption of nerve, suggesting that impaired oxidative metabolism was not the primary event.
Heat-kernel approach for scattering
Li, Wen-Du
2015-01-01
An approach for solving scattering problems, based on two quantum field theory methods, the heat kernel method and the scattering spectral method, is constructed. This approach has a special advantage: it is not a single approach, but rather a family of approaches for solving scattering problems. Concretely, we build a bridge between a scattering problem and the heat kernel method, so that each method of calculating heat kernels can be converted into a method of solving a scattering problem. As applications, we construct two approaches for solving scattering problems based on two heat-kernel expansions: the Seeley-DeWitt expansion and the covariant perturbation theory. In order to apply the heat kernel method to scattering problems, we also calculate two off-diagonal heat-kernel expansions in the frameworks of the Seeley-DeWitt expansion and the covariant perturbation theory, respectively. Moreover, as an alternative application of the relation between heat kernels and partial-wave phase shifts presented in...

Ideal regularization for learning kernels from labels.
Pan, Binbin; Lai, Jianhuang; Shen, Lixin
2014-08-01
In this paper, we propose a new form of regularization that is able to utilize the label information of a data set for learning kernels. The proposed regularization, referred to as ideal regularization, is a linear function of the kernel matrix to be learned. The ideal regularization allows us to develop efficient algorithms to exploit labels. Three applications of the ideal regularization are considered. Firstly, we use the ideal regularization to incorporate the labels into a standard kernel, making the resulting kernel more appropriate for learning tasks. Next, we employ the ideal regularization to learn a data-dependent kernel matrix from an initial kernel matrix (which contains prior similarity information, geometric structures, and labels of the data). Finally, we incorporate the ideal regularization to some state-of-the-art kernel learning problems. With this regularization, these learning problems can be formulated as simpler ones which permit more efficient solvers. Empirical results show that the ideal regularization exploits the labels effectively and efficiently.
Institute of Scientific and Technical Information of China (English)
无
2005-01-01
The interaction kernel in the Bethe-Salpeter equation for quark-antiquark bound states is newly derived from QCD for the case where the quark and the antiquark are of different flavors. The technique of the derivation is the irreducible decomposition of the Green's functions involved in the Bethe-Salpeter equation satisfied by the quark-antiquark four-point Green's function. The derived interaction kernel is given a closed and explicit expression which shows a specific structure, since the kernel is represented in terms of the quark, antiquark and gluon propagators and several kinds of quark, antiquark and/or gluon three-, four-, five- and six-point vertices. Therefore, the expression of the kernel is not only convenient for perturbative calculations, but also suitable for nonperturbative investigations.
Ahmed, Qasim Zeeshan
2013-01-01
In this letter, a new detector is proposed for amplify-and-forward (AF) relaying systems. The major goal of this detector is to improve the bit error rate (BER) performance of the receiver. The probability density function is estimated with the help of the kernel density technique. A generalized Gaussian kernel is proposed. This new kernel provides more flexibility and encompasses the Gaussian and uniform kernels as special cases. The optimal window width of the kernel is calculated. Simulation results show that a gain of more than 1 dB can be achieved in terms of BER performance compared to the minimum mean square error (MMSE) receiver when communicating over Rayleigh fading channels.
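The generalized Gaussian family mentioned above has density proportional to exp(-|u|^β): β = 2 recovers the Gaussian kernel and β → ∞ approaches the uniform (boxcar) kernel. The sketch below normalizes the kernel (for unit scale) and uses it in a plain KDE; the shape values and bandwidth are illustrative, not the letter's optimized window width.

```python
import math
import numpy as np

def gg_kernel(u, beta):
    """Generalized Gaussian kernel with unit scale: c * exp(-|u|^beta),
    where c = beta / (2 * Gamma(1/beta)) makes it integrate to one.
    beta = 2 is Gaussian; large beta approaches a uniform kernel."""
    c = beta / (2.0 * math.gamma(1.0 / beta))
    return c * np.exp(-np.abs(u) ** beta)

def gg_kde(x, data, h, beta):
    """Kernel density estimate at x with a generalized Gaussian kernel."""
    return float(np.mean(gg_kernel((x - data) / h, beta)) / h)
```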
Kernel score statistic for dependent data.
Malzahn, Dörthe; Friedrichs, Stefanie; Rosenberger, Albert; Bickeböller, Heike
2014-01-01
The kernel score statistic is a global covariance component test over a set of genetic markers. It provides a flexible modeling framework and does not collapse marker information. We generalize the kernel score statistic to allow for familial dependencies and to adjust for random confounder effects. With this extension, we adjust our analysis of real and simulated baseline systolic blood pressure for polygenic familial background. We find that the kernel score test gains appreciably in power through the use of sequencing compared to tag single-nucleotide polymorphisms for very rare single nucleotide polymorphisms with <1% minor allele frequency.
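In its simplest form, a variance-component score statistic of this kind is Q = rᵀKr, where r are residuals under the null model and K is a marker kernel. The sketch below uses a linear genotype kernel K = GGᵀ as an assumed example (the abstract does not fix the kernel, and the familial-dependence extension is omitted).

```python
import numpy as np

def kernel_score_statistic(y, y_hat, G):
    """Score statistic Q = r^T K r with a linear genotype kernel K = G G^T,
    where r = y - y_hat are residuals under the null model.
    Equivalently Q = ||G^T r||^2, so Q >= 0 always."""
    r = y - y_hat
    K = G @ G.T
    return float(r @ K @ r)
```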
Kernel-based Maximum Entropy Clustering
Institute of Scientific and Technical Information of China (English)
JIANG Wei; QU Jiao; LI Benxi
2007-01-01
With the development of the Support Vector Machine (SVM), the "kernel method" has been widely studied. In this paper, we present a novel Kernel-based Maximum Entropy Clustering algorithm (KMEC). Using Mercer kernel functions, the proposed algorithm first maps the data from their original space to a high-dimensional feature space, where the data are expected to be more separable, and then performs MEC clustering in the feature space. The experimental results show that the proposed method performs better on non-hyperspherical and complex data structures.
Kernel adaptive filtering a comprehensive introduction
Liu, Weifeng; Haykin, Simon
2010-01-01
Online learning from a signal processing perspective There is increased interest in kernel learning algorithms in neural networks and a growing need for nonlinear adaptive algorithms in advanced signal processing, communications, and controls. Kernel Adaptive Filtering is the first book to present a comprehensive, unifying introduction to online learning algorithms in reproducing kernel Hilbert spaces. Based on research being conducted in the Computational Neuro-Engineering Laboratory at the University of Florida and in the Cognitive Systems Laboratory at McMaster University, O
Multiple Operator-valued Kernel Learning
Kadri, Hachem; Bach, Francis; Preux, Philippe
2012-01-01
This paper addresses the problem of learning a finite linear combination of operator-valued kernels. We study this problem in the case of kernel ridge regression for functional responses with a lr-norm constraint on the combination coefficients. We propose a multiple operator-valued kernel learning algorithm based on solving a system of linear operator equations by using a block coordinate descent procedure. We experimentally validate our approach on a functional regression task in the context of finger movement prediction in Brain-Computer Interface (BCI).
Polynomial Kernelizations for $\\MINF_1$ and $\\MNP$
Kratsch, Stefan
2009-01-01
The relation of constant-factor approximability to fixed-parameter tractability and kernelization is a long-standing open question. We prove that two large classes of constant-factor approximable problems, namely $\\MINF_1$ and $\\MNP$, including the well-known subclass $\\MSNP$, admit polynomial kernelizations for their natural decision versions. This extends results of Cai and Chen (JCSS 1997), stating that the standard parameterizations of problems in $\\MSNP$ and $\\MINF_1$ are fixed-parameter tractable, and complements recent research on problems that do not admit polynomial kernelizations (Bodlaender et al. ICALP 2008).
Approximating W projection as a separable kernel
Merry, Bruce
2016-02-01
W projection is a commonly used approach to allow interferometric imaging to be accelerated by fast Fourier transforms, but it can require a huge amount of storage for convolution kernels. The kernels are not separable, but we show that they can be closely approximated by separable kernels. The error scales with the fourth power of the field of view, and so is small enough to be ignored at mid- to high frequencies. We also show that hybrid imaging algorithms combining W projection with either faceting, snapshotting, or W stacking allow the error to be made arbitrarily small, making the approximation suitable even for high-resolution wide-field instruments.
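The separable approximation the authors describe can be sketched with an SVD; the kernel below is an arbitrary non-separable stand-in rather than a real W-projection kernel, and the rank-1 truncation is the generic recipe, not necessarily the paper's exact construction:

```python
import numpy as np

# An illustrative non-separable 2-D convolution kernel (only a stand-in
# for a W-projection kernel; the x*y coupling makes it non-separable).
y, x = np.mgrid[-16:17, -16:17]
kernel = np.exp(-(x**2 + y**2) / 50.0) * np.cos(0.1 * x * y)

# Eckart-Young: truncating the SVD to rank 1 gives the best separable
# approximation in the Frobenius norm.
U, s, Vt = np.linalg.svd(kernel)
col = U[:, 0] * np.sqrt(s[0])   # 1-D column factor
row = Vt[0] * np.sqrt(s[0])     # 1-D row factor
separable = np.outer(col, row)

# Storing the two 1-D factors needs 2*33 values instead of 33*33.
rel_err = np.linalg.norm(kernel - separable) / np.linalg.norm(kernel)
```

The storage saving is the point of the paper: two 1-D factors per kernel instead of a full 2-D grid.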
Intervertebral disc degeneration in dogs
Bergknut, Niklas
2011-01-01
Back pain is common in both dogs and humans, and is often associated with intervertebral disc (IVD) degeneration. The IVDs are essential structures of the spine and degeneration can ultimately result in diseases such as IVD herniation or spinal instability. In order to design new treatments halting
Institute of Scientific and Technical Information of China (English)
LUO Xin-Lian; BAI Hua; ZHAO Lei
2008-01-01
Regardless of the formation mechanism, an exotic object, the double degenerate star (DDS), is introduced and investigated, which is composed of baryonic matter and some unknown fermionic dark matter. Unlike simple white dwarfs (WDs), there is an additional gravitational force provided by the unknown fermion component inside DDSs, which may strongly affect the structure and stability of such objects. Many possible and strange observational phenomena connected with them are concisely discussed. Similar to a normal WD, this object can also undergo a thermonuclear explosion as a type Ia supernova when the DDS's mass exceeds the maximum mass that can be supported by electron degeneracy pressure. However, since the total mass of baryonic matter can be much lower than that of a WD at the Chandrasekhar mass limit, the peak luminosity should be much dimmer than previously expected, which may cast a slight shadow on the use of SNe Ia as standard candles in cosmological research.
Kernel map compression for speeding the execution of kernel-based methods.
Arif, Omar; Vela, Patricio A
2011-06-01
The use of Mercer kernel methods in statistical learning theory provides for strong learning capabilities, as seen in kernel principal component analysis and support vector machines. Unfortunately, after learning, the computational complexity of execution through a kernel is of the order of the size of the training set, which is quite large for many applications. This paper proposes a two-step procedure for arriving at a compact and computationally efficient execution procedure. After learning in the kernel space, the proposed extension exploits the universal approximation capabilities of generalized radial basis function neural networks to efficiently approximate and replace the projections onto the empirical kernel map used during execution. Sample applications demonstrate significant compression of the kernel representation with graceful performance loss.
Defective Kernel Mutants of Maize II. Morphological and Embryo Culture Studies.
Sheridan, W F; Neuffer, M G
1980-08-01
This report presents the initial results of our study of the immature kernel stage of 150 defective kernel maize mutants. They are single-gene, recessive mutants that map throughout the genome, defective in both endosperm and embryo development and, for the most part, lethal (Neuffer and Sheridan 1980). All can be distinguished on immature ears, and 85% of them reveal a mutant phenotype within 11 to 17 days post-pollination. Most have immature kernels that are smaller and lighter in color than their normal counterparts. Forty of the mutants suffer from their defects early in kernel development and are blocked in embryogenesis before their primordia differentiate, or, if primordia are formed, they are unable to germinate when cultured as immature embryos or tested at maturity; a few begin embryo degeneration prior to the time that mutant kernels become visually distinguishable. The others express the associated lesion later in kernel development, form at least one leaf primordium by the time kernels are distinguishable, and will germinate when cultured or tested at maturity. In most cases, on a fresh-weight basis, the mutants have embryos that are more severely defective than the endosperm; their embryos usually are no more than one-half to two-thirds the size, and lag behind by one or two developmental stages, in comparison with embryos in normal kernels from the same ear. One hundred and two mutants were examined by culturing embryos on basal and enriched media; 21 simply enlarged or completely failed to grow on any of the media tested, and 81 produced shoots and roots on at least one medium. Many grew equally well on basal and enriched media; 16 grew at a faster rate on basal medium and 23 displayed superior growth on enriched medium. Among the latter group, 10 may be auxotrophs. One of these mutants and another mutant isolated by E. H. Coe are proline-requiring mutants, allelic to pro-1. Considering their diversity of expression as evidenced by their
The Linux kernel as flexible product-line architecture
Jonge, M. de
2002-01-01
The Linux kernel source tree is huge ($>$ 125 MB) and inflexible (because it is difficult to add new kernel components). We propose to make this architecture more flexible by assembling kernel source trees dynamically from individual kernel components. Users can then select what component they real
7 CFR 51.2296 - Three-fourths half kernel.
2010-01-01
Section 51.2296, Agriculture Regulations of the Department of Agriculture, AGRICULTURAL MARKETING SERVICE (Standards...). § 51.2296 Three-fourths half kernel. Three-fourths half kernel means a portion of a half of a kernel which has more...
7 CFR 981.401 - Adjusted kernel weight.
2010-01-01
Section 981.401, Administrative Rules and Regulations. § 981.401 Adjusted kernel weight. (a) Definition. Adjusted kernel weight... kernels in excess of five percent; less shells, if applicable; less processing loss of one percent...
7 CFR 51.1441 - Half-kernel.
2010-01-01
Section 51.1441, United States Standards for Grades of Shelled Pecans, Definitions. § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume...
7 CFR 51.1403 - Kernel color classification.
2010-01-01
Section 51.1403 (STANDARDS), United States Standards for Grades of Pecans in the Shell, Kernel Color Classification. § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the...
NLO corrections to the Kernel of the BKP-equations
Energy Technology Data Exchange (ETDEWEB)
Bartels, J. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Fadin, V.S. [Budker Institute of Nuclear Physics, Novosibirsk (Russian Federation); Novosibirskij Gosudarstvennyj Univ., Novosibirsk (Russian Federation); Lipatov, L.N. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Petersburg Nuclear Physics Institute, Gatchina, St. Petersburg (Russian Federation); Vacca, G.P. [INFN, Sezione di Bologna (Italy)
2012-10-02
We present results for the NLO kernel of the BKP equations for composite states of three reggeized gluons in the Odderon channel, both in QCD and in N=4 SYM. The NLO kernel consists of the NLO BFKL kernel in the color octet representation and the connected 3→3 kernel, computed in the tree approximation.
Relative n-widths of periodic convolution classes with NCVD-kernel and B-kernel
Institute of Scientific and Technical Information of China (English)
Anonymous
2010-01-01
In this paper, we consider the relative n-widths of two kinds of periodic convolution classes, K_p(K) and B_p(G), whose convolution kernels are the NCVD-kernel K and the B-kernel G. The asymptotic estimates of K_n(K_p(K), K_p(K))_q and K_n(B_p(G), B_p(G))_q are obtained for p = 1 and ∞, 1 ≤ q ≤ ∞.
Reproducing Kernel for D2(Ω, ρ) and Metric Induced by Reproducing Kernel
Institute of Scientific and Technical Information of China (English)
ZHAO Zhen Gang
2009-01-01
An important property of the reproducing kernel of D2(Ω, ρ) is obtained, and the reproducing kernels for D2(Ω, ρ) are calculated when Ω = B^n × B^n and ρ are certain special functions. A reproducing kernel is used to construct a positive semi-definite matrix and a distance function defined on Ω × Ω. An inequality is obtained relating the distance function and the pseudodistance induced by the matrix.
Local Kernel for Brains Classification in Schizophrenia
Castellani, U.; Rossato, E.; Murino, V.; Bellani, M.; Rambaldelli, G.; Tansella, M.; Brambilla, P.
In this paper a novel framework for brain classification is proposed in the context of mental health research. A learning-by-example method is introduced by combining local measurements with a nonlinear Support Vector Machine. Instead of considering a voxel-by-voxel comparison between patients and controls, we focus on landmark points which are characterized by local region descriptors, namely the Scale Invariant Feature Transform (SIFT). Then, matching is obtained by introducing the local kernel, for which the samples are represented by an unordered set of features. Moreover, a new weighting approach is proposed to take into account the discriminative relevance of the detected groups of features. Experiments have been performed on a set of 54 patients with schizophrenia and 54 normal controls, on which regions of interest (ROIs) have been manually traced by experts. Preliminary results on the Dorso-lateral PreFrontal Cortex (DLPFC) region are promising, since a successful classification rate of up to 75% has been obtained with this technique, and the performance improved to 85% when the subjects were stratified by sex.
Facts about Age-Related Macular Degeneration
Drakon, A. V.; Kiverin, A. D.; Yakovenko, I. S.
2016-11-01
The basic question raised in the paper concerns the origins of exothermal reaction kernels and the mechanisms of detonation onset behind the reflected shock wave in shock-tube experiments. Using the conventional experimental technique, it is found that in a certain range of conditions behind the reflected shocks a so-called "mild ignition" arises, characterized by detonation formation from a kernel distant from the end-wall. The results of 2-D and 3-D simulations of the flow evolution behind the incident and reflected shocks allow formulation of the following scenario of ignition kernel formation. The initial stage, during and after the diaphragm rupture, is characterized by a set of non-steady gasdynamical processes. As a result, the flow behind the incident shock turns out to be saturated with temperature perturbations. Further evolution of these perturbations generates shear stresses in the flow, accompanied by intensification of velocity and temperature perturbations. After reflection, the shock wave interacts with the formed kernels of higher temperature, and more pronounced kernels arise on the background of the reactivity profile determined by the moving reflected shock. The exothermal reaction starts inside such kernels and propagates into the ambient medium as a spontaneous ignition wave with minimum initial speed equal to the reflected shock wave speed.
Pai, Akshay; Sommer, Stefan; Sorensen, Lauge; Darkner, Sune; Sporring, Jon; Nielsen, Mads
2016-06-01
In this paper, we propose a multi-scale, multi-kernel shape, compactly supported kernel bundle framework for stationary velocity field-based image registration (Wendland kernel bundle stationary velocity field, wKB-SVF). We exploit the possibility of directly choosing kernels to construct a reproducing kernel Hilbert space (RKHS) instead of imposing it from a differential operator. The proposed framework allows us to minimize computational cost without sacrificing the theoretical foundations of SVF-based diffeomorphic registration. In order to recover deformations occurring at different scales, we use compactly supported Wendland kernels at multiple scales and orders to parameterize the velocity fields, and the framework allows simultaneous optimization over all scales. The performance of wKB-SVF is extensively compared to the 14 non-rigid registration algorithms presented in a recent comparison paper. On both MGH10 and CUMC12 datasets, the accuracy of wKB-SVF is improved when compared to other registration algorithms. In a disease-specific application for intra-subject registration, atrophy scores estimated using the proposed registration scheme separates the diagnostic groups of Alzheimer's and normal controls better than the state-of-the-art segmentation technique. Experimental results show that wKB-SVF is a robust, flexible registration framework that allows theoretically well-founded and computationally efficient multi-scale representation of deformations and is equally well-suited for both inter- and intra-subject image registration.
Discriminant Kernel Assignment for Image Coding.
Deng, Yue; Zhao, Yanyu; Ren, Zhiquan; Kong, Youyong; Bao, Feng; Dai, Qionghai
2017-06-01
This paper proposes discriminant kernel assignment (DKA) in the bag-of-features framework for image representation. DKA slightly modifies existing kernel assignment to learn width-variant Gaussian kernel functions to perform discriminant local feature assignment. When directly applying gradient-descent method to solve DKA, the optimization may contain multiple time-consuming reassignment implementations in iterations. Accordingly, we introduce a more practical way to locally linearize the DKA objective and the difficult task is cast as a sequence of easier ones. Since DKA only focuses on the feature assignment part, it seamlessly collaborates with other discriminative learning approaches, e.g., discriminant dictionary learning or multiple kernel learning, for even better performances. Experimental evaluations on multiple benchmark datasets verify that DKA outperforms other image assignment approaches and exhibits significant efficiency in feature coding.
Multiple Kernel Spectral Regression for Dimensionality Reduction
Directory of Open Access Journals (Sweden)
Bing Liu
2013-01-01
Traditional manifold learning algorithms, such as locally linear embedding, Isomap, and Laplacian eigenmap, only provide the embedding results of the training samples. To solve the out-of-sample extension problem, spectral regression (SR) solves the problem of learning an embedding function by establishing a regression framework, which can avoid eigen-decomposition of dense matrices. Motivated by the effectiveness of SR, we incorporate multiple kernel learning (MKL) into SR for dimensionality reduction. The proposed approach (termed MKL-SR) seeks an embedding function in the Reproducing Kernel Hilbert Space (RKHS) induced by the multiple base kernels. An MKL-SR algorithm is proposed to further improve the performance of kernel-based SR (KSR). Furthermore, the proposed MKL-SR algorithm can be performed in supervised, unsupervised, and semi-supervised situations. Experimental results on supervised and semi-supervised classification demonstrate the effectiveness and efficiency of our algorithm.
Quantum kernel applications in medicinal chemistry.
Huang, Lulu; Massa, Lou
2012-07-01
Progress in the quantum mechanics of biological molecules is being driven by computational advances. The notion of quantum kernels can be introduced to simplify the formalism of quantum mechanics, making it especially suitable for parallel computation of very large biological molecules. The essential idea is to mathematically break large biological molecules into smaller kernels that are calculationally tractable, and then to represent the full molecule by a summation over the kernels. The accuracy of the kernel energy method (KEM) is shown by systematic application to a great variety of molecular types found in biology. These include peptides, proteins, DNA and RNA. Examples are given that explore the KEM across a variety of chemical models, and to the outer limits of energy accuracy and molecular size. KEM represents an advance in quantum biology applicable to problems in medicine and drug design.
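In the double-kernel form reported in the KEM literature, the total energy of a molecule broken into n kernels is assembled from single- and pair-kernel calculations (here E_ab is the energy of the joined pair of kernels a and b, and E_a that of a single kernel; this is the generic KEM formula, not a result specific to this paper):

```latex
E \;\approx\; \sum_{a=1}^{n-1} \sum_{b=a+1}^{n} E_{ab} \;-\; (n-2) \sum_{a=1}^{n} E_{a}
```

Each pairwise interaction is counted once, while the single-kernel energies, which the pair sum over-counts n−1 times each, are subtracted back so every kernel contributes exactly once.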
Kernel method-based fuzzy clustering algorithm
Institute of Scientific and Technical Information of China (English)
Wu Zhongdong; Gao Xinbo; Xie Weixin; Yu Jianping
2005-01-01
The fuzzy C-means clustering algorithm (FCM) is extended to the fuzzy kernel C-means clustering algorithm (FKCM) to effectively perform cluster analysis on diversiform structures, such as non-hyperspherical data, data with noise, data with a mixture of heterogeneous cluster prototypes, asymmetric data, etc. Based on the Mercer kernel, the FKCM clustering algorithm is derived from the FCM algorithm combined with the kernel method. The results of experiments with synthetic and real data show that the FKCM clustering algorithm is more universal and can effectively perform unsupervised analysis of datasets with varied structures, in contrast to the FCM algorithm. Kernel-based clustering is thus an important research direction in fuzzy clustering analysis.
Kernel representations for behaviors over finite rings
Kuijper, M.; Pinto, R.; Polderman, J.W.; Yamamoto, Y.
2006-01-01
In this paper we consider dynamical systems over finite rings. The rings that we study are the integers modulo a power of a given prime. We study the theory of representations for such systems, in particular kernel representations.
Ensemble Approach to Building Mercer Kernels
National Aeronautics and Space Administration — This paper presents a new methodology for automatic knowledge driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive...
Convolution kernels for multi-wavelength imaging
National Research Council Canada - National Science Library
Boucaud, Alexandre; Bocchio, Marco; Abergel, Alain; Orieux, François; Dole, Hervé; Hadj-Youcef, Mohamed Amine
2016-01-01
.... Given the knowledge of the PSF in each band, a straightforward way of processing images is to homogenise them all to a target PSF using convolution kernels, so that they appear as if they had been...
Sogi, Dalbir Singh; Siddiq, Muhammad; Greiby, Ibrahim; Dolan, Kirk D
2013-12-01
Mango processing produces a significant amount of waste (peels and kernels) that can be utilized for the production of value-added ingredients for various food applications. Mango peel and kernel were dried using different techniques, such as freeze drying, hot air, vacuum and infrared drying. Freeze-dried mango waste had higher antioxidant properties than that from the other techniques. The ORAC values of peel and kernel varied from 418-776 and 1547-1819 μmol TE/g db, respectively. The solubility of freeze-dried peel and kernel powder was the highest. The water and oil absorption indices of mango waste powders ranged between 1.83-6.05 and 1.66-3.10, respectively. Freeze-dried powders had the lowest bulk density values among the different techniques tried. The cabinet-dried waste powders can potentially be used in food products to enhance their nutritional and antioxidant properties.
Difference image analysis: Automatic kernel design using information criteria
Bramich, D M; Alsubai, K A; Bachelet, E; Mislis, D; Parley, N
2015-01-01
We present a selection of methods for automatically constructing an optimal kernel model for difference image analysis which require very few external parameters to control the kernel design. Each method consists of two components: a kernel design algorithm to generate a set of candidate kernel models, and a model selection criterion to select the simplest kernel model from the candidate models that provides a sufficiently good fit to the target image. We restricted our attention to the case of solving for a spatially-invariant convolution kernel composed of delta basis functions, and we considered 19 different kernel solution methods including six employing kernel regularisation. We tested these kernel solution methods by performing a comprehensive set of image simulations and investigating how their performance in terms of model error, fit quality, and photometric accuracy depends on the properties of the reference and target images. We find that the irregular kernel design algorithm employing unreg...
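A minimal 1-D toy of the delta-function-basis kernel solution with regularisation: the design matrix columns are shifted copies of the reference signal (one free coefficient per kernel pixel), and a plain ridge penalty stands in for the regularisation schemes such papers compare. Signal sizes and λ are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-D toy: reference signal r, target t = r convolved with k_true, plus noise
r = rng.normal(size=200)
k_true = np.exp(-0.5 * (np.arange(-3, 4) / 1.5) ** 2)
k_true /= k_true.sum()
t = np.convolve(r, k_true, mode="same") + 0.01 * rng.normal(size=200)

# Delta-function basis: column j of the design matrix is the reference
# shifted by pixel offset j, so each kernel pixel is a free parameter.
m = 7
half = m // 2
A = np.column_stack([np.roll(r, s) for s in range(-half, half + 1)])

lam = 0.5  # regularisation strength; the identity (ridge) penalty is a
           # simple stand-in for the derivative-based penalties in the paper
k_hat = np.linalg.solve(A.T @ A + lam * np.eye(m), A.T @ t)
```

Increasing `lam` smooths `k_hat` at the cost of fit residuals, the same variance trade-off described for λ in the PSF-matching context.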
Preparing UO2 kernels by gelcasting
Institute of Scientific and Technical Information of China (English)
GUO Wenli; LIANG Tongxiang; ZHAO Xingyu; HAO Shaochang; LI Chengliang
2009-01-01
A process named gelcasting has been developed for the production of dense UO2 kernels for the high-temperature gas-cooled reactor. Compared with the sol-gel process, the green microspheres can be obtained by dispersing the U3O8 slurry in the gelcasting process, which makes gelcasting a more facile process with less waste when fabricating UO2 kernels. The heat treatment.
The Bergman kernel functions on Hua domains
Institute of Scientific and Technical Information of China (English)
无
2001-01-01
We obtain the Bergman kernel functions in explicit formulas on four types of Hua domain. There are two key steps: first, we give the holomorphic automorphism groups of the four types of Hua domain; second, we introduce the concept of a semi-Reinhardt domain and give their complete orthonormal systems. Based on these two aspects we obtain the Bergman kernel function in explicit formulas on Hua domains.
Fractal Weyl law for Linux Kernel Architecture
Ermann, L; Shepelyansky, D L
2010-01-01
We study the properties of spectrum and eigenstates of the Google matrix of a directed network formed by the procedure calls in the Linux Kernel. Our results obtained for various versions of the Linux Kernel show that the spectrum is characterized by the fractal Weyl law established recently for systems of quantum chaotic scattering and the Perron-Frobenius operators of dynamical maps. The fractal Weyl exponent is found to be $\
Varying kernel density estimation on ℝ+
Mnatsakanov, Robert; Sarkisian, Khachatur
2015-01-01
In this article a new nonparametric density estimator based on the sequence of asymmetric kernels is proposed. This method is natural when estimating an unknown density function of a positive random variable. The rates of Mean Squared Error, Mean Integrated Squared Error, and the L1-consistency are investigated. Simulation studies are conducted to compare a new estimator and its modified version with traditional kernel density construction. PMID:26740729
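The abstract does not name the kernel family; a standard asymmetric-kernel construction on [0, ∞) is the gamma kernel, whose shape varies with the evaluation point so that no mass leaks below zero. A sketch (the bandwidth `b` and the specific shape parameterization are our choices):

```python
import numpy as np
from math import lgamma

def gamma_kde(x, data, b=0.1):
    # Asymmetric-kernel density estimate on [0, inf): each observation
    # contributes a gamma pdf with shape x/b + 1 and scale b, evaluated at
    # the data -- the kernel changes with the evaluation point x, unlike a
    # fixed symmetric kernel, so there is no boundary bias at zero.
    shape = x / b + 1.0
    log_dens = ((shape - 1.0) * np.log(data) - data / b
                - lgamma(shape) - shape * np.log(b))
    return np.exp(log_dens).mean()
```

Note that at finite bandwidth the estimate need not integrate exactly to one; it does so asymptotically as b shrinks.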
Adaptively Learning the Crowd Kernel
Tamuz, Omer; Belongie, Serge; Shamir, Ohad; Kalai, Adam Tauman
2011-01-01
We introduce an algorithm that, given n objects, learns a similarity matrix over all n^2 pairs, from crowdsourced data alone. The algorithm samples responses to adaptively chosen triplet-based relative-similarity queries. Each query has the form "is object 'a' more similar to 'b' or to 'c'?" and is chosen to be maximally informative given the preceding responses. The output is an embedding of the objects into Euclidean space (like MDS); we refer to this as the "crowd kernel." The runtime (empirically observed to be linear) and cost (about $0.15 per object) of the algorithm are small enough to permit its application to databases of thousands of objects. The distance matrix provided by the algorithm allows for the development of an intuitive and powerful sequential, interactive search algorithm which we demonstrate for a variety of visual stimuli. We present quantitative results that demonstrate the benefit in cost and time of our approach compared to a nonadaptive approach. We also show the ability of our appr...
Evaluating the Gradient of the Thin Wire Kernel
Wilton, Donald R.; Champagne, Nathan J.
2008-01-01
Recently, a formulation for evaluating the thin wire kernel was developed that employed a change of variable to smooth the kernel integrand, canceling the singularity in the integrand. Hence, the typical expansion of the wire kernel in a series for use in the potential integrals is avoided. The new expression for the kernel is exact and may be used directly to determine the gradient of the wire kernel, which consists of components that are parallel and radial to the wire axis.
On the Inclusion Relation of Reproducing Kernel Hilbert Spaces
Zhang, Haizhang; Zhao, Liang
2011-01-01
To help understand various reproducing kernels used in applied sciences, we investigate the inclusion relation of two reproducing kernel Hilbert spaces. Characterizations in terms of feature maps of the corresponding reproducing kernels are established. A full table of inclusion relations among widely-used translation invariant kernels is given. Concrete examples for Hilbert-Schmidt kernels are presented as well. We also discuss the preservation of such a relation under various operations of ...
Directory of Open Access Journals (Sweden)
F. Z. Geng
2012-01-01
We introduce a new method for solving Riccati differential equations, which is based on the reproducing kernel method and the quasilinearization technique. The quasilinearization technique is used to reduce the Riccati differential equation to a sequence of linear problems. The resulting sets of differential equations are treated by using the reproducing kernel method. The solutions of Riccati differential equations obtained using many existing methods give good approximations only in the neighborhood of the initial position. However, the solutions obtained using the present method give good approximations in a larger interval, rather than a local vicinity of the initial position. Numerical results compared with other methods show that the method is simple and effective.
Hanft, J M; Jones, R J
1986-06-01
Kernels cultured in vitro were induced to abort by high temperature (35 degrees C) and by culturing six kernels/cob piece. Aborting kernels failed to enter a linear phase of dry mass accumulation and had a final mass that was less than 6% of nonaborting field-grown kernels. Kernels induced to abort by high temperature failed to synthesize starch in the endosperm and had elevated sucrose concentrations and low fructose and glucose concentrations in the pedicel during early growth compared to nonaborting kernels. Kernels induced to abort by high temperature also had much lower pedicel soluble acid invertase activities than did nonaborting kernels. These results suggest that high temperature during the lag phase of kernel growth may impair the process of sucrose unloading in the pedicel by indirectly inhibiting soluble acid invertase activity and prevent starch synthesis in the endosperm. Kernels induced to abort by culturing six kernels/cob piece had reduced pedicel fructose, glucose, and sucrose concentrations compared to kernels from field-grown ears. These aborting kernels also had a lower pedicel soluble acid invertase activity compared to nonaborting kernels from the same cob piece and from field-grown ears. The low invertase activity in pedicel tissue of the aborting kernels was probably caused by a lack of substrate (sucrose) for the invertase to cleave due to the intense competition for available assimilates. In contrast to kernels cultured at 35 degrees C, aborting kernels from cob pieces containing all six kernels accumulated starch in a linear fashion. These results indicate that kernels cultured six/cob piece abort because of an inadequate supply of sugar and are similar to apical kernels from field-grown ears that often abort prior to the onset of linear growth.
[Age related macular degeneration].
Sayen, Alexandra; Hubert, Isabelle; Berrod, Jean-Paul
2011-02-01
Age-related macular degeneration (ARMD) is a multifactorial disease caused by a combination of genetic and environmental factors. It is the leading cause of blindness in patients over 50 in the western world. The disease has traditionally been classified into early and late stages, with dry (atrophic) and wet (neovascular) forms: the neovascular form is characterized by the development of new blood vessels under the macula (choroidal neovascularisation), which leads to a rapid decline of vision associated with metamorphopsia and requires an urgent ophthalmological examination. Optical coherence tomography is now one of the most important parts of the examination for diagnosis and treatment. Patients with age-related maculopathy should consider taking a dietary supplement such as that used in AREDS. The treatment of wet ARMD has benefited greatly since 2006 from anti-VEGF (vascular endothelial growth factor) molecules such as ranibizumab or bevacizumab, given as repeated intravitreal injections. A systematic follow-up every 4 to 8 weeks is required for several years. There is currently no effective treatment for dry ARMD. For patients with binocular visual acuity under 60/200, rehabilitation includes a low-vision specialist, vision aids and psychological support.
A spectral-spatial kernel-based method for hyperspectral imagery classification
Li, Li; Ge, Hongwei; Gao, Jianqiang
2017-02-01
Spectral-based classification methods have gained increasing attention in hyperspectral imagery classification. Nevertheless, spectral information alone cannot fully represent the inherent spatial distribution of the imagery. In this paper, a spectral-spatial kernel-based method for hyperspectral imagery classification is proposed. Firstly, the spatial feature is extracted using area median filtering (AMF). Secondly, the result of the AMF is used to construct spatial feature patches according to different window sizes. Finally, using the kernel technique, the spectral feature and the spatial feature are jointly used for classification through a support vector machine (SVM) formulation. The proposed method is therefore called the spectral-spatial kernel-based support vector machine (SSF-SVM). To evaluate the proposed method, experiments were performed on three hyperspectral images. The experimental results show that an improvement is possible with the proposed technique in most real-world classification problems.
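The "kernel technique" for joining the two feature sets can be realized in several ways; a common one, sketched here with random stand-in data and hypothetical `gamma`/`mu` values (the paper may combine its features differently), is a convex combination of per-feature Mercer kernels:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pixels, n_bands, n_spatial = 50, 20, 9

spectral = rng.normal(size=(n_pixels, n_bands))   # per-pixel spectra (fake)
spatial = rng.normal(size=(n_pixels, n_spatial))  # e.g. filtered patch (fake)

def rbf(X, gamma):
    # Gaussian (RBF) kernel matrix over the rows of X
    sq = np.sum(X**2, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))

mu = 0.6  # hypothetical trade-off between spectral and spatial evidence
K = mu * rbf(spectral, 0.05) + (1 - mu) * rbf(spatial, 0.1)

# A convex combination of Mercer kernels is itself a Mercer kernel, so K
# can be handed directly to any kernel machine, e.g. an SVM.
```

Because positive semi-definiteness is closed under nonnegative sums, no extra validation of K is needed before training.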
Single pass kernel k-means clustering method
Indian Academy of Sciences (India)
T Hitendra Sarma; P Viswanath; B Eswara Reddy
2013-06-01
In unsupervised classification, the kernel k-means clustering method has been shown to perform better than the conventional k-means clustering method in identifying non-isotropic clusters in a data set. The space and time requirements of this method are $O(n^2)$, where $n$ is the data set size. Because of this quadratic time complexity, the kernel k-means method is not applicable to large data sets. The paper proposes a simple and faster version of the kernel k-means clustering method, called the single pass kernel k-means clustering method. The proposed method works as follows. First, a random sample $\mathcal{S}$ is selected from the data set $\mathcal{D}$. A partition $\Pi_{\mathcal{S}}$ is obtained by applying the conventional kernel k-means method on the random sample $\mathcal{S}$. The novelty of the paper is that, for each cluster in $\Pi_{\mathcal{S}}$, the exact cluster center in the input space is obtained using the gradient descent approach. Finally, each unsampled pattern is assigned to its closest exact cluster center to get a partition of the entire data set. The proposed method needs to scan the data set only once and is much faster than the conventional kernel k-means method. The time complexity of this method is $O(s^2+t+nk)$, where $s$ is the size of the random sample $\mathcal{S}$, $k$ is the number of clusters required, and $t$ is the time taken by the gradient descent method (to find exact cluster centers). The space complexity of the method is $O(s^2)$. The proposed method can be easily implemented and is suitable for large data sets, like those in data mining applications. Experimental results show that, with a small loss of quality, the proposed method can significantly reduce the time taken compared with the conventional kernel k-means clustering method. The proposed method is also compared with other recent similar methods.
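The single-pass scheme can be sketched as follows. This is a hedged toy, not the authors' implementation: the gradient-descent pre-image step that recovers exact input-space cluster centers is replaced here by a simple sample mean, and the RBF kernel, sample size and all names are assumptions.

```python
import numpy as np

def rbf(X, Y, gamma):
    d2 = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def kernel_kmeans(K, k, iters=50, seed=0):
    """Standard kernel k-means on a precomputed Gram matrix K."""
    n = K.shape[0]
    labels = np.random.default_rng(seed).integers(k, size=n)
    for _ in range(iters):
        D = np.empty((n, k))
        for c in range(k):
            idx = labels == c
            m = max(idx.sum(), 1)
            # squared feature-space distance to the centroid of cluster c
            D[:, c] = (np.diag(K) - 2 * K[:, idx].sum(1) / m
                       + K[np.ix_(idx, idx)].sum() / m**2)
        new = D.argmin(1)
        if (new == labels).all():
            break
        labels = new
    return labels

def single_pass_kernel_kmeans(X, k, sample_size, gamma=1.0, seed=0):
    rng = np.random.default_rng(seed)
    S = rng.choice(len(X), size=sample_size, replace=False)
    labels_S = kernel_kmeans(rbf(X[S], X[S], gamma), k, seed=seed)
    # Approximate each cluster's input-space center by the sample mean
    # (the paper instead refines this with a gradient-descent pre-image step).
    centers = []
    for c in range(k):
        members = X[S][labels_S == c]
        centers.append(members.mean(0) if len(members) else X[S[0]])
    centers = np.array(centers)
    # Single pass over the full data: assign to the nearest center.
    d2 = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
    return d2.argmin(1), centers
```

Only the $s \times s$ sample Gram matrix is ever materialised, matching the $O(s^2)$ space bound quoted above.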
Kernel-Based Reconstruction of Graph Signals
Romero, Daniel; Ma, Meng; Giannakis, Georgios B.
2017-02-01
A number of applications in engineering, social sciences, physics, and biology involve inference over networks. In this context, graph signals are widely encountered as descriptors of vertex attributes or features in graph-structured data. Estimating such signals in all vertices given noisy observations of their values on a subset of vertices has been extensively analyzed in the literature of signal processing on graphs (SPoG). This paper advocates kernel regression as a framework generalizing popular SPoG modeling and reconstruction and expanding their capabilities. Formulating signal reconstruction as a regression task on reproducing kernel Hilbert spaces of graph signals brings in benefits from statistical learning, offers fresh insights, and allows estimators to leverage richer forms of prior information than existing alternatives. A number of SPoG notions such as bandlimitedness, graph filters, and the graph Fourier transform are naturally accommodated in the kernel framework. Additionally, this paper capitalizes on the so-called representer theorem to devise simpler versions of existing Tikhonov regularized estimators, and offers a novel probabilistic interpretation of kernel methods on graphs based on graphical models. Motivated by the challenges of selecting the bandwidth parameter in SPoG estimators or the kernel map in kernel-based methods, the present paper further proposes two multi-kernel approaches with complementary strengths. Whereas the first enables estimation of the unknown bandwidth of bandlimited signals, the second allows for efficient graph filter selection. Numerical tests with synthetic as well as real data demonstrate the merits of the proposed methods relative to state-of-the-art alternatives.
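The representer-theorem estimator at the core of this framework fits in a few lines of numpy. This is a toy sketch under stated assumptions: a diffusion kernel exp(-βL) built from the graph Laplacian, a 6-cycle graph, and the regularizer λ are all illustrative choices, not the paper's specific estimators.

```python
import numpy as np

def diffusion_kernel(L, beta=1.0):
    """Graph diffusion kernel K = exp(-beta * L) via eigendecomposition."""
    w, V = np.linalg.eigh(L)
    return (V * np.exp(-beta * w)) @ V.T

def kernel_reconstruct(K, obs_idx, y, lam=1e-3):
    """Kernel ridge estimate of the full graph signal from noisy samples y
    on vertices obs_idx; by the representer theorem the estimate is a
    combination of kernel columns at the observed vertices."""
    Kss = K[np.ix_(obs_idx, obs_idx)]
    alpha = np.linalg.solve(Kss + lam * np.eye(len(obs_idx)), y)
    return K[:, obs_idx] @ alpha

# Toy graph: a 6-cycle carrying a smooth (low graph-frequency) signal.
n = 6
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1
L = np.diag(A.sum(1)) - A
K = diffusion_kernel(L, beta=0.5)
x = np.cos(2 * np.pi * np.arange(n) / n)   # smooth graph signal
obs = np.array([0, 2, 4])                   # observe half the vertices
x_hat = kernel_reconstruct(K, obs, x[obs], lam=1e-6)
```

Swapping the kernel map (diffusion, Laplacian pseudo-inverse, bandlimited projector) changes the prior without changing the estimator.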
Degenerate pseudo-Riemannian metrics
Hervik, Sigbjorn; Yamamoto, Kei
2014-01-01
In this paper we study pseudo-Riemannian spaces with a degenerate curvature structure, i.e. there exists a continuous family of metrics having identical polynomial curvature invariants. We approach this problem by utilising an idea from invariant theory: the existence of a boost, which is assumed to extend to a neighbourhood. This approach proves to be very fruitful: it produces a class of metrics containing all known examples of degenerate metrics. To date, only Kundt and Walker metrics have been given; however, our study gives a plethora of examples showing that degenerate metrics extend beyond the Kundt and Walker examples. The approach also gives a useful criterion for a metric to be degenerate. Specifically, we use this to study the subclass of VSI and CSI metrics (i.e., spaces where polynomial curvature invariants are all vanishing or constants, respectively).
Age-Related Macular Degeneration
Age-related macular degeneration (AMD) is a leading cause of vision loss among people age 60 and older. It causes damage to the ...
Komatitsch, Dimitri; Xie, Zhinan; Bozdaǧ, Ebru; Sales de Andrade, Elliott; Peter, Daniel; Liu, Qinya; Tromp, Jeroen
2016-09-01
We introduce a technique to compute exact anelastic sensitivity kernels in the time domain using parsimonious disk storage. The method is based on a reordering of the time loop of time-domain forward/adjoint wave propagation solvers combined with the use of a memory buffer. It avoids instabilities that occur when time-reversing dissipative wave propagation simulations. The total number of required time steps is unchanged compared to usual acoustic or elastic approaches. The cost is reduced by a factor of 4/3 compared to the case in which anelasticity is partially accounted for by accommodating the effects of physical dispersion. We validate our technique by performing a test in which we compare the Kα sensitivity kernel to the exact kernel obtained by saving the entire forward calculation. This benchmark confirms that our approach is also exact. We illustrate the importance of including full attenuation in the calculation of sensitivity kernels by showing significant differences with physical-dispersion-only kernels.
On the degenerate phase boundaries
Ma, Y; Kuang, Z; Ma, Yongge; Liang, Canbin; Kuang, Zhiquan
1999-01-01
The structure of the phase boundary between degenerate and non-degenerate regions in Ashtekar's gravity has been studied by Bengtsson and Jacobson, who conjectured that the "phase boundary" should always be null. In this paper, we reformulate the reparametrization procedure in the mapping language and distinguish a phase boundary from its image. It is shown that the image has to be null, while the nullness of the phase boundary requires a more suitable criterion.
Energy Technology Data Exchange (ETDEWEB)
Welsch, Goetz Hannes [Medical University of Vienna, MR Center - High Field MR, Department of Radiology, Vienna (Austria); University of Erlangen, Department of Trauma Surgery, Erlangen (Germany); Trattnig, Siegfried; Goed, Sabine; Stelzeneder, David [Medical University of Vienna, MR Center - High Field MR, Department of Radiology, Vienna (Austria); Paternostro-Sluga, Tatjana [Medical University of Vienna, Department of Physical Therapy, Vienna (Austria); Bohndorf, Klaus [Klinikum Augsburg, Department of Radiology, Augsburg (Germany); Mamisch, Tallal Charles [Medical University of Vienna, MR Center - High Field MR, Department of Radiology, Vienna (Austria); University of Berne, Department of Orthopedic Surgery, Berne (Switzerland)
2011-05-15
To assess, compare and correlate quantitative T2 and T2* relaxation time measurements of intervertebral discs (IVDs) in patients suffering from low back pain, with respect to the IVD degeneration as assessed by the morphological Pfirrmann score. Special focus was on the spatial variation of T2 and T2* between the annulus fibrosus (AF) and the nucleus pulposus (NP). Thirty patients (mean age: 38.1 ± 9.1 years; 20 female, 10 male) suffering from low back pain were included. Morphological (sagittal T1-FSE, sagittal and axial T2-FSE) and biochemical (sagittal T2 and T2* mapping) MRI was performed at 3 Tesla covering IVDs L1-L2 to L5-S1. All IVDs were morphologically classified using the Pfirrmann score. Region-of-interest (ROI) analysis was performed on midsagittal T2 and T2* maps at five ROIs from anterior to posterior to obtain information on spatial variation between the AF and the NP. Statistical analysis-of-variance and Pearson correlation were performed. The spatial variation, as an increase in T2 and T2* values from the AF to the NP, was highest at Pfirrmann grade I and declined at higher Pfirrmann grades II-IV (p < 0.05). With increased IVD degeneration, T2 and T2* revealed clear differences in the NP, whereas T2* was additionally able to depict changes in the posterior AF. Correlation between T2 and T2* showed a medium Pearson's correlation (0.210 to 0.356 [p < 0.001]). The clear differentiation of IVD degeneration and the possible quantification by means of T2 and fast T2* mapping may provide a new tool for follow-up therapy protocols in patients with low back pain. (orig.)
Object classification and detection with context kernel descriptors
DEFF Research Database (Denmark)
Pan, Hong; Olsen, Søren Ingvor; Zhu, Yaping
2014-01-01
Context information is important in object representation. By embedding context cues of image attributes into kernel descriptors, we propose a set of novel kernel descriptors called Context Kernel Descriptors (CKD) for object classification and detection. The motivation of CKD is to use spatial consistency of image attributes or features defined within a neighboring region to improve the robustness of descriptor matching in kernel space. For feature selection, Kernel Entropy Component Analysis (KECA) is exploited to learn a subset of discriminative CKD. Different from Kernel Principal Component...
OS X and iOS Kernel Programming
Halvorsen, Ole Henry
2011-01-01
OS X and iOS Kernel Programming combines essential operating system and kernel architecture knowledge with a highly practical approach that will help you write effective kernel-level code. You'll learn fundamental concepts such as memory management and thread synchronization, as well as the I/O Kit framework. You'll also learn how to write your own kernel-level extensions, such as device drivers for USB and Thunderbolt devices, including networking, storage and audio drivers. OS X and iOS Kernel Programming provides an incisive and complete introduction to the XNU kernel, which runs iPhones, i
Fast Computation of Global Sensitivity Kernel Database Based on Spectral-Element Simulations
Sales de Andrade, Elliott; Liu, Qinya
2017-07-01
Finite-frequency sensitivity kernels, a theoretical improvement over simple infinitely thin ray paths, have been used extensively in recent global and regional tomographic inversions. These sensitivity kernels provide more consistent and accurate interpretation of a growing number of broadband measurements, and are critical in mapping 3D heterogeneous structures of the mantle. Based on the Born approximation, the calculation of sensitivity kernels requires the interaction of the forward wavefield and an adjoint wavefield generated by placing adjoint sources at stations. Both fields can be obtained accurately through numerical simulations of seismic wave propagation, particularly important for kernels of phases that cannot be sufficiently described by ray theory (such as core-diffracted waves). However, the total number of forward and adjoint numerical simulations required to build kernels for individual source-receiver pairs and to form the design matrix for classical tomography is computationally unaffordable. In this paper, we take advantage of the symmetry of 1D reference models, perform moment tensor forward and point force adjoint spectral-element simulations, and save six-component strain fields only on the equatorial plane based on the open-source spectral-element simulation package, SPECFEM3D_GLOBE. Sensitivity kernels for seismic phases at any epicentral distance can be efficiently computed by combining forward and adjoint strain wavefields from the saved strain field database, which significantly reduces both the number of simulations and the amount of storage required for global tomographic problems. Based on this technique, we compute traveltime, amplitude and/or boundary kernels of isotropic and radially anisotropic elastic parameters for various (P, S, P_{diff}, S_{diff}, depth, surface-reflected, surface wave, S 660 S boundary, etc.) phases for the 1D ak135 model, in preparation for future global tomographic inversions.
Wilson's disease (hepatolenticular degeneration).
Herron, B E
1976-01-01
Wilson's disease, or hepatolenticular degeneration, is a rare inherited disorder of copper metabolism which usually affects young people. Excess copper accumulates in the tissues, primarily in the liver, brain, and cornea. This copper deposition results in a wide range of hepatic and neurological symptoms, and may produce psychiatric illness. Hepatic involvement often occurs in childhood, while neurological deficits generally are detected at a later age. The disease is inherited in an autosomal recessive fashion. Ocular findings are of particular importance because the corneal copper deposition, forming the Kayser-Fleischer ring, is the only pathognomonic sign of the disease. The structure of the ring and the presence of copper have been well established. An anterior capsular deposition of copper in the lens results in a characteristic sunflower cataract in some of these patients. Other ocular abnormalities have been described but are much less common. The pathogenesis of the disease and the basic genetic defect remain obscure. It is clear that there is excess copper in the tissues, but the mechanism of its deposition is unknown. It is in some way associated with a failure to synthesize the serum copper protein ceruloplasmin normally. Another theory suggests that an abnormal protein with a high affinity for copper may bind the metal in the tissues. The diagnosis may be suggested by the clinical manifestations and confirmed by the presence of a Kayser-Fleischer ring. In the absence of these findings biochemical determinations are necessary. The most important of these are the serum ceruloplasmin, the urinary copper, and the hepatic copper concentration on biopsy. Treatment consists of the administration of the copper chelating agent, penicillamine, and the avoidance of a high copper intake. This usually results in marked clinical improvement if irreversible tissue damage has not occurred. Maintenance therapy for life is necessary in order to continue the negative
The scalar field kernel in cosmological spaces
Energy Technology Data Exchange (ETDEWEB)
Koksma, Jurjen F; Prokopec, Tomislav [Institute for Theoretical Physics (ITP) and Spinoza Institute, Utrecht University, Postbus 80195, 3508 TD Utrecht (Netherlands); Rigopoulos, Gerasimos I [Helsinki Institute of Physics, University of Helsinki, PO Box 64, FIN-00014 (Finland)], E-mail: J.F.Koksma@phys.uu.nl, E-mail: T.Prokopec@phys.uu.nl, E-mail: gerasimos.rigopoulos@helsinki.fi
2008-06-21
We construct the quantum-mechanical evolution operator in the functional Schroedinger picture-the kernel-for a scalar field in spatially homogeneous FLRW spacetimes when the field is (a) free and (b) coupled to a spacetime-dependent source term. The essential element in the construction is the causal propagator, linked to the commutator of two Heisenberg picture scalar fields. We show that the kernels can be expressed solely in terms of the causal propagator and derivatives of the causal propagator. Furthermore, we show that our kernel reveals the standard light cone structure in FLRW spacetimes. We finally apply the result to Minkowski spacetime, to de Sitter spacetime and calculate the forward time evolution of the vacuum in a general FLRW spacetime.
Robust Visual Tracking via Fuzzy Kernel Representation
Directory of Open Access Journals (Sweden)
Zhiqiang Wen
2013-05-01
A robust visual kernel tracking approach is presented for solving the problem of background pixels contaminating the object model. First, after a definition of fuzzy sets on images is given, a fuzzy factor is embedded into the object model to form the fuzzy kernel representation. Second, fuzzy membership functions are generated by a center-surround approach and the log likelihood ratio of feature distributions. Third, details of the fuzzy kernel tracking algorithm are provided. After that, methods of parameter selection and performance evaluation for the tracking algorithm are proposed. Finally, extensive experimental results show that our method can reduce the influence of an incomplete representation of the object model by integrating both color features and background features.
Fractal Weyl law for Linux Kernel architecture
Ermann, L.; Chepelianskii, A. D.; Shepelyansky, D. L.
2011-01-01
We study the properties of spectrum and eigenstates of the Google matrix of a directed network formed by the procedure calls in the Linux Kernel. Our results obtained for various versions of the Linux Kernel show that the spectrum is characterized by the fractal Weyl law established recently for systems of quantum chaotic scattering and the Perron-Frobenius operators of dynamical maps. The fractal Weyl exponent is found to be ν ≈ 0.65 that corresponds to the fractal dimension of the network d ≈ 1.3. An independent computation of the fractal dimension by the cluster growing method, generalized for directed networks, gives a close value d ≈ 1.4. The eigenmodes of the Google matrix of Linux Kernel are localized on certain principal nodes. We argue that the fractal Weyl law should be generic for directed networks with the fractal dimension d < 2.
Optoacoustic inversion via Volterra kernel reconstruction
Melchert, O; Roth, B
2016-01-01
In this letter we address the numeric inversion of optoacoustic signals to initial stress profiles. To this end, we scrutinize the optoacoustic kernel reconstruction problem in the paraxial approximation of the underlying wave equation. We apply a Fourier-series expansion of the optoacoustic Volterra kernel and obtain the respective expansion coefficients for a given "apparative" setup by performing a gauge procedure using synthetic input data. The resulting effective kernel is subsequently used to solve the optoacoustic source reconstruction problem for general signals. We verify the validity of the proposed inversion protocol for synthetic signals and explore the feasibility of our approach to also account for the diffraction transformation of signals beyond the paraxial approximation.
Tile-Compressed FITS Kernel for IRAF
Seaman, R.
2011-07-01
The Flexible Image Transport System (FITS) is a ubiquitously supported standard of the astronomical community. Similarly, the Image Reduction and Analysis Facility (IRAF), developed by the National Optical Astronomy Observatory, is a widely used astronomical data reduction package. IRAF supplies compatibility with FITS format data through numerous tools and interfaces. The most integrated of these is IRAF's FITS image kernel that provides access to FITS from any IRAF task that uses the basic IMIO interface. The original FITS kernel is a complex interface of purpose-built procedures that presents growing maintenance issues and lacks recent FITS innovations. A new FITS kernel is being developed at NOAO that is layered on the CFITSIO library from the NASA Goddard Space Flight Center. The simplified interface will minimize maintenance headaches as well as add important new features such as support for the FITS tile-compressed (fpack) format.
THE NUMERICAL SOLUTION FOR A PARTIAL INTEGRO-DIFFERENTIAL EQUATION WITH A WEAKLY SINGULAR KERNEL
Institute of Scientific and Technical Information of China (English)
Anonymous
2006-01-01
In this paper, a first-order semi-discrete method for a partial integro-differential equation with a weakly singular kernel is considered. We apply the Galerkin spectral method in one direction and the inversion technique for the Laplace transform in the other; numerical experiments confirm the accuracy of this method.
An hp-adaptive strategy for the solution of the exact kernel curved wire Pocklington equation
Lahaye, D.; Hemker, P.W.
2007-01-01
In this paper we introduce an adaptive method for the numerical solution of the Pocklington integro-differential equation with exact kernel for the current induced in a smoothly curved thin wire antenna. The hp-adaptive technique is based on the representation of the discrete solution, which is expa
Full Waveform Inversion Using Waveform Sensitivity Kernels
Schumacher, Florian; Friederich, Wolfgang
2013-04-01
We present a full waveform inversion concept for applications ranging from seismological to engineering contexts, in which the steps of forward simulation, computation of sensitivity kernels, and the actual inversion are kept separate from each other. We derive waveform sensitivity kernels from Born scattering theory, which for unit material perturbations are identical to the Born integrand for the considered path between source and receiver. The evaluation of such a kernel requires the calculation of Green functions and their strains for single forces at the receiver position, as well as displacement fields and strains originating at the seismic source. We compute these quantities in the frequency domain using the 3D spectral element code SPECFEM3D (Tromp, Komatitsch and Liu, 2008) and the 1D semi-analytical code GEMINI (Friederich and Dalkolmo, 1995) in both Cartesian and spherical frameworks. We developed and implemented the modularized software package ASKI (Analysis of Sensitivity and Kernel Inversion) to compute waveform sensitivity kernels from wavefields generated by any of the above methods (support for more methods is planned), and some examples will be shown. As the kernels can be computed independently of any data values, this approach allows a sensitivity and resolution analysis to be done first, without inverting any data. In the context of active seismic experiments, this property may be used to investigate optimal acquisition geometry and expectable resolution before actually collecting any data, assuming the background model is known sufficiently well. The actual inversion step can then be repeated at relatively low cost with different (sub)sets of data, adding different smoothing conditions. Using the sensitivity kernels, we expect the waveform inversion to have better convergence properties compared with strategies that use gradients of a misfit function. Also the propagation of the forward wavefield and the backward propagation from the receiver
Inverse of the String Theory KLT Kernel
Mizera, Sebastian
2016-01-01
The field theory Kawai-Lewellen-Tye (KLT) kernel, which relates scattering amplitudes of gravitons and gluons, turns out to be the inverse of a matrix whose components are bi-adjoint scalar partial amplitudes. In this note we propose an analogous construction for the string theory KLT kernel. We present simple diagrammatic rules for the computation of the $\alpha'$-corrected bi-adjoint scalar amplitudes that are exact in $\alpha'$. We find compact expressions in terms of graphs, where the standard Feynman propagators $1/p^2$ are replaced by either $1/\sin (\pi \alpha' p^2)$ or $1/\tan (\pi \alpha' p^2)$, which is determined by a recursive procedure.
Volatile compound formation during argan kernel roasting.
El Monfalouti, Hanae; Charrouf, Zoubida; Giordano, Manuela; Guillaume, Dominique; Kartah, Badreddine; Harhar, Hicham; Gharby, Saïd; Denhez, Clément; Zeppa, Giuseppe
2013-01-01
Virgin edible argan oil is prepared by cold-pressing argan kernels previously roasted at 110 degrees C for up to 25 minutes. The concentration of 40 volatile compounds in virgin edible argan oil was determined as a function of argan kernel roasting time. Most of the volatile compounds begin to be formed after 15 to 25 minutes of roasting. This suggests that a strictly controlled roasting time should allow the modulation of argan oil taste and thus satisfy different types of consumers. This could be of major importance considering the present booming use of edible argan oil.
Face Recognition Using Kernel Discriminant Analysis
Institute of Scientific and Technical Information of China (English)
Anonymous
2002-01-01
Linear Discriminant Analysis (LDA) has demonstrated its success in face recognition. However, LDA has difficulty handling highly nonlinear problems, such as large changes of viewpoint and illumination in face recognition. To overcome these problems, we investigate Kernel Discriminant Analysis (KDA) for face recognition. This approach adopts kernel functions to replace the dot products of the nonlinear mapping in the high-dimensional feature space, so the nonlinear problem can be solved conveniently in the input space without an explicit mapping. Two face databases are used to test the KDA approach. The results show that our approach outperforms the conventional PCA (Eigenface) and LDA (Fisherface) approaches.
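The kernel-trick substitution described here can be sketched with a two-class kernel Fisher discriminant, a close relative of KDA. This is a hedged toy, not the paper's multi-class face-recognition method: the RBF kernel, the ridge term `eps` and the midpoint threshold are all assumptions.

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    d2 = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def kfd_fit(X, y, gamma=1.0, eps=1e-3):
    """Two-class kernel Fisher discriminant: all dot products are replaced
    by kernel evaluations, so the discriminant is found without an
    explicit nonlinear mapping."""
    K = rbf(X, X, gamma)
    N = np.zeros_like(K)          # within-class scatter in kernel form
    M = []                        # per-class mean kernel vectors
    for c in (0, 1):
        Kc = K[:, y == c]
        nc = Kc.shape[1]
        M.append(Kc.mean(1))
        N += Kc @ (np.eye(nc) - np.ones((nc, nc)) / nc) @ Kc.T
    alpha = np.linalg.solve(N + eps * np.eye(len(K)), M[1] - M[0])
    b = -0.5 * (alpha @ M[0] + alpha @ M[1])   # midpoint of projected means
    return alpha, b

def kfd_predict(X_train, alpha, b, X_test, gamma=1.0):
    return (rbf(X_test, X_train, gamma) @ alpha + b > 0).astype(int)
```

On linearly inseparable data (e.g. concentric rings) this kernelized discriminant succeeds where plain LDA fails, which is exactly the motivation given in the abstract.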
Implementation of large kernel 2-D convolution in limited FPGA resource
Zhong, Sheng; Li, Yang; Yan, Luxin; Zhang, Tianxu; Cao, Zhiguo
2007-12-01
2-D convolution is a simple mathematical operation which is fundamental to many common image processing operators. Using an FPGA to implement the convolver can greatly reduce the DSP's heavy burden in signal processing, but with limited resources an FPGA can only implement a convolver with a small 2-D kernel. In this paper, a FIFO-type line delayer is presented to serve as the data buffer for convolution, reducing data fetching operations. A finite state machine is applied to control the reuse of the multiplier and adder arrays. With these two techniques, a resource-limited FPGA can be used to implement the larger-kernel convolvers commonly used in image processing systems.
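The line-delayer dataflow can be modelled in software to show why it removes redundant data fetches: each pixel is read from memory exactly once, and the two row FIFOs supply the other window rows. This Python model of a 3×3 convolver is an illustrative sketch only; a real design would be written in an HDL, and the valid-window handling and names are assumptions.

```python
from collections import deque
import numpy as np

def stream_convolve3x3(image, kernel):
    """Software model of a streaming 3x3 convolver: pixels arrive in raster
    order, two FIFO line buffers hold the previous two rows, and a 3x3
    window register shifts once per pixel (one 'clock')."""
    h, w = image.shape
    line1 = deque([0.0] * w)      # FIFO holding row y-1
    line2 = deque([0.0] * w)      # FIFO holding row y-2
    window = np.zeros((3, 3))
    out = np.zeros((h, w))
    for y in range(h):
        for x in range(w):
            px = image[y, x]
            # shift the window left; new right column comes from the FIFOs
            window[:, :2] = window[:, 1:]
            window[0, 2] = line2[0]
            window[1, 2] = line1[0]
            window[2, 2] = px
            # advance the FIFOs: row y-1's oldest sample moves to row y-2
            line2.append(line1.popleft())
            line2.popleft()
            line1.append(px)
            if y >= 2 and x >= 2:   # window is valid after 2 rows + 2 cols
                out[y - 1, x - 1] = (window * kernel).sum()
    return out
```

Borders are left at zero in this sketch; only the interior receives full-window results.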
Anatomically informed convolution kernels for the projection of fMRI data on the cortical surface.
Operto, Grégory; Bulot, Rémy; Anton, Jean-Luc; Coulon, Olivier
2006-01-01
We present here a method that aims at producing representations of functional brain data on the cortical surface from functional MRI volumes. Such representations are required for subsequent cortical-based functional analysis. We propose a projection technique based on the definition, around each node of the grey/white matter interface mesh, of convolution kernels whose shape and distribution rely on the geometry of the local anatomy. For one anatomy, a set of convolution kernels is computed that can be used to project any functional data registered with this anatomy. The method is presented together with experiments on synthetic data and real statistical t-maps.
Wang, Jim Jing-Yan
2014-09-20
Nonnegative matrix factorization (NMF), a popular part-based representation technique, does not capture the intrinsic local geometric structure of the data space. Graph regularized NMF (GNMF) was recently proposed to avoid this limitation by regularizing NMF with a nearest neighbor graph constructed from the input data set. However, GNMF has two main bottlenecks. First, using the original feature space directly to construct the graph is not necessarily optimal because of the noisy and irrelevant features and nonlinear distributions of data samples. Second, one possible way to handle the nonlinear distribution of data samples is by kernel embedding. However, it is often difficult to choose the most suitable kernel. To solve these bottlenecks, we propose two novel graph-regularized NMF methods, AGNMFFS and AGNMFMK, by introducing feature selection and multiple-kernel learning to the graph regularized NMF, respectively. Instead of using a fixed graph as in GNMF, the two proposed methods learn the nearest neighbor graph that is adaptive to the selected features and learned multiple kernels, respectively. For each method, we propose a unified objective function to conduct feature selection/multi-kernel learning, NMF and adaptive graph regularization simultaneously. We further develop two iterative algorithms to solve the two optimization problems. Experimental results on two challenging pattern classification tasks demonstrate that the proposed methods significantly outperform state-of-the-art data representation methods.
3D MR image denoising using rough set and kernel PCA method.
Phophalia, Ashish; Mitra, Suman K
2017-02-01
In this paper, we present a two-stage method, using kernel principal component analysis (KPCA) and rough set theory (RST), for denoising volumetric MRI data. An RST-based clustering technique is used for voxel-based processing. The method groups similar voxels (3D cubes) using class and edge information derived from the noisy input. Each cluster thus formed is represented via a basis vector. These vectors are then projected into kernel space and PCA is performed in the feature space. This work is motivated by the idea that under Rician noise MRI data may be non-linear, and kernel mapping helps to define a linear separator between these clusters/basis vectors, which is then used for image denoising. We further investigate various kernels for Rician noise at different noise levels. The best kernel is then selected on the basis of performance over PSNR and structural similarity (SSIM) measures. The work is compared with state-of-the-art methods under various measures for synthetic and real databases.
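The KPCA stage can be sketched in isolation. This numpy toy omits the RST clustering, the basis-vector construction, the Rician noise model and the final denoising step, and simply shows the centred-Gram eigendecomposition at the heart of kernel PCA; the RBF kernel and the blob data are assumptions.

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    d2 = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def kernel_pca(X, gamma=1.0, n_comp=2):
    """Kernel PCA: PCA performed in feature space via the eigenvectors of
    the double-centred Gram matrix (no explicit nonlinear mapping)."""
    n = len(X)
    K = rbf(X, X, gamma)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                                   # centre in feature space
    w, V = np.linalg.eigh(Kc)
    order = np.argsort(w)[::-1][:n_comp]             # leading components
    A = V[:, order] / np.sqrt(np.maximum(w[order], 1e-12))
    return Kc @ A        # feature-space coordinates of the inputs
```

On two well-separated groups of vectors the first kernel principal component already separates them linearly in feature space, which is the property the denoising pipeline relies on.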
Rojas-Lima, J. E.; Domínguez-Pacheco, A.; Hernández-Aguilar, C.; Cruz-Orea, A.
2016-09-01
Considering the necessity of alternative photothermal approaches for characterizing nonhomogeneous materials like maize seeds, the objective of this research work was to analyze statistically the amplitude variations of photopyroelectric signals, by means of nonparametric techniques such as the histogram and the kernel density estimator, and to obtain the probability density function of the amplitude variations of two genotypes of maize seeds with different pigmentations and structural components: crystalline and floury. To determine whether the probability density function had a known parametric form, the histogram was first computed; it did not present a known parametric form, so the kernel density estimator with a Gaussian kernel, with an efficiency of 95 % in density estimation, was used to obtain the probability density function. The results indicated that maize seeds could be differentiated in terms of the statistical values for floury and crystalline seeds such as the mean (93.11, 159.21), variance (1.64 × 10³, 1.48 × 10³), and standard deviation (40.54, 38.47) obtained from the amplitude variations of photopyroelectric signals in the case of the histogram approach. For the case of the kernel density estimator, seeds can be differentiated in terms of the kernel bandwidth or smoothing constant h of 9.85 and 6.09 for floury and crystalline seeds, respectively.
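The Gaussian kernel density estimator used here has a simple closed form. A minimal numpy sketch follows; note the bandwidths quoted in the abstract (9.85 and 6.09) come from the authors' own procedure, while Silverman's rule below is just one common assumption for choosing the smoothing constant h.

```python
import numpy as np

def gaussian_kde(samples, x_grid, h):
    """1-D Gaussian kernel density estimate with bandwidth (smoothing
    constant) h: an average of Gaussian bumps centred on the samples."""
    u = (x_grid[:, None] - samples[None, :]) / h
    return np.exp(-0.5 * u**2).sum(1) / (len(samples) * h * np.sqrt(2 * np.pi))

def silverman_bandwidth(samples):
    """Silverman's rule of thumb for the Gaussian kernel (an assumption,
    not necessarily the rule used in the paper)."""
    return 1.06 * samples.std(ddof=1) * len(samples) ** (-1 / 5)
```

The estimate is nonnegative everywhere and integrates to one, unlike a raw histogram it is smooth, which is why it is preferred when no parametric form fits.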
Kernel methods in orthogonalization of multi- and hypervariate data
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg
2009-01-01
A kernel version of maximum autocorrelation factor (MAF) analysis is described very briefly and applied to change detection in remotely sensed hyperspectral image (HyMap) data. The kernel version is based on a dual formulation, also termed Q-mode analysis, in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products are replaced by inner products between nonlinear mappings of the original data into a higher-dimensional feature space. Via kernel substitution, also known as the kernel trick, these inner products between the mappings are in turn replaced by a kernel function, and all quantities needed in the analysis are expressed in terms of this kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel PCA and MAF analysis handle nonlinearities by implicitly transforming data into high (even infinite…
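The dual (Q-mode) formulation with kernel substitution can be illustrated with kernel PCA, the simpler relative of kernel MAF mentioned above. The RBF kernel, its width, and the random data below are assumptions for the sketch; this is not the paper's implementation.

```python
import numpy as np

def kernel_pca(X, n_components, gamma=1.0):
    """Kernel PCA via the kernel trick: work with the centred Gram matrix
    K_ij = k(x_i, x_j) instead of explicit nonlinear feature mappings."""
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))  # RBF Gram matrix
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one   # centre in feature space
    w, V = np.linalg.eigh(Kc)                    # ascending eigenvalues
    idx = np.argsort(w)[::-1][:n_components]     # keep the leading components
    alphas = V[:, idx] / np.sqrt(np.maximum(w[idx], 1e-12))
    return Kc @ alphas                           # scores of the training samples

# Illustrative data (assumed): project 50 random 3-D points onto 2 kernel PCs
rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))
Z = kernel_pca(X, n_components=2, gamma=0.5)
```

Everything is computed from the Gram matrix alone, which is the point of the kernel trick: the nonlinear mapping never appears explicitly.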
Variable kernel density estimation in high-dimensional feature spaces
CSIR Research Space (South Africa)
Van der Walt, Christiaan M
2017-02-01
Full Text Available Estimating the joint probability density function of a dataset is a central task in many machine learning applications. In this work we address the fundamental problem of kernel bandwidth estimation for variable kernel density estimation in high...
HEAT KERNEL AND HARDY'S THEOREM FOR JACOBI TRANSFORM
Institute of Scientific and Technical Information of China (English)
T. KAWAZOE; LIU JIANMING(刘建明)
2003-01-01
In this paper, the authors obtain sharp upper and lower bounds for the heat kernel associated with the Jacobi transform, and get some analogues of Hardy's theorem for the Jacobi transform by using the sharp estimate of the heat kernel.
Gustin, Jeffery L; Jackson, Sean; Williams, Chekeria; Patel, Anokhee; Armstrong, Paul; Peter, Gary F; Settles, A Mark
2013-11-20
Maize kernel density affects the milling quality of the grain. Kernel density of bulk samples can be predicted by near-infrared reflectance (NIR) spectroscopy, but no accurate method to measure individual kernel density has been reported. This study demonstrates that individual kernel density and volume are accurately measured using X-ray microcomputed tomography (μCT). Kernel density was significantly correlated with kernel volume, air space within the kernel, and protein content. Embryo density and volume did not influence overall kernel density. Partial least-squares (PLS) regression of μCT traits with single-kernel NIR spectra gave stable predictive models for kernel density (R² = 0.78, SEP = 0.034 g/cm³) and volume (R² = 0.86, SEP = 2.88 cm³). Density and volume predictions were accurate for data collected over 10 months, based on kernel weights calculated from predicted density and volume (R² = 0.83, SEP = 24.78 mg). Kernel density was significantly correlated with bulk test weight (r = 0.80), suggesting that selection of dense kernels can translate to improved agronomic performance.
Mitigation of Noise in OFDM Based Plc System Using Filter Kernel Design
Directory of Open Access Journals (Sweden)
Nisha G Krishnan
2015-09-01
Full Text Available Power line communication (PLC) is a technology that turns the power line into a pathway for the conveyance of broadband data. It costs less than other communication approaches, and for better bandwidth efficiency an OFDM-based PLC system is used. In a real PLC environment, electrical appliances produce noise. To mitigate this noise a filter kernel design is used, so that periodic impulsive noise and Gaussian noise are removed from the PLC communication system. MATLAB is used for the simulation, and the results show that the filter kernel is a simple and effective noise mitigation technique. In future work, interference due to obstacles should also be mitigated for better data transmission without noise.
Formalisation of a Separation Micro-Kernel for Common Criteria Certification
Butterfield, Andrew; Sanan, David; Hinchey, Mike
2014-08-01
The project Methods and Tools for On-Board Software Engineering (MTOBSE) was a feasibility study into the ability to certify a time-space partitioning kernel aiming at Common Criteria (CC) evaluation assurance level 5+, in conformance with the Separation Kernel Protection Profile (SKPP) [1]. Here we describe the aspects of CC evaluation that involve using formal methods techniques as part of the assurance case. We describe a reference specification we wrote for a Time-Space Partitioning (TSP) operating system kernel, and how we formalised this using the Isabelle/HOL theorem proving framework. We also describe how we obtained a formal Isabelle/HOL model from C code (using XtratuM as a test case), and how this would be related to the formalised specification. We conclude with a discussion of the feasibility and likely cost of such a verification effort, and ideas for the follow-on steps for this activity.
Kernel Partial Least Squares for Nonlinear Regression and Discrimination
Rosipal, Roman; Clancy, Daniel (Technical Monitor)
2002-01-01
This paper summarizes recent results on applying the method of partial least squares (PLS) in a reproducing kernel Hilbert space (RKHS). A previously proposed kernel PLS regression model was proven to be competitive with other regularized regression methods in RKHS. The family of nonlinear kernel-based PLS models is extended by considering the kernel PLS method for discrimination. Theoretical and experimental results on a two-class discrimination problem indicate usefulness of the method.
Mitigation of artifacts in RTM with migration kernel decomposition
Zhan, Ge
2012-01-01
The migration kernel for reverse-time migration (RTM) can be decomposed into four component kernels using Born scattering and migration theory. Each component kernel has a unique physical interpretation. In this paper, we present a generalized diffraction-stack migration approach for reducing RTM artifacts via decomposition of the migration kernel. The decomposition leads to an improved understanding of migration artifacts and therefore presents us with opportunities for improving the quality of RTM images.
Sparse Event Modeling with Hierarchical Bayesian Kernel Methods
2016-01-05
...the kernel function, which depends on the application and the model user. This research uses the most popular kernel function, the radial basis... an important role in the nation's economy. Unfortunately, the system's reliability is declining due to the aging components of the network [Grier... kernel function. Gaussian Bayesian kernel models became very popular recently and were extended and applied to a number of classification problems.
Malas, Tareq M; Brown, Jed; Gunnels, John A; Keyes, David E
2012-01-01
Several emerging petascale architectures use energy-efficient processors with vectorized computational units and in-order thread processing. On these architectures the sustained performance of streaming numerical kernels, ubiquitous in the solution of partial differential equations, represents a challenge despite the regularity of memory access. Sophisticated optimization techniques are required to fully utilize the Central Processing Unit (CPU). We propose a new method for constructing streaming numerical kernels using a high-level assembly synthesis and optimization framework. We describe an implementation of this method in Python targeting the IBM Blue Gene/P supercomputer's PowerPC 450 core. This paper details the high-level design, construction, simulation, verification, and analysis of these kernels utilizing a subset of the CPU's instruction set. We demonstrate the effectiveness of our approach by implementing several three-dimensional stencil kernels over a variety of cached memory scenarios and analy...
An Extended Ockham Algebra with Endomorphism Kernel Property
Institute of Scientific and Technical Information of China (English)
Jie FANG
2007-01-01
An algebraic structure L is said to have the endomorphism kernel property if every congruence on L, other than the universal congruence, is the kernel of an endomorphism on L. In this paper, we consider the EKP (that is, the endomorphism kernel property) for an extended Ockham algebra L. In particular, we describe the structure of the finite symmetric extended de Morgan algebras having EKP.
End-use quality of soft kernel durum wheat
Kernel texture is a major determinant of end-use quality of wheat. Durum wheat has very hard kernels. We developed soft kernel durum wheat via Ph1b-mediated homoeologous recombination. The Hardness locus was transferred from Chinese Spring to Svevo durum wheat via back-crossing. ‘Soft Svevo’ had SKC...
7 CFR 981.61 - Redetermination of kernel weight.
2010-01-01
7 CFR 981.61 (Almonds Grown in California; Order Regulating Handling; Volume Regulation): Redetermination of kernel weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of...
Multiple spectral kernel learning and a gaussian complexity computation.
Reyhani, Nima
2013-07-01
Multiple kernel learning (MKL) partially solves the kernel selection problem in support vector machines and similar classifiers by minimizing the empirical risk over a subset of the linear combinations of given kernel matrices. For large sample sets, the size of the kernel matrices becomes a numerical issue. In many cases, the kernel matrix has low effective rank. However, the low-rank property is not efficiently utilized in MKL algorithms. Here, we suggest multiple spectral kernel learning, which efficiently uses the low-rank property by finding a kernel matrix from a set of Gram matrices of a few eigenvectors from all given kernel matrices, called a spectral kernel set. We provide a new bound for the Gaussian complexity of the proposed kernel set, which depends on both the geometry of the kernel set and the number of Gram matrices. This characterization of the complexity implies that in an MKL setting, adding more kernels may not monotonically increase the complexity, while previous bounds show otherwise.
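The construction of a spectral kernel set can be sketched as follows: for each given Gram matrix, keep a few top eigenvectors v and emit the rank-one Gram matrices v vᵀ. The input matrices and the choice of rank below are illustrative assumptions, not the paper's code.

```python
import numpy as np

def spectral_kernel_set(gram_matrices, r):
    """For each Gram matrix, keep the top-r eigenvectors v and
    emit the rank-one Gram matrices v v^T (a 'spectral kernel set')."""
    out = []
    for K in gram_matrices:
        w, V = np.linalg.eigh(K)                 # ascending eigenvalues
        for j in np.argsort(w)[::-1][:r]:        # indices of the top-r eigenvalues
            v = V[:, j:j + 1]
            out.append(v @ v.T)                  # rank-one PSD kernel matrix
    return out

# Two illustrative PSD Gram matrices built from random features
rng = np.random.default_rng(0)
A, B = rng.normal(size=(20, 5)), rng.normal(size=(20, 8))
spectral_set = spectral_kernel_set([A @ A.T, B @ B.T], r=2)
```

An MKL solver can then search over combinations of these rank-one matrices instead of the full-rank originals, which is where the computational saving comes from.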
A Fast and Simple Graph Kernel for RDF
de Vries, G.K.D.; de Rooij, S.
2013-01-01
In this paper we study a graph kernel for RDF based on constructing a tree for each instance and counting the number of paths in that tree. In our experiments this kernel shows comparable classification performance to the previously introduced intersection subtree kernel, but is significantly faster
7 CFR 981.60 - Determination of kernel weight.
2010-01-01
7 CFR 981.60 (Almonds Grown in California; Order Regulating Handling; Volume Regulation): Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which...
21 CFR 176.350 - Tamarind seed kernel powder.
2010-04-01
21 CFR 176.350 (Substances for Use Only as Components of Paper and Paperboard): Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in...
Heat kernel analysis for Bessel operators on symmetric cones
DEFF Research Database (Denmark)
Möllers, Jan
2014-01-01
The heat kernel is explicitly given in terms of a multivariable $I$-Bessel function on $Ω$. Its corresponding heat kernel transform defines a continuous linear operator between $L^p$-spaces. The unitary image of the $L^2$-space under the heat kernel transform is characterized as a weighted Bergman space...
Stable Kernel Representations as Nonlinear Left Coprime Factorizations
Paice, A.D.B.; Schaft, A.J. van der
1994-01-01
A representation of nonlinear systems based on the idea of representing the input-output pairs of the system as elements of the kernel of a stable operator has been recently introduced. This has been denoted the kernel representation of the system. In this paper it is demonstrated that the kernel
Kernel Temporal Differences for Neural Decoding
Directory of Open Access Journals (Sweden)
Jihye Bae
2015-01-01
Full Text Available We study the feasibility and capability of the kernel temporal difference (KTD(λ)) algorithm for neural decoding. KTD(λ) is an online, kernel-based learning algorithm, which has been introduced to estimate value functions in reinforcement learning. This algorithm combines kernel-based representations with the temporal difference approach to learning. One of our key observations is that by using strictly positive definite kernels, the algorithm's convergence can be guaranteed for policy evaluation. The algorithm's nonlinear functional approximation capabilities are shown in both simulations of policy evaluation and neural decoding problems (policy improvement). KTD can handle high-dimensional neural states containing spatio-temporal information at a reasonable computational complexity, allowing real-time applications. When the algorithm seeks a proper mapping between a monkey's neural states and desired positions of a computer cursor or a robot arm, in both open-loop and closed-loop experiments, it can effectively learn the neural-state-to-action mapping. Finally, a visualization of the coadaptation process between the decoder and the subject shows the algorithm's capabilities in reinforcement learning brain-machine interfaces.
Bergman kernel and complex singularity exponent
Institute of Scientific and Technical Information of China (English)
LEE; HanJin
2009-01-01
We give a precise estimate of the Bergman kernel for the model domain defined by Ω_F = {(z,w) ∈ C^(n+1) : Im w − |F(z)|² > 0}, where F = (f_1,...,f_m) is a holomorphic map from C^n to C^m, in terms of the complex singularity exponent of F.
Kernel based subspace projection of hyperspectral images
DEFF Research Database (Denmark)
Larsen, Rasmus; Nielsen, Allan Aasbjerg; Arngren, Morten
In hyperspectral image analysis an exploratory approach to analyse the image data is to conduct subspace projections. As linear projections often fail to capture the underlying structure of the data, we present kernel based subspace projections of PCA and Maximum Autocorrelation Factors (MAF...
Analytic properties of the Virasoro modular kernel
Nemkov, Nikita
2016-01-01
On the space of generic conformal blocks the modular transformation of the underlying surface is realized as a linear integral transformation. We show that the analytic properties of conformal block implied by Zamolodchikov's formula are shared by the kernel of the modular transformation and illustrate this by explicit computation in the case of the one-point toric conformal block.
A Cubic Kernel for Feedback Vertex Set
Bodlaender, H.L.
2006-01-01
The FEEDBACK VERTEX SET problem on unweighted, undirected graphs is considered. Improving upon a result by Burrage et al. [7], we show that this problem has a kernel with O(κ³) vertices, i.e., there is a polynomial-time algorithm that, given a graph G and an integer κ, finds a graph G' and integer
Analytic properties of the Virasoro modular kernel
Energy Technology Data Exchange (ETDEWEB)
Nemkov, Nikita [Moscow Institute of Physics and Technology (MIPT), Dolgoprudny (Russian Federation); Institute for Theoretical and Experimental Physics (ITEP), Moscow (Russian Federation); National University of Science and Technology MISIS, The Laboratory of Superconducting metamaterials, Moscow (Russian Federation)
2017-06-15
On the space of generic conformal blocks the modular transformation of the underlying surface is realized as a linear integral transformation. We show that the analytic properties of conformal block implied by Zamolodchikov's formula are shared by the kernel of the modular transformation and illustrate this by explicit computation in the case of the one-point toric conformal block. (orig.)
Hyperbolic L2-modules with Reproducing Kernels
Institute of Scientific and Technical Information of China (English)
David EELBODE; Frank SOMMEN
2006-01-01
Abstract In this paper, the Dirac operator on the Klein model for the hyperbolic space is considered. A function space containing L²-functions on the sphere S^(m−1) in R^m, which are boundary values of solutions for this operator, is defined, and it is proved that this gives rise to a Hilbert module with a reproducing kernel.
Protein Structure Prediction Using String Kernels
2006-03-03
...consists of 4352 sequences from SCOP version 1.53 extracted from the Astral database, grouped into families and superfamilies. The dataset is processed...
Bergman kernel and complex singularity exponent
Institute of Scientific and Technical Information of China (English)
CHEN BoYong; LEE HanJin
2009-01-01
We give a precise estimate of the Bergman kernel for the model domain defined by Ω_F = {(z,w) ∈ C^(n+1) : Im w − |F(z)|² > 0}, where F = (f_1,...,f_m) is a holomorphic map from C^n to C^m, in terms of the complex singularity exponent of F.
Symbol recognition with kernel density matching.
Zhang, Wan; Wenyin, Liu; Zhang, Kun
2006-12-01
We propose a novel approach to similarity assessment for graphic symbols. Symbols are represented as 2D kernel densities and their similarity is measured by the Kullback-Leibler divergence. Symbol orientation is found by gradient-based angle searching or independent component analysis. Experimental results show the outstanding performance of this approach in various situations.
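The similarity measure described above can be sketched by representing each symbol's points as a discretised 2D kernel density and comparing densities with the Kullback-Leibler divergence. The grid, bandwidth, and point sets below are assumptions for illustration; this is not the authors' implementation.

```python
import numpy as np

def density_2d(points, grid, h):
    """2D Gaussian kernel density evaluated on a flat list of grid points."""
    d2 = ((grid[:, None, :] - points[None, :, :]) ** 2).sum(-1)  # squared distances
    return np.exp(-0.5 * d2 / h ** 2).sum(axis=1)

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discretised densities (normalised first)."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# Two hypothetical symbols as small 2D point sets on the unit square
xs, ys = np.meshgrid(np.linspace(0, 1, 16), np.linspace(0, 1, 16))
grid = np.stack([xs.ravel(), ys.ravel()], axis=1)
sym_a = np.array([[0.3, 0.3], [0.7, 0.7]])
sym_b = np.array([[0.2, 0.8], [0.8, 0.2]])
p_a = density_2d(sym_a, grid, h=0.1)
p_b = density_2d(sym_b, grid, h=0.1)
```

A small divergence indicates similar symbols; note that KL is asymmetric, so a symmetrised variant may be preferred in practice.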
Developing Linux kernel space device driver
Institute of Scientific and Technical Information of China (English)
Zheng Wei; Wang Qinruo; Wu Naiyou
2003-01-01
This thesis introduces in detail how to develop kernel-level device drivers on the Linux platform. On the basis of a comparison of the proc file system with the dev file system, we choose PCI devices and USB devices as instances to introduce the method of writing device drivers for character devices using these two file systems.
Heat Kernel Renormalization on Manifolds with Boundary
Albert, Benjamin I.
2016-01-01
In the monograph Renormalization and Effective Field Theory, Costello gave an inductive position space renormalization procedure for constructing an effective field theory that is based on heat kernel regularization of the propagator. In this paper, we extend Costello's renormalization procedure to a class of manifolds with boundary. In addition, we reorganize the presentation of the preexisting material, filling in details and strengthening the results.
Convolution kernels for multi-wavelength imaging
Boucaud, A.; Bocchio, M.; Abergel, A.; Orieux, F.; Dole, H.; Hadj-Youcef, M. A.
2016-12-01
Astrophysical images issued from different instruments and/or spectral bands often need to be processed together, either for fitting or comparison purposes. However, each image is affected by an instrumental response, also known as the point-spread function (PSF), that depends on the characteristics of the instrument as well as the wavelength and the observing strategy. Given the knowledge of the PSF in each band, a straightforward way of processing images is to homogenise them all to a target PSF using convolution kernels, so that they appear as if they had been acquired by the same instrument. We propose an algorithm that generates such PSF-matching kernels, based on Wiener filtering with a tunable regularisation parameter. This method ensures that all anisotropic features in the PSFs are taken into account. We compare our method to existing procedures using measured Herschel/PACS and SPIRE PSFs and simulated JWST/MIRI PSFs. Significant gains of up to two orders of magnitude are obtained with respect to kernels computed assuming Gaussian or circularised PSFs. Software to compute these kernels is available at https://github.com/aboucaud/pypher
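The core of this kind of kernel generation can be sketched as a Wiener filter in Fourier space. The Gaussian PSFs and the regularisation value below are illustrative assumptions, and this minimal sketch is not the released pypher code (which handles pixel scales, rotation, and windowing).

```python
import numpy as np

def psf_matching_kernel(psf_source, psf_target, lam=1e-6):
    """PSF-matching (homogenisation) kernel via Wiener filtering:
    in Fourier space K = T * conj(S) / (|S|^2 + lam), so that
    psf_source convolved with the kernel approximates psf_target."""
    S = np.fft.fft2(np.fft.ifftshift(psf_source))
    T = np.fft.fft2(np.fft.ifftshift(psf_target))
    K = T * np.conj(S) / (np.abs(S) ** 2 + lam)
    return np.real(np.fft.fftshift(np.fft.ifft2(K)))

# Illustrative circular Gaussian PSFs: narrow source, broader target
y, x = np.mgrid[-16:17, -16:17]
gauss = lambda s: np.exp(-(x ** 2 + y ** 2) / (2 * s * s)) / (2 * np.pi * s * s)
kernel = psf_matching_kernel(gauss(1.5), gauss(2.5))
```

The regularisation parameter lam suppresses noise amplification at frequencies where the source PSF has little power, the same trade-off a tunable regularisation parameter controls in the paper.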
Premise Selection for Mathematics by Corpus Analysis and Kernel Methods
Alama, Jesse; Tsivtsivadze, Evgeni; Urban, Josef; Heskes, Tom
2011-01-01
Smart premise selection is essential when using automated reasoning as a tool for large-theory formal verification, formal proof development, and experimental reverse mathematics. A strong method for premise selection in complex mathematical libraries is the application of machine learning to large corpora of proofs. This work develops learning-based premise selection in two ways. First, a newly available minimal dependency analysis of existing high-level formal mathematical proofs is used to build a large knowledge base of proof dependencies, providing new precise data for ATP-based re-verification and for training premise selection algorithms. Second, a new machine learning algorithm for premise selection based on kernel methods is proposed and implemented. To evaluate the impact of both techniques, a new benchmark consisting of 2078 large-theory mathematical problems is constructed, extending the older MPTP Challenge benchmark. The combined effect of the both techniques developed shows 40% improvement on t...
Heat kernel methods for Lifshitz theories arXiv
Barvinsky, Andrei O.; Herrero-Valea, Mario; Nesterov, Dmitry V.; Pérez-Nadal, Guillem; Steinwachs, Christian F.
We study the one-loop covariant effective action of Lifshitz theories using the heat kernel technique. The characteristic feature of Lifshitz theories is an anisotropic scaling between space and time. This is enforced by the existence of a preferred foliation of space-time, which breaks Lorentz invariance. In contrast to the relativistic case, covariant Lifshitz theories are only invariant under diffeomorphisms preserving the foliation structure. We develop a systematic method to reduce the calculation of the effective action for a generic Lifshitz operator to an algorithm acting on known results for relativistic operators. In addition, we present techniques that drastically simplify the calculation for operators with special properties. We demonstrate the efficiency of these methods by explicit applications.
A Kernel Approach to Multi-Task Learning with Task-Specific Kernels
Institute of Scientific and Technical Information of China (English)
Wei Wu; Hang Li; Yun-Hua Hu; Rong Jin
2012-01-01
Several kernel-based methods for multi-task learning have been proposed, which leverage relations among tasks as regularization to enhance the overall learning accuracy. These methods assume that the tasks share the same kernel, which could limit their applications, because in practice different tasks may need different kernels. The main challenge of introducing multiple kernels into multiple tasks is that models from different reproducing kernel Hilbert spaces (RKHSs) are not comparable, making it difficult to exploit relations among tasks. This paper addresses the challenge by formalizing the problem in the square integrable space (SIS). Specifically, it proposes a kernel-based method which makes use of a regularization term defined in SIS to represent task relations. We prove a new representer theorem for the proposed approach in SIS. We further derive a practical method for solving the learning problem and conduct a consistency analysis of the method. We discuss the relationship between our method and an existing method. We also give an SVM (support vector machine)-based implementation of our method for multi-label classification. Experiments on an artificial example and two real-world datasets show that the proposed method performs better than the existing method.
Polarization degenerate micropillars fabricated by designing elliptical oxide apertures
Bakker, Morten P; Zhan, Alan; Coldren, Larry A; van Exter, Martin P; Bouwmeester, Dirk
2014-01-01
A method for fabrication of polarization degenerate oxide-apertured micropillar cavities is demonstrated. Micropillars are etched such that the size and shape of the oxide front is controlled. The polarization splitting in the circular micropillar cavities due to the native and strain-induced birefringence can be compensated by elongating the oxide front in the [110] direction, thereby reducing stress in this direction. By using this technique we fabricate a polarization degenerate cavity with a quality factor of 1.7×10⁴ and a mode volume of 2.7 μm³, enabling a calculated maximum Purcell factor of 11.
Genetics Home Reference: age-related macular degeneration
Gribov ambiguity and degenerate systems
Canfora, Fabrizio; Salgado-Rebolledo, Patricio; Zanelli, Jorge
2014-01-01
The relation between Gribov ambiguity and degeneracies in the symplectic structure of physical systems is analyzed. It is shown that, in finite-dimensional systems, the presence of Gribov ambiguities in regular constrained systems (those where the constraints are functionally independent) always leads to a degenerate symplectic structure upon Dirac reduction. The implications for the Gribov-Zwanziger approach to QCD are discussed.
TORCH Computational Reference Kernels - A Testbed for Computer Science Research
Energy Technology Data Exchange (ETDEWEB)
Kaiser, Alex; Williams, Samuel Webb; Madduri, Kamesh; Ibrahim, Khaled; Bailey, David H.; Demmel, James W.; Strohmaier, Erich
2010-12-02
For decades, computer scientists have sought guidance on how to evolve architectures, languages, and programming models in order to improve application performance, efficiency, and productivity. Unfortunately, without overarching advice about future directions in these areas, individual guidance is inferred from the existing software/hardware ecosystem, and each discipline often conducts its research independently, assuming all other technologies remain fixed. In today's rapidly evolving world of on-chip parallelism, isolated and iterative improvements to performance may miss superior solutions in the same way gradient descent optimization techniques may get stuck in local minima. To combat this, we present TORCH: A Testbed for Optimization ResearCH. These computational reference kernels define the core problems of interest in scientific computing without mandating a specific language, algorithm, programming model, or implementation. To complement the kernel (problem) definitions, we provide a set of algorithmically-expressed verification tests that can be used to verify that a hardware/software co-designed solution produces an acceptable answer. Finally, to provide some illumination as to how researchers have implemented solutions to these problems in the past, we provide a set of reference implementations in C and MATLAB.
Characterization of myocardial motion patterns by unsupervised multiple kernel learning.
Sanchez-Martinez, Sergio; Duchateau, Nicolas; Erdei, Tamas; Fraser, Alan G; Bijnens, Bart H; Piella, Gemma
2017-01-01
We propose an independent objective method to characterize different patterns of functional responses to stress in the heart failure with preserved ejection fraction (HFPEF) syndrome by combining multiple temporally-aligned myocardial velocity traces at rest and during exercise, together with temporal information on the occurrence of cardiac events (valves openings/closures and atrial activation). The method builds upon multiple kernel learning, a machine learning technique that allows the combination of data of different nature and the reduction of their dimensionality towards a meaningful representation (output space). The learning process is kept unsupervised, to study the variability of the input traces without being conditioned by data labels. To enhance the physiological interpretation of the output space, the variability that it encodes is analyzed in the space of input signals after reconstructing the velocity traces via multiscale kernel regression. The methodology was applied to 2D sequences from a stress echocardiography protocol from 55 subjects (22 healthy, 19 HFPEF and 14 breathless subjects). The results confirm that characterization of the myocardial functional response to stress in the HFPEF syndrome may be improved by the joint analysis of multiple relevant features.
TOUCHING GRAIN KERNELS SEPARATION BY GAP-FILLING
Directory of Open Access Journals (Sweden)
Matthieu Faessel
2011-05-01
Full Text Available Separation of touching grain kernels is a recurring problem in image analysis. Morphological methods to separate merged objects in binary images are generally based on the watershed transform applied to the inverse of the distance function. This method is efficient with roughly circular objects, but cannot separate objects beyond a certain elliptic shape, nor when the contact zones are too numerous or too large. This paper presents a gap-filling method applied to the skeleton of the image background as an alternative technique to go further in the fused-object separation process. Open lines resulting from skeletonization are prolonged according to their direction from the corresponding end points. If the distance between two lines is smaller than a certain value, their respective end points are connected. Results of the combined use of watershed- and gap-filling-based methods are presented on sample binary images. An example of its use on a particularly complex image containing rice grains shows that it allows up to 90% of the grains to be segmented, where classical watershed methods segment only 25%. An application to assessing breakage and cracks in parboiled rice kernels is presented.
Automatic performance tuning of parallel and accelerated seismic imaging kernels
Haberdar, Hakan
2014-01-01
With the increased complexity and diversity of mainstream high performance computing systems, significant effort is required to tune parallel applications in order to achieve the best possible performance for each particular platform. This task becomes more and more challenging and requires a larger set of skills. Automatic performance tuning is becoming a must for optimizing applications such as Reverse Time Migration (RTM), widely used in seismic imaging for oil and gas exploration. An empirical-search-based auto-tuning approach is applied to the MPI communication operations of the parallel isotropic and tilted transverse isotropic kernels. The application of auto-tuning using the Abstract Data and Communication Library improved the performance of the MPI communications as well as developer productivity by providing a higher level of abstraction. Keeping productivity in mind, we opted toward pragma-based programming for accelerated computation on the latest accelerated architectures such as GPUs, using the fairly new OpenACC standard. The same auto-tuning approach is also applied to the OpenACC-accelerated seismic code for optimizing the compute-intensive kernel of the Reverse Time Migration application. The application of such techniques resulted in improved performance of the original code and its ability to adapt to different execution environments.
Kernel Based Nonlinear Dimensionality Reduction and Classification for Genomic Microarray
Directory of Open Access Journals (Sweden)
Lan Shu
2008-07-01
Full Text Available Genomic microarrays are powerful research tools in bioinformatics and modern medicinal research because they enable massively parallel assays and simultaneous monitoring of thousands of gene expressions in biological samples. However, a simple microarray experiment often leads to very high-dimensional data and a huge amount of information; the vast amount of data challenges researchers to extract the important features and reduce the high dimensionality. In this paper, a nonlinear dimensionality reduction kernel method based on locally linear embedding (LLE) is proposed, and a fuzzy K-nearest neighbors algorithm which denoises datasets is introduced as a replacement for the classical LLE's KNN algorithm. In addition, a kernel-method-based support vector machine (SVM) is used to classify genomic microarray data sets in this paper. We demonstrate the application of the techniques to two published DNA microarray data sets. The experimental results confirm the superiority and high success rates of the presented method.
Power Prediction in Smart Grids with Evolutionary Local Kernel Regression
Kramer, Oliver; Satzger, Benjamin; Lässig, Jörg
Electric grids are moving from a centralized single supply chain towards a decentralized bidirectional grid of suppliers and consumers in an uncertain and dynamic scenario. Soon, the growing smart meter infrastructure will allow the collection of terabytes of detailed data about the grid condition, e.g., the state of renewable electric energy producers or the power consumption of millions of private customers, in very short time steps. Reliable prediction requires strong and fast regression methods that can cope with these challenges. In this paper we introduce a novel regression technique, evolutionary local kernel regression, a kernel regression variant based on local Nadaraya-Watson estimators with independent bandwidths distributed in data space. The model is regularized with the CMA-ES, a stochastic non-convex optimization method. We experimentally analyze the load forecast behavior on real power consumption data. The proposed method is easily parallelizable and therefore well suited for large-scale scenarios in smart grids.
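The building block of the abstract's method, a Nadaraya-Watson estimator with per-training-point bandwidths, is compact enough to sketch. The toy signal and the uniform bandwidth values (which the paper instead evolves with the CMA-ES) are illustrative assumptions.

```python
# Minimal Nadaraya-Watson kernel regression with locally varying bandwidths.
import numpy as np

def nw_predict(x, X, y, h):
    """Nadaraya-Watson estimate at x with per-training-point bandwidths h."""
    w = np.exp(-0.5 * ((x - X) / h) ** 2) / h   # Gaussian kernels
    return np.sum(w * y) / np.sum(w)

rng = np.random.default_rng(1)
X = np.sort(rng.uniform(0, 10, 200))
y = np.sin(X) + 0.1 * rng.normal(size=200)      # noisy observations
h = np.full(200, 0.5)  # in the paper these bandwidths are evolved by CMA-ES

pred = np.array([nw_predict(x, X, y, h) for x in X])
rmse = np.sqrt(np.mean((pred - np.sin(X)) ** 2))
print(rmse)  # residual against the noise-free signal stays small
```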
Kernel based orthogonalization for change detection in hyperspectral images
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg
Kernel versions of principal component analysis (PCA) and minimum noise fraction (MNF) analysis are applied to change detection in hyperspectral image (HyMap) data. The kernel versions are based on so-called Q-mode analysis in which the data enter into the analysis via inner products in the Gram...... the kernel function and then performing a linear analysis in that space. An example shows the successful application of (kernel PCA and) kernel MNF analysis to change detection in HyMap data covering a small agricultural area near Lake Waging-Taching, Bavaria, in Southern Germany. In the change detection...
Geodesic exponential kernels: When Curvature and Linearity Conflict
DEFF Research Database (Denmark)
Feragen, Aase; Lauze, François; Hauberg, Søren
2015-01-01
We consider kernel methods on general geodesic metric spaces and provide both negative and positive results. First we show that the common Gaussian kernel can only be generalized to a positive definite kernel on a geodesic metric space if the space is flat. As a result, for data on a Riemannian...... Laplacian kernel can be generalized while retaining positive definiteness. This implies that geodesic Laplacian kernels can be generalized to some curved spaces, including spheres and hyperbolic spaces. Our theoretical results are verified empirically....
The pre-image problem in kernel methods.
Kwok, James Tin-yau; Tsang, Ivor Wai-hung
2004-11-01
In this paper, we address the problem of finding the pre-image of a feature vector in the feature space induced by a kernel. This is of central importance in some kernel applications, such as using kernel principal component analysis (PCA) for image denoising. Unlike the traditional method, which relies on nonlinear optimization, our proposed method directly finds the location of the pre-image based on distance constraints in the feature space. It is noniterative, involves only linear algebra, and does not suffer from numerical instability or local minimum problems. Evaluations on kernel PCA and kernel clustering on the USPS data set show much improved performance.
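The denoising task the abstract targets can be demonstrated with scikit-learn, which approximates the pre-image by learning an inverse map with kernel ridge regression; note this is a different pre-image strategy from the distance-constraint method of the paper, and the circle dataset and kernel parameters are assumptions.

```python
# Kernel PCA denoising requires a pre-image: mapping a projected feature
# vector back to input space. scikit-learn's fit_inverse_transform learns
# this map via kernel ridge regression (not the paper's method).
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(2)
t = rng.uniform(0, 2 * np.pi, 300)
clean = np.column_stack([np.cos(t), np.sin(t)])      # points on a circle
noisy = clean + 0.1 * rng.normal(size=clean.shape)

kpca = KernelPCA(n_components=2, kernel="rbf", gamma=2.0,
                 fit_inverse_transform=True, alpha=0.1)
denoised = kpca.inverse_transform(kpca.fit_transform(noisy))

# Denoised points typically sit closer to the unit circle than noisy ones.
err = lambda P: np.mean(np.abs(np.linalg.norm(P, axis=1) - 1.0))
print(err(noisy), err(denoised))
```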
Generalization Performance of Regularized Ranking With Multiscale Kernels.
Zhou, Yicong; Chen, Hong; Lan, Rushi; Pan, Zhibin
2016-05-01
The regularized kernel method for the ranking problem has attracted increasing attention in machine learning. Previous regularized ranking algorithms are usually based on reproducing kernel Hilbert spaces with a single kernel. In this paper, we go beyond this framework by investigating the generalization performance of regularized ranking with multiscale kernels. A novel ranking algorithm with multiscale kernels is proposed and its representer theorem is proved. We establish an upper bound on the generalization error in terms of the complexity of the hypothesis spaces. It shows that the multiscale ranking algorithm can achieve satisfactory learning rates under mild conditions. Experiments demonstrate the effectiveness of the proposed method for drug discovery and recommendation tasks.
Degenerate U- and V-statistics under weak dependence: Asymptotic theory and bootstrap consistency
Leucht, Anne
2012-01-01
We devise a general result on the consistency of model-based bootstrap methods for U- and V-statistics under easily verifiable conditions. For that purpose, we derive the limit distributions of degree-2 degenerate U- and V-statistics for weakly dependent $\\mathbb{R}^d$-valued random variables first. To this end, only some moment conditions and smoothness assumptions concerning the kernel are required. Based on this result, we verify that the bootstrap counterparts of these statistics have the same limit distributions. Finally, some applications to hypothesis testing are presented.
Kernel Methods for Machine Learning with Life Science Applications
DEFF Research Database (Denmark)
Abrahamsen, Trine Julie
Kernel methods refer to a family of widely used nonlinear algorithms for machine learning tasks like classification, regression, and feature extraction. By exploiting the so-called kernel trick, straightforward extensions of classical linear algorithms are enabled as long as the data only appear...... models to kernel learning, and means for restoring the generalizability in both kernel Principal Component Analysis and the Support Vector Machine are proposed. Viability is proved on a wide range of benchmark machine learning data sets....... as inner products in the model formulation. This dissertation presents research on improving the performance of standard kernel methods like kernel Principal Component Analysis and the Support Vector Machine. Moreover, the goal of the thesis has been two-fold. The first part focuses on the use of kernel Principal...
Efficient $\\chi ^{2}$ Kernel Linearization via Random Feature Maps.
Yuan, Xiao-Tong; Wang, Zhenzhen; Deng, Jiankang; Liu, Qingshan
2016-11-01
Explicit feature mapping is an appealing way to linearize additive kernels, such as the χ² kernel, for training large-scale support vector machines (SVMs). Although accurate in approximation, feature mapping can pose computational challenges in high-dimensional settings as it expands the original features into a higher-dimensional space. To handle this issue in the context of χ² kernel SVM learning, we introduce a simple yet efficient method to approximately linearize the χ² kernel through random feature maps. The main idea is to use sparse random projection to reduce the dimensionality of the feature maps while preserving their approximation capability with respect to the original kernel. We provide an approximation error bound for the proposed method. Furthermore, we extend our method to χ² multiple-kernel SVM learning. Extensive experiments on large-scale image classification tasks confirm that the proposed approach is able to significantly speed up the training process of χ² kernel SVMs at almost no cost in testing accuracy.
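The two ingredients above are both available off the shelf: scikit-learn provides the standard explicit feature map for the additive χ² kernel, and a sparse random projection can then compress the expanded map. Using SparseRandomProjection here is an assumed stand-in for the paper's specific projection scheme, and the dimensions are illustrative.

```python
# Explicit additive chi-squared feature map, then sparse random projection
# to shrink the expanded representation while roughly preserving inner products.
import numpy as np
from sklearn.kernel_approximation import AdditiveChi2Sampler
from sklearn.random_projection import SparseRandomProjection

rng = np.random.default_rng(3)
X = rng.uniform(0, 1, size=(100, 300))       # chi-squared kernels need X >= 0

# sample_steps=2 expands each feature into 2*2-1 = 3 map components.
expanded = AdditiveChi2Sampler(sample_steps=2).fit_transform(X)
print(expanded.shape)                        # (100, 900)

compact = SparseRandomProjection(n_components=128,
                                 random_state=0).fit_transform(expanded)
print(compact.shape)                         # (100, 128)
```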
Multiple Kernel Learning in Fisher Discriminant Analysis for Face Recognition
Directory of Open Access Journals (Sweden)
Xiao-Zhang Liu
2013-02-01
Full Text Available Recent applications and developments based on support vector machines (SVMs) have shown that using multiple kernels instead of a single one can enhance classifier performance. However, there are few reports on the performance of the kernel-based Fisher discriminant analysis (kernel-based FDA) method with multiple kernels. This paper proposes a multiple kernel construction method for kernel-based FDA. The constructed kernel is a linear combination of several base kernels with a constraint on their weights. By maximizing the maximum margin criterion (MMC), we present an iterative scheme for weight optimization. Experiments on the FERET and CMU PIE face databases show that our multiple kernel Fisher discriminant analysis (MKFD) achieves high recognition performance compared with single-kernel-based FDA. The experiments also show that the constructed kernel relaxes parameter selection for kernel-based FDA to some extent.
Flame kernel generation and propagation in turbulent partially premixed hydrocarbon jet
Mansour, Mohy S.
2014-04-23
Flame development, propagation, stability, combustion efficiency, pollutant formation, and overall system efficiency are affected by the early stage of flame generation, defined as the flame kernel. Studying the effects of turbulence and chemistry on flame kernel propagation for natural gas (NG) and liquefied petroleum gas (LPG) is the main aim of this work. In addition, the minimum ignition laser energy (MILE) has been investigated for both fuels, and the flame stability maps for both fuels are also investigated and analyzed. The flame kernels are generated using an Nd:YAG pulsed laser and propagate in a partially premixed turbulent jet. The flow field is measured using a 2-D PIV technique. Five cases have been selected for each fuel, covering Reynolds numbers within a range of 6100-14400, at a mean equivalence ratio of 2 and a fixed level of partial premixing. The MILE increases with increasing equivalence ratio. Near stoichiometric conditions the energy density is independent of the jet velocity, while in rich conditions it increases with increasing jet velocity. The stability curves show four distinct regions: lifted, attached, blowout, and a fourth region that yields an attached flame if ignition occurs near the nozzle or a lifted flame if ignition occurs downstream. LPG flames are more stable than NG flames, consistent with the higher laminar flame speed of LPG. The flame kernel propagation speed is affected by both turbulence and chemistry; at low turbulence levels chemistry effects are more pronounced, while at high turbulence levels turbulence becomes dominant. LPG flame kernels propagate faster, but also extinguish faster, than NG flame kernels. The propagation speed is likely to be consistent with the local mean equivalence ratio and its corresponding laminar flame speed. Copyright © Taylor & Francis Group, LLC.
The Effect of Oak Kernel on Digestibility and Fermentative Characteristics in Arabian Sheep
Directory of Open Access Journals (Sweden)
M Harsini
2013-11-01
Full Text Available The objective of this experiment was to study the effect of oak kernel on fermentative and microbial characteristics in Arabian sheep. Sixteen sheep (average weight 45±3 kg) were used in a completely randomized design. Treatments were four levels of oak kernel (0, 21, 42 and 63% DM). Animals were fed the experimental diets for 28 days. Faeces and feed orts of eight sheep were collected during the last 5 days of the experiment to measure apparent digestibility. Rumen fluid obtained from all animals was used for the gas production technique and for measuring fermentation parameters. Results showed that dry matter digestibility of the diets (63.55, 70.70, 71.73 and 75.80, respectively) increased linearly with increasing levels of oak. Rumen pH (6.29, 6.23, 6.17 and 5.90, respectively) and ammonia nitrogen concentration (15.66, 13.75, 13.58 and 13.11, respectively) decreased significantly with increasing levels of oak in the diet. Potential gas production, digestibility of ADF and NDF, and digestible organic matter estimated by the gas production technique were not affected by the experimental rations, indicating no negative effect of oak kernel tannin. Oak kernel can be used as a source of carbohydrate and energy in sheep rations.
Komatitsch, Dimitri; Bozdag, Ebru; de Andrade, Elliott Sales; Peter, Daniel B; Liu, Qinya; Tromp, Jeroen
2016-01-01
We introduce a technique to compute exact anelastic sensitivity kernels in the time domain using parsimonious disk storage. The method is based on a reordering of the time loop of time-domain forward/adjoint wave propagation solvers combined with the use of a memory buffer. It avoids instabilities that occur when time-reversing dissipative wave propagation simulations. The total number of required time steps is unchanged compared to usual acoustic or elastic approaches. The cost is reduced by a factor of 4/3 compared to the case in which anelasticity is partially accounted for by accommodating the effects of physical dispersion. We validate our technique by performing a test in which we compare the $K_\\alpha$ sensitivity kernel to the exact kernel obtained by saving the entire forward calculation. This benchmark confirms that our approach is also exact. We illustrate the importance of including full attenuation in the calculation of sensitivity kernels by showing significant differences with physical-dispersi...
A Novel Framework for Learning Geometry-Aware Kernels.
Pan, Binbin; Chen, Wen-Sheng; Xu, Chen; Chen, Bo
2016-05-01
Data from the real world usually have nonlinear geometric structure and are often assumed to lie on or close to a low-dimensional manifold in a high-dimensional space. Detecting this nonlinear geometric structure of the data is important for learning algorithms. Recently, there has been a surge of interest in utilizing kernels to exploit the manifold structure of the data. Such kernels are called geometry-aware kernels and are widely used in machine learning algorithms. The performance of these algorithms critically relies on the choice of the geometry-aware kernels. Intuitively, a good geometry-aware kernel should utilize additional information beyond the geometric information. In many applications, out-of-sample data must be handled directly. However, most geometry-aware kernel methods are restricted to the data available beforehand, with no straightforward extension to out-of-sample data. In this paper, we propose a framework for more general geometry-aware kernel learning. The proposed framework integrates multiple sources of information and enables us to develop flexible and effective kernel matrices. We then show theoretically how the learned kernel matrices extend to the corresponding kernel functions, with which out-of-sample data can be handled directly. Under our framework, a novel family of geometry-aware kernels is developed. In particular, some existing geometry-aware kernels can be viewed as instances of our framework. The performance of the kernels is evaluated on dimensionality reduction, classification, and clustering tasks. The empirical results show that our kernels significantly improve performance.
Learning Potential Energy Landscapes using Graph Kernels
Ferré, G; Barros, K
2016-01-01
Recent machine learning methods make it possible to model the potential energy of atomic configurations with chemical-level accuracy (as calibrated against ab-initio calculations) and at speeds suitable for molecular dynamics simulation. The best performance is achieved when known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. We show on a standard benchmark that our Graph Approximated Energy (GRAPE) method is competitive with state-of-the-art kernel m...
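The random-walk graph kernel underlying the abstract compares two adjacency matrices by counting common walks in their direct-product graph. A minimal geometric variant is sketched below; the decay parameter lam is an assumed tuning value, and the two tiny example graphs are illustrative only.

```python
# Geometric random-walk graph kernel on two small adjacency matrices.
import numpy as np

def random_walk_kernel(A1, A2, lam=0.05):
    """Sum over walks of all lengths in the direct-product graph:
    k = 1^T (I - lam * A1 (x) A2)^(-1) 1, with (x) the Kronecker product.
    Converges when lam is below 1 over the spectral radius of the product."""
    W = np.kron(A1, A2)
    n = W.shape[0]
    one = np.ones(n)
    return one @ np.linalg.solve(np.eye(n) - lam * W, one)

# Two 3-node graphs: a triangle and a path.
tri = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)
path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)

print(random_walk_kernel(tri, tri))    # self-similarity: exactly 11.25 here
print(random_walk_kernel(tri, path))   # smaller cross-similarity
```

For the triangle compared with itself, the product graph is 4-regular, so the kernel evaluates in closed form to 9/(1 - 4·0.05) = 11.25, a handy sanity check.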
Viability Kernel for Ecosystem Management Models
Anaya, Eladio Ocana; Oliveros-Ramos, Ricardo; Tam, Jorge
2009-01-01
We consider sustainable management issues formulated within the framework of control theory. The problem is one of controlling a discrete-time dynamical system (e.g. a population model) in the presence of state and control constraints, representing conflicting economic and ecological issues for instance. The viability kernel is known to play a basic role in the analysis of such problems and the design of viable control feedbacks, but its computation is not an easy task in general. We study the viability of nonlinear generic ecosystem models under preservation and production constraints. Under simple conditions on the growth rates at the boundary constraints, we provide an explicit description of the viability kernel. A numerical illustration is given for the hake-anchovy couple in the Peruvian upwelling ecosystem.
Quark-hadron duality: pinched kernel approach
Dominguez, C A; Schilcher, K; Spiesberger, H
2016-01-01
Hadronic spectral functions measured by the ALEPH collaboration in the vector and axial-vector channels are used to study potential quark-hadron duality violations (DV). This is done entirely in the framework of pinched kernel finite energy sum rules (FESR), i.e. in a model independent fashion. The kinematical range of the ALEPH data is effectively extended up to $s = 10\\; {\\mbox{GeV}^2}$ by using an appropriate kernel, and assuming that in this region the spectral functions are given by perturbative QCD. Support for this assumption is obtained by using $e^+ e^-$ annihilation data in the vector channel. Results in both channels show a good saturation of the pinched FESR, without further need of explicit models of DV.
Jin, Zhonghai; Wielicki, Bruce A.; Loukachine, Constantin; Charlock, Thomas P.; Young, David; Noël, Stefan
2011-01-01
The radiative kernel approach provides a simple way to separate the radiative response to different climate parameters and to decompose the feedback into radiative and climate response components. Using CERES/MODIS/Geostationary data, we calculated and analyzed the solar spectral reflectance kernels for various climate parameters on zonal, regional, and global spatial scales. The kernel linearity is tested. Errors in the kernel due to nonlinearity can vary strongly depending on climate parameter, wavelength, surface, and solar elevation; they are large in some absorption bands for some parameters but are negligible in most conditions. The spectral kernels are used to calculate the radiative responses to different climate parameter changes in different latitudes. The results show that the radiative response in high latitudes is sensitive to the coverage of snow and sea ice. The radiative response in low latitudes is contributed mainly by cloud property changes, especially cloud fraction and optical depth. The large cloud height effect is confined to absorption bands, while the cloud particle size effect is found mainly in the near infrared. The kernel approach, which is based on calculations using CERES retrievals, is then tested by direct comparison with spectral measurements from the Scanning Imaging Absorption Spectrometer for Atmospheric Chartography (SCIAMACHY) (a different instrument on a different spacecraft). The monthly mean interannual variability of spectral reflectance based on the kernel technique is consistent with satellite observations over the ocean, but not over land, where both model and data have large uncertainty. RMS errors in kernel-derived monthly global mean reflectance over the ocean compared to observations are about 0.001, and the sampling error is likely a major component.
Searching and Indexing Genomic Databases via Kernelization
Directory of Open Access Journals (Sweden)
Travis eGagie
2015-02-01
Full Text Available The rapid advance of DNA sequencing technologies has yielded databases of thousands of genomes. To search and index these databases effectively, it is important that we take advantage of the similarity between those genomes. Several authors have recently suggested searching or indexing only one reference genome and the parts of the other genomes where they differ. In this paper we survey the twenty-year history of this idea and discuss its relation to kernelization in parameterized complexity.
Kernel based subspace projection of hyperspectral images
DEFF Research Database (Denmark)
Larsen, Rasmus; Nielsen, Allan Aasbjerg; Arngren, Morten
In hyperspectral image analysis an exploratory approach to analyse the image data is to conduct subspace projections. As linear projections often fail to capture the underlying structure of the data, we present kernel-based subspace projections of PCA and Maximum Autocorrelation Factors (MAF). Th......). The MAF projection exploits the fact that interesting phenomena in images typically exhibit spatial autocorrelation. The analysis is based on near-infrared hyperspectral images of maize grains demonstrating the superiority of the kernel-based MAF method....
Wheat kernel dimensions: how do they contribute to kernel weight at an individual QTL level?
Indian Academy of Sciences (India)
Fa Cui; Anming Ding; Jun Li; Chunhua Zhao; Xingfeng Li; Deshun Feng; Xiuqin Wang; Lin Wang; Jurong Gao; Honggang Wang
2011-12-01
Kernel dimensions (KD) contribute greatly to thousand-kernel weight (TKW) in wheat. In the present study, quantitative trait loci (QTL) for TKW, kernel length (KL), kernel width (KW) and kernel diameter ratio (KDR) were detected by both conditional and unconditional QTL mapping methods. Two related F8:9 recombinant inbred line (RIL) populations, comprising 485 and 229 lines, respectively, were used, and the trait phenotypes were evaluated in four environments. Unconditional QTL mapping analysis detected 77 additive QTL for the four traits in the two populations. Of these, 24 QTL were verified in at least three trials, and five were major QTL, thus being of great value for marker-assisted selection in breeding programmes. Conditional QTL mapping analysis, compared with unconditional analysis, reduced the number of QTL detected for TKW by eliminating TKW variation caused by its conditioning traits; on this basis we dissected, for the first time, the genetic control system linking TKW and KD at the individual QTL level. The results indicated that, at the QTL level, KW had the strongest influence on TKW, followed by KL, while KDR contributed least to TKW. In addition, the present study showed that co-location alone is not sufficient to determine the genetic relationship between a pair of QTL for two related/causal traits; the conditional QTL mapping method should therefore be used to evaluate possible genetic relationships between two related/causal traits.
Absolute Orientation Based on Distance Kernel Functions
Directory of Open Access Journals (Sweden)
Yanbiao Sun
2016-03-01
Full Text Available The classical absolute orientation method is capable of transforming tie points (TPs) from a local coordinate system to a global (geodetic) coordinate system. The method is based on a single set of similarity transformation parameters estimated by minimizing the total difference between all ground control points (GCPs) and the fitted points. Nevertheless, it often yields a transformation with poor accuracy, especially in large-scale study cases. To address this problem, this study proposes a novel absolute orientation method based on distance kernel functions, in which multiple sets of similarity transformation parameters are calculated instead of only one. When estimating the similarity transformation parameters for a TP using the iterative solution of a non-linear least squares problem, we assign larger weighting matrices to the GCPs that lie closer to that TP. The weighting matrices are evaluated using a distance kernel function of the distances between the GCPs and the TP; in this study we use the exponential function and the Gaussian function as distance kernel functions. To validate and verify the proposed method, six synthetic and two real datasets were tested. The accuracy was significantly improved by the proposed method compared to the classical method, at the cost of higher computational complexity.
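The core idea above, weighting each GCP by a Gaussian distance kernel so that nearby control points dominate the transform estimated for a given tie point, can be sketched in 2-D, where a similarity transform is linear in its parameters. The kernel width sigma and the toy translation example are assumptions for illustration.

```python
# Distance-kernel-weighted estimation of a 2-D similarity transform.
import numpy as np

def weighted_similarity(gcp_src, gcp_dst, tp, sigma=2.0):
    """Solve for (a, b, tx, ty) in x' = a*x - b*y + tx, y' = b*x + a*y + ty
    by weighted least squares, with Gaussian distance-kernel weights."""
    d = np.linalg.norm(gcp_src - tp, axis=1)
    w = np.repeat(np.exp(-0.5 * (d / sigma) ** 2), 2)  # one weight per equation
    rows, rhs = [], []
    for (x, y), (xd, yd) in zip(gcp_src, gcp_dst):
        rows += [[x, -y, 1, 0], [y, x, 0, 1]]
        rhs += [xd, yd]
    A, r = np.array(rows), np.array(rhs)
    params, *_ = np.linalg.lstsq(A * w[:, None], r * w, rcond=None)
    return params

# GCPs related by a pure translation of (1, 2); any TP recovers it exactly.
src = np.array([[0., 0.], [4., 0.], [0., 4.], [4., 4.]])
dst = src + np.array([1., 2.])
a, b, tx, ty = weighted_similarity(src, dst, tp=np.array([1., 1.]))
print(a, b, tx, ty)  # approximately (1, 0, 1, 2)
```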
Physicochemical Properties of Palm Kernel Oil
Directory of Open Access Journals (Sweden)
Amira P. Olaniyi
2014-09-01
Full Text Available Physicochemical analyses were carried out on palm kernel oil (Adin) and the following results were obtained: saponification value, 280.5±56.1 mg KOH/g; acid value, 2.7±0.3 mg KOH/g; free fatty acid (FFA), 1.35±0.15 KOH/g; ester value, 277.8±56.4 mg KOH/g; peroxide value, 14.3±0.8 mEq/kg; iodine value, 15.86±4.02 mg KOH/g; specific gravity (S.G.), 0.904; refractive index, 1.412; and inorganic materials, 1.05%. Its odour and colour were a heavy burnt smell and burnt brown, respectively. These values were compared with those obtained for groundnut and coconut oils. It was found that the physicochemical properties of palm kernel oil are comparable to those of groundnut and coconut oils except for the peroxide value (14.3±0.8 mEq/kg), which was not detectable in groundnut and coconut oils. Also, the odours of both groundnut and coconut oils were pleasant, while that of the palm kernel oil was not (a heavy burnt smell).
Convolution kernels for multi-wavelength imaging
Boucaud, Alexandre; Abergel, Alain; Orieux, François; Dole, Hervé; Hadj-Youcef, Mohamed Amine
2016-01-01
Astrophysical images from different instruments and/or spectral bands often need to be processed together, either for fitting or for comparison purposes. However, each image is affected by an instrumental response, also known as the PSF, that depends on the characteristics of the instrument as well as the wavelength and the observing strategy. Given knowledge of the PSF in each band, a straightforward way of processing images is to homogenise them all to a target PSF using convolution kernels, so that they appear as if they had been acquired by the same instrument. We propose an algorithm that generates such PSF-matching kernels, based on Wiener filtering with a tunable regularisation parameter. This method ensures that all anisotropic features in the PSFs are taken into account. We compare our method to existing procedures using measured Herschel/PACS and SPIRE PSFs and simulated JWST/MIRI PSFs. Significant gains up to two orders of magnitude are obtained with respect to the use of kernels computed assumin...
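A minimal Fourier-space version of the approach above can be written in a few lines: the matching kernel is K = conj(S)·T / (|S|² + mu), where S and T are the transforms of the source and target PSFs and mu is the tunable regularisation. The Gaussian PSF widths and the value of mu are illustrative assumptions, not the paper's configuration.

```python
# Wiener-regularised PSF-matching kernel between two Gaussian PSFs.
import numpy as np

def gaussian_psf(n, sigma):
    ax = np.arange(n) - n // 2
    g = np.exp(-0.5 * (ax[:, None] ** 2 + ax[None, :] ** 2) / sigma ** 2)
    return g / g.sum()

n = 65
src, tgt = gaussian_psf(n, 2.0), gaussian_psf(n, 4.0)   # match narrow -> broad
S = np.fft.fft2(np.fft.ifftshift(src))
T = np.fft.fft2(np.fft.ifftshift(tgt))

mu = 1e-4                                    # tunable regularisation strength
K = np.conj(S) * T / (np.abs(S) ** 2 + mu)   # Wiener-style matching kernel
kernel = np.fft.fftshift(np.fft.ifft2(K).real)

# Convolving the source PSF with the kernel should reproduce the target PSF.
matched = np.fft.fftshift(
    np.fft.ifft2(S * np.fft.fft2(np.fft.ifftshift(kernel))).real)
print(np.max(np.abs(matched - tgt)))         # small residual
```

Raising mu damps the noisy high frequencies of the kernel at the price of a slightly less exact match, the same variance trade-off discussed for the regularisation strength in PSF-matching work.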
A Fast Reduced Kernel Extreme Learning Machine.
Deng, Wan-Yu; Ong, Yew-Soon; Zheng, Qing-Hua
2016-04-01
In this paper, we present a fast and accurate kernel-based supervised algorithm referred to as the Reduced Kernel Extreme Learning Machine (RKELM). In contrast to work on the Support Vector Machine (SVM) or Least Squares SVM (LS-SVM), which identify the support vectors or weight vectors iteratively, the proposed RKELM randomly selects a subset of the available data samples as support vectors (or mapping samples). By avoiding the iterative steps of the SVM, significant cost savings in the training process can be readily attained, especially on big data sets. RKELM is established based on a rigorous proof of universal learning involving a reduced kernel-based SLFN. In particular, we prove that RKELM can approximate any nonlinear function accurately under the condition of support vector sufficiency. Experimental results on a wide variety of real-world small- and large-instance applications, covering binary classification, multi-class problems and regression, show that RKELM can achieve a level of generalization performance competitive with the SVM/LS-SVM at only a fraction of the computational effort.
Kernel methods for phenotyping complex plant architecture.
Kawamura, Koji; Hibrand-Saint Oyant, Laurence; Foucher, Fabrice; Thouroude, Tatiana; Loustau, Sébastien
2014-02-07
The quantitative trait loci (QTL) mapping of plant architecture is a critical step for understanding the genetic determinism of plant architecture. Previous studies adopted simple measurements, such as plant height, stem diameter and branching intensity, for QTL mapping of plant architecture. Many of these quantitative traits are correlated with each other, which gives rise to statistical problems in the detection of QTL. We aim to test the applicability of kernel methods to phenotyping inflorescence architecture and its QTL mapping. We first test kernel principal component analysis (KPCA) and support vector machines (SVM) on an artificial dataset of simulated inflorescences with different types of flower distribution, coded as a sequence of flower number per node along a shoot. The ability of SVM and KPCA to discriminate the different inflorescence types is illustrated. We then apply the KPCA representation to a real dataset of rose inflorescence shoots (n=1460) obtained from a mapping population of 98 F1 hybrids. We find kernel principal components with high heritability (>0.7), and the QTL analysis identifies a new QTL which was not detected by a trait-by-trait analysis of simple architectural measurements. The main tools developed in this paper could be used to tackle the general problem of QTL mapping of complex (sequences, 3D structures, graphs) phenotypic traits.
Spheroidal Degeneration of the Cornea
Directory of Open Access Journals (Sweden)
Erdem Dinç
2011-08-01
Full Text Available A thirty-one-year-old male patient presented with bilateral epiphora and a stinging sensation in the cornea. Detailed history revealed that bilateral corneal scraping had been performed for an initial diagnosis of fungal keratitis. His best-corrected visual acuities were 20/20 and 20/30 in the right and left eyes, respectively. Biomicroscopy showed bilateral amber-colored spherules in the anterior stroma of the central cornea. A diagnosis of spheroidal corneal degeneration was established and symptomatic therapy with artificial tear drops was prescribed. Ultraviolet light is widely accepted to be the main etiological factor in the pathogenesis of spheroidal degeneration. Because of difficulties in the early stages of the diagnostic process, incorrect diagnoses can be made and inappropriate interventions undertaken. (Turk J Ophthalmol 2011; 41: 264-6)
Laguerre Kernels-Based SVM for Image Classification
Directory of Open Access Journals (Sweden)
Ashraf Afifi
2014-01-01
Full Text Available Support vector machines (SVMs) have been promising methods for classification and regression analysis because of their solid mathematical foundations, which convey several salient properties that other methods hardly provide. However, the performance of SVMs is very sensitive to how the kernel function is selected; the challenge is to choose a kernel function that yields accurate data classification. In this paper, we introduce a set of new kernel functions derived from the generalized Laguerre polynomials. The proposed kernels can improve the classification accuracy of SVMs for both linear and nonlinear data sets. The proposed kernel functions satisfy Mercer's condition and orthogonality properties, which are important and useful in some applications where the number of support vectors matters, as in feature selection. The performance of the generalized Laguerre kernels is evaluated in comparison with the existing kernels. It was found that the choice of the kernel function, and the values of its parameters, are critical for a given amount of data. The proposed kernels give good classification accuracy on nearly all the data sets, especially those of high dimension.
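Plugging a polynomial-derived kernel into an SVM is straightforward in practice because scikit-learn's SVC accepts a callable kernel. The construction below, an inner product of Laguerre-polynomial feature maps (which guarantees positive semidefiniteness, i.e. Mercer's condition), is an assumed illustration and not the paper's exact kernel; the degree, data and labels are likewise illustrative.

```python
# Custom Laguerre-based kernel for scikit-learn's SVC via a callable kernel.
import numpy as np
from numpy.polynomial.laguerre import lagval
from sklearn.svm import SVC

DEGREE = 4
EYE = np.eye(DEGREE + 1)          # unit coefficient vectors selecting L_0..L_4

def laguerre_features(X):
    """Stack L_0..L_d evaluated on every feature of every sample."""
    return np.concatenate([lagval(X, EYE[n]) for n in range(DEGREE + 1)], axis=1)

def laguerre_kernel(A, B):
    # Explicit feature-map inner product, so the kernel is PSD by construction.
    return laguerre_features(A) @ laguerre_features(B).T

rng = np.random.default_rng(5)
X = rng.uniform(0, 2, size=(200, 2))
y = (X.sum(axis=1) > 2).astype(int)

clf = SVC(kernel=laguerre_kernel).fit(X, y)
print(clf.score(X, y))   # training accuracy with the custom kernel
```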
Identification of Fusarium damaged wheat kernels using image analysis
Directory of Open Access Journals (Sweden)
Ondřej Jirsa
2011-01-01
Full Text Available Visual evaluation of kernels damaged by Fusarium spp. pathogens is labour-intensive and, owing to its subjectivity, can lead to inconsistencies. Digital imaging technology combined with appropriate statistical methods can provide a much faster and more accurate evaluation of the proportion of visually scabby kernels. The aim of the present study was to develop a discrimination model to identify wheat kernels infected by Fusarium spp. using digital image analysis and statistical methods. Winter wheat kernels from field experiments were evaluated visually as healthy or damaged. Deoxynivalenol (DON) content was determined in individual kernels using an ELISA method. Images of individual kernels were produced using a digital camera on a dark background. Colour and shape descriptors were obtained by image analysis from the area representing the kernel. Healthy and damaged kernels differed significantly in DON content and kernel weight. Various combinations of individual shape and colour descriptors were examined during the development of the model using linear discriminant analysis. In addition to the basic descriptors of the RGB colour model (red, green, blue), very good classification was also obtained using hue from the HSL colour model (hue, saturation, luminance). The accuracy of classification using the developed discrimination model based on RGBH descriptors was 85%. The shape descriptors themselves were not specific enough to distinguish individual kernels.
Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.
Kwak, Nojun
2016-05-20
Recently, the nonlinear projection trick (NPT) was introduced, enabling direct computation of the coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so-called kernel trick. However, NPT is inherently difficult to implement incrementally, because an ever-growing kernel matrix must be handled as additional training samples are introduced. In this paper, an incremental version of the NPT (INPT) is proposed, based on the observation that the centerization step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can be used directly in any incremental method to implement its kernel version. The effectiveness of the INPT is shown by applying it to incremental versions of kernel methods such as kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis, which are applied to the problems of kernel matrix reconstruction, letter classification, and face image retrieval, respectively.
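A compact numpy sketch of the batch NPT factorization and of extending the coordinates to a new sample without recomputing the eigendecomposition. It assumes only the defining property that the coordinate matrix Y satisfies YᵀY = K; the function names are hypothetical and this is not the authors' INPT code:

```python
import numpy as np

def npt_coordinates(K, tol=1e-10):
    """Nonlinear projection trick: factor a PSD kernel matrix K as Y.T @ Y,
    so column i of Y holds explicit coordinates of sample i."""
    w, U = np.linalg.eigh(K)
    keep = w > tol
    w, U = w[keep], U[:, keep]
    return np.sqrt(w)[:, None] * U.T, U, w

def npt_extend(U, w, k_new):
    """Coordinates of a new sample from its kernel vector against the training
    set, reusing the stored eigendecomposition (the incremental step)."""
    return (U.T @ k_new) / np.sqrt(w)

rng = np.random.default_rng(1)
X = rng.normal(size=(6, 3))
rbf = lambda A, B: np.exp(-np.sum((A[:, None] - B[None]) ** 2, axis=-1))
K = rbf(X, X)
Y, U, w = npt_coordinates(K)
print(np.allclose(Y.T @ Y, K))  # True: coordinates reproduce the kernel
```

Because `npt_extend` leaves the old coordinates untouched, any incremental linear method can consume the new coordinates directly, which is the point the abstract makes.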
Image quality of mixed convolution kernel in thoracic computed tomography.
Neubauer, Jakob; Spira, Eva Maria; Strube, Juliane; Langer, Mathias; Voss, Christian; Kotter, Elmar
2016-11-01
The mixed convolution kernel alters its properties locally according to the depicted organ structure, especially for the lung. We therefore compared the image quality of the mixed convolution kernel to standard soft and hard kernel reconstructions for different organ structures in thoracic computed tomography (CT) images. Our Ethics Committee approved this prospective study. In total, 31 patients who underwent contrast-enhanced thoracic CT studies were included after informed consent. Axial reconstructions were performed with hard, soft, and mixed convolution kernels. Three independent and blinded observers rated the image quality according to the European Guidelines for Quality Criteria of Thoracic CT for 13 organ structures. The observers rated the depiction of the structures in all reconstructions on a 5-point Likert scale. Statistical analysis was performed with the Friedman test and post hoc analysis with the Wilcoxon rank-sum test. Compared to the soft convolution kernel, the mixed convolution kernel was rated with a higher image quality for lung parenchyma, segmental bronchi, and the border between the pleura and the thoracic wall (P < 0.05). Compared to the hard convolution kernel, the mixed convolution kernel was rated with a higher image quality for the aorta, anterior mediastinal structures, paratracheal soft tissue, hilar lymph nodes, esophagus, pleuromediastinal border, large and medium-sized pulmonary vessels, and abdomen (P < 0.05). However, the mixed convolution kernel cannot fully substitute for the standard CT reconstructions. Hard and soft convolution kernel reconstructions still seem to be mandatory for thoracic CT.
OBLIQUE PROJECTION REALIZATION OF A KERNEL-BASED NONLINEAR DISCRIMINATOR
Institute of Scientific and Technical Information of China (English)
Liu Benyong; Zhang Jing
2006-01-01
Previously, a novel classifier called the Kernel-based Nonlinear Discriminator (KND) was proposed to discriminate a pattern class from other classes by minimizing the mean effect of the latter. To also account for the effect of the target class, this paper introduces an oblique projection algorithm to determine the coefficients of a KND, extending it to a new version called the extended KND (eKND). In eKND construction, the desired output vector of the target class is obliquely projected onto the relevant subspace along the subspace related to the other classes. In addition, a simple technique is proposed to calculate the associated oblique projection operator. Experimental results on handwritten digit recognition show that the algorithm performs better than a KND classifier and some other commonly used classifiers.
Radial keratotomy associated endothelial degeneration
Directory of Open Access Journals (Sweden)
Moshirfar M
2012-02-01
Majid Moshirfar, Andrew Ollerton, Rodmehr T Semnani, Maylon Hsu. John A Moran Eye Center, Department of Ophthalmology and Visual Sciences, University of Utah, Salt Lake City, UT, USA. Purpose: To describe the presentation and clinical course of eyes with a history of radial keratotomy (RK) and varying degrees of endothelial degeneration. Methods: Retrospective case series were used. Results: Thirteen eyes (seven patients) were identified with clinical findings of significant guttata and a prior history of RK. The mean age at presentation for cornea evaluation was 54.3 years (range: 38-72 years), averaging 18.7 years (range: 11-33 years) after RK. The presentation of guttata varied in degree from moderate to severe. Best corrected visual acuity (BCVA) ranged from 20/25 to 20/80. All patients had a history of bilateral RK, except one patient who did not develop any guttata in the eye without prior RK. No patients reported a family history of Fuchs' dystrophy. One patient underwent a penetrating keratoplasty in one eye and a Descemet's stripping automated endothelial keratoplasty (DSAEK) in the other eye. Conclusions: RK may induce a spectrum of endothelial degeneration. In elderly patients, the findings of guttata may signify comorbid Fuchs' dystrophy, in which RK incisions could potentially hasten endothelial decompensation. In these select patients with stable cornea topography and prior RK, DSAEK may successfully treat RK endothelial degeneration. Keywords: radial keratotomy, RK, Descemet's stripping automated endothelial keratoplasty, DSAEK, guttata, endothelial degeneration, Fuchs' dystrophy
Degenerating the elliptic Schlesinger system
Aminov, G. A.; Artamonov, S. B.
2013-01-01
We study various ways of degenerating the Schlesinger system on the elliptic curve with R marked points. We construct a limit procedure based on an infinite shift of the elliptic curve parameter and on shifts of the marked points. We show that using this procedure allows obtaining a nonautonomous Hamiltonian system describing the Toda chain with additional spin sl(N, ℂ) degrees of freedom.
Albarrak, Abdulrahman; Coenen, Frans; Zheng, Yalin
2017-01-01
Three-dimensional (3D) (volumetric) diagnostic imaging techniques are indispensable with respect to the diagnosis and management of many medical conditions. However there is a lack of automated diagnosis techniques to facilitate such 3D image analysis (although some support tools do exist). This paper proposes a novel framework for volumetric medical image classification founded on homogeneous decomposition and dictionary learning. In the proposed framework each image (volume) is recursively decomposed until homogeneous regions are arrived at. Each region is represented using a Histogram of Oriented Gradients (HOG) which is transformed into a set of feature vectors. The Gaussian Mixture Model (GMM) is then used to generate a "dictionary" and the Improved Fisher Kernel (IFK) approach is used to encode feature vectors so as to generate a single feature vector for each volume, which can then be fed into a classifier generator. The principal advantage offered by the framework is that it does not require the detection (segmentation) of specific objects within the input data. The nature of the framework is fully described. A wide range of experiments was conducted with which to analyse the operation of the proposed framework and these are also reported fully in the paper. Although the proposed approach is generally applicable to 3D volumetric images, the focus for the work is 3D retinal Optical Coherence Tomography (OCT) images in the context of the diagnosis of Age-related Macular Degeneration (AMD). The results indicate that excellent diagnostic predictions can be produced using the proposed framework.
Radial keratotomy associated endothelial degeneration.
Moshirfar, Majid; Ollerton, Andrew; Semnani, Rodmehr T; Hsu, Maylon
2012-01-01
To describe the presentation and clinical course of eyes with a history of radial keratotomy (RK) and varying degrees of endothelial degeneration. Retrospective case series were used. Thirteen eyes (seven patients) were identified with clinical findings of significant guttata and a prior history of RK. The mean age at presentation for cornea evaluation was 54.3 years (range: 38-72 years), averaging 18.7 years (range: 11-33 years) after RK. The presentation of guttata varied in degree from moderate to severe. Best corrected visual acuity (BCVA) ranged from 20/25 to 20/80. All patients had a history of bilateral RK, except one patient who did not develop any guttata in the eye without prior RK. No patients reported a family history of Fuchs' dystrophy. One patient underwent a penetrating keratoplasty in one eye and a Descemet's stripping automated endothelial keratoplasty (DSAEK) in the other eye. RK may induce a spectrum of endothelial degeneration. In elderly patients, the findings of guttata may signify comorbid Fuchs' dystrophy, in which RK incisions could potentially hasten endothelial decompensation. In these select patients with stable cornea topography and prior RK, DSAEK may successfully treat RK endothelial degeneration.
Conjunctival intraepithelial neoplasia with corneal furrow degeneration
Directory of Open Access Journals (Sweden)
Pukhraj Rishi
2014-01-01
A 68-year-old man presented with redness of the left eye of six months' duration. Examination revealed bilateral corneal furrow degeneration. The left eye lesion was suggestive of conjunctival squamous cell carcinoma encroaching onto the cornea. Anterior segment optical coherence tomography (AS-OCT) confirmed peripheral corneal thinning. Fluorescein angiography confirmed the intrinsic vascularity of the lesion. The patient was managed with "no touch" surgical excision, dry keratectomy without alcohol, cryotherapy, and primary closure. Pathologic examination of the removed tissue confirmed the clinical diagnosis. Management of this particular case required modification of the standard treatment protocol. Unlike the alcohol-assisted technique of tumor dissection described, ethyl alcohol was not used because of the risk of corneal perforation due to underlying peripheral corneal thinning. Likewise, topical steroids were withheld in the post-operative period. Three weeks post-operatively, the left eye was healing well. Hence, per-operative use of absolute alcohol and post-operative use of topical steroids may be best avoided in such eyes.
Lee, Yi-Hsuan; von Davier, Alina A.
2008-01-01
The kernel equating method (von Davier, Holland, & Thayer, 2004) is based on a flexible family of equipercentile-like equating functions that use a Gaussian kernel to continuize the discrete score distributions. While the classical equipercentile, or percentile-rank, equating method carries out the continuization step by linear interpolation,…
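A simplified sketch of the Gaussian-kernel continuization step described above. Note it omits the linear rescaling that the operational kernel equating method uses to preserve the mean and variance of the discrete distribution:

```python
import math

def continuized_cdf(scores, probs, x, h=0.6):
    """Gaussian-kernel continuization of a discrete score distribution:
    F_h(x) = sum_j p_j * Phi((x - x_j) / h), with Phi the standard normal CDF.
    (Simplified: the operational method also rescales to keep mean/variance.)"""
    return sum(p * 0.5 * (1 + math.erf((x - s) / (h * math.sqrt(2))))
               for s, p in zip(scores, probs))

scores = [0, 1, 2, 3, 4]
probs = [0.1, 0.2, 0.4, 0.2, 0.1]
print(round(continuized_cdf(scores, probs, 2.0), 3))  # 0.5 by symmetry
```

Unlike linear interpolation of percentile ranks, this produces a smooth, strictly increasing CDF, which is what makes the equipercentile equating function well defined everywhere.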
A Testbed of Parallel Kernels for Computer Science Research
Energy Technology Data Exchange (ETDEWEB)
Bailey, David; Demmel, James; Ibrahim, Khaled; Kaiser, Alex; Koniges, Alice; Madduri, Kamesh; Shalf, John; Strohmaier, Erich; Williams, Samuel
2010-04-30
For several decades, computer scientists have sought guidance on how to evolve architectures, languages, and programming models for optimal performance, efficiency, and productivity. Unfortunately, this guidance is most often taken from the existing software/hardware ecosystem. Architects attempt to provide micro-architectural solutions to improve performance on fixed binaries. Researchers tweak compilers to improve code generation for existing architectures and implementations, and they may invent new programming models for fixed processor and memory architectures and computational algorithms. In today's rapidly evolving world of on-chip parallelism, these isolated and iterative improvements to performance may miss superior solutions in the same way gradient descent optimization techniques may get stuck in local minima. In an initial study, we have developed an alternate approach that, rather than starting with an existing hardware/software solution laced with hidden assumptions, defines the computational problems of interest and invites architects, researchers and programmers to implement novel hardware/ software co-designed solutions. Our work builds on the previous ideas of computational dwarfs, motifs, and parallel patterns by selecting a representative set of essential problems for which we provide: An algorithmic description; scalable problem definition; illustrative reference implementations; verification schemes. For simplicity, we focus initially on the computational problems of interest to the scientific computing community but proclaim the methodology (and perhaps a subset of the problems) as applicable to other communities. We intend to broaden the coverage of this problem space through stronger community involvement. Previous work has established a broad categorization of numerical methods of interest to the scientific computing, in the spirit of the NAS Benchmarks, which pioneered the basic idea of a 'pencil and paper benchmark' in the
DEFF Research Database (Denmark)
Varneskov, Rasmus T.
2014-01-01
This paper analyzes a generalized class of flat-top realized kernels for estimation of the quadratic variation spectrum, i.e. the decomposition of quadratic variation into integrated variance and jump variation, when the underlying, efficient price process is contaminated by additive noise. The additive noise consists of two orthogonal components, which allows for α-mixing dependent exogenous noise and an asymptotically non-degenerate endogenous correlation structure, respectively. Both components may exhibit polynomially decaying autocovariances. In the absence of jumps, the class of flat-top estimators is shown to be consistent, asymptotically unbiased, and mixed Gaussian at the optimal rate of convergence, n^(1/4). Exact bounds on lower-order terms are obtained using maximal inequalities, and these are used to derive a conservative, MSE-optimal flat-top shrinkage. Additionally, bounds…
A Weighted Spatial-Spectral Kernel RX Algorithm and Efficient Implementation on GPUs
Directory of Open Access Journals (Sweden)
Chunhui Zhao
2017-02-01
The kernel RX (KRX) detector proposed by Kwon and Nasrabadi exploits a kernel function to obtain a better detection performance. However, it still has two limits that can be improved. On the one hand, reasonable integration of spatial-spectral information can be used to further improve its detection accuracy. On the other hand, parallel computing can be used to reduce the processing time of available KRX detectors. Accordingly, this paper presents a novel weighted spatial-spectral kernel RX (WSSKRX) detector and its parallel implementation on graphics processing units (GPUs). The WSSKRX utilizes the spatial neighborhood resources to reconstruct the testing pixels by introducing a spectral factor and a spatial window, thereby effectively reducing the interference of background noise. Then, the kernel function is redesigned as a mapping trick in a KRX detector to implement the anomaly detection. In addition, a powerful architecture based on the GPU technique is designed to accelerate WSSKRX. To substantiate the performance of the proposed algorithm, experiments were conducted on both synthetic and real data.
Institute of Scientific and Technical Information of China (English)
XIE Shi-Peng; LUO Li-Min
2012-01-01
The authors propose a combined scatter reduction and correction method to improve image quality in cone beam computed tomography (CBCT). The scatter kernel superposition (SKS) method has been used occasionally in previous studies. However, this method differs in that a scatter detecting blocker (SDB) was used between the X-ray source and the tested object to model a self-adaptive scatter kernel. This study first evaluates the scatter kernel parameters using the SDB, and then isolates the scatter distribution based on the SKS. Image quality can be improved by removing the scatter distribution. The results show that the method can effectively reduce scatter artifacts and increase image quality. Our approach increases the image contrast and reduces the magnitude of cupping. The accuracy of the SKS technique can be significantly improved in our method by using a self-adaptive scatter kernel. This method is computationally efficient, easy to implement, and provides scatter correction using a single scan acquisition.
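A toy one-dimensional illustration of the scatter-kernel-superposition idea: estimate scatter by convolving the current primary estimate with a scatter kernel and subtract it iteratively. The kernel shape and the fixed-point iteration are illustrative assumptions, not the authors' self-adaptive kernel:

```python
import numpy as np

def sks_correct(detected, kernel, n_iter=20):
    """Toy 1-D scatter-kernel-superposition correction: iterate
    primary <- detected - kernel * primary (convolution), which converges
    when the kernel's total weight is below 1."""
    primary = detected.copy()
    for _ in range(n_iter):
        scatter = np.convolve(primary, kernel, mode="same")
        primary = detected - scatter
    return primary

x = np.linspace(-3, 3, 61)
kernel = 0.05 * np.exp(-x**2)        # broad, low-amplitude scatter kernel
true_primary = np.exp(-20 * x**2)    # narrow primary beam
detected = true_primary + np.convolve(true_primary, kernel, mode="same")
corrected = sks_correct(detected, kernel)
print(np.max(np.abs(corrected - true_primary))
      < np.max(np.abs(detected - true_primary)))  # True: scatter reduced
```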
A Weighted Spatial-Spectral Kernel RX Algorithm and Efficient Implementation on GPUs.
Zhao, Chunhui; Li, Jiawei; Meng, Meiling; Yao, Xifeng
2017-02-23
The kernel RX (KRX) detector proposed by Kwon and Nasrabadi exploits a kernel function to obtain a better detection performance. However, it still has two limits that can be improved. On the one hand, reasonable integration of spatial-spectral information can be used to further improve its detection accuracy. On the other hand, parallel computing can be used to reduce the processing time of available KRX detectors. Accordingly, this paper presents a novel weighted spatial-spectral kernel RX (WSSKRX) detector and its parallel implementation on graphics processing units (GPUs). The WSSKRX utilizes the spatial neighborhood resources to reconstruct the testing pixels by introducing a spectral factor and a spatial window, thereby effectively reducing the interference of background noise. Then, the kernel function is redesigned as a mapping trick in a KRX detector to implement the anomaly detection. In addition, a powerful architecture based on the GPU technique is designed to accelerate WSSKRX. To substantiate the performance of the proposed algorithm, experiments were conducted on both synthetic and real data.
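For orientation, a sketch of the classical (linear) RX statistic that the kernel-RX detector generalizes to feature space via a kernel function. This is the standard Mahalanobis form, not the WSSKRX algorithm itself, and the data are synthetic:

```python
import numpy as np

def rx_scores(background, pixels):
    """Classical RX anomaly statistic: Mahalanobis distance of each pixel
    from the background mean under the background covariance."""
    mu = background.mean(axis=0)
    cov = np.cov(background, rowvar=False)
    cov_inv = np.linalg.pinv(cov)
    d = pixels - mu
    return np.einsum("ij,jk,ik->i", d, cov_inv, d)

rng = np.random.default_rng(0)
background = rng.normal(0.0, 1.0, size=(500, 4))      # background spectra
pixels = np.vstack([background[:5], [[6, 6, 6, 6]]])  # 5 background + 1 anomaly
scores = rx_scores(background, pixels)
print(scores[-1] > scores[:-1].max())  # True: the anomaly scores highest
```

KRX replaces the inner products implicit in this statistic with kernel evaluations, which is what lets it capture non-Gaussian background structure.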
Intelligent Control of a Sensor-Actuator System via Kernelized Least-Squares Policy Iteration
Directory of Open Access Journals (Sweden)
Bo Liu
2012-02-01
In this paper a new framework, called Compressive Kernelized Reinforcement Learning (CKRL), for computing near-optimal policies in sequential decision making under uncertainty is proposed, incorporating non-adaptive, data-independent random projections and nonparametric Kernelized Least-Squares Policy Iteration (KLSPI). Random projections are a fast, non-adaptive dimensionality reduction framework in which high-dimensional data are projected onto a random lower-dimensional subspace via spherically random rotation and coordinate sampling. KLSPI introduces the kernel trick into the LSPI framework for reinforcement learning, often achieving faster convergence and providing automatic feature selection via various kernel sparsification approaches. In this approach, policies are computed in a low-dimensional subspace generated by projecting the high-dimensional features onto a set of random bases. We first show how random projections constitute an efficient sparsification technique and how our method often converges faster than regular LSPI, at lower computational cost. The theoretical foundation underlying this approach is a fast approximation of the singular value decomposition (SVD). Finally, simulation results on benchmark MDP domains confirm gains both in computation time and in performance in large feature spaces.
A substitute for the singular Green kernel in the Newtonian potential of celestial bodies
Huré, J.-M.; Dieckmann, A.
2012-05-01
The "point mass singularity" inherent in Newton's law for gravitation represents a major difficulty in accurately determining the potential and forces inside continuous bodies. Here we report a simple and efficient analytical method to bypass the singular Green kernel 1/|r - r'| inside the source without altering the nature of the interaction. We build an equivalent kernel made up of a "cool kernel", which is fully regular (and contains the long-range - GM/r asymptotic behavior), and the gradient of a "hyperkernel", which is also regular. Compared to the initial kernel, these two components are easily integrated over the source volume using standard numerical techniques. The demonstration is presented for three-dimensional distributions in cylindrical coordinates, which are well-suited to describing rotating bodies (stars, discs, asteroids, etc.) as commonly found in the Universe. An example of implementation is given. The case of axial symmetry is treated in detail, and the accuracy is checked by considering an exact potential/surface density pair corresponding to a flat circular disc. This framework provides new tools to keep or even improve the physical realism of models and simulations of self-gravitating systems, and represents, for some of them, a conclusive alternative to softened gravity.
Intelligent control of a sensor-actuator system via kernelized least-squares policy iteration.
Liu, Bo; Chen, Sanfeng; Li, Shuai; Liang, Yongsheng
2012-01-01
In this paper a new framework, called Compressive Kernelized Reinforcement Learning (CKRL), for computing near-optimal policies in sequential decision making under uncertainty is proposed, incorporating non-adaptive, data-independent random projections and nonparametric Kernelized Least-Squares Policy Iteration (KLSPI). Random projections are a fast, non-adaptive dimensionality reduction framework in which high-dimensional data are projected onto a random lower-dimensional subspace via spherically random rotation and coordinate sampling. KLSPI introduces the kernel trick into the LSPI framework for reinforcement learning, often achieving faster convergence and providing automatic feature selection via various kernel sparsification approaches. In this approach, policies are computed in a low-dimensional subspace generated by projecting the high-dimensional features onto a set of random bases. We first show how random projections constitute an efficient sparsification technique and how our method often converges faster than regular LSPI, at lower computational cost. The theoretical foundation underlying this approach is a fast approximation of the singular value decomposition (SVD). Finally, simulation results on benchmark MDP domains confirm gains both in computation time and in performance in large feature spaces.
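The random-projection step can be sketched in a few lines of numpy: a Gaussian projection matrix with Johnson-Lindenstrauss-style scaling approximately preserves Euclidean distances. This illustrates only the dimensionality-reduction component, not KLSPI or the policy iteration:

```python
import numpy as np

def random_project(X, k, rng):
    """Non-adaptive Gaussian random projection to k dimensions, scaled so
    that Euclidean distances are approximately preserved."""
    R = rng.normal(size=(X.shape[1], k)) / np.sqrt(k)
    return X @ R

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 1000))  # high-dimensional feature vectors
Y = random_project(X, 200, rng)
d_hi = np.linalg.norm(X[0] - X[1])
d_lo = np.linalg.norm(Y[0] - Y[1])
print(abs(d_lo - d_hi) / d_hi)  # small relative distortion
```

Because the projection is data-independent, it can be fixed once and reused as new samples arrive, which is what makes it attractive inside an iterative policy-learning loop.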
Sensitivity kernels for time-distance inversion based on the Rytov approximation
Jensen, J. M.; Pijpers, F. P.
2003-12-01
The study of the solar convection zone has been given a new impetus by what promises to be a very powerful new tool, time-distance helioseismology (Duvall et al. 1993). Using data obtained with this technique, it has become possible to perform a tomographic reconstruction of structures in the convection zone. Most of the work done on inversion of these data has relied on the ray approximation to deliver sensitivity kernels for the inversion procedures (Kosovichev 1996). Inversions using non-ray-theoretical kernels were shown by Jensen et al. (2001). In this paper we go beyond the ray approximation and derive sensitivity kernels based on the Rytov approximation and Green's function theory. We derive an expression for the sensitivity kernel using ray-based Green's functions and show how further approximations can be made to increase the computational efficiency. Appendices A-D are only available in electronic form at http://www.edpsciences.org
FRACTIONAL INTEGRALS WITH VARIABLE KERNELS ON HARDY SPACES
Institute of Scientific and Technical Information of China (English)
ZhangPu; DingYong
2003-01-01
The fractional integral operators with variable kernels are discussed. It is proved that if the kernel satisfies the Dini condition, then the fractional integral operators with variable kernels are bounded from H^p(R^n) into L^q(R^n) when 0 < p ≤ 1.
Approximating and learning by Lipschitz kernel on the sphere
Institute of Scientific and Technical Information of China (English)
CAO Fei-long; WANG Chang-miao
2014-01-01
This paper investigates some approximation properties and learning rates of the Lipschitz kernel on the sphere. A perfect convergence rate on the shifts of the Lipschitz kernel on the sphere, which is faster than O(n^(-1/2)), is obtained, where n is the number of parameters needed in the approximation. By means of the approximation, a learning rate of the regularized least squares algorithm with the Lipschitz kernel on the sphere is also deduced.
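The regularized least squares algorithm referred to above has a closed form; a sketch with a generic RBF kernel standing in for the Lipschitz kernel (the kernel choice, data, and regularization value are illustrative assumptions):

```python
import numpy as np

def krr_fit(K, y, lam):
    """Regularized least squares in an RKHS (kernel ridge regression):
    solve (K + lam * n * I) alpha = y for the coefficient vector alpha."""
    n = K.shape[0]
    return np.linalg.solve(K + lam * n * np.eye(n), y)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(40, 1))
y = np.sin(3 * X[:, 0])
rbf = lambda A, B: np.exp(-8 * (A[:, None, 0] - B[None, :, 0]) ** 2)
K = rbf(X, X)
alpha = krr_fit(K, y, lam=1e-6)
y_hat = K @ alpha
print(np.max(np.abs(y_hat - y)) < 0.05)  # small lambda: close fit on the data
```

The learning-rate analysis in the abstract concerns how the error of exactly this kind of estimator decays as the sample size grows, for a suitably chosen lambda.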
Kernel based visual tracking with scale invariant features
Institute of Scientific and Technical Information of China (English)
Risheng Han; Zhongliang Jing; Yuanxiang Li
2008-01-01
Kernel-based tracking has two disadvantages: the tracking window size cannot be adjusted efficiently, and the kernel-based color distribution may not have enough ability to discriminate the object from a cluttered background. To boost the features' discriminating ability, both scale-invariant features and kernel-based color distribution features are used as descriptors of the tracked object. The proposed algorithm can keep tracking an object of varying scale even when the surrounding background is similar to the object's appearance.
Classification of maize kernels using NIR hyperspectral imaging
DEFF Research Database (Denmark)
Williams, Paul; Kucheryavskiy, Sergey V.
2016-01-01
NIR hyperspectral imaging was evaluated to classify maize kernels of three hardness categories: hard, medium and soft. Two approaches, pixel-wise and object-wise, were investigated to group kernels according to hardness. The pixel-wise classification assigned a class to every pixel from individual… and specificity of 0.95 and 0.93). Both feature extraction methods can be recommended for classification of maize kernels on production scale.
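Object-wise grouping from pixel-wise predictions is commonly done by a majority vote over each kernel's pixels; a minimal sketch (the aggregation rule is an assumption, since the truncated abstract does not state the exact one used):

```python
import numpy as np

def object_wise(pixel_labels, object_ids):
    """Object-wise classification from pixel-wise predictions: each object
    (kernel) receives the majority vote of its pixels' class labels."""
    out = {}
    for obj in np.unique(object_ids):
        labels = pixel_labels[object_ids == obj]
        vals, counts = np.unique(labels, return_counts=True)
        out[obj] = vals[np.argmax(counts)]
    return out

pixel_labels = np.array(["hard", "hard", "soft", "hard", "soft", "soft", "soft"])
object_ids   = np.array([0, 0, 0, 0, 1, 1, 1])
print(object_wise(pixel_labels, object_ids))  # majority label per kernel
```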
Blind Identification of SIMO Wiener Systems Based on Kernel Canonical Correlation Analysis
Van Vaerenbergh, Steven; Via, Javier; Santamaria, Ignacio
2013-05-01
We consider the problem of blind identification and equalization of single-input multiple-output (SIMO) nonlinear channels. Specifically, the nonlinear model consists of multiple single-channel Wiener systems that are excited by a common input signal. The proposed approach is based on a well-known blind identification technique for linear SIMO systems. By transforming the output signals into a reproducing kernel Hilbert space (RKHS), a linear identification problem is obtained, which we propose to solve through an iterative procedure that alternates between canonical correlation analysis (CCA) to estimate the linear parts, and kernel canonical correlation (KCCA) to estimate the memoryless nonlinearities. The proposed algorithm is able to operate on systems with as few as two output channels, on relatively small data sets and on colored signals. Simulations are included to demonstrate the effectiveness of the proposed technique.
Parameter optimization in the regularized kernel minimum noise fraction transformation
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg; Vestergaard, Jacob Schack
2012-01-01
Based on the original, linear minimum noise fraction (MNF) transformation and kernel principal component analysis, a kernel version of the MNF transformation was recently introduced. Inspired by this, we here give a simple method for finding optimal parameters in a regularized version of kernel MNF analysis. We consider the model signal-to-noise ratio (SNR) as a function of the kernel parameters and the regularization parameter. In 2-4 steps of increasingly refined grid searches we find the parameters that maximize the model SNR. An example based on data from the DLR 3K camera system is given.
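The refined grid search can be sketched generically: evaluate the objective on a coarse grid, then re-grid around the best point a few times. The quadratic "SNR surface" below is a hypothetical stand-in for the model SNR as a function of the kernel and regularization parameters:

```python
import numpy as np

def refined_grid_search(f, lo, hi, steps=3, pts=11):
    """2-D grid search with successive refinement: evaluate f on a grid,
    then shrink the search box around the best point and repeat."""
    lo, hi = np.array(lo, float), np.array(hi, float)
    for _ in range(steps):
        g0 = np.linspace(lo[0], hi[0], pts)
        g1 = np.linspace(lo[1], hi[1], pts)
        vals = np.array([[f((a, b)) for b in g1] for a in g0])
        i, j = np.unravel_index(np.argmax(vals), vals.shape)
        best = np.array([g0[i], g1[j]])
        span = (hi - lo) / pts
        lo, hi = best - span, best + span
    return best, f(best)

# Toy surface peaked at (2.0, -1.0), standing in for the model SNR.
snr = lambda p: -((p[0] - 2.0) ** 2 + (p[1] + 1.0) ** 2)
best, val = refined_grid_search(snr, lo=(-5, -5), hi=(5, 5))
print(np.allclose(best, [2.0, -1.0], atol=0.05))  # True
```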
Reproducing wavelet kernel method in nonlinear system identification
Institute of Scientific and Technical Information of China (English)
WEN Xiang-jun; XU Xiao-ming; CAI Yun-ze
2008-01-01
By combining wavelet decomposition with the kernel method, a practical approach to universal multi-scale wavelet kernels constructed in a reproducing kernel Hilbert space (RKHS) is discussed, and an identification scheme using a wavelet support vector machine (WSVM) estimator is proposed for nonlinear dynamic systems. The good approximating properties of the wavelet kernel function enhance the generalization ability of the proposed method, and a comparison of numerical experimental results between the novel approach and some existing methods is encouraging.
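A common explicit wavelet kernel is the translation-invariant Morlet product kernel, a standard construction from the wavelet-SVM literature (not necessarily the exact kernel used in this paper):

```python
import numpy as np

def wavelet_kernel(X, Z, a=1.0):
    """Translation-invariant wavelet kernel built from the Morlet mother
    wavelet h(u) = cos(1.75 u) * exp(-u^2 / 2), taken as a product over
    input dimensions; a is the dilation (scale) parameter."""
    diff = (X[:, None, :] - Z[None, :, :]) / a
    return np.prod(np.cos(1.75 * diff) * np.exp(-diff ** 2 / 2), axis=-1)

X = np.random.default_rng(0).normal(size=(6, 3))
K = wavelet_kernel(X, X)
print(np.allclose(K, K.T), np.allclose(np.diag(K), 1.0))  # symmetric, K(x,x)=1
```

Evaluating the kernel at several dilation parameters a gives the multi-scale family the abstract refers to.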
Influence of wheat kernel physical properties on the pulverizing process.
Dziki, Dariusz; Cacak-Pietrzak, Grażyna; Miś, Antoni; Jończyk, Krzysztof; Gawlik-Dziki, Urszula
2014-10-01
The physical properties of wheat kernels were determined and related to pulverizing performance by correlation analysis. Nineteen samples of wheat cultivars with a similar level of protein content (11.2-12.8 % w.b.), obtained from an organic farming system, were used for analysis. The kernels (moisture content 10 % w.b.) were pulverized using a laboratory hammer mill equipped with a 1.0 mm round-hole screen. The specific grinding energy ranged from 120 kJ·kg^(-1) to 159 kJ·kg^(-1). On the basis of the data obtained, many significant correlations (p < 0.05) were found between kernel physical properties and the pulverizing process; in particular, the wheat kernel hardness index (obtained on the basis of the Single Kernel Characterization System) and vitreousness correlated significantly and positively with the grinding energy indices and the mass fraction of coarse particles (> 0.5 mm). Among the kernel mechanical properties determined on the basis of a uniaxial compression test, only the rupture force was correlated with the impact grinding results. The results also showed positive and significant relationships between kernel ash content and grinding energy requirements. On the basis of the wheat physical properties, a multiple linear regression was proposed for predicting the average particle size of the pulverized kernel.
A Kernel-based Account of Bibliometric Measures
Ito, Takahiko; Shimbo, Masashi; Kudo, Taku; Matsumoto, Yuji
The application of kernel methods to citation analysis is explored. We show that a family of kernels on graphs provides a unified perspective on three bibliometric measures that have been discussed independently: relatedness between documents, global importance of individual documents, and importance of documents relative to one or more (root) documents (relative importance). The framework provided by the kernels establishes relative importance as an intermediate between relatedness and global importance, in which the degree of 'relativity', or the bias between relatedness and importance, is naturally controlled by a parameter characterizing individual kernels in the family.
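One concrete family of this kind is the Neumann kernel, K_gamma = M (I - gamma M)^(-1) with M the co-citation matrix: gamma -> 0 recovers plain relatedness (co-citation counts), while larger gamma (below 1/rho(M), the spectral-radius bound needed for convergence) weights global importance more. A sketch on a tiny hypothetical citation graph:

```python
import numpy as np

def neumann_kernel(A, gamma):
    """Neumann kernel on a citation graph with adjacency A:
    K = M @ (I - gamma * M)^(-1), where M = A.T @ A is the co-citation
    matrix. Requires gamma below 1 / spectral_radius(M) to converge."""
    M = A.T @ A
    n = M.shape[0]
    return M @ np.linalg.inv(np.eye(n) - gamma * M)

# Tiny citation graph: A[i, j] = 1 if document i cites document j.
A = np.array([[0, 1, 1, 0],
              [0, 0, 1, 1],
              [0, 0, 0, 1],
              [0, 0, 0, 0]], float)
K0 = neumann_kernel(A, 0.0)
print(np.allclose(K0, A.T @ A))  # True: gamma = 0 gives co-citation relatedness
```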
Isolation of bacterial endophytes from germinated maize kernels.
Rijavec, Tomaz; Lapanje, Ales; Dermastia, Marina; Rupnik, Maja
2007-06-01
The germination of surface-sterilized maize kernels under aseptic conditions proved to be a suitable method for isolation of kernel-associated bacterial endophytes. Bacterial strains identified by partial 16S rRNA gene sequencing as Pantoea sp., Microbacterium sp., Frigoribacterium sp., Bacillus sp., Paenibacillus sp., and Sphingomonas sp. were isolated from kernels of 4 different maize cultivars. Genus Pantoea was associated with a specific maize cultivar. The kernels of this cultivar were often overgrown with the fungus Lecanicillium aphanocladii; however, those exhibiting Pantoea growth were never colonized with it. Furthermore, the isolated bacterium strain inhibited fungal growth in vitro.
WAVELET KERNEL SUPPORT VECTOR MACHINES FOR SPARSE APPROXIMATION
Institute of Scientific and Technical Information of China (English)
Tong Yubing; Yang Dongkai; Zhang Qishan
2006-01-01
Wavelets, a powerful tool for signal processing, can be used to approximate a target function. To enhance the sparsity of the wavelet approximation, a new algorithm is proposed using wavelet-kernel Support Vector Machines (SVMs), which can converge to the minimum error with better sparsity. Here, wavelet functions are first used to construct an admissible kernel for the SVM according to Mercer theory; the SVM with this kernel can then approximate the target function more sparsely than the wavelet approximation itself. The results obtained by our simulation experiment show the feasibility and validity of wavelet-kernel support vector machines.
AN ADAPTIVELY TRAINED KERNEL-BASED NONLINEAR REPRESENTOR FOR HANDWRITTEN DIGIT CLASSIFICATION
Institute of Scientific and Technical Information of China (English)
无
2006-01-01
In practice, retraining a trained classifier is necessary when novel data become available. This paper adopts an incremental learning procedure to adaptively train a Kernel-based Nonlinear Representor (KNR), a recently presented nonlinear classifier for optimal pattern representation, so that its generalization ability may be evaluated in time-variant situations and a sparser representation is obtained for computationally intensive tasks. The techniques are applied to handwritten digit classification to illustrate their feasibility for pattern recognition.
2006-11-01
kernel, describes each artifact developed during this process, and summarizes both the formal state machine model that underlies the Report...combines a number of well-known techniques for specifying and reasoning about security—e.g., a state machine model represented both formally and in... state machine model, using the style introduced in [21, 23]. 2. Formally express the data separation property in terms of the inputs, state variables
42 Variability Bugs in the Linux Kernel
DEFF Research Database (Denmark)
Abal, Iago; Brabrand, Claus; Wasowski, Andrzej
2014-01-01
Feature-sensitive verification pursues effective analysis of the exponentially many variants of a program family. However, researchers lack examples of concrete bugs induced by variability, occurring in real large-scale systems. Such a collection of bugs is a requirement for goal-oriented research..., serving to evaluate tool implementations of feature-sensitive analyses by testing them on real bugs. We present a qualitative study of 42 variability bugs collected from bug-fixing commits to the Linux kernel repository. We analyze each of the bugs, and record the results in a database. In addition, we...
40 Variability Bugs in the Linux Kernel
DEFF Research Database (Denmark)
Abal Rivas, Iago; Brabrand, Claus; Wasowski, Andrzej
2014-01-01
Feature-sensitive verification is a recent field that pursues the effective analysis of the exponential number of variants of a program family. Today researchers lack examples of concrete bugs induced by variability, and occurring in real large-scale software. Such a collection of bugs is a requirement for goal-oriented research, serving to evaluate tool implementations of feature-sensitive analyses by testing them on real bugs. We present a qualitative study of 40 variability bugs collected from bug-fixing commits to the Linux kernel repository. We investigate each of the 40 bugs, recording...
Application of RBAC Model in System Kernel
Directory of Open Access Journals (Sweden)
Guan Keqing
2012-11-01
With the development of ubiquitous computing, the application of embedded intelligent devices is booming, and information security faces more serious threats than before. To improve the security of an information terminal's operating system, this paper analyzes the threats to system information security that arise from abnormal operations by processes, and applies the RBAC model to the safety management mechanism of the operating system kernel. We build an access control model for system processes, propose an implementation framework, and illustrate methods for implementing the model in operating systems.
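The process-level access check described above can be sketched as follows; the class, role names, and permission strings are hypothetical illustrations, not taken from the paper:

```python
# Minimal RBAC sketch: subjects (processes) obtain permissions only through
# roles, so a permission check walks subject -> roles -> permissions.
class RBAC:
    def __init__(self):
        self.role_perms = {}      # role -> set of permissions
        self.subject_roles = {}   # subject -> set of roles

    def grant(self, role, perm):
        self.role_perms.setdefault(role, set()).add(perm)

    def assign(self, subject, role):
        self.subject_roles.setdefault(subject, set()).add(role)

    def check(self, subject, perm):
        """A subject may perform an operation only if one of its roles grants it."""
        return any(perm in self.role_perms.get(r, set())
                   for r in self.subject_roles.get(subject, set()))

ac = RBAC()
ac.grant("netd", "socket:create")          # illustrative role and permission names
ac.assign("dhcp-client", "netd")
print(ac.check("dhcp-client", "socket:create"))   # True
print(ac.check("dhcp-client", "disk:raw_write"))  # False
```

In a kernel setting the same mapping would be enforced at the system-call boundary rather than in application code.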
SVM multiuser detection based on heuristic kernel
Institute of Scientific and Technical Information of China (English)
Yang Tao; Hu Bo
2007-01-01
A support vector machine (SVM) based multiuser detection (MUD) scheme for code-division multiple-access (CDMA) systems is proposed. In this scheme, the equivalent support vector (SV) is obtained through a kernel sparsity approximation algorithm, which avoids the conventional costly quadratic programming (QP) procedure of the SVM. Besides, the coefficient of the SV is obtained by solving a generalized eigenproblem. Simulation results show that the proposed scheme has almost the same bit error rate (BER) as the standard SVM and is better than the minimum mean square error (MMSE) scheme, while having low computational complexity.
Characterization of Flour from Avocado Seed Kernel
Macey A. Mahawan; Ma. Francia N. Tenorio; Jaycel A. Gomez; Rosenda A. Bronce
2015-01-01
The study focused on the characterization of flour from avocado seed kernel. Based on the findings of the study, the percentages of crude protein, crude fiber, crude fat, total carbohydrates, ash and moisture were 7.75, 4.91, 0.71, 74.65, 2.83 and 14.05, respectively. On the other hand, the falling number was 495 seconds, while gluten was below the detection limit of the method used. Moreover, the sensory evaluation in terms of color, texture and aroma in the 0% proportion of avocado seed flour was m...
Kernel based subspace projection of near infrared hyperspectral images of maize kernels
DEFF Research Database (Denmark)
Larsen, Rasmus; Arngren, Morten; Hansen, Per Waaben;
2009-01-01
In this paper we present an exploratory analysis of hyperspectral 900-1700 nm images of maize kernels. The imaging device is a line-scanning hyperspectral camera using broadband NIR illumination. In order to explore the hyperspectral data we compare a series of subspace projection methods...
Directory of Open Access Journals (Sweden)
Xin Zhao
2017-01-01
Fungal infection in maize kernels is a major concern worldwide due to toxic metabolites such as mycotoxins, so it is necessary to develop appropriate techniques for early detection of fungal infection in maize kernels. Thirty-six sterilised maize kernels were inoculated each day with Aspergillus parasiticus from one to seven days, and seven groups (D1, D2, D3, D4, D5, D6, D7) were determined based on the incubation time. Another 36 sterilised kernels without fungal inoculation were taken as control (DC). Hyperspectral images of all kernels were acquired within the spectral range of 921-2529 nm. Background, labels and bad pixels were removed using principal component analysis (PCA) and masking. Separability computation for discrimination of fungal contamination levels indicated that a model based on data from the germ region of individual kernels performed more effectively than one based on the whole kernels. Moreover, samples with a two-day interval were separable. Thus, four groups, DC, D1-2 (consisting of D1 and D2), D3-4 (D3 and D4), and D5-7 (D5, D6, and D7), were defined for subsequent classification. Two separate sample sets were prepared to verify the influence of germ orientation on the classification model: germ up, and a 1:1 mixture of germ up and germ down. Two smoothing preprocessing methods (Savitzky-Golay smoothing and moving average smoothing) and three scatter-correction methods (normalization, standard normal variate, and multiplicative scatter correction) were compared according to the performance of a classification model built with support vector machines (SVM). The best model for kernels with germ up showed promising results, with accuracies of 97.92% and 91.67% for the calibration and validation data sets, respectively, while the accuracies of the best model for the mixed kernels were 95.83% and 84.38%. Moreover, five wavelengths (1145, 1408, 1935, 2103, and 2383 nm) were selected as the key
Regularized degenerate multi-solitons
Correa, Francisco
2016-01-01
We report complex PT-symmetric multi-soliton solutions of the Korteweg-de Vries equation that asymptotically contain one-soliton solutions, each possessing the same amount of finite real energy. We demonstrate how these solutions originate from degenerate-energy solutions of the Schrödinger equation. Technically this is achieved by applying Darboux-Crum transformations involving Jordan states with suitable regularizing shifts. Alternatively they may be constructed from a limiting process within the context of Hirota's direct method, or from a nonlinear superposition obtained from multiple Bäcklund transformations. The proposed procedure is completely generic and also applicable to other types of nonlinear integrable systems.
Degenerate Neutrinos and CP Violation
Ioannisian, A N
2003-01-01
We have studied the mixing and masses of three left-handed Majorana neutrinos in a model which assumes exactly degenerate neutrino masses at some "neutrino unification" scale. Such a simple theoretical ansatz naturally leads to quasi-degenerate neutrinos, with the neutrino mass splittings induced by renormalization effects. In this model we find that the parameters of neutrino physics (the neutrino mass spectrum, mixing angles and CP-violating phases) are strongly correlated with one another. From these correlations we obtain strong bounds on the parameters, which could be checked in oscillation experiments.
Radiative seesaw and degenerate neutrinos
Bajc, B; Bajc, Borut; Senjanovic, Goran
2005-01-01
The radiative see-saw mechanism of Witten generates the right-handed neutrino masses in SO(10) with the spinorial 16_H Higgs field. We study here analytically the 2nd and 3rd generations for the minimal Yukawa structure containing the 10_H and 120_H Higgs representations. In the approximation of small 2nd generation masses and gauge loop domination we find the following results: (1) b-tau unification, (2) natural coexistence between large theta_l and small theta_q, (3) degenerate neutrinos.
Ultra-low frequency shock dynamics in degenerate relativistic plasmas
Islam, S.; Sultana, S.; Mamun, A. A.
2017-09-01
A degenerate relativistic three-component plasma model is proposed for ultra-low frequency shock dynamics. A reductive perturbation technique is adopted, leading to Burgers' nonlinear partial differential equation. The properties of the shock waves are analyzed via the stationary shock wave solution for different plasma configuration parameters. The role of different intrinsic plasma parameters, especially the relativistic effects on the linear wave properties and also on the shock dynamics, is briefly discussed.
Ultrafast Degenerate Transient Lens Spectroscopy in Semiconductor Nanostructures
Directory of Open Access Journals (Sweden)
Leontyev A.V.
2015-01-01
We report the non-resonant excitation and probing of the nonlinear refractive index change in bulk semiconductors and semiconductor quantum dots through degenerate transient lens spectroscopy. The signal oscillates at the center laser field frequency, and its envelope in quantum dots is distinctly different from that in the bulk sample. We discuss the applicability of this technique for probing the polarization state in semiconductor media with femtosecond temporal resolution.
A Fast Multiple-Kernel Method With Applications to Detect Gene-Environment Interaction.
Marceau, Rachel; Lu, Wenbin; Holloway, Shannon; Sale, Michèle M; Worrall, Bradford B; Williams, Stephen R; Hsu, Fang-Chi; Tzeng, Jung-Ying
2015-09-01
Kernel machine (KM) models are a powerful tool for exploring associations between sets of genetic variants and complex traits. Although most KM methods use a single kernel function to assess the marginal effect of a variable set, KM analyses involving multiple kernels have become increasingly popular. Multikernel analysis allows researchers to study more complex problems, such as assessing gene-gene or gene-environment interactions, incorporating variance-component based methods for population substructure into rare-variant association testing, and assessing the conditional effects of a variable set adjusting for other variable sets. The KM framework is robust, powerful, and provides efficient dimension reduction for multifactor analyses, but requires the estimation of high-dimensional nuisance parameters. Traditional estimation techniques, including regularization and the expectation-maximization (EM) algorithm, have a large computational cost and are not scalable to the large sample sizes needed for rare-variant analysis. Therefore, in the context of gene-environment interaction, we propose a computationally efficient and statistically rigorous fastKM algorithm for multikernel analysis that is based on a low-rank approximation to the nuisance-effect kernel matrices. Our algorithm is applicable to various trait types (e.g., continuous, binary, and survival traits) and can be implemented using any existing single-kernel analysis software. Through extensive simulation studies, we show that our algorithm has similar performance to an EM-based KM approach for quantitative traits while running much faster. We also apply our method to the Vitamin Intervention for Stroke Prevention (VISP) clinical trial, examining gene-by-vitamin effects on recurrent stroke risk and gene-by-age effects on change in homocysteine level.
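The core computational idea, compressing a nuisance kernel matrix to low rank, can be sketched with a plain truncated eigendecomposition; this is a generic illustration, not the paper's specific fastKM construction:

```python
import numpy as np

def low_rank_kernel(K, r):
    """Rank-r approximation K ~= U_r diag(l_r) U_r^T from the top-r eigenpairs.
    Downstream solves can then work with the n x r factor instead of the
    full n x n kernel matrix."""
    l, U = np.linalg.eigh(K)                  # eigenvalues in ascending order
    idx = np.argsort(l)[::-1][:r]             # keep the r largest
    return U[:, idx], l[idx]

# A kernel matrix with exact rank 5 (PSD by construction), to make the
# quality of the truncation easy to see.
rng = np.random.default_rng(0)
G = rng.standard_normal((200, 5))
K = G @ G.T
U, l = low_rank_kernel(K, 5)
K_hat = U @ np.diag(l) @ U.T
print("relative error:", np.linalg.norm(K - K_hat) / np.linalg.norm(K))
```

For a genuinely full-rank kernel the same call trades a controlled approximation error for the large speedup in fitting the variance components.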
Verification of helical tomotherapy delivery using autoassociative kernel regression.
Seibert, Rebecca M; Ramsey, Chester R; Garvey, Dustin R; Hines, J Wesley; Robison, Ben H; Outten, Samuel S
2007-08-01
Quality assurance (QA) is a topic of major concern in the field of intensity modulated radiation therapy (IMRT). The standard of practice for IMRT is to perform QA testing for individual patients to verify that the dose distribution will be delivered to the patient. The purpose of this study was to develop a new technique that could eventually be used to automatically evaluate helical tomotherapy treatments during delivery using exit detector data. This technique uses an autoassociative kernel regression (AAKR) model to detect errors in tomotherapy delivery. AAKR is a novel nonparametric model that is known to predict a group of correct sensor values when supplied a group of sensor values that is usually corrupted or contains faults such as machine failure. This modeling scheme is especially suited to the problem of monitoring the fluence values found in the exit detector data because it is able to learn the complex detector data relationships. This scheme still applies when detector data are summed over many frames with a low temporal resolution and a variable beam attenuation resulting from patient movement. Delivery sequences from three archived patients (prostate, lung, and head and neck) were used in this study. Each delivery sequence was modified by reducing the opening time for random individual multileaf collimator (MLC) leaves by random amounts. The erroneous and error-free treatments were delivered with different phantoms in the path of the beam. Multiple AAKR models were developed and tested by the investigators using combinations of the stored exit detector data sets from each delivery. The models proved robust and were able to predict the correct or error-free values for a projection in which a single MLC leaf decreased its opening time by less than 10 msec. The model also was able to determine machine output errors. The average uncertainty value for the unfaulted projections ranged from 0.4% to 1.8% of the detector
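The AAKR prediction step itself is simple to sketch: each query vector is replaced by a kernel-weighted average of fault-free memory vectors, so a faulted channel is pulled back toward its expected value. The three-channel "detector" signals below are an illustrative stand-in, not real tomotherapy data:

```python
import numpy as np

def aakr_predict(memory, query, h=0.3):
    """Autoassociative kernel regression: return, for each query vector, the
    Gaussian-kernel weighted average of the fault-free memory vectors -- the
    'correct' sensor values the corrupted input should have had."""
    d2 = ((query[:, None, :] - memory[None, :, :]) ** 2).sum(-1)  # squared distances
    w = np.exp(-d2 / (2.0 * h ** 2))
    w /= w.sum(axis=1, keepdims=True)
    return w @ memory

# Fault-free exemplars: three correlated channels along a delivery sequence.
t = np.linspace(0, 2 * np.pi, 300)
memory = np.c_[np.sin(t), np.cos(t), np.sin(2 * t)]

fault = memory[:5].copy()
fault[:, 0] += 0.5          # a stuck-leaf-style offset in one channel only
corrected = aakr_predict(memory, fault, h=0.3)
print("max residual before:", np.abs(fault - memory[:5]).max())
print("max residual after :", np.abs(corrected - memory[:5]).max())
```

The healthy channels anchor the kernel weights, which is why the correction works even though one channel is corrupted.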
Abdelfattah, Ahmad
2015-01-15
High performance computing (HPC) platforms are evolving to more heterogeneous configurations to support the workloads of various applications. The current hardware landscape is composed of traditional multicore CPUs equipped with hardware accelerators that can handle high levels of parallelism. Graphical Processing Units (GPUs) are popular high performance hardware accelerators in modern supercomputers. GPU programming has a different model than that for CPUs, which means that many numerical kernels have to be redesigned and optimized specifically for this architecture. GPUs usually outperform multicore CPUs in some compute intensive and massively parallel applications that have regular processing patterns. However, most scientific applications rely on crucial memory-bound kernels and may witness bottlenecks due to the overhead of the memory bus latency. They can still take advantage of the GPU compute power capabilities, provided that an efficient architecture-aware design is achieved. This dissertation presents a uniform design strategy for optimizing critical memory-bound kernels on GPUs. Based on hierarchical register blocking, double buffering and latency hiding techniques, this strategy leverages the performance of a wide range of standard numerical kernels found in dense and sparse linear algebra libraries. The work presented here focuses on matrix-vector multiplication kernels (MVM) as representative and most important memory-bound operations in this context. Each kernel inherits the benefits of the proposed strategies. By exposing a proper set of tuning parameters, the strategy is flexible enough to suit different types of matrices, ranging from large dense matrices, to sparse matrices with dense block structures, while high performance is maintained. Furthermore, the tuning parameters are used to maintain the relative performance across different GPU architectures. Multi-GPU acceleration is proposed to scale the performance on several devices. The
Dougherty, Andrew W.
Metal oxides are a staple of the sensor industry. The combination of their sensitivity to a number of gases, and the electrical nature of their sensing mechanism, makes them particularly attractive in solid state devices. The high temperature stability of the ceramic material also makes them ideal for detecting combustion byproducts where exhaust temperatures can be high. However, problems do exist with metal oxide sensors. They are not very selective, as they all tend to be sensitive to a number of reduction and oxidation reactions on the oxide's surface. This makes arrays with large numbers of sensors interesting to study as a method for introducing orthogonality to the system. Also, the sensors tend to suffer from long term drift for a number of reasons. In this thesis I will develop a system for intelligently modeling metal oxide sensors and determining their suitability for use in large arrays designed to analyze exhaust gas streams. It will introduce prior knowledge of the metal oxide sensors' response mechanisms in order to produce a response function for each sensor from sparse training data. The system will use the same technique to model and remove any long term drift from the sensor response. It will also provide an efficient means for determining the orthogonality of the sensors, to determine whether they are useful in gas sensing arrays. The system is based on least squares support vector regression using the reciprocal kernel. The reciprocal kernel is introduced along with a method of optimizing the free parameters of the reciprocal kernel support vector machine. The reciprocal kernel is shown to be simpler and to perform better than an earlier kernel, the modified reciprocal kernel. Least squares support vector regression is chosen as it uses all of the training points, and an emphasis was placed throughout this research on extracting the maximum information from very sparse data. The reciprocal kernel is shown to be effective in modeling the sensor
Kernel learning at the first level of inference.
Cawley, Gavin C; Talbot, Nicola L C
2014-05-01
Kernel learning methods, whether Bayesian or frequentist, typically involve multiple levels of inference, with the coefficients of the kernel expansion being determined at the first level and the kernel and regularisation parameters carefully tuned at the second level, a process known as model selection. Model selection for kernel machines is commonly performed via optimisation of a suitable model selection criterion, often based on cross-validation or theoretical performance bounds. However, if there are a large number of kernel parameters, as for instance in the case of automatic relevance determination (ARD), there is a substantial risk of over-fitting the model selection criterion, resulting in poor generalisation performance. In this paper we investigate the possibility of learning the kernel, for the Least-Squares Support Vector Machine (LS-SVM) classifier, at the first level of inference, i.e. parameter optimisation. The kernel parameters and the coefficients of the kernel expansion are jointly optimised at the first level of inference, minimising a training criterion with an additional regularisation term acting on the kernel parameters. The key advantage of this approach is that the values of only two regularisation parameters need be determined in model selection, substantially alleviating the problem of over-fitting the model selection criterion. The benefits of this approach are demonstrated using a suite of synthetic and real-world binary classification benchmark problems, where kernel learning at the first level of inference is shown to be statistically superior to the conventional approach, improves on our previous work (Cawley and Talbot, 2007) and is competitive with Multiple Kernel Learning approaches, but with reduced computational expense.
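A toy sketch of the first-level idea: the expansion coefficients and a single RBF width parameter are optimised jointly on one regularised training criterion, rather than tuning the width at a separate model-selection level. The squared loss, regularisers, and optimiser below are illustrative assumptions, not the paper's exact criterion:

```python
import numpy as np
from scipy.optimize import minimize

# Joint first-level optimisation of kernel-expansion coefficients alpha and an
# RBF width parameter log_gamma for an LS-SVM-style classifier.
rng = np.random.default_rng(0)
X = rng.standard_normal((60, 2))
y = np.sign(X[:, 0] * X[:, 1] + 0.1 * rng.standard_normal(60))

def gram(log_gamma):
    d2 = ((X[:, None] - X[None, :]) ** 2).sum(-1)
    return np.exp(-np.exp(log_gamma) * d2)

def objective(theta, lam=1e-2, mu=1e-2):
    alpha, log_gamma = theta[:-1], theta[-1]
    K = gram(log_gamma)
    f = K @ alpha
    return (((f - y) ** 2).sum()        # LS-SVM-style squared loss
            + lam * alpha @ K @ alpha   # regulariser on the function norm
            + mu * log_gamma ** 2)      # regulariser on the kernel parameter

theta0 = np.r_[np.zeros(60), 0.0]
res = minimize(objective, theta0, method="L-BFGS-B")
alpha, log_gamma = res.x[:-1], res.x[-1]
acc = (np.sign(gram(log_gamma) @ alpha) == y).mean()
print("training accuracy: %.2f, learned gamma: %.3f" % (acc, np.exp(log_gamma)))
```

Only the two regularisation weights (here lam and mu) would remain to be set by model selection, which is exactly the point of the approach.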
Macular Degeneration Prevention and Risk Factors
[Pathogenesis of age-related macular degeneration].
Kaarniranta, Kai; Seitsonen, Sanna; Paimela, Tuomas; Meri, Seppo; Immonen, Ilkka
2009-01-01
Age-related macular degeneration is a multiform disease of the macula, the region responsible for detailed central vision. In recent years much new knowledge about the pathogenesis of this disease has been obtained, and the treatment of exudative macular degeneration has greatly progressed. The number of patients with age-related macular degeneration will multiply in the coming decades, yet knowledge of the developmental mechanisms of macular degeneration that could be targeted by therapeutic measures remains insufficient. Central underlying factors are genetic inheritance, exposure of the retina to chronic oxidative stress, and the accumulation of inflammation-inducing harmful proteins inside or outside of retinal cells.
Structure of Degenerate Block Algebras
Institute of Scientific and Technical Information of China (English)
Linsheng Zhu; Daoji Meng
2003-01-01
Given a non-trivial torsion-free abelian group (A, +, 0), a field F of characteristic 0, and a non-degenerate bi-additive skew-symmetric map φ : A × A → F, we define a Lie algebra ∑ = ∑(A, φ) over F with basis {e_x | x ∈ A \ {0}} and Lie product [e_x, e_y] = φ(x, y) e_{x+y}. We show that ∑ is endowed uniquely with a non-degenerate symmetric invariant bilinear form, and that the derivation algebra Der ∑ of ∑ is a complete Lie algebra. We describe the double extension D(∑, T) of ∑ by T, where T is spanned by the locally finite derivations of ∑, and determine the second cohomology group H²(D(∑, T), F) using anti-derivations related to the form on D(∑, T). Finally, we compute the second Leibniz cohomology groups HL²(∑, F) and HL²(D(∑, T), F).
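The stated Lie product can be checked concretely. The snippet below verifies that the coefficient of e_{x+y+z} in the Jacobi identity vanishes for the bracket [e_x, e_y] = φ(x, y) e_{x+y}, taking A = Z² and φ the determinant form as one illustrative bi-additive skew-symmetric choice (the abstract's A and φ are general):

```python
from itertools import product

# A = Z^2 with phi((x1,x2),(y1,y2)) = x1*y2 - x2*y1, which is bi-additive,
# skew-symmetric, and non-degenerate.
def phi(x, y):
    return x[0] * y[1] - x[1] * y[0]

def add(x, y):
    return (x[0] + y[0], x[1] + y[1])

def jacobi_defect(x, y, z):
    # Coefficient of e_{x+y+z} in [[e_x,e_y],e_z] + [[e_y,e_z],e_x] + [[e_z,e_x],e_y]
    # for the bracket [e_x, e_y] = phi(x, y) e_{x+y}.
    return (phi(x, y) * phi(add(x, y), z)
            + phi(y, z) * phi(add(y, z), x)
            + phi(z, x) * phi(add(z, x), y))

pts = [p for p in product(range(-2, 3), repeat=2) if p != (0, 0)]
defects = [jacobi_defect(x, y, z) for x in pts for y in pts for z in pts]
print("max Jacobi defect:", max(map(abs, defects)))  # 0: the bracket closes
```

The defect vanishes identically for any bi-additive skew-symmetric φ, since expanding φ(x+y, z) = φ(x, z) + φ(y, z) makes the six resulting terms cancel in pairs.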
Generalized Langevin equation with tempered memory kernel
Liemert, André; Sandev, Trifce; Kantz, Holger
2017-01-01
We study a generalized Langevin equation for a free particle in the presence of a truncated power-law and Mittag-Leffler memory kernel. It is shown that in the presence of truncation, the particle turns from subdiffusive behavior in the short time limit to normal diffusion in the long time limit. The case of the harmonic oscillator is considered as well, and the relaxation functions and the normalized displacement correlation function are represented in exact form. By considering an external time-dependent periodic force we obtain resonant behavior even in the case of a free particle, due to the influence of the environment on the particle's movement. Additionally, the double-peak phenomenon in the imaginary part of the complex susceptibility is observed. The truncation parameter is found to have a strong influence on the behavior of these quantities, and it is shown how the truncation parameter changes the critical frequencies. The normalized displacement correlation function for a fractional generalized Langevin equation is investigated as well. All results are exact and given in terms of the three-parameter Mittag-Leffler function and the Prabhakar generalized integral operator, whose kernel contains a three-parameter Mittag-Leffler function. Such truncated Langevin dynamics can be highly relevant for describing the lateral diffusion of lipids and proteins in cell membranes.
Pareto-path multitask multiple kernel learning.
Li, Cong; Georgiopoulos, Michael; Anagnostopoulos, Georgios C
2015-01-01
A traditional and intuitively appealing Multitask Multiple Kernel Learning (MT-MKL) method is to optimize the sum (thus, the average) of objective functions with (partially) shared kernel function, which allows information sharing among the tasks. We point out that the obtained solution corresponds to a single point on the Pareto Front (PF) of a multiobjective optimization problem, which considers the concurrent optimization of all task objectives involved in the Multitask Learning (MTL) problem. Motivated by this last observation and arguing that the former approach is heuristic, we propose a novel support vector machine MT-MKL framework that considers an implicitly defined set of conic combinations of task objectives. We show that solving our framework produces solutions along a path on the aforementioned PF and that it subsumes the optimization of the average of objective functions as a special case. Using the algorithms we derived, we demonstrate through a series of experimental results that the framework is capable of achieving a better classification performance, when compared with other similar MTL approaches.
The Kernel Estimation in Biosystems Engineering
Directory of Open Access Journals (Sweden)
Esperanza Ayuga Téllez
2008-04-01
In many fields of biosystems engineering it is common to find works in which the statistical information analysed violates the basic hypotheses required by conventional forecasting methods. For those situations it is necessary to find alternative methods that allow statistical analysis despite those infringements. Non-parametric function estimation includes methods that fit a target function locally, using data from a small neighbourhood of the point. Weak assumptions, such as continuity and differentiability of the target function, are used rather than an "a priori" assumption about the global shape of the target function (e.g., linear or quadratic). In this paper a few basic decision rules are enunciated for the application of non-parametric estimation methods. These statistical rules are the first step toward building a user-method interface for the consistent application of kernel estimation by non-expert users. To this end, univariate and multivariate estimation methods and density functions were analysed, as well as regression estimators. In some cases, models to be applied in different situations, based on simulations, were defined. Different biosystems engineering applications of kernel estimation are also analysed in this review.
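A minimal example of the kind of local kernel estimator discussed, here a Nadaraya-Watson regression estimator with a Gaussian kernel; the data, target function, and bandwidth are illustrative:

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_eval, h):
    """Gaussian-kernel regression estimator: a locally weighted mean of y,
    requiring only weak smoothness assumptions on the target function
    rather than a global parametric shape."""
    w = np.exp(-0.5 * ((x_eval[:, None] - x_train[None, :]) / h) ** 2)
    return (w * y_train).sum(axis=1) / w.sum(axis=1)

# Noisy non-linear response, the kind of data a global linear model would mishandle.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 10, 300))
y = np.log1p(x) + 0.1 * rng.standard_normal(300)

xe = np.linspace(1, 9, 50)
y_hat = nadaraya_watson(x, y, xe, h=0.5)
print("max abs error vs. true curve:", np.abs(y_hat - np.log1p(xe)).max())
```

The bandwidth h plays the role the abstract's decision rules are meant to guide: too small overfits the noise, too large oversmooths the local structure.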
Scientific Computing Kernels on the Cell Processor
Energy Technology Data Exchange (ETDEWEB)
Williams, Samuel W.; Shalf, John; Oliker, Leonid; Kamil, Shoaib; Husbands, Parry; Yelick, Katherine
2007-04-04
The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. As a result, the high performance computing community is examining alternative architectures that address the limitations of modern cache-based designs. In this work, we examine the potential of using the recently-released STI Cell processor as a building block for future high-end computing systems. Our work contains several novel contributions. First, we introduce a performance model for Cell and apply it to several key scientific computing kernels: dense matrix multiply, sparse matrix vector multiply, stencil computations, and 1D/2D FFTs. The difficulty of programming Cell, which requires assembly level intrinsics for the best performance, makes this model useful as an initial step in algorithm design and evaluation. Next, we validate the accuracy of our model by comparing results against published hardware results, as well as our own implementations on a 3.2GHz Cell blade. Additionally, we compare Cell performance to benchmarks run on leading superscalar (AMD Opteron), VLIW (Intel Itanium2), and vector (Cray X1E) architectures. Our work also explores several different mappings of the kernels and demonstrates a simple and effective programming model for Cell's unique architecture. Finally, we propose modest microarchitectural modifications that could significantly increase the efficiency of double-precision calculations. Overall results demonstrate the tremendous potential of the Cell architecture for scientific computations in terms of both raw performance and power efficiency.
Khamatnurova, M. Y.; Gribanov, K. G.
2015-11-01
The selection of the Levenberg-Marquardt method parameter for methane vertical profile retrieval from IASI/METOP spectra is presented. A modified technique for the iterative calculation of averaging kernels and a posteriori errors for every spectrum is suggested. The method known from the literature is extended to the case in which a priori statistics for methane vertical profiles are absent. Software for the massive processing of IASI spectra is developed. The effect of LM parameter selection on the averaging kernel norm and the a posteriori errors is illustrated. NCEP reanalysis data provided by ESRL (NOAA, Boulder, USA) were taken as the initial guess. Surface temperature and the vertical profiles of temperature and humidity are retrieved before the methane vertical profile retrieval.
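The Levenberg-Marquardt iteration whose damping parameter is being selected can be sketched on a toy forward model; the exponential model and all parameter values below are illustrative stand-ins, not the IASI radiative-transfer retrieval:

```python
import numpy as np

def levenberg_marquardt(resid, jac, p0, lam=1e-2, n_iter=100):
    """Basic LM iteration: p <- p - (J^T J + lam * diag(J^T J))^-1 J^T r.
    lam interpolates between Gauss-Newton steps (small lam) and damped
    gradient steps (large lam); its selection is the tuning problem above."""
    p = np.asarray(p0, float)
    for _ in range(n_iter):
        r, J = resid(p), jac(p)
        A = J.T @ J
        step = np.linalg.solve(A + lam * np.diag(np.diag(A)), J.T @ r)
        p = p - step
    return p

# Toy 'retrieval': fit y = a * exp(-b x) to noiseless synthetic observations.
x = np.linspace(0, 4, 40)
a_true, b_true = 2.0, 0.7
y = a_true * np.exp(-b_true * x)

resid = lambda p: p[0] * np.exp(-p[1] * x) - y
jac = lambda p: np.c_[np.exp(-p[1] * x), -p[0] * x * np.exp(-p[1] * x)]
p = levenberg_marquardt(resid, jac, p0=[1.5, 0.5], lam=1e-2)
print("retrieved parameters:", np.round(p, 4))
```

In a real retrieval lam also shapes the averaging kernels: heavier damping pulls the solution toward the initial guess and broadens the kernels, which is the trade-off the abstract quantifies.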
Scheduler Activations on BSD: Sharing Thread Management Between Kernel and Application
Small, Christopher A.; Seltzer, Margo I.
1995-01-01
There are two commonly used thread models: kernel level threads and user level threads. Kernel level threads suffer from the cost of frequent user-kernel domain crossings and fixed kernel scheduling priorities. User level threads are not integrated with the kernel, blocking all threads whenever one thread is blocked. The Scheduler Activations model, proposed by Anderson et al. [ANDE91], combines kernel CPU allocation decisions with application control over thread scheduling. This paper discu...
A multi-scale kernel bundle for LDDMM
DEFF Research Database (Denmark)
Sommer, Stefan Horst; Nielsen, Mads; Lauze, Francois Bernard
2011-01-01
The Large Deformation Diffeomorphic Metric Mapping framework constitutes a widely used and mathematically well-founded setup for registration in medical imaging. At its heart lies the notion of the regularization kernel, and the choice of kernel greatly affects the results of registrations...
THE COLLOCATION METHODS FOR SINGULAR INTEGRAL EQUATIONS WITH CAUCHY KERNELS
Institute of Scientific and Technical Information of China (English)
无
2000-01-01
This paper applies the singular integral operators, singular quadrature operators and discretization matrices associated with singular integral equations with Cauchy kernels, which are established in [1], to give a unified framework for various collocation methods for the numerical solution of singular integral equations with Cauchy kernels. Under this framework, the coincidence of the direct quadrature method and the indirect quadrature method is very simple and obvious.
Extracting Feature Model Changes from the Linux Kernel Using FMDiff
Dintzner, N.J.R.; Van Deursen, A.; Pinzger, M.
2014-01-01
The Linux kernel feature model has been studied as an example of large scale evolving feature model and yet details of its evolution are not known. We present here a classification of feature changes occurring on the Linux kernel feature model, as well as a tool, FMDiff, designed to automatically ex
KERNEL IDEALS AND CONGRUENCES ON MS-ALGEBRAS
Institute of Scientific and Technical Information of China (English)
Luo Congwen; Zeng Yanlu
2006-01-01
In this article, the authors describe the largest congruence induced by a kernel ideal of an MS-algebra and characterize those MS-algebras on which all the congruences are in a one-to-one correspondence with the kernel ideals.
Integral Transform Methods: A Critical Review of Various Kernels
Orlandini, Giuseppina; Turro, Francesco
2017-03-01
Some general remarks about integral transform approaches to response functions are made. Their advantage for calculating cross sections at energies in the continuum is stressed. In particular we discuss the class of kernels that allow calculations of the transform by matrix diagonalization. A particular set of such kernels, namely the wavelets, is tested in a model study.
Resolvent kernel for the Kohn Laplacian on Heisenberg groups
Directory of Open Access Journals (Sweden)
Neur Eddine Askour
2002-07-01
Full Text Available We present a formula that relates the Kohn Laplacian on Heisenberg groups and the magnetic Laplacian. Then we obtain the resolvent kernel for the Kohn Laplacian and find its spectral density. We conclude by obtaining the Green kernel for fractional powers of the Kohn Laplacian.
ON APPROXIMATION BY SPHERICAL REPRODUCING KERNEL HILBERT SPACES
Institute of Scientific and Technical Information of China (English)
[No author listed]
2007-01-01
The spherical approximation between two nested reproducing kernel Hilbert spaces generated from different smooth kernels is investigated. It is shown that the functions of a space can be approximated by those of the subspace with better smoothness. Furthermore, an upper bound on the approximation error is given.
End-use quality of soft kernel durum wheat
Kernel texture is a major determinant of end-use quality of wheat. Durum wheat is known for its very hard texture, which influences how it is milled and for what products it is well suited. We developed soft kernel durum wheat lines via Ph1b-mediated homoeologous recombination with Dr. Leonard Joppa...
Optimal Bandwidth Selection in Observed-Score Kernel Equating
Häggström, Jenny; Wiberg, Marie
2014-01-01
The selection of bandwidth in kernel equating is important because it has a direct impact on the equated test scores. The aim of this article is to examine the use of double smoothing when selecting bandwidths in kernel equating and to compare double smoothing with the commonly used penalty method. This comparison was made using both an equivalent…
Comparison of Kernel Equating and Item Response Theory Equating Methods
Meng, Yu
2012-01-01
The kernel method of test equating is a unified approach to test equating with some advantages over traditional equating methods. Therefore, it is important to evaluate in a comprehensive way the usefulness and appropriateness of the Kernel equating (KE) method, as well as its advantages and disadvantages compared with several popular item…
Indigenous Methods in Preserving Bush Mango Kernels in Cameroon
Directory of Open Access Journals (Sweden)
Zac Tchoundjeu
2005-01-01
Full Text Available Traditional practices for preserving Irvingia wombolu and Irvingia gabonensis (bush mango) kernels were assessed in a survey covering twelve villages (Dongo, Bouno, Gribi [East], Elig-Nkouma, Nkom I, Ngoumou [Centre], Bidjap, Nko'ovos, Ondodo [South], Besong-Abang, Ossing and Kembong [Southwest]) in the humid lowland forest zone of Cameroon. All the interviewed households that own trees of these species were found to preserve kernels in periods of abundance, except in Elig-Nkouma (87.5%). Eighty-nine and 85% did so in periods of scarcity for I. wombolu and I. gabonensis, respectively. Seventeen and twenty-nine kernel preservation practices were recorded for I. wombolu and I. gabonensis, respectively. Most were based on continuous heating of the kernels or kernel by-products (cakes). The most common practice involved keeping the sun-dried kernels in a plastic bag on a bamboo rack hung above the fireplace in the kitchen. Seventy-eight percent of interviewed households reported preserving I. wombolu kernels for less than one year, while 22% preserved them for more than one year, with 1.9% for two years, the normal length of the off-season period for trees in the wild. Cakes wrapped with leaves and kept on a bamboo rack hung over the fireplace were reported by households in the East and South provinces to store Irvingia gabonensis longer (more than one year). Further studies on the utilization of heat for preserving and canning bush mango kernels are recommended.
THE HEAT KERNEL ON THE CAYLEY HEISENBERG GROUP
Institute of Scientific and Technical Information of China (English)
Luan Jingwen; Zhu Fuliu
2005-01-01
The authors obtain an explicit expression of the heat kernel for the Cayley Heisenberg group of order n by using the stochastic integral method of Gaveau. Apart from the standard Heisenberg group and the quaternionic Heisenberg group, this is the only nilpotent Lie group on which an explicit formula for the heat kernel has been obtained.
Oven-drying reduces ruminal starch degradation in maize kernels
Ali, M.; Cone, J.W.; Hendriks, W.H.; Struik, P.C.
2014-01-01
The degradation of starch largely determines the feeding value of maize (Zea mays L.) for dairy cows. Normally, maize kernels are dried and ground before chemical analysis and determining degradation characteristics, whereas cows eat and digest fresh material. Drying the moist maize kernels (consist
Open Problem: Kernel methods on manifolds and metric spaces
DEFF Research Database (Denmark)
Feragen, Aasa; Hauberg, Søren
2016-01-01
Radial kernels are well-suited for machine learning over general geodesic metric spaces, where pairwise distances are often the only computable quantity available. We have recently shown that geodesic exponential kernels are only positive definite for all bandwidths when the input space has strong...
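The positive-definiteness question raised in this abstract is easy to probe numerically. The sketch below (not from the paper; data, dimensions, and bandwidths are illustrative) checks the eigenvalues of a Gaussian (geodesic exponential) kernel matrix on Euclidean data, where positive definiteness holds for every bandwidth:

```python
import numpy as np

def gaussian_kernel_matrix(X, bandwidth):
    """Gaussian (exponential) kernel built from Euclidean pairwise distances."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq / (2.0 * bandwidth ** 2))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
for bw in (0.1, 1.0, 10.0):
    K = gaussian_kernel_matrix(X, bw)
    min_eig = np.linalg.eigvalsh(K).min()
    # In Euclidean space the Gaussian kernel is positive definite for every bandwidth
    assert min_eig > -1e-8
```

On a general geodesic metric space the same check can fail for some bandwidths, which is the failure mode the abstract refers to.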
Efficient Kernel-based 2DPCA for Smile Stages Recognition
Directory of Open Access Journals (Sweden)
Fitri Damayanti
2012-03-01
Full Text Available Recently, an approach called two-dimensional principal component analysis (2DPCA) has been proposed for smile stages representation and recognition. The essence of 2DPCA is that it computes the eigenvectors of the so-called image covariance matrix without matrix-to-vector conversion, so the image covariance matrix is much smaller and easier to evaluate, the computational cost is reduced, and the performance is improved compared with traditional PCA. In an effort to improve the performance of smile stages recognition, in this paper we propose an efficient Kernel based 2DPCA concept. The kernelization of 2DPCA makes it possible to capture nonlinear structure in the input data. This paper compares standard Kernel based 2DPCA and efficient Kernel based 2DPCA for smile stages recognition. The results of experiments show that Kernel based 2DPCA achieves better performance than the other approaches, while efficient Kernel based 2DPCA speeds up the training procedure of standard Kernel based 2DPCA, achieving much greater computational efficiency and remarkably lower memory consumption than the standard Kernel based 2DPCA.
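The (linear) 2DPCA step described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code; the array sizes and component count are arbitrary:

```python
import numpy as np

def image_covariance(images):
    """2DPCA image covariance: G = (1/M) * sum_i (A_i - Abar)^T (A_i - Abar)."""
    centered = images - images.mean(axis=0)
    # Sum of A^T A over samples: a (cols x cols) matrix, no matrix-to-vector conversion
    return np.einsum('nij,nik->jk', centered, centered) / len(images)

def two_dpca_project(images, n_components):
    cov = image_covariance(images)
    vecs = np.linalg.eigh(cov)[1][:, ::-1]   # eigenvectors, descending eigenvalue order
    W = vecs[:, :n_components]               # top projection axes
    return images @ W                        # each image projected row-wise

rng = np.random.default_rng(1)
imgs = rng.normal(size=(20, 16, 12))         # 20 synthetic images of 16 x 12 pixels
feats = two_dpca_project(imgs, 4)
assert feats.shape == (20, 16, 4)
```

The kernel variant replaces the row vectors with implicit feature-space images; the covariance stays (cols x cols), which is the size advantage the abstract describes.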
Kernel maximum autocorrelation factor and minimum noise fraction transformations
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg
2010-01-01
) dimensional feature space via the kernel function and then performing a linear analysis in that space. Three examples show the very successful application of kernel MAF/MNF analysis to 1) change detection in DLR 3K camera data recorded 0.7 seconds apart over a busy motorway, 2) change detection...
Lp-boundedness of flag kernels on homogeneous groups
Glowacki, Pawel
2010-01-01
We prove that the flag kernel singular integral operators of Nagel-Ricci-Stein on a homogeneous group are bounded on the Lp spaces. The gradation associated with the kernels is the natural gradation of the underlying Lie algebra. Our main tools are the Littlewood-Paley theory and a symbolic calculus combined in the spirit of Duoandikoetxea and Rubio de Francia.
Gaussian kernel operators on white noise functional spaces
Institute of Scientific and Technical Information of China (English)
骆顺龙; 严加安
2000-01-01
The Gaussian kernel operators on white noise functional spaces, including second quantization, Fourier-Mehler transform, scaling, renormalization, etc. are studied by means of symbol calculus, and characterized by the intertwining relations with annihilation and creation operators. The infinitesimal generators of the Gaussian kernel operators are second order white noise operators of which the number operator and the Gross Laplacian are particular examples.
Evolutionary optimization of kernel weights improves protein complex comembership prediction.
Hulsman, Marc; Reinders, Marcel J T; de Ridder, Dick
2009-01-01
In recent years, more and more high-throughput data sources useful for protein complex prediction have become available (e.g., gene sequence, mRNA expression, and interactions). The integration of these different data sources can be challenging. Recently, it has been recognized that kernel-based classifiers are well suited for this task. However, the different kernels (data sources) are often combined using equal weights. Although several methods have been developed to optimize kernel weights, no large-scale example of an improvement in classifier performance has been shown yet. In this work, we employ an evolutionary algorithm to determine weights for a larger set of kernels by optimizing a criterion based on the area under the ROC curve. We show that setting the right kernel weights can indeed improve performance. We compare this to the existing kernel weight optimization methods (i.e., (regularized) optimization of the SVM criterion or aligning the kernel with an ideal kernel) and find that these do not result in a significant performance improvement and can even cause a decrease in performance. Results also show that an expert approach of assigning high weights to features with high individual performance is not necessarily the best strategy.
Visualization of nonlinear kernel models in neuroimaging by sensitivity maps
DEFF Research Database (Denmark)
Rasmussen, Peter Mondrup; Madsen, Kristoffer Hougaard; Lund, Torben Ellegaard
2011-01-01
There is significant current interest in decoding mental states from neuroimages. In this context kernel methods, e.g., support vector machines (SVM) are frequently adopted to learn statistical relations between patterns of brain activation and experimental conditions. In this paper we focus......, and conclude that the sensitivity map is a versatile and computationally efficient tool for visualization of nonlinear kernel models in neuroimaging....
Directory of Open Access Journals (Sweden)
W. Chen
2015-11-01
Full Text Available Drought has caused the most widespread damage in China, accounting for over 50 % of the total affected area nationwide in recent decades. In this paper, a Standardized Precipitation Index-based (SPI-based) drought risk study is conducted using historical rainfall data of 19 weather stations in Shandong province, China. A kernel density based method is adopted to carry out the risk analysis. A comparison between bivariate Gaussian kernel density estimation (GKDE) and diffusion kernel density estimation (DKDE) is carried out to analyze the effect of drought intensity and drought duration. The results show that DKDE is relatively more accurate, without boundary leakage. Combined with the GIS technique, the drought risk is mapped, revealing the spatial and temporal variation of agricultural droughts for corn in Shandong. The estimation provides a different way to study the occurrence frequency and severity of drought risk from multiple perspectives.
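A minimal bivariate Gaussian kernel density estimate of the kind described (GKDE) can be sketched as follows; the data, bandwidth, and evaluation points are invented for illustration:

```python
import numpy as np

def gaussian_kde_2d(samples, grid_pts, bandwidth):
    """Bivariate Gaussian KDE evaluated at grid points.
    samples: (n, 2) pairs, e.g. (drought intensity, drought duration)."""
    diff = grid_pts[:, None, :] - samples[None, :, :]        # (m, n, 2)
    sq = np.sum(diff ** 2, axis=-1) / bandwidth ** 2
    kern = np.exp(-0.5 * sq) / (2 * np.pi * bandwidth ** 2)  # normalized 2-D Gaussian
    return kern.mean(axis=1)                                 # average over samples

rng = np.random.default_rng(2)
events = rng.normal(loc=[1.5, 3.0], scale=0.5, size=(200, 2))
grid = np.array([[1.5, 3.0], [5.0, 8.0]])
dens = gaussian_kde_2d(events, grid, bandwidth=0.4)
assert dens[0] > dens[1]   # density is higher near the data cloud than far away
```

The boundary leakage noted in the abstract arises because this estimator places symmetric mass on both sides of a domain edge; the diffusion estimator (DKDE) avoids that.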
Higher Order Kernels and Locally Affine LDDMM Registration
Sommer, Stefan; Darkner, Sune; Pennec, Xavier
2011-01-01
To achieve sparse description that allows intuitive analysis, we aim to represent deformation with a basis containing interpretable elements, and we wish to use elements that have the description capacity to represent the deformation compactly. We accomplish this by introducing higher order kernels in the LDDMM registration framework. The kernels allow local description of affine transformations and subsequent compact description of non-translational movement and of the entire non-rigid deformation. This is obtained with a representation that contains directly interpretable information from both mathematical and modeling perspectives. We develop the mathematical construction behind the higher order kernels, we show the implications for sparse image registration and deformation description, and we provide examples of how the capacity of the kernels enables registration with a very low number of parameters. The capacity and interpretability of the kernels lead to natural modeling of articulated movement, and th...
Hypothesis testing using pairwise distances and associated kernels
Sejdinovic, Dino; Sriperumbudur, Bharath; Fukumizu, Kenji
2012-01-01
We provide a unifying framework linking two classes of statistics used in two-sample and independence testing: on the one hand, the energy distances and distance covariances from the statistics literature; on the other, distances between embeddings of distributions to reproducing kernel Hilbert spaces (RKHS), as established in machine learning. The equivalence holds when energy distances are computed with semimetrics of negative type, in which case a kernel may be defined such that the RKHS distance between distributions corresponds exactly to the energy distance. We determine the class of probability distributions for which kernels induced by semimetrics are characteristic (that is, for which embeddings of the distributions to an RKHS are injective). Finally, we investigate the performance of this family of kernels in two-sample and independence tests: we show in particular that the energy distance most commonly employed in statistics is just one member of a parametric family of kernels, and that other choic...
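The kernel two-sample statistic discussed here, the RKHS distance between embedded distributions (MMD), can be illustrated with a Gaussian kernel. This is a hedged sketch with synthetic data, not the paper's experiments:

```python
import numpy as np

def mmd2(X, Y, bandwidth):
    """Biased estimate of the squared maximum mean discrepancy with a Gaussian kernel."""
    def k(A, B):
        sq = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
        return np.exp(-sq / (2 * bandwidth ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(3)
same = mmd2(rng.normal(size=(100, 2)), rng.normal(size=(100, 2)), 1.0)
diff = mmd2(rng.normal(size=(100, 2)), rng.normal(loc=2.0, size=(100, 2)), 1.0)
assert diff > same   # shifted samples give a larger discrepancy
```

Replacing the Gaussian kernel with the distance-induced kernel of a semimetric of negative type recovers the energy distance, which is the equivalence the abstract establishes.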
An iterative modified kernel based on training data
Institute of Scientific and Technical Information of China (English)
Zhi-xiang ZHOU; Feng-qing HAN
2009-01-01
To improve the performance of support vector regression, a new method for a modified kernel function is proposed. In this method, information from all samples is included in the kernel function through conformal mapping, making the kernel function data-dependent. Starting from a random initial parameter, the kernel function is modified repeatedly until a satisfactory result is achieved. Compared with the conventional model, the improved approach does not need to select parameters of the kernel function. Simulation is carried out for a one-dimensional continuous function and a case of strong earthquakes. The results show that the improved approach has better learning ability and forecasting precision than the traditional model. As the iteration number increases, the figure of merit decreases and converges. The speed of convergence depends on the parameters used in the algorithm.
Nonlinear Statistical Process Monitoring and Fault Detection Using Kernel ICA
Institute of Scientific and Technical Information of China (English)
ZHANG Xi; YAN Wei-wu; ZHAO Xu; SHAO Hui-he
2007-01-01
A novel nonlinear process monitoring and fault detection method based on kernel independent component analysis (ICA) is proposed. The kernel ICA method is a two-phase algorithm: whitened kernel principal component analysis (KPCA) plus ICA. KPCA spheres the data and makes the data structure as linearly separable as possible by virtue of an implicit nonlinear mapping determined by the kernel. ICA then seeks projection directions in the KPCA-whitened space, making the distribution of the projected data as non-Gaussian as possible. Application to the fluid catalytic cracking unit (FCCU) simulated process indicates that the proposed monitoring method based on kernel ICA can effectively capture the nonlinear relationships among process variables. Its performance significantly outperforms monitoring methods based on ICA or KPCA alone.
Anatomically-aided PET reconstruction using the kernel method
Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi
2016-09-01
This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.
OSKI: A Library of Automatically Tuned Sparse Matrix Kernels
Energy Technology Data Exchange (ETDEWEB)
Vuduc, R; Demmel, J W; Yelick, K A
2005-07-19
The Optimized Sparse Kernel Interface (OSKI) is a collection of low-level primitives that provide automatically tuned computational kernels on sparse matrices, for use by solver libraries and applications. These kernels include sparse matrix-vector multiply and sparse triangular solve, among others. The primary aim of this interface is to hide the complex decision-making process needed to tune the performance of a kernel implementation for a particular user's sparse matrix and machine, while also exposing the steps and potentially non-trivial costs of tuning at run-time. This paper provides an overview of OSKI, which is based on our research on automatically tuned sparse kernels for modern cache-based superscalar machines.
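The central kernel OSKI tunes, sparse matrix-vector multiply over a CSR-stored matrix, looks schematically like the reference version below (a plain illustrative implementation; OSKI's tuned variants restructure this loop per matrix and machine):

```python
import numpy as np

def csr_matvec(data, indices, indptr, x):
    """Reference sparse matrix-vector multiply y = A @ x, A stored in CSR:
    data holds nonzeros, indices their column numbers, indptr the row boundaries."""
    y = np.zeros(len(indptr) - 1)
    for row in range(len(y)):
        start, end = indptr[row], indptr[row + 1]
        y[row] = np.dot(data[start:end], x[indices[start:end]])
    return y

# A = [[4, 0, 1],
#      [0, 0, 2],
#      [3, 0, 0]] stored as CSR
data = np.array([4.0, 1.0, 2.0, 3.0])
indices = np.array([0, 2, 2, 0])
indptr = np.array([0, 2, 3, 4])
y = csr_matvec(data, indices, indptr, np.array([1.0, 1.0, 1.0]))
assert np.allclose(y, [5.0, 2.0, 3.0])
```

Register blocking, the kind of transformation OSKI selects at run time, would replace the scalar inner product with small dense block operations chosen to fit the matrix's nonzero pattern.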
CRKSPH - A Conservative Reproducing Kernel Smoothed Particle Hydrodynamics Scheme
Frontiere, Nicholas; Owen, J Michael
2016-01-01
We present a formulation of smoothed particle hydrodynamics (SPH) that employs a first-order consistent reproducing kernel function, exactly interpolating linear fields with particle tracers. Previous formulations using reproducing kernel (RK) interpolation have had difficulties maintaining conservation of momentum due to the fact that the RK kernels are not, in general, spatially symmetric. Here, we utilize a reformulation of the fluid equations such that mass, momentum, and energy are all manifestly conserved without any assumption about kernel symmetries. Additionally, by exploiting the increased accuracy of the RK method's gradient, we formulate a simple limiter for the artificial viscosity that reduces the excess diffusion normally incurred by the ordinary SPH artificial viscosity. Collectively, we call our suite of modifications to the traditional SPH scheme Conservative Reproducing Kernel SPH, or CRKSPH. CRKSPH retains the benefits of traditional SPH methods (such as preserving Galilean invariance and manif...
A novel extended kernel recursive least squares algorithm.
Zhu, Pingping; Chen, Badong; Príncipe, José C
2012-08-01
In this paper, a novel extended kernel recursive least squares algorithm is proposed combining the kernel recursive least squares algorithm and the Kalman filter or its extensions to estimate or predict signals. Unlike the extended kernel recursive least squares (Ex-KRLS) algorithm proposed by Liu, the state model of our algorithm is still constructed in the original state space and the hidden state is estimated using the Kalman filter. The measurement model used in hidden state estimation is learned by the kernel recursive least squares algorithm (KRLS) in reproducing kernel Hilbert space (RKHS). The novel algorithm has more flexible state and noise models. We apply this algorithm to vehicle tracking and the nonlinear Rayleigh fading channel tracking, and compare the tracking performances with other existing algorithms.
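For orientation, the batch kernel regularized least squares fit that the recursive KRLS algorithm computes incrementally can be sketched as below; the bandwidth, regularization, and data are illustrative, and the paper's Kalman-based state model is not reproduced:

```python
import numpy as np

def krls_fit(X, y, bandwidth, reg=1e-2):
    """Batch kernel regularized least squares in RKHS: alpha = (K + reg*I)^-1 y.
    (KRLS arrives at the same fit by rank-one updates as samples stream in.)"""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq / (2 * bandwidth ** 2))
    return np.linalg.solve(K + reg * np.eye(len(X)), y)

def krls_predict(X_train, alpha, X_new, bandwidth):
    sq = np.sum((X_new[:, None, :] - X_train[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq / (2 * bandwidth ** 2)) @ alpha

rng = np.random.default_rng(5)
X = rng.uniform(-3, 3, size=(80, 1))
y = np.sin(X[:, 0])
alpha = krls_fit(X, y, bandwidth=1.0)
pred = krls_predict(X, alpha, X, bandwidth=1.0)
assert np.mean((pred - y) ** 2) < 0.01   # smooth target fits well in the RKHS
```

The Ex-KRLS extension keeps this RKHS measurement model but estimates a hidden state in the original state space with a Kalman filter on top of it.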
Virtual screening with support vector machines and structure kernels
Mahé, Pierre
2007-01-01
Support vector machines and kernel methods have recently gained considerable attention in chemoinformatics. They offer generally good performance for problems of supervised classification or regression, and provide a flexible and computationally efficient framework to include relevant information and prior knowledge about the data and problems to be handled. In particular, with kernel methods molecules do not need to be represented and stored explicitly as vectors or fingerprints, but only to be compared to each other through a comparison function technically called a kernel. While classical kernels can be used to compare vector or fingerprint representations of molecules, completely new kernels were developed in the recent years to directly compare the 2D or 3D structures of molecules, without the need for an explicit vectorization step through the extraction of molecular descriptors. While still in their infancy, these approaches have already demonstrated their relevance on several toxicity prediction and s...
3-D waveform tomography sensitivity kernels for anisotropic media
Djebbi, R.
2014-01-01
The complications in anisotropic multi-parameter inversion lie in the trade-off between the different anisotropy parameters. We compute the tomographic waveform sensitivity kernels for a VTI acoustic medium perturbation as a tool to investigate this ambiguity between the different parameters. We use dynamic ray tracing to efficiently handle the expensive computational cost for 3-D anisotropic models. Ray tracing also provides the ray direction information necessary for conditioning the sensitivity kernels to handle anisotropy. The NMO velocity and η parameter kernels showed maximum sensitivity for diving waves, which makes them a relevant choice of parameters for wave-equation tomography. The δ parameter kernel showed zero sensitivity; it can therefore serve as a secondary parameter to fit the amplitude in the acoustic anisotropic inversion. Considering the limited penetration depth of diving waves, migration velocity analysis based kernels are introduced to fix the depth ambiguity with reflections and compute sensitivity maps in the deeper parts of the model.
Genomic Prediction of Genotype × Environment Interaction Kernel Regression Models.
Cuevas, Jaime; Crossa, José; Soberanis, Víctor; Pérez-Elizalde, Sergio; Pérez-Rodríguez, Paulino; Campos, Gustavo de Los; Montesinos-López, O A; Burgueño, Juan
2016-11-01
In genomic selection (GS), genotype × environment interaction (G × E) can be modeled by a marker × environment interaction (M × E). The G × E may be modeled through a linear kernel or a nonlinear (Gaussian) kernel. In this study, we propose using two nonlinear Gaussian kernels: the reproducing kernel Hilbert space with kernel averaging (RKHS KA) and the Gaussian kernel with the bandwidth estimated through an empirical Bayesian method (RKHS EB). We performed single-environment analyses and extended them to account for G × E interaction (GBLUP-G × E, RKHS KA-G × E and RKHS EB-G × E) in wheat and maize data sets. For single-environment analyses of the wheat and maize data sets, RKHS EB and RKHS KA had higher prediction accuracy than GBLUP for all environments. For the wheat data, the RKHS KA-G × E and RKHS EB-G × E models showed up to 60 to 68% superiority over the corresponding single environment for pairs of environments with positive correlations. For the wheat data set, the models with Gaussian kernels had accuracies up to 17% higher than that of GBLUP-G × E. For the maize data set, the prediction accuracy of RKHS EB-G × E and RKHS KA-G × E was, on average, 5 to 6% higher than that of GBLUP-G × E. The superiority of the Gaussian kernel models over the linear kernel is due to more flexible kernels that account for small, more complex marker main effects and marker-specific interaction effects.
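A rough sketch of kernel averaging (RKHS KA) as a flat mixture of Gaussian kernels over several bandwidths, followed by a kernel ridge fit, is given below; the genotype data, bandwidth grid, and regularization are invented for illustration, and the paper's Bayesian treatment is not reproduced:

```python
import numpy as np

def gaussian_kernel(G, h):
    """Gaussian kernel on a marker matrix G (individuals x markers)."""
    sq = np.sum((G[:, None, :] - G[None, :, :]) ** 2, axis=-1)
    return np.exp(-h * sq / sq.mean())     # distances scaled by their mean

def rkhs_ka_fit(G, y, bandwidths=(0.25, 1.0, 4.0), reg=0.1):
    """Kernel averaging: flat mixture of Gaussian kernels over a bandwidth grid,
    then a kernel ridge fit with the averaged kernel."""
    K = sum(gaussian_kernel(G, h) for h in bandwidths) / len(bandwidths)
    alpha = np.linalg.solve(K + reg * np.eye(len(y)), y)
    return K @ alpha                       # fitted genomic values

rng = np.random.default_rng(6)
markers = rng.integers(0, 3, size=(60, 100)).astype(float)   # 0/1/2 genotype codes
effects = rng.normal(size=100) * 0.1
y = markers @ effects + rng.normal(scale=0.1, size=60)
fitted = rkhs_ka_fit(markers, y)
assert np.corrcoef(fitted, y)[0, 1] > 0.5
```

Extending to G × E would add environment-specific kernel terms; a linear kernel G Gᵀ in place of the Gaussian mixture recovers a GBLUP-style fit.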
Automatic detection of aflatoxin contaminated corn kernels using dual-band imagery
Ononye, Ambrose E.; Yao, Haibo; Hruska, Zuzana; Kincaid, Russell; Brown, Robert L.; Cleveland, Thomas E.
2009-05-01
Aflatoxin is a mycotoxin predominantly produced by the Aspergillus flavus and Aspergillus parasiticus fungi that grow naturally in corn, peanuts and a wide variety of other grain products. Corn, like other grains, is used as food for humans and feed for animals. Aflatoxin is known to be carcinogenic; therefore, ingestion of corn infected with the toxin can lead to very serious health problems, such as liver damage, if the level of contamination is high. The US Food and Drug Administration (FDA) has strict guidelines for permissible levels in grain products for both humans and animals. The conventional approach used to determine these contamination levels is destructive and invasive, requiring corn kernels to be ground and then chemically analyzed. Unfortunately, each of the analytical methods can take several hours, depending on the quantity, to yield a result. The development of high spectral and spatial resolution imaging sensors has created an opportunity for hyperspectral image analysis to be employed for aflatoxin detection. However, this brings with it a high-dimensionality problem as a setback. In this paper, we propose a technique that automatically detects aflatoxin contaminated corn kernels by using dual-band imagery. The method exploits the fluorescence emission spectra from corn kernels captured under 365 nm ultraviolet light excitation. Our approach could lead to a non-destructive and non-invasive way of quantifying the levels of aflatoxin contamination. The preliminary results shown here demonstrate the potential of our technique for aflatoxin detection.
Indian Academy of Sciences (India)
Pravin K Gupta; Sri Niwas; Neeta Chaudhary
2006-06-01
The computation of electromagnetic (EM) fields, for a 1-D layered earth model, requires evaluation of the Hankel Transform (HT) of the EM kernel function. Digital filtering is the most widely used technique to evaluate HT integrals. However, it has some obvious shortcomings. We present an alternative scheme, based on an orthonormal exponential approximation of the kernel function, for evaluating HT integrals. This approximation of the kernel function was chosen because the analytical solution of the HT of an exponential function is readily available in the literature. This expansion reduces the integral to a simple algebraic sum. The implementation of such a scheme requires that the weights and the exponents of the exponential function be estimated. The exponents were estimated through a guided search algorithm while the weights were obtained using the Marquardt matrix inversion method. The algorithm was tested on analytical HT pairs available in the literature. The results are compared with those obtained using the digital filtering technique with Anderson filters. The field curves for four types (A-, K-, H- and Q-type) of 3-layer earth models are generated using the present scheme and compared with the corresponding curves obtained using the Anderson scheme. It is concluded that the present scheme is more accurate than the Anderson scheme.
Projection of fMRI data onto the cortical surface using anatomically-informed convolution kernels.
Operto, G; Bulot, R; Anton, J-L; Coulon, O
2008-01-01
As surface-based data analysis offers an attractive approach for intersubject matching and comparison, the projection of voxel-based 3D volumes onto the cortical surface is an essential problem. We present here a method that aims at producing representations of functional brain data on the cortical surface from functional MRI volumes. Such representations are, for instance, required for subsequent cortical-based functional analysis. We propose a projection technique based on the definition, around each node of the gray/white matter interface mesh, of convolution kernels whose shape and distribution rely on the geometry of the local anatomy. For one anatomy, a set of convolution kernels is computed that can be used to project any functional data registered with this anatomy. Resulting in anatomically-informed projections of data onto the cortical surface, this kernel-based approach offers better sensitivity and specificity than other classical methods, as well as robustness to misregistration errors. The influence of mesh and volume spatial resolutions was also estimated for various projection techniques, using simulated functional maps.
Genetic association studies in lumbar disc degeneration
DEFF Research Database (Denmark)
Eskola, Pasi J; Lemmelä, Susanna; Kjaer, Per
2012-01-01
Low back pain is associated with lumbar disc degeneration, which is mainly due to genetic predisposition. The objective of this study was to perform a systematic review to evaluate genetic association studies in lumbar disc degeneration as defined on magnetic resonance imaging (MRI) in humans....
Degenerated differential pair with controllable transconductance
Mensink, Clemens; Mensink, Clemens H.J.; Nauta, Bram
1998-01-01
A differential pair with input transistors and provided with a variable degeneration resistor. The degeneration resistor comprises a series arrangement of two branches of coupled resistors which are shunted in mutually corresponding points by respective control transistors whose gates are interconnected …
Regularized degenerate multi-solitons
Correa, Francisco; Fring, Andreas
2016-09-01
We report complex PT-symmetric multi-soliton solutions to the Korteweg-de Vries equation that asymptotically contain one-soliton solutions, with each of them possessing the same amount of finite real energy. We demonstrate how these solutions originate from degenerate energy solutions of the Schrödinger equation. Technically this is achieved by the application of Darboux-Crum transformations involving Jordan states with suitable regularizing shifts. Alternatively they may be constructed from a limiting process within the context of Hirota's direct method or from a nonlinear superposition obtained from multiple Bäcklund transformations. The proposed procedure is completely generic and also applicable to other types of nonlinear integrable systems.
Naturalness of nearly degenerate neutrinos
Casas, J A; Ibarra, Alejandro; Navarro, I
1999-01-01
If neutrinos are to play a relevant cosmological role, they must be essentially degenerate. We study whether radiative corrections can or cannot be responsible for the small mass splittings, in agreement with all the available experimental data. We perform an exhaustive exploration of the bimaximal mixing scenario, finding that (i) the vacuum oscillations solution to the solar neutrino problem is always excluded; (ii) if the mass matrix is produced by a see-saw mechanism, there are large regions of the parameter space consistent with the large angle MSW solution, providing a natural origin for the Delta m^2_{sol} << Delta m^2_{atm} hierarchy; (iii) the bimaximal structure becomes then stable under radiative corrections. We also provide analytical expressions for the mass splittings and mixing angles and present a particularly simple see-saw ansatz consistent with all the observations.
Degenerate doping of metallic anodes
Friesen, Cody A; Zeller, Robert A; Johnson, Paul B; Switzer, Elise E
2015-05-12
Embodiments of the invention relate to an electrochemical cell comprising: (i) a fuel electrode comprising a metal fuel, (ii) a positive electrode, (iii) an ionically conductive medium, and (iv) a dopant; the electrodes being operable in a discharge mode wherein the metal fuel is oxidized at the fuel electrode and the dopant increases the conductivity of the metal fuel oxidation product. In an embodiment, the oxidation product comprises an oxide of the metal fuel which is doped degenerately. In an embodiment, the positive electrode is an air electrode that absorbs gaseous oxygen, wherein during discharge mode, oxygen is reduced at the air electrode. Embodiments of the invention also relate to methods of producing an electrode comprising a metal and a doped metal oxidation product.
Directory of Open Access Journals (Sweden)
Omar Abu Arqub
2012-01-01
This paper investigates the numerical solution of nonlinear Fredholm-Volterra integro-differential equations using the reproducing kernel Hilbert space method. The solution is represented as a series in the reproducing kernel space. Meanwhile, the n-term approximate solution is obtained and proved to converge to the exact solution. Furthermore, the proposed method has the advantage that it is possible to pick any point in the interval of integration at which the approximate solution and its derivative remain applicable. Numerical examples are included to demonstrate the accuracy and applicability of the presented technique. The results reveal that the method is very effective and simple.
Haben, Stephen
2016-01-01
We present a model for generating probabilistic forecasts by combining kernel density estimation (KDE) and quantile regression techniques, as part of the probabilistic load forecasting track of the Global Energy Forecasting Competition 2014. The KDE method is initially implemented with a time-decay parameter. We later improve this method by conditioning on the temperature or the period of the week variables to provide more accurate forecasts. Secondly, we develop a simple but effective quantile regression forecast. The novel aspects of our methodology are two-fold. First, we introduce symmetry into the time-decay parameter of the kernel density estimation based forecast. Second, we combine three probabilistic forecasts with different weights for different periods of the month.
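A minimal sketch of a time-decayed KDE quantile forecast in the spirit described (function and parameter names are illustrative, and the conditioning on temperature and the period of the week is omitted): each past observation contributes a Gaussian bump weighted by an exponential decay in its age, and quantiles are read off the numerically integrated mixture CDF.

```python
import numpy as np

def kde_quantile_forecast(history, qs, bandwidth=1.0, decay=0.95):
    # Weighted Gaussian KDE of past observations; the weight decay**age
    # down-weights older data (age 0 = most recent observation).
    history = np.asarray(history, float)
    ages = np.arange(len(history))[::-1]
    w = decay ** ages
    w = w / w.sum()
    grid = np.linspace(history.min() - 4 * bandwidth,
                       history.max() + 4 * bandwidth, 2000)
    pdf = np.zeros_like(grid)
    for xi, wi in zip(history, w):
        pdf += wi * np.exp(-0.5 * ((grid - xi) / bandwidth) ** 2)
    pdf /= bandwidth * np.sqrt(2.0 * np.pi)
    cdf = np.cumsum(pdf) * (grid[1] - grid[0])   # numerical mixture CDF
    cdf /= cdf[-1]
    return np.interp(qs, cdf, grid)              # invert the CDF at qs

quantiles = kde_quantile_forecast([50, 52, 55, 60, 58], qs=[0.1, 0.5, 0.9])
```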
The heat kernel for two Aharonov-Bohm solenoids in a uniform magnetic field
Šťovíček, Pavel
2017-01-01
A non-relativistic quantum model is considered with a point particle carrying a charge e and moving in the plane pierced by two infinitesimally thin Aharonov-Bohm solenoids and subjected to a perpendicular uniform magnetic field of magnitude B. Relying on a technique, originally due to Schulman, Laidlaw and DeWitt, which is applicable to Schrödinger operators on multiply connected configuration manifolds, a formula is derived for the corresponding heat kernel. As an application of the heat kernel formula, approximate asymptotic expressions are derived for the lowest eigenvalue lying above the first Landau level and for the corresponding eigenfunction, assuming that |eB|R²/(ħc) is large, where R is the distance between the two solenoids.
Stahel-Donoho kernel estimation for fixed design nonparametric regression models
Institute of Scientific and Technical Information of China (English)
LIN; Lu
2006-01-01
This paper reports a robust kernel estimation for fixed design nonparametric regression models. A Stahel-Donoho kernel estimation is introduced, in which the weight functions depend on both the depths of the data and the distances between the design points and the estimation points. Based on a local approximation, a computational technique is given to approximate the incomputable depths of the errors. As a result the new estimator is computationally efficient. The proposed estimator attains a high breakdown point and has desirable asymptotic behavior, such as asymptotic normality and convergence in mean squared error. Unlike the depth-weighted estimator for parametric regression models, this depth-weighted nonparametric estimator has a simple variance structure, so its efficiency can be compared with that of the original estimator. Some simulations show that the new method can smooth the regression estimation and achieve a desirable balance between robustness and efficiency.
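An illustrative toy version of a depth- and distance-weighted kernel smoother (the simple standardized-residual weight below is a stand-in for the actual Stahel-Donoho depth, which the paper approximates locally; names and defaults are assumptions):

```python
import numpy as np

def depth_weighted_kernel_smoother(x, y, x0, h=0.3):
    # Distance weight: Gaussian kernel in x around the estimation point x0.
    kx = np.exp(-0.5 * ((x - x0) / h) ** 2)
    # Outlyingness weight in y: a simple robust stand-in for the depth,
    # built from residuals standardized by the median absolute deviation.
    med = np.median(y)
    mad = 1.4826 * np.median(np.abs(y - med)) + 1e-12
    wy = 1.0 / (1.0 + ((y - med) / mad) ** 2)
    w = kx * wy
    return np.sum(w * y) / np.sum(w)

x = np.linspace(0.0, 1.0, 11)
y = np.ones(11)
y_out = y.copy()
y_out[5] = 100.0                    # gross outlier at the estimation point
est_clean = depth_weighted_kernel_smoother(x, y, 0.5)
est_robust = depth_weighted_kernel_smoother(x, y_out, 0.5)
```

The outlier is heavily down-weighted, so the estimate stays near the bulk of the data instead of being dragged toward 100.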
Mine-hoist fault-condition detection based on the wavelet packet transform and kernel PCA
Institute of Scientific and Technical Information of China (English)
XIA Shi-xiong; NIU Qiang; ZHOU Yong; ZHANG Lei
2008-01-01
A new algorithm was developed to correctly identify fault conditions and accurately monitor fault development in a mine hoist. The new method is based on the Wavelet Packet Transform (WPT) and kernel PCA (Kernel Principal Component Analysis, KPCA). For non-linear monitoring systems the key to fault detection is the extracting of main features. The wavelet packet transform is a novel technique of signal processing that possesses excellent characteristics of time-frequency localization. It is suitable for analysing time-varying or transient signals. KPCA maps the original input features into a higher dimension feature space through a non-linear mapping. The principal components are then found in the higher dimension feature space. The KPCA transformation was applied to extracting the main nonlinear features from experimental fault feature data after wavelet packet transformation. The results show that the proposed method affords credible fault detection and identification.
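As a minimal sketch of the KPCA step only (the wavelet packet preprocessing is omitted, and the RBF kernel choice and parameter names are illustrative, not the paper's), nonlinear features are extracted by centering and eigendecomposing the Gram matrix:

```python
import numpy as np

def kpca_features(X, n_comp=2, gamma=1.0):
    # RBF Gram matrix, double-centered in feature space; the leading
    # eigenpairs give the nonlinear principal components of the data.
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                              # double centering
    vals, vecs = np.linalg.eigh(Kc)
    order = np.argsort(vals)[::-1][:n_comp]
    vals, vecs = vals[order], vecs[:, order]
    # Projections of the training points onto the leading components.
    return vecs * np.sqrt(np.maximum(vals, 0.0))

rng = np.random.default_rng(0)
F = kpca_features(rng.normal(size=(20, 3)), n_comp=2, gamma=0.5)
```

In the fault-detection setting described above, the rows of `F` would be the nonlinear features monitored for departures from normal operation.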
Heat kernel method and its applications
Avramidi, Ivan G
2015-01-01
The heart of the book is the development of a short-time asymptotic expansion for the heat kernel. This is explained in detail and explicit examples of some advanced calculations are given. In addition some advanced methods and extensions, including path integrals, jump diffusion and others are presented. The book consists of four parts: Analysis, Geometry, Perturbations and Applications. The first part briefly reviews some background material and gives an introduction to PDEs. The second part is devoted to a short introduction to various aspects of differential geometry that will be needed later. The third part and heart of the book presents a systematic development of effective methods for various approximation schemes for parabolic differential equations. The last part is devoted to applications in financial mathematics, in particular, stochastic differential equations. Although this book is intended for advanced undergraduate or beginning graduate students, it should also provide a useful reference ...
Image Processing Variations with Analytic Kernels
Garnett, John B; Vese, Luminita A
2012-01-01
Let $f \in L^1(\R^d)$ be real. The Rudin-Osher-Fatemi model is to minimize $\|u\|_{\dot{BV}}+\lambda\|f-u\|_{L^2}^2$, in which one thinks of $f$ as a given image, $\lambda > 0$ as a "tuning parameter", $u$ as an optimal "cartoon" approximation to $f$, and $f-u$ as "noise" or "texture". Here we study variations of the R-O-F model having the form $\inf_u\{\|u\|_{\dot{BV}}+\lambda \|K*(f-u)\|_{L^p}^q\}$ where $K$ is a real analytic kernel such as a Gaussian. For these functionals we characterize the minimizers $u$ and establish several of their properties, including especially their smoothness properties. In particular we prove that on any open set on which $u \in W^{1,1}$ ...
The Dynamical Kernel Scheduler - Part 1
Adelmann, Andreas; Suter, Andreas
2015-01-01
Emerging processor architectures such as GPUs and Intel MICs provide a huge performance potential for high performance computing. However developing software using these hardware accelerators introduces additional challenges for the developer such as exposing additional parallelism, dealing with different hardware designs and using multiple development frameworks in order to use devices from different vendors. The Dynamic Kernel Scheduler (DKS) is being developed in order to provide a software layer between host application and different hardware accelerators. DKS handles the communication between the host and device, schedules task execution, and provides a library of built-in algorithms. Algorithms available in the DKS library will be written in CUDA, OpenCL and OpenMP. Depending on the available hardware, the DKS can select the appropriate implementation of the algorithm. The first DKS version was created using CUDA for the Nvidia GPUs and OpenMP for Intel MIC. DKS was further integrated in OPAL (Object-or...
Index-free Heat Kernel Coefficients
De van Ven, A E M
1998-01-01
Using index-free notation, we present the diagonal values of the first five heat kernel coefficients associated with a general Laplace-type operator on a compact Riemannian space without boundary. The fifth coefficient appears here for the first time. For a flat space with a gauge connection, the sixth coefficient is given too. Also provided are the leading terms for any coefficient, both in ascending and descending powers of the Yang-Mills and Riemann curvatures, to the same order as required for the fourth coefficient. These results are obtained by directly solving the relevant recursion relations, working in Fock-Schwinger gauge and Riemann normal coordinates. Our procedure is thus noncovariant, but we show that for any coefficient the `gauged' respectively `curved' version is found from the corresponding `non-gauged' respectively `flat' coefficient by making some simple covariant substitutions. These substitutions being understood, the coefficients retain their `flat' form and size. In this sense the fift...
Kernel density estimation using graphical processing unit
Sunarko, Su'ud, Zaki
2015-09-01
Kernel density estimation for particles distributed over a 2-dimensional space is calculated using a single graphical processing unit (GTX 660Ti GPU) and CUDA-C language. Parallel calculations are done for particles having a bivariate normal distribution, by assigning the calculations for equally-spaced node points to each scalar processor in the GPU. The number of particles, blocks and threads are varied to identify a favorable configuration. Comparisons are obtained by performing the same calculation using 1, 2 and 4 processors on a 3.0 GHz CPU using MPICH 2.0 routines. Speedups attained with the GPU are in the range of 88 to 349 times compared to the multiprocessor CPU. Blocks of 128 threads are found to be the optimum configuration for this case.
Learning RoboCup-Keepaway with Kernels
Jung, Tobias
2012-01-01
We apply kernel-based methods to solve the difficult reinforcement learning problem of 3vs2 keepaway in RoboCup simulated soccer. Key challenges in keepaway are the high-dimensionality of the state space (rendering conventional discretization-based function approximation like tile coding infeasible), the stochasticity due to noise and multiple learning agents needing to cooperate (meaning that the exact dynamics of the environment are unknown) and real-time learning (meaning that an efficient online implementation is required). We employ the general framework of approximate policy iteration with least-squares-based policy evaluation. As underlying function approximator we consider the family of regularization networks with subset of regressors approximation. The core of our proposed solution is an efficient recursive implementation with automatic supervised selection of relevant basis functions. Simulation results indicate that the behavior learned through our approach clearly outperforms the best results obtained ...
Heat kernel measures on random surfaces
Klevtsov, Semyon
2015-01-01
The heat kernel on the symmetric space of positive definite Hermitian matrices is used to endow the spaces of Bergman metrics of degree k on a Riemann surface M with a family of probability measures depending on a choice of the background metric. Under a certain matrix-metric correspondence, each positive definite Hermitian matrix corresponds to a Kahler metric on M. The one and two point functions of the random metric are calculated in a variety of limits as k and t tend to infinity. In the limit when the time t goes to infinity the fluctuations of the random metric around the background metric are the same as the fluctuations of random zeros of holomorphic sections. This is due to the fact that the random zeros form the boundary of the space of Bergman metrics.
Pattern Programmable Kernel Filter for Bot Detection
Directory of Open Access Journals (Sweden)
Kritika Govind
2012-05-01
Bots earn their unique name as they perform a wide variety of automated tasks. These tasks include stealing sensitive user information. Detection of bots using solutions such as behavioral correlation of flow records, group activity in DNS traffic, observing the periodic repeatability in communication, etc., leads to monitoring the network traffic and then classifying it as bot or normal traffic. Other solutions for bot detection include kernel-level keystroke verification, system call initialization, IP blacklisting, etc. In the first two solutions there is no assurance that the packet carrying user information is prevented from being sent to the attacker, and the latter suffers from the problem of IP spoofing. This motivated us to think of a solution that would filter out the malicious packets before they are put onto the network. To arrive at such a solution, a real-time bot attack was generated with the SpyEye exploit kit and its traffic characteristics were analyzed. The analysis revealed the existence of a unique repeated communication between the zombie machine and the botmaster. This motivated us to propose a Pattern Programmable Kernel Filter (PPKF) for filtering out the malicious packets generated by bots. PPKF was developed using the Windows Filtering Platform (WFP) filter engine. PPKF was programmed to filter out the packets with the unique pattern observed in the bot attack experiments. Further, PPKF was found to completely suppress the flow of packets having the programmed uniqueness in them, thus preventing bots from sending user information to the botmaster.
Nonlinear electromagnetic waves in a degenerate electron-positron plasma
Energy Technology Data Exchange (ETDEWEB)
El-Labany, S.K., E-mail: skellabany@hotmail.com [Department of Physics, Faculty of Science, Damietta University, New Damietta (Egypt); El-Taibany, W.F., E-mail: eltaibany@hotmail.com [Department of Physics, College of Science for Girls in Abha, King Khalid University, Abha (Saudi Arabia); El-Samahy, A.E.; Hafez, A.M.; Atteya, A., E-mail: ahmedsamahy@yahoo.com, E-mail: am.hafez@sci.alex.edu.eg, E-mail: ahmed_ateya2002@yahoo.com [Department of Physics, Faculty of Science, Alexandria University, Alexandria (Egypt)
2015-08-15
Using the reductive perturbation technique (RPT), the nonlinear propagation of magnetosonic solitary waves in an ultracold, degenerate (extremely dense) electron-positron (EP) plasma (containing ultracold, degenerate electron, and positron fluids) is investigated. The set of basic equations is reduced to a Korteweg-de Vries (KdV) equation for the lowest-order perturbed magnetic field and to a KdV type equation for the higher-order perturbed magnetic field. The solutions of these evolution equations are obtained. For better accuracy and searching on new features, the new solutions are analyzed numerically based on compact objects (white dwarf) parameters. It is found that including the higher-order corrections results as a reduction (increment) of the fast (slow) electromagnetic wave amplitude but the wave width is increased in both cases. The ranges where the RPT can describe adequately the total magnetic field including different conditions are discussed. (author)
Associative morphological memories based on variations of the kernel and dual kernel methods.
Sussner, Peter
2003-01-01
Morphological associative memories (MAMs) belong to the class of morphological neural networks. The recording scheme used in the original MAM models is similar to the correlation recording recipe. Recording is achieved by means of a maximum (MXY model) or minimum (WXY model) of outer products. Notable features of autoassociative morphological memories (AMMs) include optimal absolute storage capacity and one-step convergence. Heteroassociative morphological memories (HMMs) do not have these properties and are not very well understood. The fixed points of AMMs can be characterized exactly in terms of the original patterns. Unfortunately, AMM fixed points include a large number of spurious memories. In this paper, we combine the MXX model and variations of the kernel method to produce new autoassociative and heteroassociative memories. We also introduce a dual kernel method. A new, dual model is given by a combination of the WXX model and a variation of the dual kernel method. The new MAM models exhibit better error correction capabilities than MXX and WXX and a reduced number of spurious memories which can be easily described in terms of the fundamental memories.
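A minimal sketch of the underlying morphological recording and recall operations referred to above (the standard W_XX recipe and max-plus product from the MAM literature, not the paper's new kernel variants):

```python
import numpy as np

def mam_record_wxx(X):
    # W_XX recording: W[i, j] = min over stored patterns of (x_i - x_j).
    # X holds the stored patterns as columns.
    return np.min(X[:, None, :] - X[None, :, :], axis=2)

def mam_recall(W, x):
    # Max-plus product: output_i = max_j (W[i, j] + x_j).
    return np.max(W + x[None, :], axis=1)

X = np.array([[1.0, 4.0],
              [2.0, 2.0],
              [0.0, 5.0]])          # two stored patterns, as columns
W = mam_record_wxx(X)
out0 = mam_recall(W, X[:, 0])
out1 = mam_recall(W, X[:, 1])
```

The autoassociative memory recalls every stored (undistorted) pattern perfectly in one step, which is the AMM property the abstract mentions; the spurious memories arise for inputs other than the stored patterns.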
One-loop effective action of QCD at high temperature using the heat kernel method
Energy Technology Data Exchange (ETDEWEB)
Megias, E. [Universidad de Granada (Spain). Dept. de Fisica Moderna]. E-mail: emegias@ugr.es
2004-07-01
Perturbation theory is an important tool to describe the properties of QCD at very high temperatures. Recently a new technique has been proposed to compute the one-loop effective action of QCD at finite temperature by making a gauge covariant derivative expansion, which is fully consistent with topologically small and large gauge transformations (including time dependent transformations). This technique is based on the heat kernel expansion, and the thermal Wilson line plays an essential role. We consider a general SU(N_c) gauge group. (author)
Classification of maize kernels using NIR hyperspectral imaging.
Williams, Paul J; Kucheryavskiy, Sergey
2016-10-15
NIR hyperspectral imaging was evaluated to classify maize kernels of three hardness categories: hard, medium and soft. Two approaches, pixel-wise and object-wise, were investigated to group kernels according to hardness. The pixel-wise classification assigned a class to every pixel from individual kernels and did not give acceptable results because of high misclassification. However by using a predefined threshold and classifying entire kernels based on the number of correctly predicted pixels, improved results were achieved (sensitivity and specificity of 0.75 and 0.97). Object-wise classification was performed using two methods for feature extraction - score histograms and mean spectra. The model based on score histograms performed better for hard kernel classification (sensitivity and specificity of 0.93 and 0.97), while that of mean spectra gave better results for medium kernels (sensitivity and specificity of 0.95 and 0.93). Both feature extraction methods can be recommended for classification of maize kernels on production scale.
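A toy sketch of the thresholded object-wise decision described above, assuming per-pixel class predictions for a kernel are already available (the function name and default threshold are illustrative):

```python
import numpy as np

def objectwise_label(pixel_labels, threshold=0.5):
    # Majority class over the kernel's pixels, accepted only if that class
    # covers at least `threshold` of the pixels; otherwise no class is
    # assigned (mirroring the rejection of noisy pixel-wise maps).
    labels, counts = np.unique(pixel_labels, return_counts=True)
    best = counts.argmax()
    if counts[best] / counts.sum() >= threshold:
        return labels[best]
    return None

label = objectwise_label(["hard"] * 8 + ["soft"] * 2)   # 80% "hard" pixels
```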
Gaussian kernel width optimization for sparse Bayesian learning.
Mohsenzadeh, Yalda; Sheikhzadeh, Hamid
2015-04-01
Sparse kernel methods have been widely used in regression and classification applications. The performance and the sparsity of these methods are dependent on the appropriate choice of the corresponding kernel functions and their parameters. Typically, the kernel parameters are selected using a cross-validation approach. In this paper, a learning method that is an extension of the relevance vector machine (RVM) is presented. The proposed method can find the optimal values of the kernel parameters during the training procedure. This algorithm uses an expectation-maximization approach for updating kernel parameters as well as other model parameters; therefore, the speed of convergence and computational complexity of the proposed method are the same as the standard RVM. To control the convergence of this fully parameterized model, the optimization with respect to the kernel parameters is performed using a constraint on these parameters. The proposed method is compared with the typical RVM and other competing methods to analyze the performance. The experimental results on the commonly used synthetic data, as well as benchmark data sets, demonstrate the effectiveness of the proposed method in reducing the performance dependency on the initial choice of the kernel parameters.
Training Lp norm multiple kernel learning in the primal.
Liang, Zhizheng; Xia, Shixiong; Zhou, Yong; Zhang, Lei
2013-10-01
Some multiple kernel learning (MKL) models are usually solved by utilizing the alternating optimization method where one alternately solves SVMs in the dual and updates kernel weights. Since the dual and primal optimization can achieve the same aim, it is valuable in exploring how to perform Lp norm MKL in the primal. In this paper, we propose an Lp norm multiple kernel learning algorithm in the primal where we resort to the alternating optimization method: one cycle for solving SVMs in the primal by using the preconditioned conjugate gradient method and the other cycle for learning the kernel weights. It is interesting to note that the kernel weights in our method can obtain analytical solutions. Most importantly, the proposed method is well suited for the manifold regularization framework in the primal since solving LapSVMs in the primal is much more effective than solving LapSVMs in the dual. In addition, we also carry out theoretical analysis for multiple kernel learning in the primal in terms of the empirical Rademacher complexity. It is found that optimizing the empirical Rademacher complexity may obtain a type of kernel weights. The experiments on some datasets are carried out to demonstrate the feasibility and effectiveness of the proposed method.
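The analytic kernel-weight step that such alternating schemes rely on can be sketched using the closed form familiar from the lp-norm MKL literature (the exact normalization in this particular paper may differ); given the per-kernel norms of the current SVM solution:

```python
import numpy as np

def lp_mkl_weight_update(block_norms, p=2.0):
    # Closed-form kernel-weight step from the lp-norm MKL literature:
    # eta_m = ||w_m||**(2/(p+1)) / (sum_k ||w_k||**(2p/(p+1)))**(1/p),
    # chosen so the weights satisfy the constraint sum_m eta_m**p = 1.
    a = np.asarray(block_norms, float) ** (2.0 / (p + 1.0))
    return a / np.sum(a ** p) ** (1.0 / p)

eta = lp_mkl_weight_update([1.0, 2.0, 3.0], p=2.0)
```

Kernels whose SVM solution blocks carry more weight receive larger combination coefficients, and the update alternates with re-solving the SVM at fixed weights.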
Spectrum-based kernel length estimation for Gaussian process classification.
Wang, Liang; Li, Chuan
2014-06-01
Recent studies have shown that Gaussian process (GP) classification, a discriminative supervised learning approach, has achieved competitive performance in real applications compared with most state-of-the-art supervised learning methods. However, the problem of automatic model selection in GP classification, involving the kernel function form and the corresponding parameter values (which are unknown in advance), remains a challenge. To make GP classification a more practical tool, this paper presents a novel spectrum analysis-based approach for model selection by refining the GP kernel function to match the given input data. Specifically, we target the problem of GP kernel length scale estimation. Spectra are first calculated analytically from the kernel function itself using the autocorrelation theorem, as well as estimated numerically from the training data themselves. Then, the kernel length scale is automatically estimated by equating the two spectrum values, i.e., the kernel function spectrum equals the estimated training data spectrum. Compared with the classical Bayesian method for kernel length scale estimation via maximizing the marginal likelihood (which is time consuming and could suffer from multiple local optima), extensive experimental results on various data sets show that our proposed method is both efficient and accurate.
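A rough illustration of the spectrum-matching idea (an assumed two-frequency variant with a closed form, not necessarily the paper's exact estimator): equate the analytic RBF kernel spectrum with the signal's periodogram at two frequencies, and solve the ratio for the length scale.

```python
import numpy as np

def rbf_length_from_spectrum(x, dt=1.0, i1=1, i2=4):
    # Periodogram of the (mean-removed) signal at two angular frequencies;
    # matching it to the RBF spectrum exp(-l**2 * w**2 / 2) (up to a common
    # constant) gives a closed form for the length scale l from the ratio.
    n = len(x)
    P = np.abs(np.fft.rfft(x - np.mean(x))) ** 2 / n
    w = 2.0 * np.pi * np.fft.rfftfreq(n, d=dt)
    l2 = 2.0 * np.log(P[i1] / P[i2]) / (w[i2] ** 2 - w[i1] ** 2)
    return np.sqrt(max(l2, 0.0))

# Sanity check on a sampled Gaussian bump, whose energy spectrum is itself
# Gaussian; the recovered length scale should be close to sqrt(2).
t = np.linspace(-10.0, 10.0, 256)
est = rbf_length_from_spectrum(np.exp(-t ** 2 / 2.0), dt=t[1] - t[0])
```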
Relaxation and diffusion models with non-singular kernels
Sun, HongGuang; Hao, Xiaoxiao; Zhang, Yong; Baleanu, Dumitru
2017-02-01
Anomalous relaxation and diffusion processes have been widely quantified by fractional derivative models, where the definition of the fractional-order derivative remains a historical debate due to its limitation in describing different kinds of non-exponential decays (e.g. stretched exponential decay). Meanwhile, many efforts by mathematicians and engineers have been made to overcome the singularity of power function kernel in its definition. This study first explores physical properties of relaxation and diffusion models where the temporal derivative was defined recently using an exponential kernel. Analytical analysis shows that the Caputo type derivative model with an exponential kernel cannot characterize non-exponential dynamics well-documented in anomalous relaxation and diffusion. A legitimate extension of the previous derivative is then proposed by replacing the exponential kernel with a stretched exponential kernel. Numerical tests show that the Caputo type derivative model with the stretched exponential kernel can describe a much wider range of anomalous diffusion than the exponential kernel, implying the potential applicability of the new derivative in quantifying real-world, anomalous relaxation and diffusion processes.
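A schematic numerical sketch of a Caputo-type derivative with a stretched-exponential kernel (the normalizing prefactor and the exact argument of the kernel are assumptions for illustration, not taken from the paper):

```python
import numpy as np

def caputo_stretched(f_vals, t, alpha=0.5, beta=0.7):
    # Trapezoidal quadrature of f'(tau) against the stretched-exponential
    # kernel exp(-(alpha*(t - tau)/(1 - alpha))**beta); beta = 1 recovers
    # an exponential (Caputo-Fabrizio-like) kernel. Prefactor omitted.
    df = np.gradient(f_vals, t)
    h = t[1] - t[0]                 # uniform grid assumed
    out = np.zeros_like(np.asarray(f_vals, float))
    for i in range(1, len(t)):
        tau = t[:i + 1]
        ker = np.exp(-((alpha * (t[i] - tau) / (1.0 - alpha)) ** beta))
        vals = df[:i + 1] * ker
        out[i] = h * (vals.sum() - 0.5 * (vals[0] + vals[-1]))
    return out

t = np.linspace(0.0, 1.0, 50)
d = caputo_stretched(t, t)          # f(t) = t, so f'(tau) = 1
```

For f(t) = t the result is just the accumulated kernel mass, which grows monotonically; varying beta changes how quickly the memory of past increments fades.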
Per-Sample Multiple Kernel Approach for Visual Concept Learning
Directory of Open Access Journals (Sweden)
Tian Yonghong
2010-01-01
Learning visual concepts from images is an important yet challenging problem in computer vision and multimedia research areas. Multiple kernel learning (MKL) methods have shown great advantages in visual concept learning. As a visual concept often exhibits great appearance variance, a canonical MKL approach may not generate satisfactory results when a uniform kernel combination is applied over the input space. In this paper, we propose a per-sample multiple kernel learning (PS-MKL) approach to take into account intraclass diversity for improving discrimination. PS-MKL determines sample-wise kernel weights according to kernel functions and training samples. Kernel weights as well as kernel-based classifiers are jointly learned. For efficient learning, PS-MKL employs a sample selection strategy. Extensive experiments are carried out over three benchmarking datasets of different characteristics including Caltech101, WikipediaMM, and Pascal VOC'07. PS-MKL has achieved encouraging performance, comparable to the state of the art, which has outperformed a canonical MKL.
Widely Linear Complex-Valued Kernel Methods for Regression
Boloix-Tortosa, Rafael; Murillo-Fuentes, Juan Jose; Santos, Irene; Perez-Cruz, Fernando
2017-10-01
Usually, complex-valued RKHS are presented as a straightforward application of the real-valued case. In this paper we prove that this procedure yields a limited solution for regression. We show that another kernel, here denoted the pseudo-kernel, is needed to learn any function in complex-valued fields. Accordingly, we derive a novel RKHS to include it, the widely RKHS (WRKHS). When the pseudo-kernel cancels, WRKHS reduces to the complex-valued RKHS of previous approaches. We address the kernel and pseudo-kernel design, paying attention to the case where both the kernel and the pseudo-kernel are complex-valued. In the experiments included we report remarkable improvements in simple scenarios where real and imaginary parts have different similitude relations for given inputs, or where real and imaginary parts are correlated. In the context of these novel results we revisit the problem of non-linear channel equalization, to show that the WRKHS helps to design more efficient solutions.
Localized Multiple Kernel Learning Via Sample-Wise Alternating Optimization.
Han, Yina; Yang, Kunde; Ma, Yuanliang; Liu, Guizhong
2014-01-01
Our objective is to train support vector machines (SVM)-based localized multiple kernel learning (LMKL), using the alternating optimization between the standard SVM solvers with the local combination of base kernels and the sample-specific kernel weights. The advantage of alternating optimization developed from the state-of-the-art MKL is the SVM-tied overall complexity and the simultaneous optimization on both the kernel weights and the classifier. Unfortunately, in LMKL, the sample-specific character makes the updating of kernel weights a difficult quadratic nonconvex problem. In this paper, starting from a new primal-dual equivalence, the canonical objective on which state-of-the-art methods are based is first decomposed into an ensemble of objectives corresponding to each sample, namely, sample-wise objectives. Then, the associated sample-wise alternating optimization method is conducted, in which the localized kernel weights can be independently obtained by solving their exclusive sample-wise objectives, either linear programming (for l1-norm) or with closed-form solutions (for lp-norm). At test time, the learnt kernel weights for the training data are deployed based on the nearest-neighbor rule. Hence, to guarantee their generality among the test part, we introduce the neighborhood information and incorporate it into the empirical loss when deriving the sample-wise objectives. Extensive experiments on four benchmark machine learning datasets and two real-world computer vision datasets demonstrate the effectiveness and efficiency of the proposed algorithm.
Per-Sample Multiple Kernel Approach for Visual Concept Learning
Directory of Open Access Journals (Sweden)
Ling-Yu Duan
2010-01-01
Learning visual concepts from images is an important yet challenging problem in computer vision and multimedia research. Multiple kernel learning (MKL) methods have shown great advantages in visual concept learning. As a visual concept often exhibits great appearance variance, a canonical MKL approach may not generate satisfactory results when a uniform kernel combination is applied over the input space. In this paper, we propose a per-sample multiple kernel learning (PS-MKL) approach that takes intraclass diversity into account to improve discrimination. PS-MKL determines sample-wise kernel weights according to kernel functions and training samples. Kernel weights as well as kernel-based classifiers are jointly learned. For efficient learning, PS-MKL employs a sample selection strategy. Extensive experiments are carried out over three benchmark datasets of different characteristics: Caltech101, WikipediaMM, and Pascal VOC'07. PS-MKL has achieved encouraging performance, comparable to the state of the art, outperforming canonical MKL.
Inverse Integral Kernel for Diffusion in a Harmonic Potential
Kosugi, Taichi
2014-05-01
The inverse integral kernel for the diffusion of an overdamped Brownian particle in a harmonic potential is derived in the present study. It is numerically demonstrated that a sufficiently large number of polynomials is needed in the calculation of the inverse integral kernel for the accurate reproduction of a past probability distribution function. The derived inverse integral kernel can be used around each of the minima of a generic potential, provided that the lifetimes of the populations in the neighboring higher wells are much longer than the negative time lapse.
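For context, the forward kernel that this inverse operation undoes is standard. Assuming the usual overdamped Langevin dynamics $dx = -\gamma x\,dt + \sqrt{2D}\,dW$ (a generic parametrization, not necessarily the paper's), the forward propagator is the Gaussian

```latex
P(x, t \mid x_0, 0)
  = \sqrt{\frac{\gamma}{2\pi D \left(1 - e^{-2\gamma t}\right)}}\,
    \exp\!\left[-\frac{\gamma \left(x - x_0\, e^{-\gamma t}\right)^2}
                      {2 D \left(1 - e^{-2\gamma t}\right)}\right].
```

Inverting this smoothing map is the classically ill-conditioned backward problem, consistent with the abstract's observation that many polynomial terms are required for an accurate reconstruction of the past distribution.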
Non-Rigid Object Tracking by Anisotropic Kernel Mean Shift
Institute of Scientific and Technical Information of China (English)
Anonymous
2007-01-01
Mean shift, an iterative procedure that shifts each data point to the average of the data points in its neighborhood, has been applied to object tracking. However, the traditional mean shift tracker with an isotropic kernel often loses the object when the object structure changes in the video sequence, especially when it varies quickly. This paper proposes a non-rigid object tracker based on anisotropic kernel mean shift, in which the shape, scale, and orientation of the kernels adapt to the changing object structure. The experimental results show that the new tracker is self-adaptive and approximately twice as fast as the traditional tracker, ensuring robust, real-time tracking.
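For readers unfamiliar with the baseline procedure, the core isotropic-kernel mean shift iteration (the traditional tracker's building block, not the authors' anisotropic variant) can be sketched in NumPy; the synthetic data and bandwidth below are illustrative assumptions:

```python
import numpy as np

def mean_shift(points, bandwidth=1.0, n_iter=50, tol=1e-6):
    """Shift every point to the Gaussian-kernel weighted average of
    the data in its neighborhood until the shifts become negligible."""
    modes = points.astype(float).copy()
    for _ in range(n_iter):
        shifted = np.empty_like(modes)
        for i, m in enumerate(modes):
            d2 = np.sum((points - m) ** 2, axis=1)    # squared distances
            w = np.exp(-d2 / (2.0 * bandwidth ** 2))  # isotropic kernel
            shifted[i] = (w[:, None] * points).sum(axis=0) / w.sum()
        done = np.max(np.abs(shifted - modes)) < tol
        modes = shifted
        if done:
            break
    return modes

# Illustrative data: two tight 2-D clusters collapse onto their modes
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0.0, 0.1, (30, 2)),
                 rng.normal(5.0, 0.1, (30, 2))])
modes = mean_shift(pts, bandwidth=0.5)
```

Each point climbs to a mode of the kernel density estimate; the anisotropic variant replaces the single scalar bandwidth with a per-point covariance whose shape and orientation track the object structure.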
Bergman kernel function on Hua construction of the fourth type
Institute of Scientific and Technical Information of China (English)
Anonymous
2007-01-01
This paper introduces the Hua construction and presents the holomorphic automorphism group of the Hua construction of the fourth type. Utilizing the Bergman kernel function, under the condition of holomorphic automorphism and the standard complete orthonormal system of the semi-Reinhardt domain, the infinite series form of the Bergman kernel function is derived. By applying the properties of polynomial and Γ functions, various identities for the aforementioned form are developed and the explicit formula of the Bergman kernel function for the Hua construction of the fourth type is obtained, which suggests that many previously reported results are only special cases of our findings.
The Bergman kernel function of some Reinhardt domains (Ⅱ)
Institute of Scientific and Technical Information of China (English)
龚昇; 郑学安
2000-01-01
The boundary behavior of the Bergman kernel function of a kind of Reinhardt domain is studied, and upper and lower bounds for the Bergman kernel function are found at the diagonal points (Z, Z). Let Ω be the Reinhardt domain in question and let K(Z, W) be its Bergman kernel function. Then there exist two positive constants m and M and a function F such that the corresponding two-sided bound on K(Z, Z) holds for every Z ∈ Ω, where the constants m and M depend only on Ω. This result extends some previously known results.
Explicit signal to noise ratio in reproducing kernel Hilbert spaces
DEFF Research Database (Denmark)
Gomez-Chova, Luis; Nielsen, Allan Aasbjerg; Camps-Valls, Gustavo
2011-01-01
This paper introduces a nonlinear feature extraction method based on kernels for remote sensing data analysis. The proposed approach is based on the minimum noise fraction (MNF) transform, which maximizes the signal variance while minimizing the estimated noise variance. We here propose an alternative kernel MNF (KMNF) in which the noise is explicitly estimated in the reproducing kernel Hilbert space. This enables KMNF to deal with nonlinear relations between the noise and the signal features jointly. Results show that the proposed KMNF provides the most noise-free features when confronted...
The Bergman Kernels on Generalized Exceptional Hua Domains
Institute of Scientific and Technical Information of China (English)
殷慰萍; 赵振刚
2001-01-01
Yin Weiping introduced four types of Hua domain, which are built on the four types of Cartan domain, and the Bergman kernels on these four types of Hua domain can be computed in explicit form [1]. In this paper, two types of domains defined by (10) and (11) (see below) are introduced, which are built on the two exceptional Cartan domains, and we compute the Bergman kernels explicitly for these two domains. We also study the asymptotic behavior of the Bergman kernel function near boundary points, drawing on Appell's multivariable hypergeometric function.
Flour quality and kernel hardness connection in winter wheat
Directory of Open Access Journals (Sweden)
Szabó B. P.
2016-12-01
Kernel hardness is controlled by the friabilin protein and depends on the relation between the protein matrix and the starch granules. Friabilin is present in high concentration in soft grain varieties and in low concentration in hard grain varieties. High-gluten, hard wheat flour generally contains about 12.0–13.0% crude protein under mid-European conditions. The relationship between wheat protein content and kernel texture is usually positive, and kernel texture influences the power consumption during milling: hard-textured wheat grains require more grinding energy than soft-textured grains.
Linear and kernel methods for multi- and hypervariate change detection
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg; Canty, Morton J.
2010-01-01
IDL (Interactive Data Language) implementations of IR-MAD, automatic radiometric normalization and kernel PCA/MAF/MNF transformations have been written which function as transparent and fully integrated extensions of the ENVI remote sensing image analysis environment. Also, Matlab code exists which allows for fast data exploration and experimentation with smaller datasets. Computationally demanding kernelization of test data with training data and kernel image projections have been programmed to run on massively parallel CUDA-enabled graphics processors, when available...
FUZZY PRINCIPAL COMPONENT ANALYSIS AND ITS KERNEL BASED MODEL
Institute of Scientific and Technical Information of China (English)
Anonymous
2007-01-01
Principal Component Analysis (PCA) is one of the most important feature extraction methods, and Kernel Principal Component Analysis (KPCA) is a nonlinear extension of PCA based on kernel methods. In the real world, an input datum may not be fully assigned to one class; it may partially belong to other classes. Based on the theory of fuzzy sets, this paper presents Fuzzy Principal Component Analysis (FPCA) and its nonlinear extension, Kernel-based Fuzzy Principal Component Analysis (KFPCA). The experimental results indicate that the proposed algorithms perform well.
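The kernelization step that turns PCA into KPCA is compact enough to sketch. Below is a generic RBF-kernel version in NumPy (plain KPCA, not the paper's fuzzy extension); the kernel width `gamma` and the test data are illustrative assumptions:

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """Project data onto the top principal components in the RBF
    kernel's implicit feature space (generic KPCA)."""
    n = X.shape[0]
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-gamma * d2)                    # RBF Gram matrix
    J = np.eye(n) - np.full((n, n), 1.0 / n)   # centering matrix
    Kc = J @ K @ J                             # kernel centered in feature space
    vals, vecs = np.linalg.eigh(Kc)            # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:n_components]
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
    return Kc @ alphas                         # kernel projections of the data

# Illustrative use on random 3-D data
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 3))
Z = kernel_pca(X, n_components=2, gamma=0.5)
```

The double-centering of the Gram matrix plays the role of mean subtraction in feature space; a fuzzy variant would additionally weight each sample's contribution by its membership degree.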
Mercer Kernel Based Fuzzy Clustering Self-Adaptive Algorithm
Institute of Scientific and Technical Information of China (English)
李侃; 刘玉树
2004-01-01
A novel Mercer kernel based fuzzy clustering self-adaptive algorithm is presented. The Mercer kernel method is introduced into fuzzy c-means clustering; it implicitly maps the input data into a high-dimensional feature space through a nonlinear transformation. In fuzzy c-means and its variants, the number of clusters must be determined first; here a self-adaptive algorithm is proposed in which the number of clusters, not given in advance, is obtained automatically by a validity measure function. Finally, experiments are given to show the better performance of the kernel based fuzzy c-means self-adaptive algorithm.
Composition Formulas of Bessel-Struve Kernel Function
Directory of Open Access Journals (Sweden)
K. S. Nisar
2016-01-01
The object of this paper is to study and develop the generalized fractional calculus operators involving Appell's function F3(·) due to Marichev-Saigo-Maeda. Here, we establish generalized fractional calculus formulas involving the Bessel-Struve kernel function Sαλ(z), λ, z ∈ C, obtaining the results in terms of generalized Wright functions. The representation of the Bessel-Struve kernel function in terms of the exponential function, and its relation with the Bessel and Struve functions, are also discussed. Pathway integral representations of the Bessel-Struve kernel function are also given in this study.
Capturing Option Anomalies with a Variance-Dependent Pricing Kernel
DEFF Research Database (Denmark)
Christoffersen, Peter; Heston, Steven; Jacobs, Kris
2013-01-01
We develop a GARCH option model with a new pricing kernel allowing for a variance premium. While the pricing kernel is monotonic in the stock return and in variance, its projection onto the stock return is nonmonotonic; a negative variance premium makes it U-shaped. We present new semiparametric evidence to confirm this U-shaped relationship between the risk-neutral and physical probability densities. The new pricing kernel substantially improves our ability to reconcile the time-series properties of stock returns with the cross-section of option prices. It provides a unified explanation...
Urrutia, Eugene; Lee, Seunggeun; Maity, Arnab; Zhao, Ni; Shen, Judong; Li, Yun; Wu, Michael C
Analysis of rare genetic variants has focused on region-based analysis, wherein a subset of the variants within a genomic region is tested for association with a complex trait. Two important practical challenges have emerged. First, it is difficult to choose which test to use. Second, it is unclear which group of variants within a region should be tested. Both depend on the unknown true state of nature. Therefore, we develop the Multi-Kernel SKAT (MK-SKAT), which tests across a range of rare variant tests and groupings. Specifically, we demonstrate that several popular rare variant tests are special cases of the sequence kernel association test, which compares pair-wise similarity in trait value to similarity in the rare variant genotypes between subjects as measured through a kernel function. Choosing a particular test is equivalent to choosing a kernel. Similarly, choosing which group of variants to test also reduces to choosing a kernel. Thus, MK-SKAT uses perturbation to test across a range of kernels. Simulations and real data analyses show that our framework controls type I error while maintaining high power across settings: MK-SKAT loses some power compared to the best kernel for a particular scenario but has much greater power than poor choices.
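The core identity, that choosing a test is choosing a kernel, is easy to make concrete. Below is a minimal sketch of the sequence-kernel association statistic with a weighted linear kernel on hypothetical toy data; it omits the null-distribution machinery and covariate adjustment that a real SKAT implementation requires:

```python
import numpy as np

def skat_statistic(y, G, weights):
    """Kernel association score Q = r' K r with the weighted linear
    kernel K = G W W' G' on the genotype matrix G (subjects x variants);
    r are the phenotype residuals under the null (no covariates here)."""
    r = y - y.mean()
    WG = G * weights            # column-wise weighting, i.e. G @ diag(w)
    K = WG @ WG.T               # genotype-similarity kernel between subjects
    return float(r @ K @ r)

# Hypothetical toy data: 6 subjects, 3 rare variants (0/1 carrier status)
G = np.array([[0, 1, 0],
              [1, 0, 0],
              [0, 0, 1],
              [0, 0, 0],
              [1, 1, 0],
              [0, 0, 0]], dtype=float)
y = np.array([1.2, 0.8, 1.1, 0.2, 1.9, 0.1])
Q = skat_statistic(y, G, weights=np.ones(3))   # flat weights = one kernel choice
```

Swapping the weight vector, or zeroing columns of G, changes the kernel and therefore the test, which is exactly the choice MK-SKAT spans by perturbation.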
Optimizing The Performance of Streaming Numerical Kernels On The IBM Blue Gene/P PowerPC 450
Malas, Tareq
2011-07-01
Several emerging petascale architectures use energy-efficient processors with vectorized computational units and in-order thread processing. On these architectures the sustained performance of streaming numerical kernels, ubiquitous in the solution of partial differential equations, represents a formidable challenge despite the regularity of memory access. Sophisticated optimization techniques beyond the capabilities of modern compilers are required to fully utilize the Central Processing Unit (CPU). The aim of the work presented here is to improve the performance of streaming numerical kernels on high performance architectures by developing efficient algorithms to utilize the vectorized floating point units. The importance of short development time demands tools that enable simple yet direct development in assembly for the power-efficient cores featuring in-order execution and multiple-issue units. We implement several stencil kernels for a variety of cached memory scenarios using our Python instruction simulation and generation tool. Our technique simplifies the development of efficient assembly code for the IBM Blue Gene/P supercomputer's PowerPC 450. This enables us to perform high-level design, construction, verification, and simulation on a subset of the CPU's instruction set. Our framework has the capability to implement streaming numerical kernels on current and future high performance architectures. Finally, we present several automatically generated implementations, including a 27-point stencil achieving a 1.7x speedup over the best previously published results.
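As a point of reference for what such tools optimize, a naive 27-point stencil (the kernel class benchmarked above) looks like this in NumPy; the weight layout and test field are illustrative assumptions:

```python
import numpy as np

def stencil_27pt(u, w):
    """Naive reference 27-point stencil: every interior cell becomes a
    weighted sum of its 3x3x3 neighborhood; boundary cells are left zero."""
    out = np.zeros_like(u)
    ni, nj, nk = u.shape
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            for dk in (-1, 0, 1):
                out[1:-1, 1:-1, 1:-1] += (
                    w[di + 1, dj + 1, dk + 1]
                    * u[1 + di:ni - 1 + di,
                        1 + dj:nj - 1 + dj,
                        1 + dk:nk - 1 + dk])
    return out

# Averaging weights leave a constant field unchanged in the interior
u = np.full((6, 6, 6), 3.0)
w = np.full((3, 3, 3), 1.0 / 27.0)
v = stencil_27pt(u, w)
```

Production versions restructure exactly this loop nest: SIMD-friendly data layout, register blocking and, in this paper, hand-scheduled PowerPC 450 assembly produced by the generation tool.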
A Monte Carlo algorithm for degenerate plasmas
Energy Technology Data Exchange (ETDEWEB)
Turrell, A.E., E-mail: a.turrell09@imperial.ac.uk; Sherlock, M.; Rose, S.J.
2013-09-15
A procedure for performing Monte Carlo calculations of plasmas with an arbitrary level of degeneracy is outlined. It has possible applications in inertial confinement fusion and astrophysics. Degenerate particles are initialised according to the Fermi–Dirac distribution function, and scattering is via a Pauli blocked binary collision approximation. The algorithm is tested against degenerate electron–ion equilibration, and the degenerate resistivity transport coefficient from unmagnetised first order transport theory. The code is applied to the cold fuel shell and alpha particle equilibration problem of inertial confinement fusion.
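The Fermi–Dirac initialisation step can be sketched with simple rejection sampling; the density-of-states factor, energy cutoff and temperatures below are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def sample_fermi_dirac(n, mu=1.0, kT=0.1, e_max=3.0, rng=None):
    """Rejection-sample particle energies with density proportional to
    sqrt(E) / (exp((E - mu)/kT) + 1): a 3-D density of states times the
    Fermi-Dirac occupation, truncated at e_max."""
    rng = np.random.default_rng() if rng is None else rng

    def g(E):
        return np.sqrt(E) / (np.exp((E - mu) / kT) + 1.0)

    g_max = g(np.linspace(1e-6, e_max, 2000)).max()  # envelope height
    out = np.empty(0)
    while out.size < n:
        E = rng.uniform(0.0, e_max, size=2 * n)   # proposal energies
        u = rng.uniform(0.0, g_max, size=2 * n)   # acceptance heights
        out = np.concatenate([out, E[u < g(E)]])
    return out[:n]

samples = sample_fermi_dirac(5000, mu=1.0, kT=0.05,
                             rng=np.random.default_rng(0))
```

In the degenerate limit kT much less than mu the accepted energies fill the Fermi sphere, so the mean energy sits close to 3/5 of the chemical potential, the textbook degenerate-gas value.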
Horizon Supertranslation and Degenerate Black Hole Solution
Cai, Rong-Gen; Zhang, Yun-Long
2016-01-01
In this note we first review the degenerate vacua arising from the BMS symmetries. Following the discussion in [1], one can define BMS-analogous supertranslations and superrotations for a spacetime with a black hole in Gaussian null coordinates. At the leading and subleading orders of the near-horizon approximation, the infinitely degenerate black hole solutions are derived by considering the Einstein equations with or without a cosmological constant, and they are related to each other by the diffeomorphism generated by the horizon supertranslation. Higher-order results and degenerate Rindler horizon solutions are also given in the appendices.
Genetics of frontotemporal lobar degeneration
Directory of Open Access Journals (Sweden)
Aswathy P
2010-10-01
Frontotemporal lobar degeneration (FTLD) is a highly heterogeneous group of progressive neurodegenerative disorders characterized by atrophy of the prefrontal and anterior temporal cortices. Recently, research in the field of FTLD has gained increased attention due to the clinical, neuropathological, and genetic heterogeneity, and it has increased our understanding of the disease pathogenesis. FTLD is a genetically complex disorder. It has a strong genetic basis, and 50% of patients show a positive family history for FTLD. Linkage studies have revealed seven chromosomal loci, and a number of genes including MAPT, PGRN, VCP, and CHMP2B are associated with the disease. Neuropathologically, FTLD is classified into tauopathies and ubiquitinopathies. The vast majority of FTLD cases are characterized by pathological accumulation of tau or TDP-43 positive inclusions, each as an outcome of mutations in MAPT or PGRN, respectively. Identification of novel proteins involved in the pathophysiology of the disease, such as progranulin and TDP-43, may prove to provide excellent biomarkers of disease progression and thereby lead to the development of better therapeutic options through pharmacogenomics. However, much more dissection of the causative pathways is needed to get a full picture of the etiology. Over the past decade, advances in research on the genetics of FTLD have revealed many pathogenic mutations leading to different clinical manifestations of the disease. This review discusses the current concepts and recent advances in our understanding of the genetics of FTLD.
Robust Nonlinear Regression: A Greedy Approach Employing Kernels With Application to Image Denoising
Papageorgiou, George; Bouboulis, Pantelis; Theodoridis, Sergios
2017-08-01
We consider the task of robust non-linear regression in the presence of both inlier noise and outliers. Assuming that the unknown non-linear function belongs to a Reproducing Kernel Hilbert Space (RKHS), our goal is to estimate the set of the associated unknown parameters. Due to the presence of outliers, common techniques such as the Kernel Ridge Regression (KRR) or the Support Vector Regression (SVR) turn out to be inadequate. Instead, we employ sparse modeling arguments to explicitly model and estimate the outliers, adopting a greedy approach. The proposed robust scheme, i.e., Kernel Greedy Algorithm for Robust Denoising (KGARD), is inspired by the classical Orthogonal Matching Pursuit (OMP) algorithm. Specifically, the proposed method alternates between a KRR task and an OMP-like selection step. Theoretical results concerning the identification of the outliers are provided. Moreover, KGARD is compared against other cutting edge methods, where its performance is evaluated via a set of experiments with various types of noise. Finally, the proposed robust estimation framework is applied to the task of image denoising, and its enhanced performance in the presence of outliers is demonstrated.
DNA content in embryo and endosperm of maize kernel (Zea mays L.): impact on GMO quantification.
Trifa, Youssef; Zhang, David
2004-03-10
PCR-based techniques are the most widely used methods for the quantification of genetically modified organisms (GMOs) through the determination of the ratio of transgenic DNA to total DNA. It is shown that the DNA content per mass unit is significantly different among 10 maize cultivars. The DNA contents of endosperms, embryos, and teguments of individual kernels from 10 maize cultivars were determined. According to our results, the tegument's DNA ratio reaches at maximum 3.5% of the total kernel's DNA, whereas the endosperm's and the embryo's DNA ratios are nearly equal to 50%. The embryo cells are diploid and made of one paternal and one maternal haploid genome, whereas the endosperm is constituted of triploid cells made of two maternal haploid genomes and one paternal haploid genome. Therefore, it is shown, in this study, that the accuracy of the GMO quantification depends on the reference material used as well as on the category of the transgenic kernels present in the mixture.
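The ploidy bookkeeping in this abstract can be turned into a small worked example; assuming a hemizygous GM x non-GM cross and, following the study's figures, that endosperm and embryo each contribute about 50% of the kernel's DNA (tegument neglected):

```python
# Haploid-genome bookkeeping for one kernel from a GM x non-GM cross.
# Assumptions: endosperm and embryo each contribute ~50% of the kernel's
# DNA (per the study); the tegument (<= 3.5%) is neglected.

def kernel_transgene_ratio(paternal_gm):
    """Fraction of haploid genomes in the kernel carrying the transgene."""
    embryo = 1 / 2                                # diploid: 1 maternal + 1 paternal
    endosperm = 1 / 3 if paternal_gm else 2 / 3   # triploid: 2 maternal + 1 paternal
    return 0.5 * embryo + 0.5 * endosperm

r_pat = kernel_transgene_ratio(paternal_gm=True)    # GM pollen donor
r_mat = kernel_transgene_ratio(paternal_gm=False)   # GM seed parent
```

The same hemizygous event thus reads as roughly 42% or 58% transgenic DNA depending on the direction of the cross, which is precisely why the reference material and the category of transgenic kernels in the mixture affect quantification accuracy.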
Directory of Open Access Journals (Sweden)
Shuo Yang
2015-01-01
Filters based on spatially-variant amoeba morphology preserve edges well, but leave too much noise. For better denoising, this paper presents a new method to generate structuring elements for spatially-variant amoeba morphology. The amoeba kernel in the proposed strategy is divided into two parts: a patch-distance-based amoeba center and a geodesic-distance-based amoeba boundary, so that both the nonlocal patch distance and the local geodesic distance are taken into consideration. Compared to the traditional amoeba kernel, the new one has a more stable center, and its shape is less influenced by noise in the pilot image. More importantly, the nonlocal processing approach induces a pair of adjoint dilation and erosion operators, whose combinations can construct adaptive opening, closing, alternating sequential filters, etc. By designing the new amoeba kernel, a family of morphological filters is therefore derived. Finally, this paper presents a series of results on both synthetic and real images, along with comparisons with current state-of-the-art techniques, including novel applications to medical image processing and noisy SAR image restoration.
Directory of Open Access Journals (Sweden)
Shanshan Yang
Detection of dysphonia is useful for monitoring the progression of phonatory impairment in patients with Parkinson's disease (PD), and also helps assess disease severity. This paper describes statistical pattern analysis methods to study different vocal measurements of sustained phonations. The feature dimension reduction procedure was implemented using the sequential forward selection (SFS) and kernel principal component analysis (KPCA) methods. Four selected vocal measures were projected by the KPCA onto the bivariate feature space, in which the class-conditional feature densities can be approximated with the nonparametric kernel density estimation technique. In the vocal pattern classification experiments, Fisher's linear discriminant analysis (FLDA) was applied to perform linear classification of voice records for healthy control subjects and PD patients, and the maximum a posteriori (MAP) decision rule and support vector machine (SVM) with radial basis function kernels were employed for the nonlinear classification tasks. Based on the KPCA-mapped feature densities, the MAP classifier successfully distinguished 91.8% of voice records, with a sensitivity rate of 0.986, a specificity rate of 0.708, and an area of 0.94 under the receiver operating characteristic (ROC) curve. The diagnostic performance provided by the MAP classifier was superior to those of the FLDA and SVM classifiers. In addition, the classification results indicated that gender is insensitive to dysphonia detection, and that the sustained phonations of PD patients with minimal functional disability are more difficult to identify correctly.
Performing edge detection by Difference of Gaussians using q-Gaussian kernels
Assirati, Lucas; Berton, Lilian; Lopes, Alneu de A; Bruno, Odemir M
2013-01-01
In image processing, edge detection is a valuable tool for extracting features from an image. This detection reduces the amount of information to be processed, since redundant information (considered less relevant) can be discarded. Edge detection consists of determining the points of a digital image whose intensity changes sharply; these changes are due, for example, to discontinuities in the orientation of a surface. A well-known method of edge detection is the Difference of Gaussians (DoG). The method consists of subtracting two Gaussian kernels, where one kernel has a smaller standard deviation than the other. The convolution between this difference of kernels and the input image yields the edge detection of the image. This paper introduces a method of extracting edges using DoG with kernels based on the q-Gaussian probability distribution, derived from the q-statistics proposed by Constantino Tsallis. To demonstrate the method's potential, we compare the introduced...
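A minimal 1-D version of the q-Gaussian DoG can be sketched as follows; the kernel size, sigmas and value of q are illustrative choices, and the q -> 1 limit recovers the classical DoG:

```python
import numpy as np

def q_gaussian_kernel(size, sigma, q):
    """Normalized 1-D q-Gaussian kernel; q -> 1 recovers the Gaussian."""
    x = np.arange(size) - size // 2
    arg = -x ** 2 / (2.0 * sigma ** 2)
    if abs(q - 1.0) < 1e-8:
        g = np.exp(arg)
    else:
        # q-exponential: [1 + (1 - q) * arg]_+ ** (1 / (1 - q))
        g = np.maximum(1.0 + (1.0 - q) * arg, 0.0) ** (1.0 / (1.0 - q))
    return g / g.sum()

def dog_response(signal, sigma1=1.0, sigma2=2.0, q=1.5, size=9):
    """Difference of q-Gaussians: narrow kernel minus wide kernel,
    convolved with the signal; large |response| marks intensity edges."""
    k = q_gaussian_kernel(size, sigma1, q) - q_gaussian_kernel(size, sigma2, q)
    return np.convolve(signal, k, mode='same')

# A step edge at index 20; search away from the array borders
step = np.r_[np.zeros(20), np.ones(20)]
resp = dog_response(step)
edge = int(np.argmax(np.abs(resp[5:35]))) + 5
```

Because both kernels are normalized, the difference kernel sums to zero, so flat regions give no response while the band-pass peak localizes the step.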
Carvalho, B F; Ávila, C L S; Bernardes, T F; Pereira, M N; Santos, C; Schwan, R F
2017-03-01
The aim of this study was to evaluate the chemical and microbiological characteristics and to identify the lactic acid bacteria (LAB) and yeasts involved in rehydrated corn kernel silage. Four replicates were prepared for each fermentation time: 5, 15, 30, 60, 90, 150, 210 and 280 days. Matrix-assisted laser desorption/ionization time-of-flight mass spectrometry and PCR-based identification were utilized to identify LAB and yeasts. Eighteen bacterial and four yeast species were identified. The bacterial population reached maximum growth after 15 days, and moulds were detected up to this time. The highest dry matter (DM) loss was 7.6% after 280 days. The low concentration of water-soluble carbohydrates (20 g kg(-1) of DM) was not limiting for fermentation, although the reduction in pH and acid production occurred slowly. Storage of the rehydrated corn kernel silage increased digestibility up to day 280. This silage was dominated by LAB but showed a slow decrease in pH values. This technique of corn storage on farms increased the DM digestibility. This study was the first to evaluate the fermentation dynamics of rehydrated corn kernel silage, and our findings are relevant to the optimization of this silage fermentation. © 2016 The Society for Applied Microbiology.
Age related macular degeneration and visual disability.
Christoforidis, John B; Tecce, Nicola; Dell'Omo, Roberto; Mastropasqua, Rodolfo; Verolino, Marco; Costagliola, Ciro
2011-02-01
Age-related macular degeneration (AMD) is the leading cause of central blindness or low vision among the elderly in industrialized countries. AMD is caused by a combination of genetic and environmental factors. Among modifiable environmental risk factors, cigarette smoking has been associated with both the dry and wet forms of AMD and may increase the likelihood of worsening pre-existing AMD. Despite advances, the treatment of AMD has limitations and affected patients are often referred for low vision rehabilitation to help them cope with their remaining eyesight. The characteristic visual impairment for both forms of AMD is loss of central vision (central scotoma). This loss results in severe difficulties with reading that may be only partly compensated by magnifying glasses or screen-projection devices. The loss of central vision associated with the disease has a profound impact on patient quality of life. With progressive central visual loss, patients lose their ability to perform the more complex activities of daily living. Common vision aids include low vision filters, magnifiers, telescopes and electronic aids. Low vision rehabilitation (LVR) is a new subspecialty emerging from the traditional fields of ophthalmology, optometry, occupational therapy, and sociology, with an ever-increasing impact on the usual concepts of research, education, and services for visually impaired patients. Relatively few ophthalmologists practise LVR and fewer still routinely use prismatic image relocation (IR) in AMD patients. IR is a method of stabilizing oculomotor functions with the purpose of promoting better function of preferred retinal loci (PRLs). The aim of vision rehabilitation therapy consists in the achievement of techniques designed to improve PRL usage. The use of PRLs to compensate for diseased foveae has offered hope to these patients in regaining some function. However, in a recently published meta-analysis, prism spectacles were found to be unlikely to be of
Automated structural health monitoring based on adaptive kernel spectral clustering
Langone, Rocco; Reynders, Edwin; Mehrkanoon, Siamak; Suykens, Johan A. K.
2017-06-01
Structural health monitoring refers to the process of measuring damage-sensitive variables to assess the functionality of a structure. In principle, vibration data can capture the dynamics of the structure and reveal possible failures, but environmental and operational variability can mask this information. Thus, an effective outlier detection algorithm can be applied only after having performed data normalization (i.e. filtering) to eliminate external influences. Instead, in this article we propose a technique which unifies the data normalization and damage detection steps. The proposed algorithm, called adaptive kernel spectral clustering (AKSC), is initialized and calibrated in a phase when the structure is undamaged. The calibration process is crucial to ensure detection of early damage and minimize the number of false alarms. After the calibration, the method can automatically identify new regimes which may be associated with possible faults. These regimes are discovered by means of two complementary damage (i.e. outlier) indicators. The proposed strategy is validated with a simulated example and with real-life natural frequency data from the Z24 pre-stressed concrete bridge, which was progressively damaged at the end of a one-year monitoring period.
Boosted learned kernels for data-driven vesselness measure
Grisan, E.
2017-03-01
Common vessel centerline extraction methods rely on the computation of a measure providing the likeness of the local appearance of the data to a curvilinear tube-like structure. The most popular techniques rely on empirically designed (hand crafted) measurements as the widely used Hessian vesselness, the recent oriented flux tubeness or filters (e.g. the Gaussian matched filter) that are developed to respond to local features, without exploiting any context information nor the rich structural information embedded in the data. At variance with the previously proposed methods, we propose a completely data-driven approach for learning a vesselness measure from expert-annotated dataset. For each data point (voxel or pixel), we extract the intensity values in a neighborhood region, and estimate the discriminative convolutional kernel yielding a positive response for vessel data and negative response for non-vessel data. The process is iterated within a boosting framework, providing a set of linear filters, whose combined response is the learned vesselness measure. We show the results of the general-use proposed method on the DRIVE retinal images dataset, comparing its performance against the hessian-based vesselness, oriented flux antisymmetry tubeness, and vesselness learned with a probabilistic boosting tree or with a regression tree. We demonstrate the superiority of our approach that yields a vessel detection accuracy of 0.95, with respect to 0.92 (hessian), 0.90 (oriented flux) and 0.85 (boosting tree).
Sugiarto, Bunga; Rizal, Arra'di Nur; Galinium, Maulahikmah; Atmadiputra, Pradana; Rubianto, Melvin; Fahmi, Husni; Sampurno, Tri; Kisworo, Marsudi
2012-01-01
In this paper, we present an implementation of CLNP ground-to-ground packet processing for the ATN in Linux kernel version 2.6. We present the big picture of CLNP packet processing; the details of the input, routing, and output processing functions; and the implementation of each function based on ISO 8473-1. The functions implemented in this work are PDU header decomposition, header format analysis, header error detection, error reporting, reassembly, source routing, congestion notification, forwarding, composition, segmentation, and transmit-to-device functions. Each function is initially implemented and tested as a separate loadable kernel module. These modules are successfully loaded into Linux kernel 2.6.
Surface Graphite Degeneration in Ductile Iron Castings for Resin Molds
Institute of Scientific and Technical Information of China (English)
Iulian Riposan; Mihai Chisamera; Stelian Stan; Torbjorn Skaland
2008-01-01
The objective of this paper is to review the factors influencing the formation of degenerated graphite layers on the surfaces of ductile iron castings for chemical resin-acid molding and core-making systems, and how to reduce this defect. In the resin mold technique the sulphur in the p-toluol sulphonic acid (PTSA), usually used as the hardener, has been identified as one factor causing graphite degeneration at the metal-mold interface. Less than 0.15% S in the mold (or even less than 0.07% S) can reduce the surface layer depth. Oxygen may also have an effect, especially for sulphur-containing systems with turbulent flows in the mold, water-bearing no-bake binder systems, Mg-silica reactions, or dross formation conditions. Despite the lower level of nitrogen in the iron melt after magnesium treatment (less than 90 ppm), nitrogen-bearing resins have a profound effect on the frequency and severity of surface pinholes, but a limited influence on surface graphite degeneration.
Saponin inventory from Argania spinosa kernel cakes by liquid chromatography and mass spectrometry.
Henry, Max; Kowalczyk, Mariusz; Maldini, Mariateresa; Piacente, Sonia; Stochmal, Anna; Oleszek, Wiesław
2013-01-01
Argania spinosa kernel cakes, obtained from the argan oil extraction process, are known to contain large amounts of saponins. Only a few have been characterised previously, owing to the use of pure ethanol as the extracting solvent; aqueous 50% ethanol improved the extraction of the more polar saponins. The aim was the identification of polar saponins in kernel cakes of Argania spinosa by liquid chromatography-mass spectrometry and NMR techniques. Defatted kernel cakes were first extracted with ethanol and then twice with 50% aqueous ethanol. Individual crude extracts were analysed with an ion-trap mass spectrometer in negative-mode electrospray MS and MS/MS modes. NMR experiments were run under standard conditions at 300 K on a Bruker DRX-600 spectrometer. The LC-MS base-peak chromatogram of saponins from the pure ethanol extract was dominated by 11 large and several small peaks, but the UV chromatogram showed only two peaks, corresponding to the main neutral saponins found previously in Argania: arganines A and B. In the 50% aqueous ethanol extracts, numerous other saponins were detected. Many of them were glucuronide oleanane-type triterpene carboxylic acid 3,28-O-bidesmosides (GOTCAB saponins). The assignments of the (1)H- and (13)C-NMR spectra of the four most abundant GOTCAB saponins confirmed the MS results. Four GOTCAB saponins were structurally identified by NMR analysis in the 50% aqueous ethanol extract. Furthermore, LC-MS analyses showed the presence of at least 19 additional polar saponins in these kernel cakes.
Bioconversion of palm kernel meal for aquaculture: Experiences ...
African Journals Online (AJOL)
SERVER
2008-04-17
Apr 17, 2008 ... countries where so much agro-industry by-products exist such as palm kernel meal, .... as basic ingredients for margarine production, confectionery, animal ..... Sciences, Universiti Sains Malaysia, Penang 11800, Malaysia.
Linear and kernel methods for multivariate change detection
DEFF Research Database (Denmark)
Canty, Morton J.; Nielsen, Allan Aasbjerg
2012-01-01
… as well as maximum autocorrelation factor (MAF) and minimum noise fraction (MNF) analyses of IR-MAD images, both linear and kernel-based (nonlinear), may further enhance change signals relative to the no-change background. IDL (Interactive Data Language) implementations of IR-MAD, automatic radiometric normalization, and kernel PCA/MAF/MNF transformations are presented that function as transparent and fully integrated extensions of the ENVI remote sensing image analysis environment. The train/test approach to kernel PCA is evaluated against a Hebbian learning procedure. Matlab code is also available that allows fast data exploration and experimentation with smaller datasets. New, multiresolution versions of IR-MAD that accelerate convergence and further reduce no-change background noise are introduced. Computationally expensive matrix diagonalization and kernel image projections are programmed …
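The kernel PCA transformation mentioned in this abstract can be sketched in a few lines of numpy. This is an illustrative RBF-kernel sketch, not the authors' IDL/ENVI or Matlab implementation; the function name and parameters are ours:

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """Project the samples X (n x d) onto the leading kernel principal
    components of an RBF (Gaussian) kernel."""
    sq = np.sum(X ** 2, axis=1)
    # Gram matrix K_ij = exp(-gamma * ||x_i - x_j||^2)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    # double-centre K so the implicit feature-space data have zero mean
    Kc = K - one @ K - K @ one + one @ K @ one
    vals, vecs = np.linalg.eigh(Kc)            # eigenvalues ascending
    order = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[order], vecs[:, order]
    # scale eigenvectors so the projected scores are properly normalised
    alphas = vecs / np.sqrt(np.maximum(vals, 1e-12))
    return Kc @ alphas                         # n x n_components scores
```

For change detection, X would hold per-pixel difference (e.g. IR-MAD) values; the leading components then separate change from no-change background. Scores on distinct components are mutually orthogonal.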
OSCILLATORY SINGULAR INTEGRALS WITH VARIABLE ROUGH KERNEL, Ⅱ
Institute of Scientific and Technical Information of China (English)
Tang Lin; Yang Dachun
2003-01-01
Let n ≥ 2. In this paper, the authors establish the L^2(R^n)-boundedness of some oscillatory singular integrals with variable rough kernels by means of some estimates on hypergeometric functions and confluent hypergeometric functions.
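For orientation, a typical operator of this kind has the following general form (a hedged sketch; the paper's exact hypotheses on the phase and kernel may differ):

```latex
T f(x) \;=\; \mathrm{p.v.} \int_{\mathbb{R}^n}
  e^{\,i P(x,y)} \, \frac{\Omega(x,\, x-y)}{|x-y|^{n}} \, f(y) \, dy ,
```

where P is a polynomial phase and, for each x, Ω(x, ·) is homogeneous of degree zero with vanishing mean value on the unit sphere S^{n-1}. The dependence of Ω on x is what makes the kernel "variable" and "rough".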
A Security Kernel Architecture Based Trusted Computing Platform
Institute of Scientific and Technical Information of China (English)
CHEN You-lei; SHEN Chang-xiang
2005-01-01
A security kernel architecture built on a trusted computing platform is presented in the light of current thinking about trusted computing. In this architecture, a new security module, the TCB (Trusted Computing Base), is added to the operating system kernel, and two operation interface modes are provided for self-protection. The security kernel is divided into two parts, and the trusted mechanisms are separated from the security functionality. The TCB module implements the trusted mechanisms, such as measurement and attestation, while the other components of the security kernel provide security functionality based on these mechanisms. This architecture takes full advantage of the functions provided by the trusted platform and clearly defines the security perimeter of the TCB so as to assure self-security from an architectural viewpoint. We also present a functional description of the TCB and discuss its strengths and limitations compared with other related research.
A Thermodynamic Model for Argon Plasma Kernel Formation
Directory of Open Access Journals (Sweden)
James Keck
2010-11-01
Full Text Available Plasma kernel formation of argon is studied experimentally and theoretically. The experiments have been performed in a constant volume cylindrical vessel located in a shadowgraph system. The experiments have been done in constant pressure. The energy of plasma is supplied by an ignition system through two electrodes located in the vessel. The experiments have been done with two different spark energies to study the effect of input energy on kernel growth and its properties. A thermodynamic model employing mass and energy balance was developed to predict the experimental data. The agreement between experiments and model prediction is very good. The effect of various parameters such as initial temperature, initial radius of the kernel, and the radiation energy loss have been investigated and it has been concluded that initial condition is very important on formation and expansion of the kernel.