DEFF Research Database (Denmark)
Gimperlein, Heiko; Grubb, Gerd
2014-01-01
The purpose of this article is to establish upper and lower estimates for the integral kernel of the semigroup exp(−tP) associated to a classical, strongly elliptic pseudodifferential operator P of positive order on a closed manifold. The Poissonian bounds generalize those obtained for perturbations of fractional powers of the Laplacian. In the selfadjoint case, extensions to t ∈ C+ are studied. In particular, our results apply to the Dirichlet-to-Neumann semigroup.
Directory of Open Access Journals (Sweden)
Senyue Zhang
2016-01-01
Full Text Available Because the performance of an extreme learning machine (ELM) is strongly correlated with its kernel function, a novel extreme learning machine based on a generalized triangle Hermitian kernel function is proposed in this paper. First, the generalized triangle Hermitian kernel function is constructed as the product of a triangular kernel and a generalized Hermite Dirichlet kernel, and the proposed function is proved to be a valid ELM kernel. Then, the learning methodology of the extreme learning machine based on the proposed kernel function is presented. The biggest advantage of the proposed kernel is that its kernel parameter takes values only in the natural numbers, which greatly shortens the computational time of parameter optimization and retains more of the structural information in the sample data. Experiments were performed on a number of binary classification, multiclass classification, and regression datasets from the UCI benchmark repository. The results demonstrate that the proposed method outperforms extreme learning machines with other kernels in robustness and generalization performance. Furthermore, the learning speed of the proposed method is faster than that of support vector machine (SVM) methods.
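The training procedure shared by kernel ELMs can be illustrated in miniature. The sketch below is a generic kernel ELM, not the paper's triangle Hermitian kernel: it substitutes an RBF kernel and solves the regularized system (I/C + K)β = T; all names and parameter values are illustrative.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise RBF similarities between rows of A and rows of B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_elm_fit(X, T, C=100.0, gamma=1.0):
    # Solve (I/C + K) beta = T: regularized least squares in kernel space
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(np.eye(len(X)) / C + K, T)

def kernel_elm_predict(Xnew, X, beta, gamma=1.0):
    return rbf_kernel(Xnew, X, gamma) @ beta

# Toy XOR-like problem with one-hot-style targets
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[1., -1.], [-1., 1.], [-1., 1.], [1., -1.]])
beta = kernel_elm_fit(X, T, C=1e6, gamma=2.0)
pred = kernel_elm_predict(X, X, beta, gamma=2.0)
labels = pred.argmax(axis=1)
```

With a large C the fitted outputs nearly interpolate the targets, so the predicted labels recover the XOR pattern.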
Heat Kernel Asymptotics of Zaremba Boundary Value Problem
Energy Technology Data Exchange (ETDEWEB)
Avramidi, Ivan G. [Department of Mathematics, New Mexico Institute of Mining and Technology (United States)], E-mail: iavramid@nmt.edu
2004-03-15
The Zaremba boundary-value problem is a boundary value problem for Laplace-type second-order partial differential operators acting on smooth sections of a vector bundle over a smooth compact Riemannian manifold with smooth boundary, but with discontinuous boundary conditions: Dirichlet boundary conditions on one part of the boundary and Neumann boundary conditions on another. We study the heat kernel asymptotics of the Zaremba boundary value problem. The construction of the asymptotic solution of the heat equation is described in detail, and the heat kernel is computed explicitly in the leading approximation. Some of the first nontrivial coefficients of the heat kernel asymptotic expansion are computed explicitly.
Heat kernel analysis for Bessel operators on symmetric cones
DEFF Research Database (Denmark)
Möllers, Jan
2014-01-01
…The heat kernel is explicitly given in terms of a multivariable $I$-Bessel function on $Ω$. Its corresponding heat kernel transform defines a continuous linear operator between $L^p$-spaces. The unitary image of the $L^2$-space under the heat kernel transform is characterized as a weighted Bergmann space...
Heat kernels and zeta functions on fractals
International Nuclear Information System (INIS)
Dunne, Gerald V
2012-01-01
On fractals, spectral functions such as heat kernels and zeta functions exhibit novel features, very different from their behaviour on regular smooth manifolds, and these can have important physical consequences for both classical and quantum physics in systems having fractal properties. This article is part of a special issue of Journal of Physics A: Mathematical and Theoretical in honour of Stuart Dowker's 75th birthday devoted to ‘Applications of zeta functions and other spectral functions in mathematics and physics’. (paper)
Exact Heat Kernel on a Hypersphere and Its Applications in Kernel SVM
Directory of Open Access Journals (Sweden)
Chenchao Zhao
2018-01-01
Full Text Available Many contemporary statistical learning methods assume a Euclidean feature space. This paper presents a method for defining similarity based on hyperspherical geometry and shows that it often improves the performance of support vector machines compared to other competing similarity measures. Specifically, the idea of using heat diffusion on a hypersphere to measure similarity has been previously proposed and tested by Lafferty and Lebanon [1], demonstrating promising results based on a heuristic heat kernel obtained from the zeroth order parametrix expansion; however, how well this heuristic kernel agrees with the exact hyperspherical heat kernel remains unknown. This paper presents a higher order parametrix expansion of the heat kernel on a unit hypersphere and discusses several problems associated with this expansion method. We then compare the heuristic kernel with an exact form of the heat kernel expressed in terms of a uniformly and absolutely convergent series in high-dimensional angular momentum eigenmodes. Being a natural measure of similarity between sample points dwelling on a hypersphere, the exact kernel often shows superior performance in kernel SVM classifications applied to text mining, tumor somatic mutation imputation, and stock market analysis.
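The contrast between a Gaussian (parametrix-style) form and an exact eigenmode series can already be seen on the circle S^1, the one-dimensional sphere, where both are elementary. The sketch below is purely illustrative (it is not the paper's high-dimensional series): the spectral sum and the wrapped-Gaussian image sum agree by Poisson summation.

```python
import numpy as np

def heat_kernel_spectral(theta, t, nmax=200):
    # Exact heat kernel on the unit circle as a Fourier eigenmode series
    n = np.arange(1, nmax + 1)
    return (1.0 + 2.0 * np.sum(np.exp(-n**2 * t) * np.cos(n * theta))) / (2 * np.pi)

def heat_kernel_wrapped_gaussian(theta, t, mmax=50):
    # Same kernel written as a wrapped Gaussian (sum over images / closed geodesics)
    m = np.arange(-mmax, mmax + 1)
    return np.sum(np.exp(-(theta + 2 * np.pi * m) ** 2 / (4 * t))) / np.sqrt(4 * np.pi * t)

a = heat_kernel_spectral(0.7, 0.3)
b = heat_kernel_wrapped_gaussian(0.7, 0.3)
```

For moderate t both truncations converge extremely fast, so the two evaluations coincide to near machine precision.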
The asymptotic behaviour of the heat equation in a twisted Dirichlet-Neumann waveguide
Czech Academy of Sciences Publication Activity Database
Krejčiřík, David; Zuazua, E.
2011-01-01
Roč. 250, č. 5 (2011), s. 2334-2346 ISSN 0022-0396 R&D Projects: GA MŠk LC06002 Institutional research plan: CEZ:AV0Z10480505 Keywords : Laplacian * Dirichlet and Neumann boundary conditions * Twist Subject RIV: BE - Theoretical Physics Impact factor: 1.277, year: 2011
Yan, Yan
2015-01-01
We study a new optimization scheme that generates smooth and robust solutions for Dirichlet velocity boundary control (DVBC) of conjugate heat transfer (CHT) processes. The solutions to the DVBC of the incompressible Navier-Stokes equations are typically nonsmooth, due to the regularity degradation of the boundary stress in the adjoint Navier-Stokes equations. This nonsmoothness is inherited by the solutions to the DVBC of CHT processes, since the CHT process couples the Navier-Stokes equations of fluid motion with the convection-diffusion equations of fluid-solid thermal interaction. Our objective in the CHT boundary control problem is to select optimally the fluid inflow profile that minimizes an objective function involving both the mismatch between the temperature distribution in the fluid system and a prescribed temperature profile, and the cost of the control. Our strategy to resolve the nonsmoothness of the boundary control solution is based on two features, namely, an objective function with a regularization term on the gradient of the control profile on both the continuous and the discrete levels, and an optimization scheme with either explicit or implicit smoothing effects, such as the smoothed Steepest Descent and the Limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) methods. Our strategy to achieve the robustness of the solution process is based on combining the smoothed optimization scheme with the numerical continuation technique on the regularization parameters in the objective function. In the numerical studies, we present two suites of experiments. In the first, we demonstrate the feasibility and effectiveness of our numerical schemes in recovering the boundary control profile of the standard case of a Poiseuille flow. In the second, we illustrate the robustness of our optimization schemes via solving more challenging DVBC problems for both the channel flow and the flow past a square cylinder, which use initial…
Modelling microwave heating of discrete samples of oil palm kernels
International Nuclear Information System (INIS)
Law, M.C.; Liew, E.L.; Chang, S.L.; Chan, Y.S.; Leo, C.P.
2016-01-01
Highlights: • Microwave (MW) drying of oil palm kernels is experimentally determined and modelled. • MW heating of discrete samples of oil palm kernels (OPKs) is simulated. • OPK heating is due to contact effects, MW interference and heat transfer mechanisms. • Electric field vectors circulate within the OPK sample. • A loosely-packed arrangement improves the temperature uniformity of OPKs. - Abstract: Recently, microwave (MW) pre-treatment of fresh palm fruits has been shown to be environmentally friendly compared to the existing oil palm milling process, as it eliminates the condensate production of palm oil mill effluent (POME) in the sterilization process. Moreover, MW-treated oil palm fruits (OPF) also possess better oil quality. In this work, the MW drying kinetics of oil palm kernels (OPK) were determined experimentally. Microwave heating/drying of oil palm kernels was modelled and validated. The simulation results show that the temperature of an OPK is not the same over the entire surface, due to constructive and destructive interference of MW irradiance. The volume-averaged temperature of an OPK is higher than its surface temperature by 3–7 °C, depending on the MW input power. This implies that a point measurement of temperature is inadequate to determine the temperature history of the OPK during the microwave heating process. The simulation results also show that the arrangement of OPKs in a MW cavity affects the kernel temperature profile. The heating of OPKs was identified as being affected by factors such as local electric field intensity due to MW absorption, refraction, interference, the contact effect between kernels, and heat transfer mechanisms. The thermal gradient patterns of OPKs change as the heating continues. The cracking of OPKs is expected to occur first in the core of the kernel and then propagate to the kernel surface. The model indicates that drying of OPKs is a much slower process than its MW heating. The model is useful…
The heat kernel as the pagerank of a graph
Chung, Fan
2007-01-01
The concept of pagerank originated as a way of determining the ranking of Web pages by Web search engines. Based on relations in interconnected networks, pagerank has become a major tool for addressing fundamental problems arising in general graphs, especially for large information networks with hundreds of thousands of nodes. A notable notion of pagerank, introduced by Brin and Page and denoted by PageRank, is based on random walks as a geometric sum. In this paper, we consider a notion of pagerank that is based on the (discrete) heat kernel and can be expressed as an exponential sum of random walks. The heat kernel satisfies the heat equation and can be used to analyze many useful properties of random walks in a graph. A local Cheeger inequality is established, which implies that, by focusing on cuts determined by linear orderings of vertices using the heat kernel pageranks, the resulting partition is within a quadratic factor of the optimum. This is true even if we restrict the volume of the small part separated by the cut to be close to some specified target value. This leads to a graph partitioning algorithm for which the running time is proportional to the size of the targeted volume (instead of the size of the whole graph).
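The exponential sum above can be sketched directly. The toy implementation below is illustrative only (dense matrices, not Chung's local sublinear algorithm): it forms ρ_t = e^{-t} Σ_k (t^k/k!) s W^k, with W = D^{-1}A the random-walk matrix and s a seed distribution.

```python
import numpy as np

def heat_kernel_pagerank(A, seed, t=2.0, kmax=100):
    # rho_t = e^{-t} * sum_{k>=0} (t^k / k!) * seed @ W^k, W the random-walk matrix
    W = A / A.sum(axis=1, keepdims=True)
    vec = seed.astype(float)
    coef = np.exp(-t)           # running value of e^{-t} t^k / k!
    rho = coef * vec
    for k in range(1, kmax + 1):
        vec = vec @ W           # one more random-walk step
        coef *= t / k
        rho = rho + coef * vec
    return rho

# 4-cycle graph, all seed mass on vertex 0
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
seed = np.array([1.0, 0.0, 0.0, 0.0])
rho = heat_kernel_pagerank(A, seed, t=2.0)
```

On the 4-cycle the answer has a closed form by Fourier analysis on the cycle: ρ[0] = (1 + e^{-t})²/4, which the series reproduces; the vector stays a probability distribution.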
Hamilton's gradient estimate for the heat kernel on complete manifolds
Kotschwar, Brett
2007-01-01
In this paper we extend a gradient estimate of R. Hamilton for positive solutions to the heat equation on closed manifolds to bounded positive solutions on complete, non-compact manifolds with $Rc \geq -Kg$. We accomplish this extension via a maximum principle of L. Karp and P. Li and a Bernstein-type estimate on the gradient of the solution. An application of our result, together with the bounds of P. Li and S.T. Yau, yields an estimate on the gradient of the heat kernel for complete manifolds...
Observing integrals of heat kernels from a distance
DEFF Research Database (Denmark)
Heat kernels have integrals such as Brownian motion mean exit time, potential capacity, and torsional rigidity. We show how to obtain bounds on these values, essentially by observing their behaviour in terms of the distance function from a point and then comparing with corresponding values in tailor-made warped product spaces. The results will be illustrated by applications to the so-called 'type' problem: how to decide if a given manifold or surface is transient (hyperbolic) or recurrent (parabolic). Specific examples of minimal surfaces and constant pressure dry foams will be shown and discussed as test cases. The talk is based on joint work with Vicente Palmer.
A multi-resolution approach to heat kernels on discrete surfaces
Vaxman, Amir
2010-07-26
Studying the behavior of the heat diffusion process on a manifold is emerging as an important tool for analyzing the geometry of the manifold. Unfortunately, the high complexity of the computation of the heat kernel - the key to the diffusion process - limits this type of analysis to 3D models of modest resolution. We show how to use the unique properties of the heat kernel of a discrete two dimensional manifold to overcome these limitations. Combining a multi-resolution approach with a novel approximation method for the heat kernel at short times results in an efficient and robust algorithm for computing the heat kernels of detailed models. We show experimentally that our method can achieve good approximations in a fraction of the time required by traditional algorithms. Finally, we demonstrate how these heat kernels can be used to improve a diffusion-based feature extraction algorithm. © 2010 ACM.
Heat kernel expansion in the background field formalism
Barvinsky, Andrei
2015-01-01
Heat kernel expansion and background field formalism represent the combination of two calculational methods within the functional approach to quantum field theory. This approach implies construction of generating functionals for matrix elements and expectation values of physical observables. These are functionals of arbitrary external sources or the mean field of a generic configuration -- the background field. Exact calculation of quantum effects on a generic background is impossible. However, a special integral (proper time) representation for the Green's function of the wave operator -- the propagator of the theory -- and its expansion in the ultraviolet and infrared limits of respectively short and late proper time parameter allow one to construct approximations which are valid on generic background fields. Current progress of quantum field theory, its renormalization properties, model building in unification of fundamental physical interactions and QFT applications in high energy physics, gravitation and...
The effect of heating UO_2 kernels in an argon gas medium on the physical properties of sintered UO_2 kernels
International Nuclear Information System (INIS)
Damunir; Sri Rinanti Susilowati; Ariyani Kusuma Dewi
2015-01-01
The effect of heating UO_2 kernels in an argon gas medium on the physical properties of sintered UO_2 kernels was studied. The heating of the UO_2 kernels was conducted in a bed-type sinter reactor. The sample used was UO_2 kernels resulting from reduction at 800 °C for 3 hours, having a density of 8.13 g/cm³, porosity of 0.26, O/U ratio of 2.05, diameter of 1146 μm and sphericity of 1.05. The sample was put into the sinter reactor, which was then evacuated by flowing argon gas at 180 mmHg pressure to drain the air from the reactor. After that, the cooling water and argon gas were flowed continuously at a pressure of 5 mPa and a rate of 1.5 liter/minute. The reactor temperature was raised and varied between 1200 and 1500 °C, for 1-4 hours. The sintered UO_2 kernels resulting from the study were analyzed in terms of their physical properties, including density, porosity, diameter, sphericity, and specific surface area. The density was analyzed using a pycnometer with CCl_4 solution. The porosity was determined using the Haynes equation. The diameter and sphericity were measured using a Dino-Lite microscope. The specific surface area was determined using a Nova-1000 surface area meter. The results showed that heating UO_2 kernels in an argon gas medium influenced the physical properties of the sintered UO_2 kernels. The best conditions were obtained at a temperature of 1400 °C and a heating time of 2 hours, producing sintered UO_2 kernels with a density of 10.14 g/ml, porosity of 7 %, diameter of 893 μm, sphericity of 1.07 and specific surface area of 4.68 m²/g, with a solidification shrinkage of 22 %. (author)
Chung, Moo K; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K
2015-05-01
We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel method is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, the method is applied to characterize the localized growth pattern of mandible surfaces obtained in CT images between ages 0 and 20 by regressing the length of displacement vectors with respect to a surface template. Copyright © 2015 Elsevier B.V. All rights reserved.
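A discrete analogue of this weighted eigenfunction expansion is easy to state: replace the Laplace-Beltrami operator by a graph Laplacian, expand the signal in its eigenvectors, and damp each mode by e^{-λt}. The sketch below is illustrative only (a 5-node path graph rather than a surface mesh); it preserves the mean (the λ = 0 mode) and shrinks all other modes.

```python
import numpy as np

def heat_kernel_smooth(L, f, t=1.0):
    # Smoothed f = sum_j e^{-lambda_j t} <f, psi_j> psi_j,
    # with (lambda_j, psi_j) the eigenpairs of the symmetric Laplacian L
    lam, psi = np.linalg.eigh(L)
    coeffs = psi.T @ f                     # expansion coefficients <f, psi_j>
    return psi @ (np.exp(-lam * t) * coeffs)

# Combinatorial Laplacian of a 5-node path graph
n = 5
L = np.diag([1.0, 2.0, 2.0, 2.0, 1.0])
for i in range(n - 1):
    L[i, i + 1] = L[i + 1, i] = -1.0

f = np.array([0.0, 1.2, 1.9, 3.1, 4.0])   # noisy linear signal
g = heat_kernel_smooth(L, f, t=0.5)
```

Because the constant eigenvector has eigenvalue zero, the total mass of the signal is exactly preserved while its variance decreases, which is the discrete counterpart of isotropic heat diffusion.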
The multiple Dirichlet product and the multiple Dirichlet series
Onozuka, Tomokazu
2016-01-01
First, we define the multiple Dirichlet product and study its properties. From these properties, we obtain a zero-free region of a multiple Dirichlet series and a multiple Dirichlet series expression for the reciprocal of a multiple Dirichlet series.
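For reference, the ordinary one-variable Dirichlet product that the multiple version generalizes is divisor convolution. A minimal sketch (illustrative; the paper's object is the multiple-series analogue):

```python
def dirichlet_product(f, g, n):
    # (f * g)(n) = sum over divisors d of n of f(d) * g(n // d)
    return sum(f(d) * g(n // d) for d in range(1, n + 1) if n % d == 0)

one = lambda n: 1
# Convolving the constant function with itself gives d(n),
# the number-of-divisors function
tau = [dirichlet_product(one, one, n) for n in range(1, 13)]
```

Arithmetic functions with f(1) ≠ 0 are invertible under this product, which is the elementary model for the reciprocal-series statement in the abstract.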
Directory of Open Access Journals (Sweden)
Yang Xiao-Jun
2016-01-01
Full Text Available In this article we propose a new fractional derivative without singular kernel. We consider the potential application for modeling the steady heat-conduction problem. The analytical solution of the fractional-order heat flow is also obtained by means of the Laplace transform.
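A derivative of this kind, with an exponential rather than a singular kernel, can be checked numerically. The sketch below assumes the Caputo-Fabrizio form with normalization M(α) = 1 (an assumption; the article may normalize differently) and compares a quadrature of the definition against the closed form for f(t) = t.

```python
import numpy as np

def cf_derivative(f, t, alpha=0.5, n=20001):
    # Fractional derivative with a non-singular exponential kernel,
    # taking M(alpha) = 1:
    #   D^a f(t) = 1/(1-a) * int_0^t f'(tau) exp(-a (t-tau)/(1-a)) dtau
    tau = np.linspace(0.0, t, n)
    integrand = np.gradient(f(tau), tau) * np.exp(-alpha * (t - tau) / (1.0 - alpha))
    h = tau[1] - tau[0]
    trap = h * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))
    return trap / (1.0 - alpha)

alpha, t = 0.5, 1.0
num = cf_derivative(lambda x: x, t, alpha)
# For f(x) = x the definition integrates exactly to (1 - e^{-a t/(1-a)}) / a
exact = (1.0 - np.exp(-alpha * t / (1.0 - alpha))) / alpha
```

The quadrature and the closed form agree closely, confirming that the kernel stays bounded at τ = t, unlike the power-law kernel of the classical Caputo derivative.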
Institute of Scientific and Technical Information of China (English)
E.M.E. ZAYED
2004-01-01
The asymptotic expansion of the heat kernel Θ(t) = Σ_{i=1}^∞ exp(−λ_i t), where {λ_i}_{i=1}^∞ are the eigenvalues of the negative Laplacian −Δ_n = −Σ_{k=1}^n (∂/∂x_k)² in R^n (n = 2 or 3), is studied for short time t for a general bounded domain Ω with a smooth boundary ∂Ω. In this paper, we consider the case of a finite number of Dirichlet conditions φ = 0 on Γ_i (i = 1, ..., J), Neumann conditions ∂φ/∂ν_i = 0 on Γ_i (i = J+1, ..., k), and Robin conditions (∂/∂ν_i + γ_i)φ = 0 on Γ_i (i = k+1, ..., m), where the γ_i are piecewise smooth positive impedance functions and ∂Ω = ∪_{i=1}^m Γ_i. We construct the required asymptotics in the form of a power series over t. The senior coefficients in this series are specified as functionals of the geometric shape of the domain Ω. This result is applied to calculate the one-particle partition function of a "special ideal gas", i.e., the set of non-interacting particles set up in a box with Dirichlet, Neumann and Robin boundary conditions for the appropriate wave function. Calculation of the thermodynamic quantities for the ideal gas, such as the internal energy, pressure and specific heat, reveals that these quantities alone are incapable of distinguishing between two different shapes of the domain. This conclusion seems intuitively clear because it is based on the limited information given by a one-particle partition function; nevertheless, its formal theoretical motivation is of some interest.
A heat kernel proof of the index theorem for deformation quantization
Karabegov, Alexander
2017-11-01
We give a heat kernel proof of the algebraic index theorem for deformation quantization with separation of variables on a pseudo-Kähler manifold. We use normalizations of the canonical trace density of a star product and of the characteristic classes involved in the index formula for which this formula contains no extra constant factors.
Free energy on a cycle graph and trigonometric deformation of heat kernel traces on odd spheres
Kan, Nahomi; Shiraishi, Kiyoshi
2018-01-01
We consider a possible ‘deformation’ of the trace of the heat kernel on odd dimensional spheres, motivated by the calculation of the free energy of a scalar field on a discretized circle. By using an expansion in terms of the modified Bessel functions, we obtain the values of the free energies after a suitable regularization.
Harmonic Function of Poincare Cone Condition In Solving Dirichlet ...
African Journals Online (AJOL)
This paper describes the set of harmonic functions on a given open set U, which can be seen as the kernel of the Laplace operator and is therefore a vector space over R. It also reviews the harmonic theorem, the Dirichlet problem and the maximum principle, where we conclude that the application of sums, differences and ...
Some results from a Mellin transform expansion for the heat Kernel
International Nuclear Information System (INIS)
Malbouisson, A.P.C.; Simao, F.R.A.; Camargo Filho, A.F. de.
1988-01-01
We compute the coefficients of a new heat kernel expansion in the case of a differential operator containing a gauge field. The meromorphic structure of the generalized zeta-function obtained by that expansion is compared with the one obtained in a preceding paper. The expansion is applied to anomalies, obtaining a general formula for arbitrary dimension D. The special cases D=2 and D=3 are investigated. (author) [pt
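The link between a heat kernel expansion and the meromorphic structure of the zeta function rests on the standard Mellin transform relation, stated here for a positive elliptic operator P in dimension D (a sketch of the generic mechanism, not of the paper's specific operator):

```latex
\zeta(s) \;=\; \frac{1}{\Gamma(s)} \int_0^\infty t^{\,s-1}\,
\operatorname{Tr} e^{-tP}\, \mathrm{d}t ,
\qquad
\operatorname{Tr} e^{-tP} \;\sim\; \sum_{k \ge 0} a_k\, t^{(k-D)/2}
\quad (t \to 0^{+}).
```

Inserting the small-t expansion term by term, each coefficient a_k contributes a simple pole of ζ(s) at s = (D−k)/2 with residue a_k/Γ((D−k)/2), which is how the expansion coefficients control the meromorphic structure compared in the abstract.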
Calculation of heat-kernel coefficients and usage of computer algebra
International Nuclear Information System (INIS)
Bel'kov, A.A.; Lanev, A.V.; Schaale, A.
1995-01-01
The calculation of heat-kernel coefficients with the classical DeWitt algorithm is discussed. We present the explicit form of the coefficients up to h_5 in the general case and up to h_7^min for the minimal parts. The results are compared with the expressions in other papers. A method to optimize memory usage when working with large expressions in universal computer algebra systems is proposed. 20 refs
One loop partition function of six dimensional conformal gravity using heat kernel on AdS
Energy Technology Data Exchange (ETDEWEB)
Lovreković, Iva [Institute for Theoretical Physics, Technische Universität Wien,Wiedner Hauptstrasse 8-10/136, A-1040 Vienna (Austria)
2016-10-13
We compute the heat kernel for the Laplacians of symmetric transverse traceless fields of arbitrary spin on the AdS background in even number of dimensions using the group theoretic approach introduced in http://dx.doi.org/10.1007/JHEP11(2011)010 and apply it on the partition function of six dimensional conformal gravity. The obtained partition function consists of the Einstein gravity, conformal ghost and two modes that contain mass.
International Nuclear Information System (INIS)
Carroll, S.M.; Trodden, M.
1998-01-01
We propose a class of field theories featuring solitonic solutions in which topological defects can end when they intersect other defects of equal or higher dimensionality. Such configurations may be termed "Dirichlet topological defects," in analogy with the D-branes of string theory. Our discussion focuses on defects in scalar field theories with either gauge or global symmetries, in 3+1 dimensions; the types of defects considered include walls ending on walls, strings on walls, and strings on strings. copyright 1998 The American Physical Society
Oskoueian, Ehsan; Abdullah, Norhani; Idrus, Zulkifli; Ebrahimi, Mahdi; Goh, Yong Meng; Shakeri, Majid; Oskoueian, Armin
2014-10-02
Palm kernel cake (PKC), the most abundant by-product of the oil palm industry, is believed to contain bioactive compounds with hepatoprotective potential. These compounds may serve as hepatoprotective agents which could help the poultry industry to alleviate the adverse effects of heat stress on liver function in chickens. This study was performed to evaluate the hepatoprotective potential of PKC extract against heat-induced oxidative stress in chicken hepatocytes. The nature of the active metabolites and the possible mechanism involved were also investigated. The PKC extract possessed free radical scavenging activity, with values significantly (p < 0.05) lower than silymarin as the reference antioxidant. Heat-induced oxidative stress in chicken hepatocytes impaired the total protein, lipid peroxidation and antioxidant enzyme activities significantly (p < 0.05). Treatment of heat-induced hepatocytes with PKC extract (125 μg/ml), and with silymarin as positive control, increased these values significantly (p < 0.05). The real-time PCR and western blot analyses revealed significant (p < 0.05) up-regulation of oxidative stress biomarkers, including the TNF-like, IFN-γ and IL-1β genes and the NF-κB, COX-2, iNOS and Hsp70 proteins, upon heat stress in chicken hepatocytes. The PKC extract and silymarin were able to alleviate the expression of all of these biomarkers in heat-induced chicken hepatocytes. The gas chromatography-mass spectrometry analysis of the PKC extract showed the presence of fatty acids, phenolic compounds, sugar derivatives and other organic compounds such as furfural, which could be responsible for the observed hepatoprotective activity. Palm kernel cake extract could thus be a potential agent to protect hepatocyte function under heat-induced oxidative stress.
Heat damage and in vitro starch digestibility of puffed wheat kernels.
Cattaneo, Stefano; Hidalgo, Alyssa; Masotti, Fabio; Stuknytė, Milda; Brandolini, Andrea; De Noni, Ivano
2015-12-01
The effect of processing conditions on heat damage, starch digestibility, release of advanced glycation end products (AGEs) and antioxidant capacity of puffed cereals was studied. The determination of several markers arising from the Maillard reaction proved pyrraline (PYR) and hydroxymethylfurfural (HMF) to be the most reliable indices of the heat load applied during puffing. The considerable heat load was evidenced by the high levels of both PYR (57.6-153.4 mg kg(-1) dry matter) and HMF (13-51.2 mg kg(-1) dry matter). For cost and simplicity, HMF appeared to be the most appropriate index for puffed cereals. Puffing influenced starch in vitro digestibility, with most of the starch (81-93%) hydrolyzed to maltotriose, maltose and glucose, whereas only limited amounts of AGEs were released. The relevant antioxidant capacity revealed by digested puffed kernels can be ascribed both to the newly formed Maillard reaction products and to the conditions adopted during in vitro digestion. Copyright © 2015 Elsevier Ltd. All rights reserved.
Estimation of Poisson-Dirichlet Parameters with Monotone Missing Data
Directory of Open Access Journals (Sweden)
Xueqin Zhou
2017-01-01
Full Text Available This article considers the estimation of the unknown numerical parameters and of the density of the base measure in a Poisson-Dirichlet process prior with grouped monotone missing data. The numerical parameters are estimated by maximum likelihood and the density function is estimated by the kernel method. A set of simulations was conducted, which shows that the estimates perform well.
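The kernel method for the density estimate fits in a few lines. Below is a plain Gaussian kernel density estimator (illustrative only; the bandwidth and data are made up, and the paper's grouped-missing-data setting is not reproduced):

```python
import math

def gaussian_kde(data, x, h=0.5):
    # Kernel density estimate: average of Gaussian bumps of width h
    # centered at the sample points
    n = len(data)
    return sum(math.exp(-0.5 * ((x - xi) / h) ** 2) for xi in data) \
        / (n * h * math.sqrt(2 * math.pi))

# Bimodal toy sample with clusters near 1 and 3
data = [0.9, 1.1, 1.0, 2.9, 3.1, 3.0]
peak_low = gaussian_kde(data, 1.0, h=0.3)
peak_high = gaussian_kde(data, 3.0, h=0.3)
valley = gaussian_kde(data, 2.0, h=0.3)
```

With a bandwidth smaller than the cluster separation, the estimate correctly shows two modes with a valley between them.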
The Dirichlet-to-Robin Transform
Bondurant, J D
2004-01-01
A simple transformation converts a solution of a partial differential equation with a Dirichlet boundary condition to a function satisfying a Robin (generalized Neumann) condition. In the simplest cases this observation enables the exact construction of the Green functions for the wave, heat, and Schrodinger problems with a Robin boundary condition. The resulting physical picture is that the field can exchange energy with the boundary, and a delayed reflection from the boundary results. In more general situations the method allows at least approximate and local construction of the appropriate reflected solutions, and hence a "classical path" analysis of the Green functions and the associated spectral information. By this method we solve the wave equation on an interval with one Robin and one Dirichlet endpoint, and thence derive several variants of a Gutzwiller-type expansion for the density of eigenvalues. The variants are consistent except for an interesting subtlety of distributional convergence that affec...
Directory of Open Access Journals (Sweden)
A. Dauda
2017-02-01
Full Text Available This study investigated the effect of moisture content on the physical properties and specific heat capacity of Neem (Azadirachta indica A. Juss) nut kernels. The major, intermediate and minor axial dimensions of the kernels increased from 1.04 to 1.23 cm, 0.42 to 0.6 cm, and 0.32 to 0.45 cm respectively, as the moisture content increased from 5.2 to 44.9 % (db). The arithmetic and geometric mean diameters determined at the same moisture level were significantly different from each other, with the arithmetic mean diameter being higher. Over the same moisture range, one-thousand-kernel weight, true density, porosity, sphericity, roundness and surface area all increased linearly from 0.0987 to 0.1755 kg, 632 to 733 kg/m³, 6.42 to 32.14 %, 41.3 to 47.5 %, 22 to 36 % and 13 to 24 cm² respectively, while bulk density decreased from 591.4 to 497.4 kg/m³ with increase in moisture content. The angle of repose increased from 21.22 to 29.8° with increase in moisture content. The static coefficient of friction ranged from 0.41 to 0.61 on plywood with grains parallel to the direction of movement, from 0.19 to 0.24 on fiberglass, from 0.28 to 0.38 on hessian bag material, and from 0.25 to 0.33 on galvanized steel sheet. The specific heat of the seed varied from 2738.1 to 4345.4 J/kg/°C over the above moisture range.
Wang, Gang; Wang, Yalin
2017-02-15
In this paper, we propose a heat kernel based regional shape descriptor that may be capable of better exploiting volumetric morphological information than other available methods, thereby improving statistical power on brain magnetic resonance imaging (MRI) analysis. The mechanism of our analysis is driven by the graph spectrum and the heat kernel theory, to capture the volumetric geometry information in the constructed tetrahedral meshes. In order to capture profound brain grey matter shape changes, we first use the volumetric Laplace-Beltrami operator to determine the point pair correspondence between white-grey matter and CSF-grey matter boundary surfaces by computing the streamlines in a tetrahedral mesh. Secondly, we propose multi-scale grey matter morphology signatures to describe the transition probability by random walk between the point pairs, which reflects the inherent geometric characteristics. Thirdly, a point distribution model is applied to reduce the dimensionality of the grey matter morphology signatures and generate the internal structure features. With the sparse linear discriminant analysis, we select a concise morphology feature set with improved classification accuracies. In our experiments, the proposed work outperformed the cortical thickness features computed by FreeSurfer software in the classification of Alzheimer's disease and its prodromal stage, i.e., mild cognitive impairment, on publicly available data from the Alzheimer's Disease Neuroimaging Initiative. The multi-scale and physics based volumetric structure feature may bring stronger statistical power than some traditional methods for MRI-based grey matter morphology analysis. Copyright © 2016 Elsevier Inc. All rights reserved.
Dirichlet expression for L(1, χ)
Indian Academy of Sciences (India)
We show that this expression with obvious modification is valid for the general primitive Dirichlet character χ. Keywords. Hurwitz zeta function; Dirichlet character; Dirichlet L-series; primitive character. 1. Introduction. In Dirichlet's famous work dealing with the class number formula, the value of L(1, χ) is expressed in terms of finite ...
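As a concrete numerical check of such expressions, the value L(1, χ) for the nonprincipal character mod 4 can be compared with its classical closed form π/4 (Leibniz's series). The snippet below is our own illustrative verification and is not tied to the article's general formula.

```python
import math

def L1_chi4(terms=10**6):
    """Partial sum of L(1, chi) for the nonprincipal character mod 4:
    chi(n) = +1 if n % 4 == 1, -1 if n % 4 == 3, 0 otherwise."""
    s = 0.0
    for n in range(1, terms + 1):
        if n % 4 == 1:
            s += 1.0 / n
        elif n % 4 == 3:
            s -= 1.0 / n
    return s

val = L1_chi4()
# The classical closed form gives L(1, chi_4) = pi/4; the alternating-series
# error is bounded by the first omitted term, about 1e-6 here.
assert abs(val - math.pi / 4) < 1e-5
```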
Diophantine approximation and Dirichlet series
Queffélec, Hervé
2013-01-01
This self-contained book will benefit beginners as well as researchers. It is devoted to Diophantine approximation, the analytic theory of Dirichlet series, and some connections between these two domains, which often occur through the Kronecker approximation theorem. Accordingly, the book is divided into seven chapters, the first three of which present tools from commutative harmonic analysis, including a sharp form of the uncertainty principle, ergodic theory and Diophantine approximation to be used in the sequel. A presentation of continued fraction expansions, including the mixing property of the Gauss map, is given. Chapters four and five present the general theory of Dirichlet series, with classes of examples connected to continued fractions, the famous Bohr point of view, and then the use of random Dirichlet series to produce non-trivial extremal examples, including sharp forms of the Bohnenblust-Hille theorem. Chapter six deals with Hardy-Dirichlet spaces, which are new and useful Banach spaces of anal...
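The continued fraction expansions that the early chapters build on are easy to compute by iterating the (shifted) Gauss map x ↦ 1/{x}. A minimal sketch, our own and not from the book:

```python
import math

def continued_fraction(x, n):
    """First n partial quotients of x, obtained by iterating the Gauss map."""
    quotients = []
    for _ in range(n):
        a = math.floor(x)
        quotients.append(a)
        frac = x - a
        if frac == 0:
            break
        x = 1.0 / frac        # Gauss map step on the fractional part
    return quotients

# sqrt(2) = [1; 2, 2, 2, ...], the classic periodic expansion
assert continued_fraction(math.sqrt(2), 6) == [1, 2, 2, 2, 2, 2]
```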
Heat kernel expansion for fermionic billiards in an external magnetic field
International Nuclear Information System (INIS)
Antoine, M.; Comtet, A.; Knecht, M.
1989-05-01
Using Seeley's heat kernel expansion, we compute the asymptotic density of states of the Dirac operator coupled to a magnetic field on a two-dimensional manifold with boundary (fermionic billiard). Local boundary conditions compatible with vector current conservation depend on a free parameter α. It is shown that the perimeter correction identically vanishes for α = 0. In that case, the next-order constant term is found to be proportional to the Euler characteristic of the manifold. These results are independent of the external magnetic field and of the shape of the billiard, provided the boundary is sufficiently smooth. For the flat circular billiard, the constant term is found to be -1/12, in agreement with a numerical result by M.V. Berry and R.J. Mondragon (1987).
New directions in Dirichlet forms
Jost, Jürgen; Mosco, Umberto; Rockner, Michael; Sturm, Karl-Theodor
1998-01-01
The theory of Dirichlet forms brings together methods and insights from the calculus of variations, stochastic analysis, partial differential and difference equations, potential theory, Riemannian geometry and more. This book features contributions by leading experts and provides up-to-date, authoritative accounts on exciting developments in the field and on new research perspectives. Topics covered include the following: stochastic analysis on configuration spaces, specifically a mathematically rigorous approach to the stochastic dynamics of Gibbs measures and infinite interacting particle systems; subelliptic PDE, homogenization, and fractals; geometric aspects of Dirichlet forms on metric spaces and function theory on such spaces; generalized harmonic maps as nonlinear analogues of Dirichlet forms, with an emphasis on non-locally compact situations; and a stochastic approach based on Brownian motion to harmonic maps and their regularity. Various new connections between the topics are featured, and it is de...
On the Dirichlet's Box Principle
Poon, Kin-Keung; Shiu, Wai-Chee
2008-01-01
In this note, we focus on several applications of Dirichlet's box principle in Discrete Mathematics and number theory lessons. In addition, the main result is an innovative game on a triangular board developed by the authors. The game has been used in teaching and learning mathematics in Discrete Mathematics and some high schools in…
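Dirichlet's box (pigeonhole) principle in its simplest numeric form says that among any n+1 integers, two must leave the same remainder mod n. A tiny classroom-style illustration of ours, not taken from the note:

```python
def same_residue_pair(nums, n):
    """Return two numbers from nums sharing a residue mod n (pigeonhole search)."""
    seen = {}
    for x in nums:
        r = x % n
        if r in seen:
            return seen[r], x
        seen[r] = x
    return None

# 6 integers into 5 boxes (residues mod 5): a collision is guaranteed
pair = same_residue_pair([3, 17, 22, 40, 54, 71], 5)
assert pair is not None and (pair[0] - pair[1]) % 5 == 0
```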
Dirichlet polynomials, majorization, and trumping
International Nuclear Information System (INIS)
Pereira, Rajesh; Plosker, Sarah
2013-01-01
Majorization and trumping are two partial orders which have proved useful in quantum information theory. We show some relations between these two partial orders and generalized Dirichlet polynomials, Mellin transforms, and completely monotone functions. These relations are used to prove a succinct generalization of Turgut’s characterization of trumping. (paper)
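Majorization itself is elementary to check numerically: y majorizes x when the two vectors have the same total and the partial sums of y, sorted in decreasing order, dominate those of x. The helper below is our own illustration; the paper's results concern the interplay of this order with Dirichlet polynomials and trumping.

```python
def majorizes(y, x):
    """True if y majorizes x: equal totals and dominating sorted partial sums."""
    ys = sorted(y, reverse=True)
    xs = sorted(x, reverse=True)
    if len(ys) != len(xs) or sum(ys) != sum(xs):
        return False
    ty = tx = 0
    for a, b in zip(ys, xs):
        ty += a
        tx += b
        if ty < tx:
            return False
    return True

assert majorizes([3, 1, 0], [2, 1, 1])       # (3,1,0) majorizes (2,1,1)
assert not majorizes([2, 1, 1], [3, 1, 0])   # and not conversely
```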
Quantum Einstein gravity. Advancements of heat kernel-based renormalization group studies
Energy Technology Data Exchange (ETDEWEB)
Groh, Kai
2012-10-15
The asymptotic safety scenario allows one to define a consistent theory of quantized gravity within the framework of quantum field theory. The central conjecture of this scenario is the existence of a non-Gaussian fixed point of the theory's renormalization group flow, which makes it possible to formulate renormalization conditions that render the theory fully predictive. Investigations of this possibility use an exact functional renormalization group equation as a primary non-perturbative tool. This equation implements Wilsonian renormalization group transformations, and is demonstrated to represent a reformulation of the functional integral approach to quantum field theory. As its main result, this thesis develops an algebraic algorithm which allows one to systematically construct the renormalization group flow of gauge theories as well as gravity in arbitrary expansion schemes. In particular, it uses off-diagonal heat kernel techniques to efficiently handle the non-minimal differential operators which appear due to gauge symmetries. The central virtue of the algorithm is that no additional simplifications need to be employed, opening the possibility for more systematic investigations of the emergence of non-perturbative phenomena. As a by-product several novel results on the heat kernel expansion of the Laplace operator acting on general gauge bundles are obtained. The constructed algorithm is used to re-derive the renormalization group flow of gravity in the Einstein-Hilbert truncation, showing the manifest background independence of the results. The well-studied Einstein-Hilbert case is further advanced by taking the effect of a running ghost field renormalization on the gravitational coupling constants into account. A detailed numerical analysis reveals a further stabilization of the found non-Gaussian fixed point. Finally, the proposed algorithm is applied to the case of higher derivative gravity including all curvature squared interactions. This establishes an improvement
Lévy matters VI Lévy-type processes moments, construction and heat kernel estimates
Kühn, Franziska
2017-01-01
Presenting some recent results on the construction and the moments of Lévy-type processes, the focus of this volume is on a new existence theorem, which is proved using a parametrix construction. Applications range from heat kernel estimates for a class of Lévy-type processes to existence and uniqueness theorems for Lévy-driven stochastic differential equations with Hölder continuous coefficients. Moreover, necessary and sufficient conditions for the existence of moments of Lévy-type processes are studied and some estimates on moments are derived. Lévy-type processes behave locally like Lévy processes but, in contrast to Lévy processes, they are not homogeneous in space. Typical examples are processes with varying index of stability and solutions of Lévy-driven stochastic differential equations. This is the sixth volume in a subseries of the Lecture Notes in Mathematics called Lévy Matters. Each volume describes a number of important topics in the theory or applications of Lévy processes and pays ...
Supersymmetry of noncompact MQCD-like membrane instantons and heat kernel asymptotics
International Nuclear Information System (INIS)
Belani, Kanishka; Kaura, Payal; Misra, Aalok
2006-01-01
We perform a heat kernel asymptotics analysis of the nonperturbative superpotential obtained from wrapping of an M2-brane around a supersymmetric noncompact three-fold embedded in a (noncompact) G2-manifold, the three-fold being the one relevant to domain walls in Witten's MQCD, in the limit of small 'ζ', a complex constant that appears in the Riemann surfaces relevant to defining the boundary conditions for the domain wall in MQCD. The MQCD-like configuration is interpretable, for small but non-zero ζ, as a noncompact/'large' open membrane instanton, and for vanishing ζ, as the type IIA D0-brane (for vanishing M-theory circle radius). We find that the eta-function Seeley-DeWitt coefficients vanish, and we get a perfect match between the zeta-function Seeley-DeWitt coefficients (up to terms quadratic in ζ) of the Dirac-type operator and one of the two Laplace-type operators figuring in the superpotential. Given the dissimilar forms of the bosonic and the square of the fermionic operators, this is an extremely nontrivial check, from a spectral analysis point of view, of the expected residual supersymmetry for the nonperturbative configurations in M-theory considered in this work.
Learning for Nonstationary Dirichlet Processes
Czech Academy of Sciences Publication Activity Database
Quinn, A.; Kárný, Miroslav
2007-01-01
Vol. 21, No. 10 (2007), pp. 827-855 ISSN 0890-6327 R&D Projects: GA AV ČR 1ET100750401 Grant - others: MŠk ČR(CZ) 2C06001 Program: 2C Institutional research plan: CEZ:AV0Z10750506 Keywords: nonstationary processes * learning * Dirichlet processes * forgetting Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.776, year: 2007 http://library.utia.cas.cz/separaty/2007/as/karny- learning for nonstationary dirichlet processes.pdf
Solution of the Dirichlet Problem for the Poisson's Equation in a Multidimensional Infinite Layer
Directory of Open Access Journals (Sweden)
O. D. Algazin
2015-01-01
Full Text Available The paper considers the multidimensional Poisson equation in the domain bounded by two parallel hyperplanes (the multidimensional infinite layer). For an n-dimensional half-space, a method of solving boundary value problems for linear partial differential equations with constant coefficients is the Fourier transform with respect to the variables in the boundary hyperplane. The same method can be used for an infinite layer, as is done in this paper in the case of the Dirichlet problem for the Poisson equation. For a strip and for an infinite layer in three-dimensional space the solutions of this problem are known, and in the three-dimensional case the Green's function is written as an infinite series. In this paper, the solution is obtained in integral form, and the kernels of the integrals are expressed in finite form in terms of elementary functions and Bessel functions. A recurrence relation between the kernels of the integrals for the n-dimensional and (n + 2)-dimensional layers is obtained. In particular, the Green's function of the Laplace operator for the Dirichlet problem is constructed, and the solution of the problem is written in terms of it. Even in the three-dimensional case we obtain a new formula compared to the known one. It is shown that the kernel of the integral representation of the solution of the Dirichlet problem for the homogeneous Poisson equation (Laplace equation) is an approximate identity (a δ-shaped system of functions). Therefore, if the boundary values are generalized functions of slow growth, the solution of the Dirichlet problem for the homogeneous (Laplace) equation is written as a convolution of the kernels with these functions.
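The Fourier-transform method described here reduces the layer problem to an ODE in the transverse variable. For a 2-D strip with x-periodic boundary data the whole scheme fits in a few lines of NumPy; the sketch below (our own illustration with hypothetical grid parameters, not the paper's kernels) solves Laplace's equation in 0 < y < h and checks it against a separable exact solution.

```python
import numpy as np

def solve_strip_dirichlet(f, g, h, y, L=2 * np.pi):
    """Laplace equation in the strip 0 < y < h, x-periodic with period L,
    with Dirichlet data u(x,0) = f(x), u(x,h) = g(x); evaluated at height y."""
    n = len(f)
    k = np.abs(2 * np.pi * np.fft.fftfreq(n, d=L / n))   # |angular wavenumbers|
    ks = np.where(k > 0, k, 1.0)                          # dummy value at k = 0
    # Transverse profiles: sinh ratios for k > 0, linear interpolation at k = 0
    A = np.where(k > 0, np.sinh(ks * (h - y)) / np.sinh(ks * h), (h - y) / h)
    B = np.where(k > 0, np.sinh(ks * y) / np.sinh(ks * h), y / h)
    return np.real(np.fft.ifft(A * np.fft.fft(f) + B * np.fft.fft(g)))

n, h = 64, 1.0
x = np.arange(n) * 2 * np.pi / n
u_mid = solve_strip_dirichlet(np.zeros(n), np.sin(x), h, y=0.5)

# Exact separable solution with this data: u = sin(x) * sinh(y) / sinh(h)
exact = np.sin(x) * np.sinh(0.5) / np.sinh(1.0)
assert np.max(np.abs(u_mid - exact)) < 1e-12
```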
International Nuclear Information System (INIS)
Parker, Leonard; Vanzella, Daniel A.T.
2004-01-01
We investigate the possibility that the late acceleration observed in the rate of expansion of the Universe is due to vacuum quantum effects arising in curved spacetime. The theoretical basis of the vacuum cold dark matter (VCDM), or vacuum metamorphosis, cosmological model of Parker and Raval is reexamined and improved. We show, by means of a manifestly nonperturbative approach, how the infrared behavior of the propagator (related to the large-time asymptotic form of the heat kernel) of a free scalar field in curved spacetime leads to nonperturbative terms in the effective action similar to those appearing in the earlier version of the VCDM model. The asymptotic form that we adopt for the propagator or heat kernel at large proper time s is motivated by, and consistent with, particular cases where the heat kernel has been calculated exactly, namely in de Sitter spacetime, in the Einstein static universe, and in the linearly expanding spatially flat Friedmann-Robertson-Walker (FRW) universe. This large-s asymptotic form generalizes somewhat the one suggested by the Gaussian approximation and the R-summed form of the propagator that earlier served as a theoretical basis for the VCDM model. The vacuum expectation value for the energy-momentum tensor of the free scalar field, obtained through variation of the effective action, exhibits a resonance effect when the scalar curvature R of the spacetime reaches a particular value related to the mass of the field. Modeling our Universe by an FRW spacetime filled with classical matter and radiation, we show that the back reaction caused by this resonance drives the Universe through a transition to an accelerating expansion phase, very much in the same way as originally proposed by Parker and Raval. Our analysis includes higher derivatives that were neglected in the earlier analysis, and takes into account the possible runaway solutions that can follow from these higher-derivative terms. We find that the runaway solutions do
Zhou, Quanlin; Oldenburg, Curtis M.; Rutqvist, Jonny; Birkholzer, Jens T.
2017-11-01
There are two types of analytical solutions of temperature/concentration in and heat/mass transfer through boundaries of regularly shaped 1-D, 2-D, and 3-D blocks. These infinite-series solutions with either error functions or exponentials exhibit highly irregular but complementary convergence at different dimensionless times, td. In this paper, approximate solutions were developed by combining the error-function-series solutions for early times and the exponential-series solutions for late times and by using time partitioning at the switchover time, td0. The combined solutions contain either the leading term of both series for normal-accuracy approximations (with less than 0.003 relative error) or the first two terms for high-accuracy approximations (with less than 10-7 relative error) for 1-D isotropic (spheres, cylinders, slabs) and 2-D/3-D rectangular blocks (squares, cubes, rectangles, and rectangular parallelepipeds). This rapid and uniform convergence for rectangular blocks was achieved by employing the same time partitioning with individual dimensionless times for different directions and the product of their combined 1-D slab solutions. The switchover dimensionless time was determined to minimize the maximum approximation errors. Furthermore, the analytical solutions of first-order heat/mass flux for 2-D/3-D rectangular blocks were derived for normal-accuracy approximations. These flux equations contain the early-time solution with a three-term polynomial in √td and the late-time solution with the limited-term exponentials for rectangular blocks. The heat/mass flux equations and the combined temperature/concentration solutions form the ultimate kernel for fast simulations of multirate and multidimensional heat/mass transfer in porous/fractured media with millions of low-permeability blocks of varying shapes and sizes.
Directory of Open Access Journals (Sweden)
A.R Salari Kia
2014-04-01
Full Text Available Pistachio has a special ranking among Iranian agricultural products. Iran is known as the largest producer and exporter of pistachio in the world. Agricultural products undergo different thermal treatments during storage and processing, and designing all these processes requires thermal parameters of the products such as the specific heat capacity. Given the importance of pistachio processing as an exportable product, in this study the specific heat capacities of the nuts and kernels of two varieties of Iranian pistachio (Kalle-Ghochi and Badami) were investigated at four levels of moisture content (the initial moisture content of 5%, then 15%, 25% and 40% w.b.) and three levels of temperature (40, 50 and 60°C). In both varieties, the differences between the data were significant at the 1% level of probability; however, the effect of moisture content was greater than that of temperature. The results indicated that the specific heat capacity of both nuts and kernels increases logarithmically with moisture content and linearly with temperature. This parameter varied for the nuts and kernels of the Kalle-Ghochi and Badami varieties within the ranges of 1.039-2.936 kJ kg-1 K-1, 1.236-3.320 kJ kg-1 K-1, 0.887-2.773 kJ kg-1 K-1 and 0.811-2.914 kJ kg-1 K-1, respectively. Moreover, for any given level of temperature, the specific heat capacity of kernels was higher than that of nuts. Finally, regression models with high R2 values were developed to predict the specific heat capacity of the pistachio varieties as a function of moisture content and temperature.
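A regression of the reported form, logarithmic in moisture and linear in temperature, i.e. c = a0 + a1·ln(M) + a2·T, can be fitted by ordinary least squares. The sketch below uses made-up illustrative data with hypothetical coefficients, not the article's measurements:

```python
import numpy as np

# Hypothetical generating coefficients for c = a0 + a1*ln(M) + a2*T  (kJ/kg/K)
true = np.array([0.5, 0.9, 0.01])

M = np.array([5.0, 15.0, 25.0, 40.0])      # moisture content, % w.b.
T = np.array([40.0, 50.0, 60.0])           # temperature, deg C
Mg, Tg = np.meshgrid(M, T)
X = np.column_stack([np.ones(Mg.size), np.log(Mg.ravel()), Tg.ravel()])
c = X @ true                               # noise-free synthetic observations

coef, *_ = np.linalg.lstsq(X, c, rcond=None)
assert np.allclose(coef, true)             # OLS recovers the generating model
```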
Dirichlet and Related Distributions Theory, Methods and Applications
Ng, Kai Wang; Tang, Man-Lai
2011-01-01
The Dirichlet distribution appears in many areas of application, which include modelling of compositional data, Bayesian analysis, statistical genetics, and nonparametric inference. This book provides a comprehensive review of the Dirichlet distribution and two extended versions, the Grouped Dirichlet Distribution (GDD) and the Nested Dirichlet Distribution (NDD), arising from likelihood and Bayesian analysis of incomplete categorical data and survey data with non-response. The theoretical properties and applications are also reviewed in detail for other related distributions, such as the inve
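Numerically, Dirichlet draws are compositional: every sample lies on the probability simplex, which is why the distribution suits the applications listed here. A quick NumPy illustration, not from the book:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = np.array([2.0, 5.0, 3.0])          # concentration parameters
samples = rng.dirichlet(alpha, size=1000)

# Every draw is a composition: nonnegative coordinates summing to 1
assert np.all(samples >= 0)
assert np.allclose(samples.sum(axis=1), 1.0)
# The mean of Dirichlet(alpha) is alpha / alpha.sum() = (0.2, 0.5, 0.3)
assert np.allclose(samples.mean(axis=0), alpha / alpha.sum(), atol=0.05)
```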
Asymptotics of Laplace-Dirichlet integrals
International Nuclear Information System (INIS)
Kozlov, S.M.
1990-01-01
We consider the problem of the asymptotic expansion of the Laplace-Dirichlet integral. In homogenization theory such an integral represents the energy and in general depends on the cohomology class. The asymptotic behaviour of this integral is found. The full text will appear in Functional Analysis and Applications, 1990, No. 2. (author). 3 refs
Directory of Open Access Journals (Sweden)
Nesseim, TDT.
2017-01-01
Full Text Available Jatropha curcas is a tropical plant belonging to the Euphorbiaceae family whose cultivation has been largely promoted in recent years for the production of biofuels. The kernel of the seed contains approximately 55% lipid in dry matter, and the meal obtained could be an exceptional source of proteins for family poultry farming after treatments to remove toxic and anti-nutritional compounds. The ingestion and growth performance of J. curcas kernel meal (JKM), obtained after partial physico-chemical de-oiling with or without heating, were evaluated in broiler chickens and chicks. Sixty unsexed 30-day-old broiler chickens divided into three groups, as well as twenty 1-day-old broiler chicks divided into two groups, were used in two experiments. In experiment 1, jatropha kernel was de-oiled and incorporated into a control fattening feed at 40 and 80 g/kg (diets 4JKM1 and 8JKM1). In experiment 2, the jatropha kernel meal obtained in experiment 1 was heat treated and incorporated into a growing diet at 80 g/kg (diet 8JKM2). Daily dietary intakes as well as weight gains of the animals were affected by the incorporation of jatropha kernel meal in the ration. In experiment 1, average daily feed intakes (ADFI1) of 139.2, 55.2 and 23.4 g/day/animal and average daily weight gains (ADWG1) of 61.9, 18.5 and -7.7 g/animal were obtained for the groups fed diets 0JKM1, 4JKM1 and 8JKM1, respectively. In experiment 2, average daily feed intakes (ADFI2) of 18.7 and 3.1 g/day/animal and average daily weight gains (ADWG2) of 7.1 and 1.9 g/animal were obtained for the groups fed diets 0JKM2 and 8JKM2, respectively. In both experiments, the feed conversion ratio (FCR) was also affected by the dietary treatments, and the overall mortality rate increased with the level of jatropha kernel meal in the diet.
Ha, Jae-Won; Kang, Dong-Hyun
2015-07-01
The aim of this study was to investigate the efficacy of near-infrared radiation (NIR) heating combined with lactic acid (LA) sprays for inactivating Salmonella enterica serovar Enteritidis on almond and pine nut kernels and to elucidate the mechanisms of the lethal effect of the NIR-LA combined treatment. Also, the effect of the combination treatment on product quality was determined. Separately prepared S. Enteritidis phage type (PT) 30 and non-PT 30 S. Enteritidis cocktails were inoculated onto almond and pine nut kernels, respectively, followed by treatments with NIR or 2% LA spray alone, NIR with distilled water spray (NIR-DW), and NIR with 2% LA spray (NIR-LA). Although surface temperatures of nuts treated with NIR were higher than those subjected to NIR-DW or NIR-LA treatment, more S. Enteritidis survived after NIR treatment alone. The effectiveness of NIR-DW and NIR-LA was similar, but significantly more sublethally injured cells were recovered from NIR-DW-treated samples. We confirmed that the enhanced bactericidal effect of the NIR-LA combination may not be attributable to cell membrane damage per se. NIR heat treatment might allow S. Enteritidis cells to become permeable to applied LA solution. The NIR-LA treatment (5 min) did not significantly (P > 0.05) cause changes in the lipid peroxidation parameters, total phenolic contents, color values, moisture contents, and sensory attributes of nut kernels. Given the results of the present study, NIR-LA treatment may be a potential intervention for controlling food-borne pathogens on nut kernel products. Copyright © 2015, American Society for Microbiology. All Rights Reserved.
Dirichlet Process Parsimonious Mixtures for clustering
Chamroukhi, Faicel; Bartcus, Marius; Glotin, Hervé
2015-01-01
The parsimonious Gaussian mixture models, which exploit an eigenvalue decomposition of the group covariance matrices of the Gaussian mixture, have shown their success in particular in cluster analysis. Their estimation is in general performed by maximum likelihood estimation and has also been considered from a parametric Bayesian perspective. We propose new Dirichlet Process Parsimonious Mixtures (DPPM) which represent a Bayesian nonparametric formulation of these parsimonious Gaussian mixtur...
A Dirichlet process mixture of generalized Dirichlet distributions for proportional data modeling.
Bouguila, Nizar; Ziou, Djemel
2010-01-01
In this paper, we propose a clustering algorithm based on both Dirichlet processes and generalized Dirichlet distribution which has been shown to be very flexible for proportional data modeling. Our approach can be viewed as an extension of the finite generalized Dirichlet mixture model to the infinite case. The extension is based on nonparametric Bayesian analysis. This clustering algorithm does not require the specification of the number of mixture components to be given in advance and estimates it in a principled manner. Our approach is Bayesian and relies on the estimation of the posterior distribution of clusterings using Gibbs sampler. Through some applications involving real-data classification and image databases categorization using visual words, we show that clustering via infinite mixture models offers a more powerful and robust performance than classic finite mixtures.
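The "infinite case" referred to here is typically handled via the stick-breaking construction of the Dirichlet process: weights w_k = v_k ∏_{j<k}(1−v_j) with v_k ~ Beta(1, α). Below is a minimal sketch of that construction, a generic illustration rather than the authors' Gibbs sampler:

```python
import numpy as np

def stick_breaking(alpha, n_sticks, rng):
    """Truncated stick-breaking weights of a Dirichlet process DP(alpha)."""
    v = rng.beta(1.0, alpha, size=n_sticks)           # stick proportions
    remaining = np.concatenate([[1.0], np.cumprod(1.0 - v[:-1])])
    return v * remaining                              # w_k = v_k * prod_{j<k}(1 - v_j)

rng = np.random.default_rng(1)
w = stick_breaking(alpha=1.0, n_sticks=200, rng=rng)

assert np.all(w >= 0)
# With 200 sticks the unassigned mass is negligible almost surely
assert 0.999 < w.sum() < 1.0 + 1e-9
```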
Famurewa, Ademola C; Nwankwo, Onyebuchi E; Folawiyo, Abiola M; Igwe, Emeka C; Epete, Michael A; Ufebe, Odomero G
2017-01-01
The literature reports that the health benefits of vegetable oil can be deteriorated by repeated heating, which leads to lipid oxidation and the formation of free radicals. Virgin coconut oil (VCO) is emerging as a functional food oil and its health benefits are attributed to its potent polyphenolic compounds. We investigated the beneficial effect of VCO supplementation on lipid profile, liver and kidney markers in rats fed repeatedly heated palm kernel oil (HPO). Rats were divided into four groups (n = 5). The control group rats were fed with a normal diet; group 2 rats were fed a 10% VCO supplemented diet; group 3 administered 10 ml HPO/kg b.w. orally; group 4 were fed 10% VCO + 10 ml HPO/kg for 28 days. Subsequently, serum markers of liver damage (ALT, AST, ALP and albumin), kidney damage (urea, creatinine and uric acid), lipid profile and lipid ratios as cardiovascular risk indices were evaluated. HPO induced a significant increase in serum markers of liver and kidney damage as well as concomitant lipid abnormalities and a marked reduction in serum HDL-C. The lipid ratios evaluated for atherogenic and coronary risk indices in rats administered HPO only were remarkably higher than control. It was observed that VCO supplementation attenuated the biochemical alterations, including the indices of cardiovascular risks. VCO supplementation demonstrates beneficial health effects against HPO-induced biochemical alterations in rats. VCO may serve to modulate the adverse effects associated with consumption of repeatedly heated palm kernel oil.
Modeling Word Burstiness Using the Dirichlet Distribution
DEFF Research Database (Denmark)
Madsen, Rasmus Elsborg; Kauchak, David; Elkan, Charles
2005-01-01
Multinomial distributions are often used to model text documents. However, they do not capture well the phenomenon that words in a document tend to appear in bursts: if a word appears once, it is more likely to appear again. In this paper, we propose the Dirichlet compound multinomial model (DCM) as an alternative to the multinomial. The DCM model has one additional degree of freedom, which allows it to capture burstiness. We show experimentally that the DCM is substantially better than the multinomial at modeling text data, measured by perplexity. We also show using three standard document collections...
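Burstiness under the DCM arises because a Dirichlet-multinomial draw is equivalent to Pólya urn sampling: each observed word is "reinforced", making its reappearance more likely. A schematic sampler of ours (a generic illustration; the paper's experiments use real document collections):

```python
import numpy as np

def polya_urn_draw(alpha, n_words, rng):
    """One Dirichlet-compound-multinomial 'document' via Polya urn sampling."""
    counts = np.zeros(len(alpha))
    weights = np.array(alpha, dtype=float)
    for _ in range(n_words):
        w = rng.choice(len(alpha), p=weights / weights.sum())
        counts[w] += 1
        weights[w] += 1.0     # reinforcement: seen words tend to repeat (bursts)
    return counts

rng = np.random.default_rng(7)
# Small concentration parameters make bursts pronounced: a few words
# typically dominate the document, unlike an i.i.d. multinomial draw.
doc = polya_urn_draw(alpha=[0.1] * 20, n_words=50, rng=rng)
assert doc.sum() == 50
assert np.all(doc >= 0)
```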
Bayesian Kernel Mixtures for Counts.
Canale, Antonio; Dunson, David B
2011-12-01
Although Bayesian nonparametric mixture models for continuous data are well developed, there is a limited literature on related approaches for count data. A common strategy is to use a mixture of Poissons, which unfortunately is quite restrictive in not accounting for distributions having variance less than the mean. Other approaches include mixing multinomials, which requires finite support, and using a Dirichlet process prior with a Poisson base measure, which does not allow smooth deviations from the Poisson. As a broad class of alternative models, we propose to use nonparametric mixtures of rounded continuous kernels. An efficient Gibbs sampler is developed for posterior computation, and a simulation study is performed to assess performance. Focusing on the rounded Gaussian case, we generalize the modeling framework to account for multivariate count data, joint modeling with continuous and categorical variables, and other complications. The methods are illustrated through applications to a developmental toxicity study and marketing data. This article has supplementary material online.
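A rounded Gaussian kernel in this sense maps a continuous latent variable to counts through fixed thresholds. Under our simplified reading of the scheme (Y = j when the latent X falls in [j, j+1), with all mass below 1 assigned to zero), the count probabilities are differences of normal CDFs:

```python
import math

def norm_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def rounded_gaussian_pmf(j, mu, sigma):
    """P(Y = j) for a rounded Gaussian count kernel: latent X ~ N(mu, sigma^2),
    Y = j iff X in [j, j+1) for j >= 1, with X < 1 lumped into Y = 0."""
    if j == 0:
        return norm_cdf(1.0, mu, sigma)
    return norm_cdf(j + 1.0, mu, sigma) - norm_cdf(j, mu, sigma)

pmf = [rounded_gaussian_pmf(j, mu=3.2, sigma=1.5) for j in range(60)]
assert all(p >= 0 for p in pmf)
assert abs(sum(pmf) - 1.0) < 1e-12     # the thresholds partition the real line
```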
Harmonic Function of Poincare Cone Condition In Solving Dirichlet ...
African Journals Online (AJOL)
Harmonic Function of Poincare Cone Condition In Solving Dirichlet Problem. ... Journal of the Nigerian Association of Mathematical Physics ... theorem, the Dirichlet problem and the maximum principle, from which we conclude that sums, differences and scalar multiples of harmonic functions are again harmonic.
Path integral solution of the Dirichlet problem
International Nuclear Information System (INIS)
LaChapelle, J.
1997-01-01
A scheme for functional integration developed by Cartier/DeWitt-Morette is first reviewed and then employed to construct the path integral representation for the solution of the Dirichlet problem in terms of first exit time. The path integral solution is then applied to calculate the fixed-energy point-to-point transition amplitude both in configuration and phase space. The path integral solution can also be derived using physical principles based on Feynman's original reasoning. We check that the Fourier transform in energy of the fixed-energy point-to-point transition amplitude gives the well known time-dependent transition amplitude, and calculate the WKB approximation. copyright 1997 Academic Press, Inc
Dirichlet problem on the upper half space
Indian Academy of Sciences (India)
2School of Mathematics and Information Science, Henan University of Economics and ... The classical Poisson kernel for H is defined by P(x, y′) = 2x_n/(ω_n |x − y′|^n) .... [5] Siegel D and Talvila E, Sharp growth estimates for modified Poisson integrals in a ...
Quantum “violation” of Dirichlet boundary condition
Directory of Open Access Journals (Sweden)
I.Y. Park
2017-02-01
Full Text Available Dirichlet boundary conditions have been widely used in general relativity. They seem at odds with the holographic property of gravity simply because a boundary configuration can be varying and dynamic instead of dying out as required by the conditions. In this work we report what should be a tension between the Dirichlet boundary conditions and quantum gravitational effects, and show that a quantum-corrected black hole solution of the 1PI action no longer obeys, in the naive manner one may expect, the Dirichlet boundary conditions imposed at the classical level. We attribute the ‘violation’ of the Dirichlet boundary conditions to a certain mechanism of the information storage on the boundary.
Quantum “violation” of Dirichlet boundary condition
Energy Technology Data Exchange (ETDEWEB)
Park, I.Y., E-mail: inyongpark05@gmail.com
2017-02-10
Dirichlet boundary conditions have been widely used in general relativity. They seem at odds with the holographic property of gravity simply because a boundary configuration can be varying and dynamic instead of dying out as required by the conditions. In this work we report what should be a tension between the Dirichlet boundary conditions and quantum gravitational effects, and show that a quantum-corrected black hole solution of the 1PI action no longer obeys, in the naive manner one may expect, the Dirichlet boundary conditions imposed at the classical level. We attribute the ‘violation’ of the Dirichlet boundary conditions to a certain mechanism of the information storage on the boundary.
On the connection between Schroedinger- and Dirichlet forms
International Nuclear Information System (INIS)
Albeverio, S.; Bochum Univ.; Gesztesy, F.; Karwowski, W.; Streit, L.; Bielefeld Univ.
Relations between Schroedinger forms associated with Schroedinger operators in L²(Ω; dⁿx), where Ω ⊂ ℝⁿ is open and n ≥ 1, and the corresponding Dirichlet forms are investigated. Various concrete examples are presented. (orig.)
Dirichlet Characters, Gauss Sums, and Inverse Z Transform
Gao, Jing; Liu, Huaning
2012-01-01
A generalized Möbius transform is presented. It is based on Dirichlet characters. A general algorithm is developed to compute the inverse $Z$ transform on the unit circle, and an error estimate is given for the truncated series representation.
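The inverse Z transform on the unit circle that the abstract refers to can be illustrated numerically. The sketch below is a generic contour-integration approximation (a truncated Riemann sum), not the character-based algorithm of the paper; the function `X` and the value of `a` are illustrative assumptions.

```python
import cmath
import math

def inverse_z_on_unit_circle(X, n, num_samples=4096):
    """Approximate x[n] = (1/2pi) * integral over [0, 2pi) of
    X(e^{i*theta}) * e^{i*n*theta} dtheta by a uniform Riemann sum
    on the unit circle (a truncated series representation)."""
    total = 0.0 + 0.0j
    for k in range(num_samples):
        theta = 2.0 * math.pi * k / num_samples
        z = cmath.exp(1j * theta)
        total += X(z) * z ** n
    return total / num_samples

# Illustrative example: X(z) = 1 / (1 - a/z) has x[n] = a^n for n >= 0, |a| < 1.
a = 0.5
X = lambda z: 1.0 / (1.0 - a / z)
x0 = inverse_z_on_unit_circle(X, 0).real   # ~1.0
x3 = inverse_z_on_unit_circle(X, 3).real   # ~0.125
```

The truncation error here is the aliasing term a^(n+N) from the sampled sum, which is negligible for |a| < 1 and large N; the paper's error estimate plays the analogous role for its character-based series.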
Dirichlet problem for quasi-linear elliptic equations
Directory of Open Access Journals (Sweden)
Azeddine Baalal
2002-10-01
Full Text Available We study the Dirichlet problem associated to the quasilinear elliptic equation $$ -\sum_{i=1}^{n}\frac{\partial}{\partial x_i}\mathcal{A}_i(x,u(x),\nabla u(x)) + \mathcal{B}(x,u(x),\nabla u(x)) = 0. $$ Then we define a potential theory related to this problem and we show that the sheaf of continuous solutions satisfies the Bauer axiomatic theory. Submitted April 9, 2002. Published October 2, 2002. Math Subject Classifications: 31C15, 35B65, 35J60. Key Words: Supersolution; Dirichlet problem; obstacle problem; nonlinear potential theory.
Dirichlet problem for Hermitian-Einstein equations over almost Hermitian manifolds
International Nuclear Information System (INIS)
Xi Zhang
2004-07-01
In this paper, we investigate the Dirichlet problem for Hermitian-Einstein equations on complex vector bundle over almost Hermitian manifolds, and we obtain the unique solubility of the Dirichlet problem for Hermitian-Einstein equations. (author)
Dirichlet's problem on a cracked trapezium | Zongo | Global Journal ...
African Journals Online (AJOL)
This paper deals with solving Poisson's equation with conditions on Dirichlet's limits in an isosceles trapezium with two cracks. The large singular finite elements method used gives satisfactory results in all the domain of study. Numerical values obtained are very accurate for the constraint function and its first derivatives ...
Regularity of spectral fractional Dirichlet and Neumann problems
DEFF Research Database (Denmark)
Grubb, Gerd
2016-01-01
Consider the fractional powers and of the Dirichlet and Neumann realizations of a second-order strongly elliptic differential operator A on a smooth bounded subset Ω of . Recalling the results on complex powers and complex interpolation of domains of elliptic boundary value problems by Seeley in ...
On a stochastic Burgers equation with Dirichlet boundary conditions
Directory of Open Access Journals (Sweden)
Ekaterina T. Kolkovska
2003-01-01
Full Text Available We consider the one-dimensional Burgers equation perturbed by a white noise term with Dirichlet boundary conditions and a non-Lipschitz coefficient. We obtain existence of a weak solution proving tightness for a sequence of polygonal approximations for the equation and solving a martingale problem for the weak limit.
Stability of Nonlinear Dirichlet BVPs Governed by Fractional Laplacian
Directory of Open Access Journals (Sweden)
Dorota Bors
2014-01-01
Dirichlet boundary data. A sufficient condition under which the solutions of the equations considered depend continuously on parameters is stated. The application of the results to an optimal control problem is presented. The methods applied in the paper make use of the variational structure of the problem.
Alam, Md. Ashad; Fukumizu, Kenji; Wang, Yu-Ping
2016-01-01
To the best of our knowledge, there are no general well-founded robust methods for statistical unsupervised learning. Most of the unsupervised methods explicitly or implicitly depend on the kernel covariance operator (kernel CO) or the kernel cross-covariance operator (kernel CCO). They are sensitive to contaminated data, even when using bounded positive definite kernels. First, we propose a robust kernel covariance operator (robust kernel CO) and a robust kernel cross-covariance operator (robust kern...
Invariant length scale in relativistic kinematics: lessons from Dirichlet branes
International Nuclear Information System (INIS)
Schuller, Frederic P.; Pfeiffer, Hendryk
2004-01-01
Dirac-Born-Infeld theory is shown to possess a hidden invariance associated with its maximal electric field strength. The local Lorentz symmetry O(1,n) on a Dirichlet-n-brane is thereby enhanced to an O(1,n)xO(1,n) gauge group, encoding both an invariant velocity and acceleration (or length) scale. The presence of this enlarged gauge group predicts consequences for the kinematics of observers on Dirichlet branes, with admissible accelerations being bounded from above. An important lesson is that the introduction of a fundamental length scale into relativistic kinematics does not enforce a deformation of Lorentz boosts, as one might assume naively. The exhibited structures further show that Moffat's non-symmetric gravitational theory qualifies as a candidate for a consistent Born-Infeld type gravity with regulated solutions.
Tractography segmentation using a hierarchical Dirichlet processes mixture model.
Wang, Xiaogang; Grimson, W Eric L; Westin, Carl-Fredrik
2011-01-01
In this paper, we propose a new nonparametric Bayesian framework to cluster white matter fiber tracts into bundles using a hierarchical Dirichlet processes mixture (HDPM) model. The number of clusters is automatically learned driven by data with a Dirichlet process (DP) prior instead of being manually specified. After the models of bundles have been learned from training data without supervision, they can be used as priors to cluster/classify fibers of new subjects for comparison across subjects. When clustering fibers of new subjects, new clusters can be created for structures not observed in the training data. Our approach does not require computing pairwise distances between fibers and can cluster a huge set of fibers across multiple subjects. We present results on several data sets, the largest of which has more than 120,000 fibers. Copyright © 2010 Elsevier Inc. All rights reserved.
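The Dirichlet process prior that lets the number of bundles be learned from the data can be illustrated by its stick-breaking construction. This is only the basic DP building block, not the hierarchical (HDPM) machinery of the paper; the value of `alpha` and the truncation level are illustrative assumptions.

```python
import random

def stick_breaking_weights(alpha, truncation=1000, seed=0):
    """Draw (truncated) mixture weights from a Dirichlet process prior:
    v_k ~ Beta(1, alpha); w_k = v_k * prod_{j<k} (1 - v_j).
    Larger alpha spreads mass over more atoms, i.e. favours more clusters."""
    rng = random.Random(seed)
    weights, remaining = [], 1.0
    for _ in range(truncation):
        v = rng.betavariate(1.0, alpha)
        weights.append(v * remaining)   # length broken off the unit stick
        remaining *= 1.0 - v            # stick left to break
    return weights

w = stick_breaking_weights(alpha=2.0)
# The truncated weights sum to nearly one; only a handful are non-negligible,
# which is why the effective number of clusters is finite and data-driven.
```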
A Dirichlet process mixture model for brain MRI tissue classification.
Ferreira da Silva, Adelino R
2007-04-01
Accurate classification of magnetic resonance images according to tissue type or region of interest has become a critical requirement in diagnosis, treatment planning, and cognitive neuroscience. Several authors have shown that finite mixture models give excellent results in the automated segmentation of MR images of the human normal brain. However, performance and robustness of finite mixture models deteriorate when the models have to deal with a variety of anatomical structures. In this paper, we propose a nonparametric Bayesian model for tissue classification of MR images of the brain. The model, known as Dirichlet process mixture model, uses Dirichlet process priors to overcome the limitations of current parametric finite mixture models. To validate the accuracy and robustness of our method we present the results of experiments carried out on simulated MR brain scans, as well as on real MR image data. The results are compared with similar results from other well-known MRI segmentation methods.
Finding A Minimally Informative Dirichlet Prior Using Least Squares
International Nuclear Information System (INIS)
Kelly, Dana
2011-01-01
In a Bayesian framework, the Dirichlet distribution is the conjugate distribution to the multinomial likelihood function, and so the analyst is required to develop a Dirichlet prior that incorporates available information. However, as it is a multiparameter distribution, choosing the Dirichlet parameters is less straightforward than choosing a prior distribution for a single parameter, such as p in the binomial distribution. In particular, one may wish to incorporate limited information into the prior, resulting in a minimally informative prior distribution that is responsive to updates with sparse data. In the case of binomial p or Poisson λ, the principle of maximum entropy can be employed to obtain a so-called constrained noninformative prior. However, even in the case of p, such a distribution cannot be written down in the form of a standard distribution (e.g., beta, gamma), and so a beta distribution is used as an approximation in the case of p. In the case of the multinomial model with parametric constraints, the approach of maximum entropy does not appear tractable. This paper presents an alternative approach, based on constrained minimization of a least-squares objective function, which leads to a minimally informative Dirichlet prior distribution. The alpha-factor model for common-cause failure, which is widely used in the United States, is the motivation for this approach, and is used to illustrate the method. In this approach to modeling common-cause failure, the alpha-factors, which are the parameters in the underlying multinomial model for common-cause failure, must be estimated from data that are often quite sparse, because common-cause failures tend to be rare, especially failures of more than two or three components, and so a prior distribution that is responsive to updates with sparse data is needed.
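The least-squares idea can be sketched as follows. This is a hypothetical simplification, not the paper's exact objective: we fix the Dirichlet means to the alpha-factor point estimates, parameterize the prior by a single concentration N (so alpha_i = N * m_i and Var_i = m_i(1-m_i)/(N+1)), and choose N by least squares against target marginal variances. The numerical means and target variances are made up for illustration.

```python
def min_informative_dirichlet(means, target_vars):
    """Sketch: choose Dirichlet parameters alpha_i = N * m_i whose means match
    the given point estimates, picking the concentration N by least squares
    between the implied variances m_i(1-m_i)/(N+1) and the targets."""
    s = [m * (1.0 - m) for m in means]
    # Minimize sum_i (s_i * x - t_i)^2 over x = 1/(N+1); closed form x* = <s,t>/<s,s>.
    x = sum(si * ti for si, ti in zip(s, target_vars)) / sum(si * si for si in s)
    N = 1.0 / x - 1.0
    return [N * m for m in means]

# Illustrative alpha-factor-like means (rare multi-component failures) and
# diffuse target variances, giving a small N, i.e. a minimally informative prior.
alphas = min_informative_dirichlet([0.95, 0.04, 0.01], [0.02, 0.01, 0.005])
```

A small resulting N is what makes the prior responsive to updates with sparse data: a few observations dominate the posterior.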
Dirichlet Component Regression and its Applications to Psychiatric Data
Gueorguieva, Ralitza; Rosenheck, Robert; Zelterman, Daniel
2008-01-01
We describe a Dirichlet multivariable regression method useful for modeling data representing components as a percentage of a total. This model is motivated by the unmet need in psychiatry and other areas to simultaneously assess the effects of covariates on the relative contributions of different components of a measure. The model is illustrated using the Positive and Negative Syndrome Scale (PANSS) for assessment of schizophrenia symptoms which, like many other metrics in psychiatry, is com...
Finding a minimally informative Dirichlet prior distribution using least squares
International Nuclear Information System (INIS)
Kelly, Dana; Atwood, Corwin
2011-01-01
In a Bayesian framework, the Dirichlet distribution is the conjugate distribution to the multinomial likelihood function, and so the analyst is required to develop a Dirichlet prior that incorporates available information. However, as it is a multiparameter distribution, choosing the Dirichlet parameters is less straightforward than choosing a prior distribution for a single parameter, such as p in the binomial distribution. In particular, one may wish to incorporate limited information into the prior, resulting in a minimally informative prior distribution that is responsive to updates with sparse data. In the case of binomial p or Poisson λ, the principle of maximum entropy can be employed to obtain a so-called constrained noninformative prior. However, even in the case of p, such a distribution cannot be written down in the form of a standard distribution (e.g., beta, gamma), and so a beta distribution is used as an approximation in the case of p. In the case of the multinomial model with parametric constraints, the approach of maximum entropy does not appear tractable. This paper presents an alternative approach, based on constrained minimization of a least-squares objective function, which leads to a minimally informative Dirichlet prior distribution. The alpha-factor model for common-cause failure, which is widely used in the United States, is the motivation for this approach, and is used to illustrate the method. In this approach to modeling common-cause failure, the alpha-factors, which are the parameters in the underlying multinomial model for common-cause failure, must be estimated from data that are often quite sparse, because common-cause failures tend to be rare, especially failures of more than two or three components, and so a prior distribution that is responsive to updates with sparse data is needed.
On Polya's inequality for torsional rigidity and first Dirichlet eigenvalue
Berg, M. van den; Ferone, V.; Nitsch, C.; Trombetti, C.
2016-01-01
Let $\\Omega$ be an open set in Euclidean space with finite Lebesgue measure $|\\Omega|$. We obtain some properties of the set function $F:\\Omega\\mapsto \\R^+$ defined by $$ F(\\Omega)=\\frac{T(\\Omega)\\lambda_1(\\Omega)}{|\\Omega|} ,$$ where $T(\\Omega)$ and $\\lambda_1(\\Omega)$ are the torsional rigidity and the first eigenvalue of the Dirichlet Laplacian respectively. We improve the classical P\\'olya bound $F(\\Omega)\\le 1,$ and show that $$F(\\Omega)\\le 1- \
Finding a Minimally Informative Dirichlet Prior Distribution Using Least Squares
International Nuclear Information System (INIS)
Kelly, Dana; Atwood, Corwin
2011-01-01
In a Bayesian framework, the Dirichlet distribution is the conjugate distribution to the multinomial likelihood function, and so the analyst is required to develop a Dirichlet prior that incorporates available information. However, as it is a multiparameter distribution, choosing the Dirichlet parameters is less straight-forward than choosing a prior distribution for a single parameter, such as p in the binomial distribution. In particular, one may wish to incorporate limited information into the prior, resulting in a minimally informative prior distribution that is responsive to updates with sparse data. In the case of binomial p or Poisson, the principle of maximum entropy can be employed to obtain a so-called constrained noninformative prior. However, even in the case of p, such a distribution cannot be written down in closed form, and so an approximate beta distribution is used in the case of p. In the case of the multinomial model with parametric constraints, the approach of maximum entropy does not appear tractable. This paper presents an alternative approach, based on constrained minimization of a least-squares objective function, which leads to a minimally informative Dirichlet prior distribution. The alpha-factor model for common-cause failure, which is widely used in the United States, is the motivation for this approach, and is used to illustrate the method. In this approach to modeling common-cause failure, the alpha-factors, which are the parameters in the underlying multinomial aleatory model for common-cause failure, must be estimated from data that is often quite sparse, because common-cause failures tend to be rare, especially failures of more than two or three components, and so a prior distribution that is responsive to updates with sparse data is needed.
General Dirichlet Series, Arithmetic Convolution Equations and Laplace Transforms
Czech Academy of Sciences Publication Activity Database
Glöckner, H.; Lucht, L.G.; Porubský, Štefan
2009-01-01
Vol. 193, No. 2 (2009), pp. 109-129, ISSN 0039-3223. R&D Projects: GA ČR GA201/07/0191. Institutional research plan: CEZ:AV0Z10300504. Keywords: arithmetic function * Dirichlet convolution * polynomial equation * analytic equation * topological algebra * holomorphic functional calculus * implicit function theorem * Laplace transform * semigroup * complex measure. Subject RIV: BA - General Mathematics. Impact factor: 0.645, year: 2009. http://arxiv.org/abs/0712.3172
Approximate kernel competitive learning.
Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang
2015-03-01
Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large-scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be computed and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large-scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL) method, which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling works for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) method based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates approximate kernel competitive learning for large-scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches. Copyright © 2014 Elsevier Ltd. All rights reserved.
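The subspace-via-sampling idea can be illustrated with a standard Nyström approximation of the kernel matrix, which avoids forming the full n x n matrix from scratch. This is a generic stand-in for AKCL's sampling step, not the authors' algorithm; the RBF kernel, `gamma`, and the landmark choice are illustrative assumptions.

```python
import numpy as np

def nystrom_kernel(X, landmark_idx, gamma=1.0):
    """Nystrom sketch of an RBF kernel matrix: K is approximated by
    C @ pinv(W) @ C.T, where C holds kernel evaluations of all points
    against m sampled landmark points and W is the landmark-landmark block."""
    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    L = X[landmark_idx]
    C = rbf(X, L)                       # n x m block
    W = rbf(L, L)                       # m x m block
    return C @ np.linalg.pinv(W) @ C.T  # rank-m approximation of K

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
K_approx = nystrom_kernel(X, np.arange(0, 200, 4))  # 50 landmarks
```

Storing only `C` and `W` costs O(nm + m^2) instead of O(n^2), which is the kind of reduction the abstract claims for AKCL.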
Energy Technology Data Exchange (ETDEWEB)
Aleiferis, P.G.; Taylor, A.M.K.P. [Imperial College of Science, Technology and Medicine, London (United Kingdom). Dept. of Mechanical Engineering; Ishii, K. [Honda International Technical School, Saitama (Japan); Urata, Y. [Honda R and D Co., Ltd., Tochigi (Japan). Tochigi R and D Centre
2004-04-01
The potential of lean combustion for the reduction in exhaust emissions and fuel consumption in spark ignition engines has long been established. However, the operating range of lean-burn spark ignition engines is limited by the level of cyclic variability in the early-flame development stage that typically corresponds to the 0-5 per cent mass fraction burned duration. In the current study, the cyclic variations in early flame development were investigated in an optical stratified-charge spark ignition engine at conditions close to stoichiometry [air-to-fuel ratio (A/F) = 15] and to the lean limit of stable operation (A/F = 22). Flame images were acquired through either a pentroof window ('tumble plane' of view) or the piston crown ('swirl plane' of view) and these were processed to calculate the intra-cycle flame-kernel radius evolution. In order to quantify the relative effects of local fuel concentration, gas motion, spark-energy release and heat losses to the electrodes on the flame-kernel growth rate, a zero-dimensional flame-kernel growth model, in conjunction with a one-dimensional spark ignition model, was employed. Comparison of the calculated flame-radius evolutions with the experimental data suggested that a variation in A/F around the spark plug of δ(A/F) ≈ 4 or, in terms of equivalence ratio φ, a variation of δφ ≈ 0.15 at most was large enough to account for 100 per cent of the observed cyclic variability in flame-kernel radius. A variation in the residual-gas fraction of about 20 per cent around the mean was found to account for up to 30 per cent of the variability in flame-kernel radius at the timing of 5 per cent mass fraction burned. The individual effect of 20 per cent variations in the 'mean' in-cylinder velocity at the spark plug at ignition timing was found to account for no more than 20 per cent of the measured cyclic variability in flame kernel radius. An individual effect of
Optimized Kernel Entropy Components.
Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau
2017-06-01
This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to a sorting of the importance of kernel eigenvectors by entropy instead of variance, as in the kernel principal components analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by means of compacting the information in very few features (often in just one or two). The proposed method produces features which have higher expressive power. In particular, it is based on the independent component analysis framework, and introduces an extra rotation to the eigen decomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both the methods is the selection of the kernel parameter, since it critically affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. The results of both the methods are illustrated in different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, the most successful rule for estimating the kernel parameter is based on maximum likelihood, and OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
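The core KECA step described here, sorting kernel eigenpairs by their Renyi-entropy contribution rather than by variance, is compact enough to sketch directly. The entropy score of eigenpair (lambda_i, e_i) is lambda_i * (1^T e_i)^2; the toy kernel matrix and length-scale below are illustrative assumptions, and OKECA's extra ICA rotation is not included.

```python
import numpy as np

def keca_scores(K):
    """Rank the eigenpairs of a kernel matrix K by their contribution to the
    Renyi entropy estimate, score_i = lambda_i * (1^T e_i)^2, as in KECA
    (contrast with kernel PCA, which ranks by lambda_i alone)."""
    lam, E = np.linalg.eigh(K)
    ones = np.ones(K.shape[0])
    scores = lam * (E.T @ ones) ** 2
    order = np.argsort(scores)[::-1]     # most entropy-preserving first
    return scores[order], order

# Toy Gaussian kernel matrix on 30 points in [0, 1].
X = np.linspace(0.0, 1.0, 30)[:, None]
K = np.exp(-((X - X.T) ** 2) / 0.1)
scores, order = keca_scores(K)
```

An eigenvector can carry large variance (large lambda_i) yet contribute little entropy if it is nearly orthogonal to the all-ones vector, which is why the two rankings differ.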
DEFF Research Database (Denmark)
Barndorff-Nielsen, Ole Eiler; Hansen, Peter Reinhard; Lunde, Asger
2011-01-01
In a recent paper we have introduced the class of realised kernel estimators of the increments of quadratic variation in the presence of noise. We showed that this estimator is consistent and derived its limit distribution under various assumptions on the kernel weights. In this paper we extend our...... that subsampling is impotent, in the sense that subsampling has no effect on the asymptotic distribution. Perhaps surprisingly, for the efficient smooth kernels, such as the Parzen kernel, we show that subsampling is harmful as it increases the asymptotic variance. We also study the performance of subsampled...
On the Dirichlet problem for an elliptic equation
Directory of Open Access Journals (Sweden)
Anatolii K. Gushchin
2015-03-01
Full Text Available It is well known that the concept of a generalized solution from the Sobolev space $W_2^1$ of the Dirichlet problem for a second order elliptic equation is not a generalization of the classical solution sensu stricto: not every continuous function on the domain boundary is a trace of some function from $W_2^1$. The present work is dedicated to the memory of Valentin Petrovich Mikhailov, who proposed a generalization of both these concepts. In Mikhailov's definition the boundary values of the solution are taken in $L_2$; this definition extends naturally to the case of boundary functions from $L_p$, $p > 1$. Subsequently, the author of this work has shown that solutions have the property of $(n-1)$-dimensional continuity; $n$ is the dimension of the space in which we consider the problem. This property is similar to the classical definition of uniform continuity, but traces of the function on measures from a special class should be considered instead of values of the function at points. This class is a little narrower than the class of Carleson measures. The trace of a function on a measure is an element of $L_p$ with respect to this measure. The property of $(n-1)$-dimensional continuity makes it possible to give another definition of the solution of the Dirichlet problem (a definition of an $(n-1)$-dimensionally continuous solution), which is close in form to the classical one. This definition does not require smoothness of the boundary. The Dirichlet problem in Mikhailov's formulation, and especially for the $(n-1)$-dimensionally continuous solution, has been studied insufficiently (in contrast to the cases of classical and generalized solutions). First of all, this refers to conditions on the right-hand side of the equation under which the Dirichlet problem is solvable. In this article new results in this direction are presented. In addition, we discuss the conditions on the coefficients of the equation and the conditions on
FDM for Elliptic Equations with Bitsadze-Samarskii-Dirichlet Conditions
Directory of Open Access Journals (Sweden)
Allaberen Ashyralyev
2012-01-01
Full Text Available A numerical method is proposed for solving the nonlocal boundary value problem for the multidimensional elliptic partial differential equation with the Bitsadze-Samarskii-Dirichlet condition. First- and second-order of accuracy stable difference schemes for the approximate solution of this nonlocal boundary value problem are presented. The stability estimates, coercivity, and almost coercivity inequalities for the solutions of these schemes are established. The theoretical statements for the solutions of these nonlocal elliptic problems are supported by the results of numerical examples.
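The building block of such difference schemes can be shown on the simplest case: a second-order finite-difference solve of a one-dimensional Poisson problem with plain Dirichlet conditions, using the Thomas algorithm for the tridiagonal system. This is a 1D illustration only, not the multidimensional Bitsadze-Samarskii scheme of the paper.

```python
def solve_poisson_dirichlet(f, a, b, n):
    """Second-order central differences for -u'' = f on (0, 1) with
    u(0) = a, u(1) = b, on n interior points; the tridiagonal system
    (diagonal 2, off-diagonals -1) is solved by the Thomas algorithm."""
    h = 1.0 / (n + 1)
    rhs = [f((i + 1) * h) * h * h for i in range(n)]
    rhs[0] += a            # Dirichlet data enters the right-hand side
    rhs[-1] += b
    c, d = [0.0] * n, [0.0] * n
    c[0], d[0] = -0.5, rhs[0] / 2.0
    for i in range(1, n):  # forward elimination
        m = 2.0 + c[i - 1]
        c[i] = -1.0 / m
        d[i] = (rhs[i] + d[i - 1]) / m
    u = [0.0] * n
    u[-1] = d[-1]
    for i in range(n - 2, -1, -1):  # back substitution
        u[i] = d[i] - c[i] * u[i + 1]
    return u

# -u'' = 2 with u(0) = u(1) = 0 has the exact solution u(x) = x(1 - x),
# which the second-order scheme reproduces exactly (it is a quadratic).
u = solve_poisson_dirichlet(lambda x: 2.0, 0.0, 0.0, 99)
```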
Prior Elicitation, Assessment and Inference with a Dirichlet Prior
Directory of Open Access Journals (Sweden)
Michael Evans
2017-10-01
Full Text Available Methods are developed for eliciting a Dirichlet prior based upon stating bounds on the individual probabilities that hold with high prior probability. This approach to selecting a prior is applied to a contingency table problem where it is demonstrated how to assess the prior with respect to the bias it induces as well as how to check for prior-data conflict. It is shown that the assessment of a hypothesis via relative belief can easily take into account what it means for the falsity of the hypothesis to correspond to a difference of practical importance and provide evidence in favor of a hypothesis.
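The elicitation criterion, that the stated component bounds hold with high prior probability, can be checked by Monte Carlo over Dirichlet draws. The sketch below is a generic check, not the authors' elicitation procedure; the concentration parameters and the interval bounds are illustrative assumptions.

```python
import random

def dirichlet_sample(alphas, rng):
    """Draw from Dirichlet(alphas) via normalized Gamma variates."""
    g = [rng.gammavariate(a, 1.0) for a in alphas]
    s = sum(g)
    return [x / s for x in g]

def prior_coverage(alphas, bounds, n_draws=20000, seed=1):
    """Estimate the prior probability that every component p_i lies inside
    its elicited interval [lo_i, hi_i] under Dirichlet(alphas)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_draws):
        p = dirichlet_sample(alphas, rng)
        if all(lo <= pi <= hi for pi, (lo, hi) in zip(p, bounds)):
            hits += 1
    return hits / n_draws

# Illustrative prior and bounds roughly centred on its means (4/7, 2/7, 1/7).
cov = prior_coverage([8.0, 4.0, 2.0], [(0.3, 0.9), (0.05, 0.6), (0.0, 0.5)])
```

If the estimated coverage falls short of the desired level, the concentration (or the bounds) would be adjusted, which is the iterative loop the elicitation method formalizes.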
Energy Technology Data Exchange (ETDEWEB)
Duff, I.
1994-12-31
This workshop focuses on kernels for iterative software packages. Specifically, the three speakers discuss various aspects of sparse BLAS kernels. Their topics are: 'Current status of user-level sparse BLAS'; 'Current status of the sparse BLAS toolkit'; and 'Adding matrix-matrix and matrix-matrix-matrix multiply to the sparse BLAS toolkit'.
Dirichlet Higgs in Extra-Dimension Consistent with Electroweak Data
International Nuclear Information System (INIS)
Naoyuki Haba; Kin-ya Oda; Ryo Takahashi
2011-01-01
We propose a simple five-dimensional extension of the Standard Model (SM) without any Higgs potential or extra fields. A Higgs doublet lives in the bulk of a flat line segment and its boundary condition is Dirichlet at the ends of the line, which causes electroweak symmetry breaking without a Higgs potential. The vacuum expectation value of the Higgs is induced by the Dirichlet boundary condition, which is generally allowed in higher dimensional theories. The lightest physical Higgs has a non-flat profile in the extra dimension even though the vacuum expectation value is flat. As a consequence, we predict a maximal top Yukawa deviation (no coupling between top and Higgs) for a brane-localized fermion and a small deviation, a multiplicative factor of 2√2/π ≅ 0.9 on the Yukawa coupling, for a bulk fermion. The latter is consistent with the electroweak precision data within 90% C.L. for 430 GeV ≤ m_KK ≤ 500 GeV. (authors)
Clustering disaggregated load profiles using a Dirichlet process mixture model
International Nuclear Information System (INIS)
Granell, Ramon; Axon, Colin J.; Wallom, David C.H.
2015-01-01
Highlights: • We show that the Dirichlet process mixture model is scalable. • Our model does not require the number of clusters as an input. • Our model creates clusters only from the features of the demand profiles. • We have used both residential and commercial data sets. - Abstract: The increasing availability of substantial quantities of power-use data in both the residential and commercial sectors raises the possibility of mining the data to the advantage of both consumers and network operations. We present a Bayesian non-parametric model to cluster load profiles from households and business premises. Evaluations show that our model performs as well as other popular clustering methods, but unlike most other methods it does not require the number of clusters to be predetermined by the user. We used the so-called ‘Chinese restaurant process’ method to solve the model, making use of the Dirichlet-multinomial distribution. The number of clusters grew logarithmically with the quantity of data, making the technique suitable for scaling to large data sets. We were able to show that the model could distinguish features such as the nationality, household size, and type of dwelling between the cluster memberships.
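The 'Chinese restaurant process' the abstract mentions, and the logarithmic growth in the number of clusters, can be sketched directly. This is the generic CRP prior draw, not the full load-profile model; `alpha` and the customer count are illustrative.

```python
import random

def chinese_restaurant_process(n_customers, alpha, seed=42):
    """Sample a partition from a CRP: each customer joins an existing table k
    with probability proportional to its current size, or opens a new table
    with probability proportional to alpha."""
    rng = random.Random(seed)
    tables = []        # current table sizes
    assignments = []   # table index per customer
    for _ in range(n_customers):
        weights = tables + [alpha]          # last slot = open a new table
        r = rng.uniform(0.0, sum(weights))
        acc, k = 0.0, 0
        for k, w in enumerate(weights):
            acc += w
            if r <= acc:
                break
        if k == len(tables):
            tables.append(1)                # new cluster created
        else:
            tables[k] += 1
        assignments.append(k)
    return assignments, tables

assignments, tables = chinese_restaurant_process(500, alpha=1.0)
# The expected number of tables grows like alpha * log(n), matching the
# logarithmic cluster growth reported in the abstract.
```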
Estimates of the first Dirichlet eigenvalue from exit time moment spectra
DEFF Research Database (Denmark)
Hurtado, Ana; Markvorsen, Steen; Palmer, Vicente
2013-01-01
We compute the first Dirichlet eigenvalue of a geodesic ball in a rotationally symmetric model space in terms of the moment spectrum for the Brownian motion exit times from the ball. This expression implies an estimate, as sharp as desired, for the first Dirichlet eigenvalue of a geodesic ball...
Classification With Truncated Distance Kernel.
Huang, Xiaolin; Suykens, Johan A K; Wang, Shuning; Hornegger, Joachim; Maier, Andreas
2018-05-01
This brief proposes a truncated distance (TL1) kernel, which results in a classifier that is nonlinear in the global region but is linear in each subregion. With this kernel, the subregion structure can be trained using all the training data and local linear classifiers can be established simultaneously. The TL1 kernel has good adaptiveness to nonlinearity and is suitable for problems which require different nonlinearities in different areas. Though the TL1 kernel is not positive semidefinite, some classical kernel learning methods are still applicable, which means that the TL1 kernel can be directly used in standard toolboxes by replacing the kernel evaluation. In numerical experiments, the TL1 kernel with a pregiven parameter achieves similar or better performance than the radial basis function kernel with the parameter tuned by cross validation, implying that the TL1 kernel is a promising nonlinear kernel for classification tasks.
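A common form of the truncated distance kernel is k(x, y) = max(rho - ||x - y||_1, 0); the sketch below assumes that form (and an illustrative rho), and shows the piecewise-linear behaviour the abstract describes: linear while the L1 distance is below rho, exactly zero beyond it.

```python
def tl1_kernel(x, y, rho):
    """Truncated distance (TL1) kernel sketch:
    k(x, y) = max(rho - ||x - y||_1, 0).
    Not positive semidefinite in general, but usable wherever an indefinite
    precomputed kernel is accepted."""
    dist = sum(abs(a - b) for a, b in zip(x, y))
    return max(rho - dist, 0.0)

# Linear inside the truncation radius, identically zero outside it.
k_near = tl1_kernel([0.0, 0.0], [0.1, 0.1], rho=1.0)   # 1.0 - 0.2 = 0.8
k_far = tl1_kernel([0.0, 0.0], [2.0, 2.0], rho=1.0)    # truncated to 0.0
```

Because distant points contribute exactly zero, the resulting Gram matrix is sparse, which is one reason local linear structure can be trained with all the data at once.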
Gärtner, Thomas
2009-01-01
This book provides a unique treatment of an important area of machine learning and answers the question of how kernel methods can be applied to structured data. Kernel methods are a class of state-of-the-art learning algorithms that exhibit excellent learning results in several application domains. Originally, kernel methods were developed with data in mind that can easily be embedded in a Euclidean vector space. Much real-world data does not have this property but is inherently structured. An example of such data, often consulted in the book, is the (2D) graph structure of molecules formed by
Locally linear approximation for Kernel methods : the Railway Kernel
Muñoz, Alberto; González, Javier
2008-01-01
In this paper we present a new kernel, the Railway Kernel, that works properly for general (nonlinear) classification problems, with the interesting property that it acts locally as a linear kernel. In this way, we avoid potential problems due to the use of a general purpose kernel, like the RBF kernel, such as the high dimension of the induced feature space. As a consequence, following our methodology the number of support vectors is much lower and, therefore, the generalization capab...
Motai, Yuichi
2015-01-01
Describes and discusses the variants of kernel analysis methods for data types that have been intensely studied in recent years This book covers kernel analysis topics ranging from the fundamental theory of kernel functions to its applications. The book surveys the current status, popular trends, and developments in kernel analysis studies. The author discusses multiple kernel learning algorithms and how to choose the appropriate kernels during the learning phase. Data-Variant Kernel Analysis is a new pattern analysis framework for different types of data configurations. The chapters include
Dirichlet Component Regression and its Applications to Psychiatric Data.
Gueorguieva, Ralitza; Rosenheck, Robert; Zelterman, Daniel
2008-08-15
We describe a Dirichlet multivariable regression method useful for modeling data representing components as a percentage of a total. This model is motivated by the unmet need in psychiatry and other areas to simultaneously assess the effects of covariates on the relative contributions of different components of a measure. The model is illustrated using the Positive and Negative Syndrome Scale (PANSS) for assessment of schizophrenia symptoms, which, like many other metrics in psychiatry, is composed of a sum of scores on several components, each, in turn, made up of sums of evaluations on several questions. We simultaneously examine the effects of baseline socio-demographic and co-morbid correlates on all of the components of the total PANSS score of patients from a schizophrenia clinical trial and identify variables associated with increasing or decreasing relative contributions of each component. Several definitions of residuals are provided. Diagnostics include measures of overdispersion, Cook's distance, and a local jackknife influence metric.
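The modeling target here is a vector of component proportions that sums to one, which is exactly the support of the Dirichlet distribution. A small numerical sketch of the defining moment identity E[p_i] = α_i/α₀, with hypothetical concentration parameters (not values from the PANSS analysis):

```python
import numpy as np

rng = np.random.default_rng(42)
# Hypothetical concentration parameters for three PANSS-like components
alpha = np.array([4.0, 2.0, 2.0])

draws = rng.dirichlet(alpha, size=100_000)  # each row is a proportion vector
emp_mean = draws.mean(axis=0)               # Monte Carlo estimate of E[p]
theo_mean = alpha / alpha.sum()             # E[p_i] = alpha_i / alpha_0
```

A Dirichlet regression links covariates to these α parameters, so covariate effects shift the expected relative contribution of each component.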
Fast Bayesian Inference in Dirichlet Process Mixture Models.
Wang, Lianming; Dunson, David B
2011-01-01
There has been increasing interest in applying Bayesian nonparametric methods in large samples and high dimensions. As Markov chain Monte Carlo (MCMC) algorithms are often infeasible, there is a pressing need for much faster algorithms. This article proposes a fast approach for inference in Dirichlet process mixture (DPM) models. Viewing the partitioning of subjects into clusters as a model selection problem, we propose a sequential greedy search algorithm for selecting the partition. Then, when conjugate priors are chosen, the resulting posterior conditionally on the selected partition is available in closed form. This approach allows testing of parametric models versus nonparametric alternatives based on Bayes factors. We evaluate the approach using simulation studies and compare it with four other fast nonparametric methods in the literature. We apply the proposed approach to three datasets including one from a large epidemiologic study. Matlab codes for the simulation and data analyses using the proposed approach are available online in the supplemental materials.
Two-point correlation function for Dirichlet L-functions
Bogomolny, E.; Keating, J. P.
2013-03-01
The two-point correlation function for the zeros of Dirichlet L-functions at a height E on the critical line is calculated heuristically using a generalization of the Hardy-Littlewood conjecture for pairs of primes in arithmetic progression. The result matches the conjectured random-matrix form in the limit as E → ∞ and, importantly, includes finite-E corrections. These finite-E corrections differ from those in the case of the Riemann zeta-function, obtained in Bogomolny and Keating (1996 Phys. Rev. Lett. 77 1472), by certain finite products of primes which divide the modulus of the primitive character used to construct the L-function in question.
Two-point correlation function for Dirichlet L-functions
International Nuclear Information System (INIS)
Bogomolny, E; Keating, J P
2013-01-01
The two-point correlation function for the zeros of Dirichlet L-functions at a height E on the critical line is calculated heuristically using a generalization of the Hardy–Littlewood conjecture for pairs of primes in arithmetic progression. The result matches the conjectured random-matrix form in the limit as E → ∞ and, importantly, includes finite-E corrections. These finite-E corrections differ from those in the case of the Riemann zeta-function, obtained in Bogomolny and Keating (1996 Phys. Rev. Lett. 77 1472), by certain finite products of primes which divide the modulus of the primitive character used to construct the L-function in question. (paper)
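For reference, the "conjectured random-matrix form" that these results approach as E → ∞ is the GUE pair-correlation function, a standard random-matrix fact rather than a result of the paper:

```latex
R_2(x) \;=\; 1 - \left( \frac{\sin \pi x}{\pi x} \right)^{2},
```

where x is measured in units of the mean zero spacing; the finite-E corrections discussed above modify this limiting form by lower-order terms depending on the modulus of the character.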
Weyl Group Multiple Dirichlet Series Type A Combinatorial Theory (AM-175)
Brubaker, Ben; Friedberg, Solomon
2011-01-01
Weyl group multiple Dirichlet series are generalizations of the Riemann zeta function. Like the Riemann zeta function, they are Dirichlet series with analytic continuation and functional equations, having applications to analytic number theory. By contrast, these Weyl group multiple Dirichlet series may be functions of several complex variables and their groups of functional equations may be arbitrary finite Weyl groups. Furthermore, their coefficients are multiplicative up to roots of unity, generalizing the notion of Euler products. This book proves foundational results about these series an
Directory of Open Access Journals (Sweden)
Nur Asiah Mohd Makhatar
2016-09-01
Full Text Available A numerical investigation is carried out into the flow and heat transfer within a fully developed mixed convection flow of water–alumina (Al2O3–water), water–titania (TiO2–water) and water–copper oxide (CuO–water) nanoliquids in a vertical channel under Dirichlet, Neumann and Robin boundary conditions. Actual values of thermophysical quantities are used in arriving at conclusions on the three nanoliquids. The influence of the Biot number on the velocity and temperature distributions is opposite in regions close to the left wall and the right wall. The Robin condition is seen to favour symmetry in the flow velocity, whereas the Dirichlet and Neumann conditions skew the flow distribution and push the point of maximum velocity to the right of the channel. A reversal of roles is seen between them in their influence on the flow in the left half and the right half of the channel, with related consequences for heat transport. Viscous dissipation is shown to aid flow and heat transport. The present findings reiterate the observation, made for other configurations, that only low concentrations of nanoparticles facilitate enhanced heat transport under all three temperature conditions. Significant change was observed under the Neumann condition, whereas the changes were extreme under the Dirichlet condition; the Robin condition is found to be the most stable. Further, all three nanoliquids show enhanced heat transport compared to the base liquid, with the CuO–water nanoliquid showing the highest enhancement in its Nusselt number, compared to Al2O3–water and TiO2–water.
DEFF Research Database (Denmark)
Barndorff-Nielsen, Ole Eiler; Hansen, P. Reinhard; Lunde, Asger
2009-01-01
Realized kernels use high-frequency data to estimate the daily volatility of individual stock prices. They can be applied to either trade or quote data. Here we provide the details of how we suggest implementing them in practice. We compare the estimates based on trade and quote data for the same stock and find a remarkable level of agreement. We identify some features of the high-frequency data which are challenging for realized kernels: local trends in the data, over periods of around 10 minutes, where the prices and quotes are driven up or down. These can be associated...
Adaptive metric kernel regression
DEFF Research Database (Denmark)
Goutte, Cyril; Larsen, Jan
2000-01-01
Kernel smoothing is a widely used non-parametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this contribution, we propose an algorithm that adapts the input metric used in multivariate regression by minimising a cross-validation estimate of the generalisation error. This allows us to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms...
Adaptive Metric Kernel Regression
DEFF Research Database (Denmark)
Goutte, Cyril; Larsen, Jan
1998-01-01
Kernel smoothing is a widely used nonparametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this paper, we propose an algorithm that adapts the input metric used in multivariate regression by minimising a cross-validation estimate of the generalisation error. This allows one to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms the standard...
Modeling Information Content Via Dirichlet-Multinomial Regression Analysis.
Ferrari, Alberto
2017-01-01
Shannon entropy is being increasingly used in biomedical research as an index of complexity and information content in sequences of symbols, e.g. languages, amino acid sequences, DNA methylation patterns and animal vocalizations. Yet the distributional properties of information entropy as a random variable have seldom been studied, leading researchers mainly to use linear models or simulation-based analytical approaches to assess differences in information content when entropy is measured repeatedly in different experimental conditions. Here a method to perform inference on entropy in such conditions is proposed. Building on results from the field of Bayesian entropy estimation, a symmetric Dirichlet-multinomial regression model, able to deal efficiently with the issue of mean entropy estimation, is formulated. Through a simulation study the model is shown to outperform linear modeling in a vast range of scenarios and to have promising statistical properties. As a practical example, the method is applied to a data set from a real experiment on animal communication.
The Dirichlet problem of a conformable advection-diffusion equation
Directory of Open Access Journals (Sweden)
Avci Derya
2017-01-01
Full Text Available The fractional advection-diffusion equations are obtained from a fractional power law for the matter flux. Diffusion processes in special types of porous media with fractal geometry can be modelled accurately by using these equations. However, the existing nonlocal fractional derivatives seem complicated and also lose some basic properties satisfied by usual derivatives. For these reasons, local fractional calculus has recently emerged to simplify the complexities of fractional models defined by nonlocal fractional operators. In this work, the conformable derivative, a local, well-behaved and limit-based definition, is used to obtain a local generalized form of the advection-diffusion equation. In addition, this study is devoted to giving a local generalized description of the combination of the diffusive flux governed by Fick's law and the advection flux associated with the velocity field. As a result, the constitutive conformable advection-diffusion equation can be easily obtained. A Dirichlet problem for the conformable advection-diffusion equation is solved by applying the fractional Laplace transform with respect to time t and the finite sine-Fourier transform with respect to the spatial coordinate x. Two illustrative examples are presented to show the behaviours of this new local generalized model. The dependence of the solution on the fractional order of the conformable derivative and the changing values of the problem parameters are illustrated using graphics produced by MATLAB codes.
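The "local, well-behaved and limit-based definition" referred to above is presumably the standard conformable derivative of Khalil et al.; for reference:

```latex
T_\alpha f(t) \;=\; \lim_{\varepsilon \to 0}
\frac{f\!\left(t + \varepsilon\, t^{\,1-\alpha}\right) - f(t)}{\varepsilon},
\qquad 0 < \alpha \le 1,
```

which for differentiable f reduces to T_α f(t) = t^{1−α} f′(t). This is why it retains the product, quotient, and chain rules that nonlocal fractional derivatives lose.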
A Probabilistic Recommendation Method Inspired by Latent Dirichlet Allocation Model
Directory of Open Access Journals (Sweden)
WenBo Xie
2014-01-01
Full Text Available The recent decade has witnessed an increasing popularity of recommendation systems, which help users acquire relevant knowledge, commodities, and services from an overwhelming information ocean on the Internet. Latent Dirichlet Allocation (LDA), originally presented as a graphical model for text topic discovery, has now found application in many other disciplines. In this paper, we propose an LDA-inspired probabilistic recommendation method by taking the user-item collecting behavior as a two-step process: every user first becomes a member of one latent user-group at a certain probability, and each user-group then collects various items with different probabilities. Gibbs sampling is employed to approximate all the probabilities in the two-step process. The experiment results on three real-world data sets, MovieLens, Netflix, and Last.fm, show that our method exhibits a competitive performance on precision, coverage, and diversity in comparison with four other typical recommendation methods. Moreover, we present an approximate strategy to reduce the computing complexity of our method with a slight degradation of the performance.
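The two-step process described above (user → latent group → item) composes two categorical distributions. A minimal numerical sketch, with randomly drawn probabilities standing in for the Gibbs-sampled estimates (all sizes and values hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_groups, n_items = 4, 2, 5

# Hypothetical stand-ins for the probabilities the paper estimates by Gibbs sampling
theta = rng.dirichlet(np.ones(n_groups), size=n_users)  # P(group | user)
phi = rng.dirichlet(np.ones(n_items), size=n_groups)    # P(item | group)

def recommend_probs(u):
    """P(item | user u) = sum_g P(g | u) * P(item | g)."""
    return theta[u] @ phi

p = recommend_probs(0)  # item probabilities for user 0; rank to recommend
```

Ranking items by these probabilities (excluding items the user already collected) yields the recommendation list.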
Kernel methods for deep learning
Cho, Youngmin
2012-01-01
We introduce a new family of positive-definite kernels that mimic the computation in large neural networks. We derive the different members of this family by considering neural networks with different activation functions. Using these kernels as building blocks, we also show how to construct other positive-definite kernels by operations such as composition, multiplication, and averaging. We explore the use of these kernels in standard models of supervised learning, such as support vector mach...
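One well-known member of this family is the degree-1 arc-cosine kernel, which corresponds to an infinitely wide single layer of rectified-linear units; a sketch of its closed form (assuming the standard Cho-Saul formula):

```python
import numpy as np

def arccos_kernel_deg1(x, y):
    """Degree-1 arc-cosine kernel: (1/pi) * ||x|| ||y|| * (sin t + (pi - t) cos t),
    where t is the angle between x and y. Mimics an infinitely wide one-layer
    network with rectified-linear activations."""
    nx, ny = np.linalg.norm(x), np.linalg.norm(y)
    cos_t = np.clip(x @ y / (nx * ny), -1.0, 1.0)  # clip guards rounding error
    t = np.arccos(cos_t)
    return (nx * ny / np.pi) * (np.sin(t) + (np.pi - t) * np.cos(t))

k = arccos_kernel_deg1(np.array([1.0, 0.0]), np.array([1.0, 0.0]))
```

For identical unit vectors the angle is zero and the kernel evaluates to 1; for orthogonal unit vectors it evaluates to 1/π.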
DEFF Research Database (Denmark)
Barndorff-Nielsen, Ole; Hansen, Peter Reinhard; Lunde, Asger
We propose a multivariate realised kernel to estimate the ex-post covariation of log-prices. We show this new consistent estimator is guaranteed to be positive semi-definite and is robust to measurement noise of certain types and can also handle non-synchronous trading. It is the first estimator...
DEFF Research Database (Denmark)
Sommer, Stefan Horst; Lauze, Francois Bernard; Nielsen, Mads
2011-01-01
In the LDDMM framework, optimal warps for image registration are found as end-points of critical paths for an energy functional, and the EPDiff equations describe the evolution along such paths. The Large Deformation Diffeomorphic Kernel Bundle Mapping (LDDKBM) extension of LDDMM allows scale space...
Spafford, Eugene H.; Mckendry, Martin S.
1986-01-01
An overview of the internal structure of the Clouds kernel was presented. An indication of how these structures will interact in the prototype Clouds implementation is given. Many specific details have yet to be determined and await experimentation with an actual working system.
Viscosity kernel of molecular fluids
DEFF Research Database (Denmark)
Puscasu, Ruslan; Todd, Billy; Daivis, Peter
2010-01-01
The density, temperature, and chain length dependencies of the reciprocal and real-space viscosity kernels are presented. We find that the density has a major effect on the shape of the kernel. The temperature range and chain lengths considered here have, by contrast, less impact on the overall normalized shape. Functional forms that fit the wave-vector-dependent kernel data over a large density and wave-vector range have also been tested. Finally, a structural normalization of the kernels in physical space is considered. Overall, the real-space viscosity kernel has a width of roughly 3–6 atomic diameters, which means...
Variable Kernel Density Estimation
Terrell, George R.; Scott, David W.
1992-01-01
We investigate some of the possibilities for improvement of univariate and multivariate kernel density estimates by varying the window over the domain of estimation, pointwise and globally. Two general approaches are to vary the window width by the point of estimation and by point of the sample observation. The first possibility is shown to be of little efficacy in one variable. In particular, nearest-neighbor estimators in all versions perform poorly in one and two dimensions, but begin to b...
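The "vary the window by the point of the sample observation" approach can be sketched as a sample-point estimator in which each observation carries its own bandwidth; the k-nearest-neighbour bandwidth rule below is one common choice, not necessarily the one analysed in the paper:

```python
import numpy as np

def sample_point_kde(x_grid, data, k=3):
    """1-D variable-kernel density estimate where each sample i has its own
    bandwidth h_i, set here to the distance to its k-th nearest neighbour."""
    data = np.asarray(data, float)
    n = len(data)
    # Bandwidth per sample: k-th smallest distance to the other samples
    d = np.abs(data[:, None] - data[None, :])
    h = np.sort(d, axis=1)[:, k]  # column 0 is the zero self-distance
    # Gaussian kernel contribution of each (grid point, sample) pair
    u = (x_grid[:, None] - data[None, :]) / h
    gauss = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)
    return (gauss / h).sum(axis=1) / n

dens = sample_point_kde(np.linspace(-3, 3, 7), [-1.0, -0.5, 0.0, 0.5, 1.0])
```

Samples in sparse regions receive wider kernels, which is the mechanism the paper evaluates against fixed-window estimators.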
Steerability of Hermite Kernel
Czech Academy of Sciences Publication Activity Database
Yang, Bo; Flusser, Jan; Suk, Tomáš
2013-01-01
Roč. 27, č. 4 (2013), 1354006-1-1354006-25 ISSN 0218-0014 R&D Projects: GA ČR GAP103/11/1552 Institutional support: RVO:67985556 Keywords: Hermite polynomials * Hermite kernel * steerability * adaptive filtering Subject RIV: JD - Computer Applications, Robotics Impact factor: 0.558, year: 2013 http://library.utia.cas.cz/separaty/2013/ZOI/yang-0394387.pdf
Kernel Machine SNP-set Testing under Multiple Candidate Kernels
Wu, Michael C.; Maity, Arnab; Lee, Seunggeun; Simmons, Elizabeth M.; Harmon, Quaker E.; Lin, Xinyi; Engel, Stephanie M.; Molldrem, Jeffrey J.; Armistead, Paul M.
2013-01-01
Joint testing for the cumulative effect of multiple single nucleotide polymorphisms grouped on the basis of prior biological knowledge has become a popular and powerful strategy for the analysis of large scale genetic association studies. The kernel machine (KM) testing framework is a useful approach that has been proposed for testing associations between multiple genetic variants and many different types of complex traits by comparing pairwise similarity in phenotype between subjects to pairwise similarity in genotype, with similarity in genotype defined via a kernel function. An advantage of the KM framework is its flexibility: choosing different kernel functions allows for different assumptions concerning the underlying model and can allow for improved power. In practice, it is difficult to know which kernel to use a priori since this depends on the unknown underlying trait architecture and selecting the kernel which gives the lowest p-value can lead to inflated type I error. Therefore, we propose practical strategies for KM testing when multiple candidate kernels are present based on constructing composite kernels and based on efficient perturbation procedures. We demonstrate through simulations and real data applications that the procedures protect the type I error rate and can lead to substantially improved power over poor choices of kernels and only modest differences in power versus using the best candidate kernel. PMID:23471868
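One of the two strategies mentioned, constructing a composite kernel from the candidates, amounts to forming a (weighted) sum of Gram matrices. A toy sketch with a linear kernel and an IBS-style genotype kernel (the data and equal weights are illustrative assumptions):

```python
import numpy as np

def composite_kernel(kernels, weights=None):
    """Weighted sum of candidate kernel (Gram) matrices; equal weights by
    default. A convex combination of PSD kernels is itself a valid kernel."""
    kernels = [np.asarray(K, float) for K in kernels]
    if weights is None:
        weights = np.ones(len(kernels)) / len(kernels)
    return sum(w * K for w, K in zip(weights, kernels))

# Toy genotype matrix (n subjects x p SNPs, allele counts 0/1/2), hypothetical data
G = np.array([[0, 1, 2], [1, 1, 0], [2, 0, 1.0]])
K_lin = G @ G.T                                                # linear kernel
K_ibs = 1 - np.abs(G[:, None, :] - G[None, :, :]).sum(-1) / (2 * G.shape[1])  # IBS
K = composite_kernel([K_lin, K_ibs])
```

The KM test statistic is then computed from the composite Gram matrix, hedging against a poor single-kernel choice as the abstract describes.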
Smolka, Gert
1994-01-01
Oz is a concurrent language providing for functional, object-oriented, and constraint programming. This paper defines Kernel Oz, a semantically complete sublanguage of Oz. It was an important design requirement that Oz be definable by reduction to a lean kernel language. The definition of Kernel Oz introduces three essential abstractions: the Oz universe, the Oz calculus, and the actor model. The Oz universe is a first-order structure defining the values and constraints Oz computes with. The ...
2010-01-01
7 CFR 981.7 - Edible kernel. Edible kernel means a kernel, piece, or particle of almond kernel that is not inedible. [41 FR 26852, June 30, 1976]
7 CFR 981.408 - Inedible kernel.
2010-01-01
§ 981.408 Inedible kernel. Pursuant to § 981.8, the definition of inedible kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as...
7 CFR 981.8 - Inedible kernel.
2010-01-01
§ 981.8 Inedible kernel. Inedible kernel means a kernel, piece, or particle of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or...
Existence of weak solutions to first-order stationary mean-field games with Dirichlet conditions
Ferreira, Rita; Gomes, Diogo A.; Tada, Teruo
2018-01-01
In this paper, we study first-order stationary monotone mean-field games (MFGs) with Dirichlet boundary conditions. While for Hamilton--Jacobi equations Dirichlet conditions may not be satisfied, here, we establish the existence of solutions of MFGs that satisfy those conditions. To construct these solutions, we introduce a monotone regularized problem. Applying Schaefer's fixed-point theorem and using the monotonicity of the MFG, we verify that there exists a unique weak solution to the regularized problem. Finally, we take the limit of the solutions of the regularized problem and using Minty's method, we show the existence of weak solutions to the original MFG.
Existence of weak solutions to first-order stationary mean-field games with Dirichlet conditions
Ferreira, Rita
2018-04-19
In this paper, we study first-order stationary monotone mean-field games (MFGs) with Dirichlet boundary conditions. While for Hamilton--Jacobi equations Dirichlet conditions may not be satisfied, here, we establish the existence of solutions of MFGs that satisfy those conditions. To construct these solutions, we introduce a monotone regularized problem. Applying Schaefer's fixed-point theorem and using the monotonicity of the MFG, we verify that there exists a unique weak solution to the regularized problem. Finally, we take the limit of the solutions of the regularized problem and using Minty's method, we show the existence of weak solutions to the original MFG.
DEFF Research Database (Denmark)
Barndorff-Nielsen, Ole Eiler; Hansen, Peter Reinhard; Lunde, Asger
2011-01-01
We propose a multivariate realised kernel to estimate the ex-post covariation of log-prices. We show this new consistent estimator is guaranteed to be positive semi-definite and is robust to measurement error of certain types and can also handle non-synchronous trading. It is the first estimator which has these three properties, which are all essential for empirical work in this area. We derive the large sample asymptotics of this estimator and assess its accuracy using a Monte Carlo study. We implement the estimator on some US equity data, comparing our results to previous work which has used...
Clustering via Kernel Decomposition
DEFF Research Database (Denmark)
Have, Anna Szynkowiak; Girolami, Mark A.; Larsen, Jan
2006-01-01
Methods for spectral clustering have been proposed recently which rely on the eigenvalue decomposition of an affinity matrix. In this work it is proposed that the affinity matrix is created based on the elements of a non-parametric density estimator. This matrix is then decomposed to obtain posterior probabilities of class membership using an appropriate form of nonnegative matrix factorization. The troublesome selection of hyperparameters such as kernel width and number of clusters can be handled using standard cross-validation methods, as is demonstrated on a number of diverse data sets.
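The eigendecomposition step can be sketched as follows; note the paper builds the affinity matrix from a non-parametric density estimator and obtains memberships via nonnegative matrix factorization, whereas this minimal stand-in uses a plain Gaussian affinity and a sign split:

```python
import numpy as np

def kernel_spectral_labels(X, sigma=1.0):
    """Two-cluster spectral split: Gaussian-kernel affinity matrix, then the
    sign pattern of the eigenvector for the second-largest eigenvalue."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    A = np.exp(-d2 / (2 * sigma**2))      # symmetric affinity matrix
    vals, vecs = np.linalg.eigh(A)        # eigh returns ascending eigenvalues
    split_vec = vecs[:, -2]               # second-largest eigenvalue's vector
    return (split_vec > 0).astype(int)

# Two well-separated pairs of points on a line
labels = kernel_spectral_labels(np.array([[0.0, 0], [0.5, 0], [3.0, 0], [3.5, 0]]))
```

The first eigenvector of the affinity matrix is entrywise positive (Perron vector), so the cluster information appears from the second eigenvector onward.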
Cuahutenango-Barro, B.; Taneco-Hernández, M. A.; Gómez-Aguilar, J. F.
2017-12-01
Analytical solutions of the wave equation with bi-fractional-order and frictional memory kernel of Mittag-Leffler type are obtained via Caputo-Fabrizio fractional derivative in the Liouville-Caputo sense. Through the method of separation of variables and Laplace transform method we derive closed-form solutions and establish fundamental solutions. Special cases with homogeneous Dirichlet boundary conditions and nonhomogeneous initial conditions, as well as for the external force are considered. Numerical simulations of the special solutions were done and novel behaviors are obtained.
Global Polynomial Kernel Hazard Estimation
DEFF Research Database (Denmark)
Hiabu, Munir; Miranda, Maria Dolores Martínez; Nielsen, Jens Perch
2015-01-01
This paper introduces a new bias reducing method for kernel hazard estimation. The method is called global polynomial adjustment (GPA). It is a global correction which is applicable to any kernel hazard estimator. The estimator works well from a theoretical point of view as it asymptotically redu...
Self-Commutators of Composition Operators with Monomial Symbols on the Dirichlet Space
Directory of Open Access Journals (Sweden)
A. Abdollahi
2011-01-01
Full Text Available Let φ(z) = z^n, z in the unit disc, for some positive integer n, and let C_φ be the composition operator on the Dirichlet space induced by φ. In this paper, we completely determine the point spectrum, spectrum, essential spectrum, and essential norm of the operators C_φ*C_φ, C_φC_φ* and the self-commutators of C_φ, which shows that the spectrum and point spectrum coincide. We also find the eigenfunctions of the operators.
Directory of Open Access Journals (Sweden)
Tengfei Shen
2015-12-01
Full Text Available This paper deals with the multiplicity of solutions for Dirichlet boundary conditions of second-order quasilinear equations with impulsive effects. By using critical point theory, a new result is obtained. An example is given to illustrate the main result.
Czech Academy of Sciences Publication Activity Database
Exner, Pavel; Barseghyan, Diana
2013-01-01
Roč. 3, č. 4 (2013), s. 465-484 ISSN 1664-039X R&D Projects: GA ČR GAP203/11/0701 Institutional support: RVO:61389005 Keywords : Dirichlet Laplacian * cusp-shaped region * Lieb-Thirring inequalities * bending and twisting Subject RIV: BE - Theoretical Physics
Well-posedness of the second-order linear singular Dirichlet problem
Czech Academy of Sciences Publication Activity Database
Lomtatidze, Alexander; Opluštil, Z.
2015-01-01
Roč. 22, č. 3 (2015), s. 409-419 ISSN 1072-947X Institutional support: RVO:67985840 Keywords : singular Dirichlet problem * well-posedness Subject RIV: BA - General Mathematics Impact factor: 0.417, year: 2015 http://www.degruyter.com/view/j/gmj.2015.22.issue-3/gmj-2015-0023/gmj-2015-0023. xml
Denis Valle; Benjamin Baiser; Christopher W. Woodall; Robin Chazdon; Jerome. Chave
2014-01-01
We propose a novel multivariate method to analyse biodiversity data based on the Latent Dirichlet Allocation (LDA) model. LDA, a probabilistic model, reduces assemblages to sets of distinct component communities. It produces easily interpretable results, can represent abrupt and gradual changes in composition, accommodates missing data and allows for coherent estimates...
Full Text or Abstract? : Examining Topic Coherence Scores Using Latent Dirichlet Allocation
Syed, S.; Spruit, M.
2017-01-01
This paper assesses topic coherence and human topic ranking of uncovered latent topics from scientific publications when utilizing the topic model latent Dirichlet allocation (LDA) on abstract and full-text data. The coherence of a topic, used as a proxy for topic quality, is based on the
Bruemmer, David J [Idaho Falls, ID
2009-11-17
A robot platform includes perceptors, locomotors, and a system controller. The system controller executes a robot intelligence kernel (RIK) that includes a multi-level architecture and a dynamic autonomy structure. The multi-level architecture includes a robot behavior level for defining robot behaviors that incorporate robot attributes, and a cognitive level for defining conduct modules that blend an adaptive interaction between predefined decision functions and the robot behaviors. The dynamic autonomy structure is configured for modifying a transaction capacity between an operator intervention and a robot initiative and may include multiple levels, with at least a teleoperation mode configured to maximize the operator intervention and minimize the robot initiative and an autonomous mode configured to minimize the operator intervention and maximize the robot initiative. Within the RIK at least the cognitive level includes the dynamic autonomy structure.
Preparation and characterization of active carbon using palm kernel ...
African Journals Online (AJOL)
Activated carbons were prepared from palm kernel shells. The carbonization temperature was 600 °C, with a residence time of 5 min for each process. Chemical activation was done by heating a mixture of the carbonized material and the activating agents at a temperature of 70 °C to form a paste, followed by subsequent cooling and ...
Mixture Density Mercer Kernels: A Method to Learn Kernels
National Aeronautics and Space Administration — This paper presents a method of generating Mercer Kernels from an ensemble of probabilistic mixture models, where each mixture model is generated from a Bayesian...
DEFF Research Database (Denmark)
Antón Castro, Francesc/François
2015-01-01
In this paper, we are addressing the geometric and topological invariants that arise in the exact computation of the Delone (Delaunay) graph and the Dirichlet/Voronoi tiling of N-dimensional hyperspheres using Ritt-Wu's algorithm. Our main contribution is a methodology for automated derivation of geometric and topological invariants of the Dirichlet tiling of (N+1)-dimensional hyperspheres and its dual Delone graph from the invariants of the Dirichlet tiling of N-dimensional hyperspheres and its dual Delone graph (starting from N = 3).
DEFF Research Database (Denmark)
Anton, François
In this paper, we are addressing the geometric and topological invariants that arise in the exact computation of the Delone (Delaunay) graph and the Dirichlet/Voronoi tiling of N-dimensional hyperspheres using Ritt-Wu's algorithm. Our main contribution is a methodology for automated derivation of geometric and topological invariants of the Dirichlet tiling of (N+1)-dimensional hyperspheres and its dual Delone graph from the invariants of the Dirichlet tiling of N-dimensional hyperspheres and its dual Delone graph (starting from N = 3).
National Research Council Canada - National Science Library
Gibou, Frederic; Fedkiw, Ronald
2004-01-01
In this paper, the authors first describe a fourth order accurate finite difference discretization for both the Laplace equation and the heat equation with Dirichlet boundary conditions on irregular domains...
2010-01-01
7 CFR 981.9 - Kernel weight. Kernel weight means the weight of kernels, including...
2010-01-01
7 CFR 51.2295 - Half kernel (United States Standards for Shelled English Walnuts (Juglans regia)). Half kernel means the separated half of a kernel with not more than one-eighth broken off.
A kernel version of spatial factor analysis
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg
2009-01-01
Schölkopf et al. introduce kernel PCA. Shawe-Taylor and Cristianini is an excellent reference for kernel methods in general. Bishop and Press et al. describe kernel methods among many other subjects. Nielsen and Canty use kernel PCA to detect change in univariate airborne digital camera images. The kernel version of PCA handles nonlinearities by implicitly transforming data into a high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. In this paper we shall apply kernel versions of PCA, maximum autocorrelation factor (MAF) analysis...
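The implicit transformation plus linear analysis described above is, for kernel PCA, an eigendecomposition of the doubly centred Gram matrix. A compact sketch (the Gaussian kernel and toy data are illustrative assumptions):

```python
import numpy as np

def kernel_pca(X, sigma=1.0, n_components=2):
    """Kernel PCA sketch: Gaussian Gram matrix, double centring in feature
    space, then projections onto the leading eigenvectors."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma**2))
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                        # centre the implicit feature map
    vals, vecs = np.linalg.eigh(Kc)       # ascending eigenvalue order
    idx = np.argsort(vals)[::-1][:n_components]
    # Component scores: eigenvectors scaled by sqrt(eigenvalue)
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))

Z = kernel_pca(np.array([[0.0, 0], [1, 0], [0, 1], [1, 1]]))
```

All nonlinearity enters through the kernel function; replacing the Gaussian kernel changes the implicit feature space without changing the linear analysis.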
kernel oil by lipolytic organisms
African Journals Online (AJOL)
USER
2010-08-02
Rancidity of extracted cashew oil was observed with cashew kernel stored at 70, 80 and 90% ... method of American Oil Chemist Society AOCS (1978) using glacial ... changes occur and volatile products are formed that are...
Multivariate and semiparametric kernel regression
Härdle, Wolfgang; Müller, Marlene
1997-01-01
The paper gives an introduction to theory and application of multivariate and semiparametric kernel smoothing. Multivariate nonparametric density estimation is an often used pilot tool for examining the structure of data. Regression smoothing helps in investigating the association between covariates and responses. We concentrate on kernel smoothing using local polynomial fitting which includes the Nadaraya-Watson estimator. Some theory on the asymptotic behavior and bandwidth selection is pro...
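The Nadaraya-Watson estimator mentioned above is the local polynomial fit of degree zero: a kernel-weighted average of the responses. A minimal sketch with a Gaussian kernel (bandwidth and data are illustrative):

```python
import numpy as np

def nadaraya_watson(x0, x, y, h):
    """Nadaraya-Watson kernel regression at query points x0:
    m_hat(x0) = sum_i K((x0 - x_i)/h) y_i / sum_i K((x0 - x_i)/h)."""
    w = np.exp(-0.5 * ((x0[:, None] - x[None, :]) / h) ** 2)  # Gaussian weights
    return (w * y).sum(axis=1) / w.sum(axis=1)

x = np.linspace(0, 1, 50)
y = x**2                                  # noiseless toy response
yhat = nadaraya_watson(np.array([0.5]), x, y, h=0.05)
```

Higher-degree local polynomial fitting, as discussed in the paper, replaces this weighted average with a weighted least-squares fit in a neighbourhood of each query point, which reduces boundary bias.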
DEFF Research Database (Denmark)
Barndorff-Nielsen, Ole E.
The density function of the gamma distribution is used as the shift kernel in Brownian semistationary processes modelling the timewise behaviour of the velocity in turbulent regimes. This report presents exact and asymptotic properties of the second-order structure function under such a model, and relates these to results of von Kármán and Howarth. But first it is shown that the gamma kernel is interpretable as a Green's function.
Results from ORNL Characterization of Nominal 350 (micro)m NUCO Kernels from the BWXT 59344 batch
International Nuclear Information System (INIS)
Hunn, John D.; Kercher, Andrew K.; Menchhofer, Paul A.; Price, Jeffery R.
2005-01-01
This document is a compilation of characterization data obtained on nominal 350 (micro)m natural enrichment uranium oxide/uranium carbide kernels (NUCO) produced by BWXT for the Advanced Gas Reactor Fuel Development and Qualification Program. These kernels were produced as part of a development effort at BWXT to address issues involving forming and heat treatment and were shipped to ORNL for additional characterization and for coating tests. The kernels were identified as G73N-NU-59344. 250 grams were shipped to ORNL. Size, shape, and microstructural analyses were performed. These kernels were preceded by G73B-NU-69300 and G73B-NU-69301, which were kernels produced and delivered to ORNL earlier in the development phase. Characterization of the kernels from G73B-NU-69300 was summarized in ORNL/CF-04/07 'Results from ORNL Characterization of Nominal 350 (micro)m NUCO Kernels from the BWXT 69300 composite'.
NLIE of Dirichlet sine-Gordon model for boundary bound states
International Nuclear Information System (INIS)
Ahn, Changrim; Bajnok, Zoltan; Palla, Laszlo; Ravanini, Francesco
2008-01-01
We investigate boundary bound states of the sine-Gordon model on the finite-size strip with Dirichlet boundary conditions. For this purpose we derive the nonlinear integral equation (NLIE) for the boundary excited states from the Bethe ansatz equation of the inhomogeneous XXZ spin 1/2 chain with boundary imaginary roots discovered by Saleur and Skorik. Taking a large volume (IR) limit we calculate boundary energies, boundary reflection factors and boundary Luescher corrections and compare them with the excited boundary states of the Dirichlet sine-Gordon model first considered by Dorey and Mattsson. We also consider the short distance limit and relate the IR scattering data with that of the UV conformal field theory.
On selecting a prior for the precision parameter of Dirichlet process mixture models
Dorazio, R.M.
2009-01-01
In hierarchical mixture models the Dirichlet process is used to specify latent patterns of heterogeneity, particularly when the distribution of latent parameters is thought to be clustered (multimodal). The parameters of a Dirichlet process include a precision parameter α and a base probability measure G0. In problems where α is unknown and must be estimated, inferences about the level of clustering can be sensitive to the choice of prior assumed for α. In this paper an approach is developed for computing a prior for the precision parameter α that can be used in the presence or absence of prior information about the level of clustering. This approach is illustrated in an analysis of counts of stream fishes. The results of this fully Bayesian analysis are compared with an empirical Bayes analysis of the same data and with a Bayesian analysis based on an alternative commonly used prior.
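The link between the precision parameter and the level of clustering can be illustrated with the Chinese restaurant process representation of the Dirichlet process, under which the expected number of clusters grows roughly like α log(1 + n/α). A minimal simulation sketch (values are illustrative, not from the paper's fish data):

```python
import numpy as np

def crp_num_clusters(alpha, n, rng):
    """Simulate the cluster count under a Chinese restaurant process with
    precision alpha: customer i starts a new table w.p. alpha/(alpha + i)."""
    k = 0
    for i in range(n):
        if rng.random() < alpha / (alpha + i):
            k += 1
    return k

rng = np.random.default_rng(1)
n = 500
# Larger precision -> more clusters a priori, which is why the prior on
# the precision matters for inferences about clustering
mean_k_small = np.mean([crp_num_clusters(0.5, n, rng) for _ in range(200)])
mean_k_large = np.mean([crp_num_clusters(5.0, n, rng) for _ in range(200)])
```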
The Dirichlet problem with L2-boundary data for elliptic linear equations
Chabrowski, Jan
1991-01-01
The Dirichlet problem has a very long history in mathematics and its importance in partial differential equations, harmonic analysis, potential theory and the applied sciences is well-known. In the last decade the Dirichlet problem with L2-boundary data has attracted the attention of several mathematicians. The significant features of this recent research are the use of weighted Sobolev spaces, existence results for elliptic equations under very weak regularity assumptions on coefficients, energy estimates involving L2-norm of a boundary data and the construction of a space larger than the usual Sobolev space W1,2 such that every L2-function on the boundary of a given set is the trace of a suitable element of this space. The book gives a concise account of main aspects of these recent developments and is intended for researchers and graduate students. Some basic knowledge of Sobolev spaces and measure theory is required.
A second eigenvalue bound for the Dirichlet Schrödinger equation with a radially symmetric potential
Directory of Open Access Journals (Sweden)
Craig Haile
2000-01-01
Full Text Available We study the time-independent Schrödinger equation with radially symmetric potential $k|x|^\alpha$, $k \ge 0$, $k \in \mathbb{R}$, $\alpha \ge 2$ on a bounded domain $\Omega$ in $\mathbb{R}^n$ ($n \ge 2$) with Dirichlet boundary conditions. In particular, we compare the eigenvalue $\lambda_2(\Omega)$ of the operator $-\Delta + k|x|^\alpha$ on $\Omega$ with the eigenvalue $\lambda_2(S_1)$ of the same operator $-\Delta + kr^\alpha$ on a ball $S_1$, where $S_1$ has radius such that the first eigenvalues are the same ($\lambda_1(\Omega) = \lambda_1(S_1)$). The main result is to show $\lambda_2(\Omega) \le \lambda_2(S_1)$. We also give an extension of the main result to the case of a more general elliptic eigenvalue problem on a bounded domain $\Omega$ with Dirichlet boundary conditions.
Directory of Open Access Journals (Sweden)
Samira Hosseini
Full Text Available Abstract One of the main drawbacks of the Element Free Galerkin (EFG) method is its dependence on moving least squares shape functions, which do not satisfy the Kronecker delta property, so Dirichlet boundary conditions cannot be applied directly. The aim of the present paper is to discuss different aspects of three widely used methods of applying Dirichlet boundary conditions in the EFG method: Lagrange multipliers, the penalty method, and coupling with the finite element method. Numerical simulations are presented to compare the results of these methods from the perspective of accuracy, convergence and computational expense. These methods have been implemented in an object-oriented programming environment, called INSANE, and the results are presented and compared with the analytical solutions.
Energy Technology Data Exchange (ETDEWEB)
Mejri, Youssef, E-mail: josef-bizert@hotmail.fr [Aix Marseille Universite, Toulon Universite, CNRS, CPT, Marseille (France); Dép. des Mathématiques, Faculté des Sciences de Bizerte, 7021 Jarzouna (Tunisia); Laboratoire de Modélisation Mathématique et Numérique dans les Sciences de l’Ingénieur, ENIT BP 37, Le Belvedere, 1002 Tunis (Tunisia)
2016-06-15
In this article, we study the boundary inverse problem of determining the aligned magnetic field appearing in the magnetic Schrödinger equation in a periodic quantum cylindrical waveguide, by knowledge of the Dirichlet-to-Neumann map. We prove a Hölder stability estimate with respect to the Dirichlet-to-Neumann map, by means of the geometrical optics solutions of the magnetic Schrödinger equation.
Directory of Open Access Journals (Sweden)
Qingkai Kong
2012-02-01
Full Text Available In this paper, we study the existence and multiplicity of positive solutions of a class of nonlinear fractional boundary value problems with Dirichlet boundary conditions. By applying the fixed point theory on cones we establish a series of criteria for the existence of one, two, any arbitrary finite number, and an infinite number of positive solutions. A criterion for the nonexistence of positive solutions is also derived. Several examples are given for demonstration.
Differential calculus for Dirichlet forms: The measure-valued gradient preserved by image
Bouleau, Nicolas
2005-01-01
In order to develop a differential calculus for error propagation we study local Dirichlet forms on probability spaces with square field operator $\Gamma$ -- i.e. error structures -- and we are looking for an object related to $\Gamma$ which is linear and with a good behaviour by images. For this we introduce a new notion called the measure-valued gradient which is a randomized square root of $\Gamma$. The exposition begins with inspecting some natural notions candidate to solve the problem b...
Adaptive Convergence Rates of a Dirichlet Process Mixture of Multivariate Normals
Tokdar, Surya T.
2011-01-01
It is shown that a simple Dirichlet process mixture of multivariate normals offers Bayesian density estimation with adaptive posterior convergence rates. Toward this, a novel sieve for non-parametric mixture densities is explored, and its rate adaptability to various smoothness classes of densities in arbitrary dimension is demonstrated. This sieve construction is expected to offer a substantial technical advancement in studying Bayesian non-parametric mixture models based on stick-breaking p...
Analyses of Developmental Rate Isomorphy in Ectotherms: Introducing the Dirichlet Regression.
Directory of Open Access Journals (Sweden)
David S Boukal
Full Text Available Temperature drives development in insects and other ectotherms because their metabolic rate and growth depend directly on thermal conditions. However, relative durations of successive ontogenetic stages often remain nearly constant across a substantial range of temperatures. This pattern, termed 'developmental rate isomorphy' (DRI) in insects, appears to be widespread, and reported departures from DRI are generally very small. We show that these conclusions may be due to the caveats hidden in the statistical methods currently used to study DRI. Because the DRI concept is inherently based on proportional data, we propose that Dirichlet regression applied to individual-level data is an appropriate statistical method to critically assess DRI. As a case study we analyze data on five aquatic and four terrestrial insect species. We find that results obtained by Dirichlet regression are consistent with DRI violation in at least eight of the studied species, although standard analysis detects significant departure from DRI in only four of them. Moreover, the departures from DRI detected by Dirichlet regression are consistently much larger than previously reported. The proposed framework can also be used to infer whether observed departures from DRI reflect life history adaptations to size- or stage-dependent effects of varying temperature. Our results indicate that the concept of DRI in insects and other ectotherms should be critically re-evaluated and put in a wider context, including the concept of 'equiproportional development' developed for copepods.
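Since DRI concerns stage-duration proportions, a toy simulation shows why Dirichlet-distributed data are the natural target for such a regression: under DRI, the mean proportion spent in each stage is the same at every temperature, and only the spread may vary. All parameter values below are invented for illustration, not taken from the paper's insect data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Common mean stage proportions (three ontogenetic stages) under DRI;
# the Dirichlet precision controls individual-level variability only
mean_props = np.array([0.2, 0.3, 0.5])
observed = {}
for temp, precision in [(15, 20.0), (25, 50.0)]:
    # Dirichlet(precision * mean) has mean `mean_props` at both temperatures
    draws = rng.dirichlet(precision * mean_props, size=1000)
    observed[temp] = draws.mean(axis=0)
```

A Dirichlet regression would instead let the mean vector depend on temperature; a systematic drift of the observed means away from a common vector is exactly the DRI violation the paper tests for.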
Influence Function and Robust Variant of Kernel Canonical Correlation Analysis
Alam, Md. Ashad; Fukumizu, Kenji; Wang, Yu-Ping
2017-01-01
Many unsupervised kernel methods rely on the estimation of the kernel covariance operator (kernel CO) or kernel cross-covariance operator (kernel CCO). Both kernel CO and kernel CCO are sensitive to contaminated data, even when bounded positive definite kernels are used. To the best of our knowledge, there are few well-founded robust kernel methods for statistical unsupervised learning. In addition, while the influence function (IF) of an estimator can characterize its robustness, asymptotic ...
Kernel versions of some orthogonal transformations
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg
Kernel versions of orthogonal transformations such as principal components are based on a dual formulation also termed Q-mode analysis in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products of the original data are replaced...... by inner products between nonlinear mappings into higher dimensional feature space. Via kernel substitution also known as the kernel trick these inner products between the mappings are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of this kernel...... function. This means that we need not know the nonlinear mappings explicitly. Kernel principal component analysis (PCA) and kernel minimum noise fraction (MNF) analyses handle nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function...
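The kernel trick described above can be made concrete with a minimal Gaussian-kernel PCA sketch that works only with inner products in the centered Gram matrix. This is a simplified illustration under assumed parameter names, not the authors' code:

```python
import numpy as np

def kernel_pca(X, n_components, gamma=1.0):
    """Kernel PCA with an RBF kernel: eigendecompose the centered Gram
    matrix; the nonlinear mapping is never formed explicitly."""
    n = X.shape[0]
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    # Double-centering implements centering in feature space
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[idx], vecs[:, idx]
    # Projections of the training points onto the leading principal axes
    return vecs * np.sqrt(np.maximum(vals, 0))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
Z = kernel_pca(X, 2)
```

Kernel MAF and kernel MNF follow the same pattern with a different quadratic form in place of plain variance.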
An Approximate Approach to Automatic Kernel Selection.
Ding, Lizhong; Liao, Shizhong
2016-02-02
Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.
Model Selection in Kernel Ridge Regression
DEFF Research Database (Denmark)
Exterkate, Peter
Kernel ridge regression is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts. This paper investigates the influence of the choice of kernel and the setting of tuning parameters on forecast accuracy. We review several popular kernels......, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. We interpret the latter two kernels in terms of their smoothing properties, and we relate the tuning parameters associated to all these kernels to smoothness measures of the prediction function and to the signal-to-noise ratio. Based...... on these interpretations, we provide guidelines for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study confirms the practical usefulness of these rules of thumb. Finally, the flexible and smooth functional forms provided by the Gaussian and Sinc kernels makes them widely...
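The cross-validation guidance above can be sketched for the Gaussian kernel with a small grid over the kernel scale and the ridge penalty. The grid values and data are hypothetical, chosen only to demonstrate the selection loop:

```python
import numpy as np

def krr_fit_predict(X, y, Xq, sigma, lam):
    """Kernel ridge regression with a Gaussian kernel:
    alpha = (K + lam*I)^{-1} y,  f(x) = sum_i alpha_i k(x, x_i)."""
    def rbf(A, B):
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-d2 / (2 * sigma**2))
    K = rbf(X, X)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return rbf(Xq, X) @ alpha

# Holdout selection over a small grid, in the spirit of the paper's
# rules of thumb (illustrative grid, not the paper's)
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(120, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=120)
Xtr, ytr, Xva, yva = X[:80], y[:80], X[80:], y[80:]

best = min(
    ((sigma, lam) for sigma in (0.3, 1.0, 3.0) for lam in (1e-3, 1e-1)),
    key=lambda p: np.mean((krr_fit_predict(Xtr, ytr, Xva, *p) - yva) ** 2),
)
```

The kernel scale sigma controls smoothness of the prediction function and lam trades data fit against the signal-to-noise ratio, which is exactly the interpretation the paper uses to shrink the search grid.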
Analysis of Advanced Fuel Kernel Technology
International Nuclear Information System (INIS)
Oh, Seung Chul; Jeong, Kyung Chai; Kim, Yeon Ku; Kim, Young Min; Kim, Woong Ki; Lee, Young Woo; Cho, Moon Sung
2010-03-01
The reference fuel for prismatic reactor concepts is based on use of an LEU UCO TRISO fissile particle. This fuel form was selected in the early 1980s for large high-temperature gas-cooled reactor (HTGR) concepts using LEU, and the selection was reconfirmed for modular designs in the mid-1980s. Limited existing irradiation data on LEU UCO TRISO fuel indicate the need for a substantial improvement in performance with regard to in-pile gaseous fission product release. Existing accident testing data on LEU UCO TRISO fuel are extremely limited, but it is generally expected that performance would be similar to that of LEU UO2 TRISO fuel if performance under irradiation were successfully improved. Initial HTGR fuel technology was based on carbide fuel forms. In the early 1980s, as HTGR technology was transitioning from high-enriched uranium (HEU) fuel to LEU fuel, an initial effort focused on LEU prismatic design for large HTGRs resulted in the selection of UCO kernels for the fissile particles and thorium oxide (ThO2) for the fertile particles. The primary reason for selection of the UCO kernel over UO2 was reduced CO pressure, allowing higher burnup for equivalent coating thicknesses and reduced potential for kernel migration, an important failure mechanism in earlier fuels. A subsequent assessment in the mid-1980s considering modular HTGR concepts again reached agreement on UCO for the fissile particle for a prismatic design. In the early 1990s, plant cost-reduction studies led to a decision to change the fertile material from thorium to natural uranium, primarily because of a lower long-term decay heat level for the natural uranium fissile particles. Ongoing economic optimization in combination with anticipated capabilities of the UCO particles resulted in a peak fissile particle burnup projection of 26% FIMA in steam cycle and gas turbine concepts.
Integral equations with contrasting kernels
Directory of Open Access Journals (Sweden)
Theodore Burton
2008-01-01
Full Text Available In this paper we study integral equations of the form $x(t)=a(t)-\int^t_0 C(t,s)x(s)\,ds$ with sharply contrasting kernels typified by $C^*(t,s)=\ln(e+(t-s))$ and $D^*(t,s)=[1+(t-s)]^{-1}$. The kernel assigns a weight to $x(s)$ and these kernels have exactly opposite effects of weighting. Each type is well represented in the literature. Our first project is to show that for $a\in L^2[0,\infty)$, solutions are largely indistinguishable regardless of which kernel is used. This is a surprise and it leads us to study the essential differences. In fact, those differences become large as the magnitude of $a(t)$ increases. The form of the kernel alone projects necessary conditions concerning the magnitude of $a(t)$ which could result in bounded solutions. Thus, the next project is to determine how close we can come to proving that the necessary conditions are also sufficient. The third project is to show that solutions will be bounded for given conditions on $C$ regardless of whether $a$ is chosen large or small; this is important in real-world problems since we would like to have $a(t)$ as the sum of a bounded, but badly behaved function, and a large well behaved function.
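A direct way to see the behaviour studied here is to discretize the equation and compare the two kernels numerically. The sketch below uses a simple rectangle rule on a finite interval; the step size and the choice a(t) = sin t are illustrative, not from the paper:

```python
import numpy as np

def solve_volterra(a, C, T=10.0, n=2000):
    """Solve x(t) = a(t) - int_0^t C(t,s) x(s) ds on [0,T] by stepping
    forward in t with a left-rectangle quadrature rule."""
    t = np.linspace(0, T, n)
    h = t[1] - t[0]
    x = np.empty(n)
    x[0] = a(t[0])
    for i in range(1, n):
        x[i] = a(t[i]) - h * np.sum(C(t[i], t[:i]) * x[:i])
    return t, x

a = lambda t: np.sin(t)
Cstar = lambda t, s: np.log(np.e + (t - s))    # C*(t,s) = ln(e + (t-s))
Dstar = lambda t, s: 1.0 / (1.0 + (t - s))     # D*(t,s) = [1 + (t-s)]^{-1}

t, xC = solve_volterra(a, Cstar)
_, xD = solve_volterra(a, Dstar)
```

For this bounded, oscillatory forcing both solutions remain bounded, consistent with the paper's observation that the contrasting kernels are hard to distinguish until the magnitude of a(t) grows.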
Yan, Yan; Keyes, David E.
2015-01-01
and require the numerical continuation technique applied on regularization parameters. We believe our solution strategy is general and can be applied to other large-scale optimal control problems which involve multiphysics processes and require smooth
Kernel learning algorithms for face recognition
Li, Jun-Bao; Pan, Jeng-Shyang
2013-01-01
Kernel Learning Algorithms for Face Recognition covers the framework of kernel based face recognition. This book discusses the advanced kernel learning algorithms and its application on face recognition. This book also focuses on the theoretical deviation, the system framework and experiments involving kernel based face recognition. Included within are algorithms of kernel based face recognition, and also the feasibility of the kernel based face recognition method. This book provides researchers in the pattern recognition and machine learning areas with advanced face recognition methods and its new
Model selection for Gaussian kernel PCA denoising
DEFF Research Database (Denmark)
Jørgensen, Kasper Winther; Hansen, Lars Kai
2012-01-01
We propose kernel Parallel Analysis (kPA) for automatic kernel scale and model order selection in Gaussian kernel PCA. Parallel Analysis [1] is based on a permutation test for covariance and has previously been applied for model order selection in linear PCA, we here augment the procedure to also...... tune the Gaussian kernel scale of radial basis function based kernel PCA.We evaluate kPA for denoising of simulated data and the US Postal data set of handwritten digits. We find that kPA outperforms other heuristics to choose the model order and kernel scale in terms of signal-to-noise ratio (SNR...
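The Parallel Analysis permutation test that kPA builds on can be sketched for linear PCA as follows; the kernel-scale tuning of kPA itself is not reproduced, and the data and thresholds are illustrative:

```python
import numpy as np

def parallel_analysis_order(X, n_perm=20, seed=0):
    """Model order by Parallel Analysis: keep components whose covariance
    eigenvalues exceed the 95th percentile of eigenvalues from
    column-permuted (decorrelated) surrogate data."""
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)
    ev = np.sort(np.linalg.eigvalsh(np.cov(Xc.T)))[::-1]
    null = np.empty((n_perm, X.shape[1]))
    for p in range(n_perm):
        # Permuting each column independently destroys correlations
        Xp = np.column_stack([rng.permutation(col) for col in Xc.T])
        null[p] = np.sort(np.linalg.eigvalsh(np.cov(Xp.T)))[::-1]
    return int(np.sum(ev > np.percentile(null, 95, axis=0)))

# Two genuine components spread evenly over 8 variables, plus unit noise
rng = np.random.default_rng(1)
G = np.vstack([np.ones(8), np.tile([1.0, -1.0], 4)]) / np.sqrt(8)
X = 4.0 * rng.normal(size=(300, 2)) @ G + rng.normal(size=(300, 8))
order = parallel_analysis_order(X)
```

kPA applies the same permutation logic to the spectrum of the kernel matrix, which additionally yields a data-driven choice of the Gaussian kernel scale.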
RTOS kernel in portable electrocardiograph
Centeno, C. A.; Voos, J. A.; Riva, G. G.; Zerbini, C.; Gonzalez, E. A.
2011-12-01
This paper presents the use of a Real Time Operating System (RTOS) on a portable electrocardiograph based on a microcontroller platform. All medical device digital functions are performed by the microcontroller. The electrocardiograph CPU is based on the 18F4550 microcontroller, in which an uCOS-II RTOS can be embedded. The decision associated with the kernel use is based on its benefits, the license for educational use and its intrinsic time control and peripherals management. The feasibility of its use on the electrocardiograph is evaluated based on the minimum memory requirements due to the kernel structure. The kernel's own tools were used for time estimation and evaluation of resources used by each process. After this feasibility analysis, the migration from cyclic code to a structure based on separate processes or tasks able to synchronize events is used; resulting in an electrocardiograph running on one Central Processing Unit (CPU) based on RTOS.
RTOS kernel in portable electrocardiograph
International Nuclear Information System (INIS)
Centeno, C A; Voos, J A; Riva, G G; Zerbini, C; Gonzalez, E A
2011-01-01
This paper presents the use of a Real Time Operating System (RTOS) on a portable electrocardiograph based on a microcontroller platform. All medical device digital functions are performed by the microcontroller. The electrocardiograph CPU is based on the 18F4550 microcontroller, in which an uCOS-II RTOS can be embedded. The decision associated with the kernel use is based on its benefits, the license for educational use and its intrinsic time control and peripherals management. The feasibility of its use on the electrocardiograph is evaluated based on the minimum memory requirements due to the kernel structure. The kernel's own tools were used for time estimation and evaluation of resources used by each process. After this feasibility analysis, the migration from cyclic code to a structure based on separate processes or tasks able to synchronize events is used; resulting in an electrocardiograph running on one Central Processing Unit (CPU) based on RTOS.
DEFF Research Database (Denmark)
Walder, Christian; Henao, Ricardo; Mørup, Morten
We present three generalisations of Kernel Principal Components Analysis (KPCA) which incorporate knowledge of the class labels of a subset of the data points. The first, MV-KPCA, penalises within class variances similar to Fisher discriminant analysis. The second, LSKPCA is a hybrid of least...... squares regression and kernel PCA. The final LR-KPCA is an iteratively reweighted version of the previous which achieves a sigmoid loss function on the labeled points. We provide a theoretical risk bound as well as illustrative experiments on real and toy data sets....
Model selection in kernel ridge regression
DEFF Research Database (Denmark)
Exterkate, Peter
2013-01-01
Kernel ridge regression is a technique to perform ridge regression with a potentially infinite number of nonlinear transformations of the independent variables as regressors. This method is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts....... The influence of the choice of kernel and the setting of tuning parameters on forecast accuracy is investigated. Several popular kernels are reviewed, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. The latter two kernels are interpreted in terms of their smoothing properties......, and the tuning parameters associated to all these kernels are related to smoothness measures of the prediction function and to the signal-to-noise ratio. Based on these interpretations, guidelines are provided for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study...
Multiple Kernel Learning with Data Augmentation
2016-11-22
JMLR: Workshop and Conference Proceedings 63:49–64, 2016. ACML 2016. Multiple Kernel Learning with Data Augmentation. Khanh Nguyen nkhanh@deakin.edu.au...University, Australia. Editors: Robert J. Durrant and Kee-Eung Kim. Abstract: The motivations of the multiple kernel learning (MKL) approach are to increase kernel expressiveness capacity and to avoid the expensive grid search over a wide spectrum of kernels. A large amount of work has been proposed to
A kernel version of multivariate alteration detection
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg; Vestergaard, Jacob Schack
2013-01-01
Based on the established methods kernel canonical correlation analysis and multivariate alteration detection we introduce a kernel version of multivariate alteration detection. A case study with SPOT HRV data shows that the kMAD variates focus on extreme change observations.
Sun, L.G.; De Visser, C.C.; Chu, Q.P.; Mulder, J.A.
2012-01-01
The optimality of the kernel number and kernel centers plays a significant role in determining the approximation power of nearly all kernel methods. However, the process of choosing optimal kernels is always formulated as a global optimization task, which is hard to accomplish. Recently, an
International Nuclear Information System (INIS)
Burnett, R.C.; Hankart, L.J.; Horsley, G.W.
1965-05-01
The development of methods of producing spheroidal sintered porous kernels of hyperstoichiometric thorium/uranium dicarbide solid solution from thorium/uranium monocarbide/carbon and thoria/urania/carbon powder mixes is described. The work has involved study of (i) methods of preparing green kernels from UC/Th/C powder mixes using the rotary sieve technique; (ii) methods of producing green kernels from UO2/ThO2/C powder mixes using the planetary mill technique; (iii) the conversion by appropriate heat treatment of green kernels produced by both routes to sintered porous kernels of thorium/uranium carbide; (iv) the efficiency of the processes. (author)
Bayesian inferences of the thermal properties of a wall using temperature and heat flux measurements
Iglesias, Marco; Sawlan, Zaid A; Scavino, Marco; Tempone, Raul; Wood, Christopher
2017-01-01
and heat flux over extended time periods. The one-dimensional heat equation with unknown Dirichlet boundary conditions is used to model the heat transfer process through the wall. In Ruggeri et al. (2017), it was assessed the uncertainty about the thermal
Complex use of cottonseed kernels
Energy Technology Data Exchange (ETDEWEB)
Glushenkova, A I
1977-01-01
A review with 41 references is made on the manufacture of oil, protein, and other products from cottonseed, the effects of gossypol on protein yield and quality and technology of gossypol removal. A process eliminating thermal treatment of the kernels and permitting the production of oil, proteins, phytin, gossypol, sugar, sterols, phosphatides, tocopherols, and residual shells and bagasse is described.
Kernel regression with functional response
Ferraty, Frédéric; Laksaci, Ali; Tadj, Amel; Vieu, Philippe
2011-01-01
We consider kernel regression estimate when both the response variable and the explanatory one are functional. The rates of uniform almost complete convergence are stated as function of the small ball probability of the predictor and as function of the entropy of the set on which uniformity is obtained.
GRIM : Leveraging GPUs for Kernel integrity monitoring
Koromilas, Lazaros; Vasiliadis, Giorgos; Athanasopoulos, Ilias; Ioannidis, Sotiris
2016-01-01
Kernel rootkits can exploit an operating system and enable future accessibility and control, despite all recent advances in software protection. A promising defense mechanism against rootkits is Kernel Integrity Monitor (KIM) systems, which inspect the kernel text and data to discover any malicious
Paramecium: An Extensible Object-Based Kernel
van Doorn, L.; Homburg, P.; Tanenbaum, A.S.
1995-01-01
In this paper we describe the design of an extensible kernel, called Paramecium. This kernel uses an object-based software architecture which together with instance naming, late binding and explicit overrides enables easy reconfiguration. Determining which components reside in the kernel protection
Local Observed-Score Kernel Equating
Wiberg, Marie; van der Linden, Wim J.; von Davier, Alina A.
2014-01-01
Three local observed-score kernel equating methods that integrate methods from the local equating and kernel equating frameworks are proposed. The new methods were compared with their earlier counterparts with respect to such measures as bias--as defined by Lord's criterion of equity--and percent relative error. The local kernel item response…
Veto-Consensus Multiple Kernel Learning
Zhou, Y.; Hu, N.; Spanos, C.J.
2016-01-01
We propose Veto-Consensus Multiple Kernel Learning (VCMKL), a novel way of combining multiple kernels such that one class of samples is described by the logical intersection (consensus) of base kernelized decision rules, whereas the other classes by the union (veto) of their complements. The
On Dirichlet-to-Neumann Maps and Some Applications to Modified Fredholm Determinants
Gesztesy, Fritz; Mitrea, Marius; Zinchenko, Maxim
2010-01-01
We consider Dirichlet-to-Neumann maps associated with (not necessarily self-adjoint) Schrödinger operators in $L^2(\Omega; d^n x)$, $n=2,3$, where $\Omega$ is an open set with a compact, nonempty boundary satisfying certain regularity conditions. As an application we describe a reduction of a certain ratio of modified Fredholm perturbation determinants associated with operators in $L^2(\Omega; d^n x)$ to modified Fredholm perturbation determinants associated with operators in $L^2(\partial\Om...
DEFF Research Database (Denmark)
Pedersen, Michael
1991-01-01
The stabilization problems for parabolic and hyperbolic partial differential equations with Dirichlet boundary condition are considered. The systems are stabilized by a boundary feedback in (1) the operator equation, (2) the boundary condition, (3) both the operator equation and the boundary condition...... turns out to be a shortcut to some of the stabilization results of Lasiecka and Triggiani in [J. Differential Equations, 47 (1983), pp. 245-272], [SIAM J. Control Optim., 21 (1983), pp. 766-802], and [Appl. Math. Optim., 8 (1981), pp. 1-37], and it illuminates to some extent how a change of boundary
Sheng, Yin; Zhang, Hao; Zeng, Zhigang
2017-10-01
This paper is concerned with synchronization for a class of reaction-diffusion neural networks with Dirichlet boundary conditions and infinite discrete time-varying delays. By utilizing theories of partial differential equations, Green's formula, inequality techniques, and the concept of comparison, algebraic criteria are presented to guarantee master-slave synchronization of the underlying reaction-diffusion neural networks via a designed controller. Additionally, sufficient conditions on exponential synchronization of reaction-diffusion neural networks with finite time-varying delays are established. The proposed criteria herein enhance and generalize some published ones. Three numerical examples are presented to substantiate the validity and merits of the obtained theoretical results.
Zhang, Wencan; Leong, Siew Mun; Zhao, Feifei; Zhao, Fangju; Yang, Tiankui; Liu, Shaoquan
2018-05-01
With the aim of enhancing the aroma of palm kernel oil (PKO), Viscozyme L, an enzyme complex containing a wide range of carbohydrases, was applied to alter the carbohydrates in palm kernels (PK) to modulate the formation of volatiles upon kernel roasting. After Viscozyme treatment, the content of simple sugars and free amino acids in PK increased by 4.4-fold and 4.5-fold, respectively. After kernel roasting and oil extraction, significantly more 2,5-dimethylfuran, 2-[(methylthio)methyl]-furan, 1-(2-furanyl)-ethanone, 1-(2-furyl)-2-propanone, 5-methyl-2-furancarboxaldehyde and 2-acetyl-5-methylfuran but less 2-furanmethanol and 2-furanmethanol acetate were found in treated PKO; the correlation between their formation and the simple sugar profile was estimated by using partial least squares regression (PLS1). Obvious differences in pyrroles and Strecker aldehydes were also found between the control and treated PKOs. Principal component analysis (PCA) clearly discriminated the treated PKOs from the control PKOs on the basis of all volatile compounds. Such changes in volatiles translated into distinct sensory attributes, whereby treated PKO was more caramelic and burnt after aqueous extraction and more nutty, roasty, caramelic and smoky after solvent extraction.
Wigner functions defined with Laplace transform kernels.
Oh, Se Baek; Petruccelli, Jonathan C; Tian, Lei; Barbastathis, George
2011-10-24
We propose a new Wigner-type phase-space function using Laplace transform kernels, the Laplace kernel Wigner function. Whereas momentum variables are real in the traditional Wigner function, the Laplace kernel Wigner function may have complex momentum variables. Due to the properties of the Laplace transform, a broader range of signals can be represented in complex phase space. We show that the Laplace kernel Wigner function exhibits marginal properties similar to those of the traditional Wigner function. As an example, we use the Laplace kernel Wigner function to analyze evanescent waves supported by surface plasmon polaritons. © 2011 Optical Society of America
PERI - auto-tuning memory-intensive kernels for multicore
International Nuclear Information System (INIS)
Williams, S; Carter, J; Oliker, L; Shalf, J; Yelick, K; Bailey, D; Datta, K
2008-01-01
We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimization, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to sparse matrix-vector multiplication (SpMV), the explicit heat equation PDE on a regular grid (Stencil), and a lattice Boltzmann application (LBMHD). We explore one of the broadest sets of multicore architectures in the high-performance computing literature, including the Intel Xeon Clovertown, AMD Opteron Barcelona, Sun Victoria Falls, and the Sony-Toshiba-IBM (STI) Cell. Rather than hand-tuning each kernel for each system, we develop a code generator for each kernel that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned kernel applications often achieve better than a 4x improvement compared with the original code. Additionally, we analyze a Roofline performance model for each platform to reveal hardware bottlenecks and software challenges for future multicore systems and applications.
PERI - Auto-tuning Memory Intensive Kernels for Multicore
Energy Technology Data Exchange (ETDEWEB)
Bailey, David H; Williams, Samuel; Datta, Kaushik; Carter, Jonathan; Oliker, Leonid; Shalf, John; Yelick, Katherine; Bailey, David H
2008-06-24
We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to Sparse Matrix Vector Multiplication (SpMV), the explicit heat equation PDE on a regular grid (Stencil), and a lattice Boltzmann application (LBMHD). We explore one of the broadest sets of multicore architectures in the HPC literature, including the Intel Xeon Clovertown, AMD Opteron Barcelona, Sun Victoria Falls, and the Sony-Toshiba-IBM (STI) Cell. Rather than hand-tuning each kernel for each system, we develop a code generator for each kernel that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned kernel applications often achieve a better than 4X improvement compared with the original code. Additionally, we analyze a Roofline performance model for each platform to reveal hardware bottlenecks and software challenges for future multicore systems and applications.
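The search-based tuning idea in the two records above can be sketched in miniature: benchmark several semantically equivalent variants of the same computational kernel and keep the fastest one for the current platform. This is a toy illustration with invented variants, not the authors' code generator:

```python
import timeit

# Two semantically equivalent variants of a toy 1-D three-point stencil;
# an auto-tuner benchmarks each variant and keeps the fastest one.
def stencil_naive(u):
    return [u[i - 1] + u[i] + u[i + 1] for i in range(1, len(u) - 1)]

def stencil_unrolled(u):
    out = []
    ap = out.append          # hoist the attribute lookup out of the loop
    prev, cur = u[0], u[1]
    for nxt in u[2:]:
        ap(prev + cur + nxt)
        prev, cur = cur, nxt
    return out

def autotune(variants, data, repeats=3):
    """Return (name, fn) of the fastest variant on this machine."""
    timings = {}
    for name, fn in variants.items():
        timings[name] = min(timeit.repeat(lambda: fn(data),
                                          repeat=repeats, number=10))
    best = min(timings, key=timings.get)
    return best, variants[best]

data = list(range(10_000))
variants = {"naive": stencil_naive, "unrolled": stencil_unrolled}
best_name, best_fn = autotune(variants, data)
print("fastest variant on this platform:", best_name)
```

The winning variant differs across machines, which is exactly why the papers amortize the search with a code generator rather than hand-tuning.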
Credit scoring analysis using kernel discriminant
Widiharih, T.; Mukid, M. A.; Mustafid
2018-05-01
A credit scoring model is an important tool for reducing the risk of wrong decisions when granting credit facilities to applicants. This paper investigates the performance of the kernel discriminant model in assessing customer credit risk. Kernel discriminant analysis is a non-parametric method, which means that it does not require any assumptions about the probability distribution of the input. The main ingredient is a kernel that allows an efficient computation of the Fisher discriminant. We use several kernels, such as the normal, Epanechnikov, biweight, and triweight kernels. The models' accuracies were compared with each other using data from a financial institution in Indonesia. The results show that kernel discriminant analysis can be an alternative method for determining who is eligible for a credit loan. For the data we use, a normal kernel is the relevant choice for credit scoring with the kernel discriminant model. Sensitivity and specificity reach 0.5556 and 0.5488, respectively.
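The nonparametric discriminant rule described in this abstract can be illustrated with a small sketch (our own construction, with invented toy data, not the paper's dataset): each class density is estimated with a product kernel density estimator, here using the Epanechnikov kernel the paper mentions, and an applicant is assigned to the class with the highest prior-weighted density:

```python
import numpy as np

def epanechnikov(u):
    # Epanechnikov kernel, one of the kernels compared in the paper.
    return np.where(np.abs(u) <= 1, 0.75 * (1 - u**2), 0.0)

def kde(x, sample, h):
    # Product-kernel density estimate at x from a (n, d) training sample.
    u = (x - sample) / h
    return np.mean(np.prod(epanechnikov(u), axis=1)) / h**x.size

def classify(x, classes, h=1.0):
    # Assign x to the class maximising prior * kernel density estimate.
    n = sum(len(s) for s in classes.values())
    scores = {c: len(s) / n * kde(x, s, h) for c, s in classes.items()}
    return max(scores, key=scores.get)

rng = np.random.default_rng(0)
good = rng.normal([0, 0], 1.0, size=(200, 2))   # toy "good risk" applicants
bad = rng.normal([3, 3], 1.0, size=(200, 2))    # toy "bad risk" applicants
classes = {"good": good, "bad": bad}
print(classify(np.array([0.2, -0.1]), classes))
```

In practice the bandwidth h and the kernel choice would be selected by comparing sensitivity and specificity on held-out data, as the paper does.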
Testing Infrastructure for Operating System Kernel Development
DEFF Research Database (Denmark)
Walter, Maxwell; Karlsson, Sven
2014-01-01
Testing is an important part of system development, and to test effectively we require knowledge of the internal state of the system under test. Testing an operating system kernel is a challenge, as it is the operating system that typically provides access to this internal state information. Multi-core kernels pose an even greater challenge due to concurrency and their shared kernel state. In this paper, we present a testing framework that addresses these challenges by running the operating system in a virtual machine, and using virtual machine introspection to both communicate with the kernel and obtain information about the system. We have also developed an in-kernel testing API that we can use to develop a suite of unit tests in the kernel. We are using our framework for the development of our own multi-core research kernel.
Kernel parameter dependence in spatial factor analysis
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg
2010-01-01
kernel PCA. Shawe-Taylor and Cristianini [4] is an excellent reference for kernel methods in general. Bishop [5] and Press et al. [6] describe kernel methods among many other subjects. The kernel version of PCA handles nonlinearities by implicitly transforming data into a high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. In this paper we apply a kernel version of maximum autocorrelation factor (MAF) [7, 8] analysis to irregularly sampled stream sediment geochemistry data from South Greenland and illustrate the dependence of the results on the kernel width. The 2,097 samples, each covering on average 5 km², are analyzed chemically for the content of 41 elements.
Migration of the ThO2 kernels under the influence of a temperature gradient
International Nuclear Information System (INIS)
Smith, C.L.
1977-01-01
Biso-coated ThO2 fertile fuel kernels will migrate up the thermal gradients imposed across coated particles during high-temperature gas-cooled reactor (HTGR) operation. Thorium dioxide kernel migration has been studied as a function of temperature (1290 to 1705 °C; 1563 to 1978 K) and ThO2 kernel burnup (0.9 to 5.8 percent FIMA) in out-of-pile postirradiation thermal gradient heating experiments. The studies were conducted to obtain descriptions of migration rates that will be used in core design studies to evaluate the impact of ThO2 migration on fertile fuel performance in an operating HTGR and to define characteristics needed by any comprehensive model describing ThO2 kernel migration. The kinetics data generated in these postirradiation studies are consistent with in-pile data collected by investigators at Oak Ridge National Laboratory, which supports use of the more precise postirradiation heating results in HTGR core design studies. Observations of intergranular carbon deposits on the cool side of migrating kernels support the assumption that the kinetics of kernel migration are controlled by solid-state diffusion within irradiated ThO2 kernels. The migration is characterized by a period of no migration (incubation period), followed by migration at the equilibrium rate for ThO2. The incubation period decreases with increasing temperature and kernel burnup. The improved understanding of the kinetics of ThO2 kernel migration provided by this work will contribute to an optimization of HTGR core design and an increased confidence in fuel performance predictions.
Migration of ThO2 kernels under the influence of a temperature gradient
International Nuclear Information System (INIS)
Smith, C.L.
1976-11-01
BISO-coated ThO2 fertile fuel kernels will migrate up the thermal gradients imposed across coated particles during HTGR operation. Thorium dioxide kernel migration has been studied as a function of temperature (1300 to 1700 °C) and ThO2 kernel burnup (0.9 to 5.8 percent FIMA) in out-of-pile, postirradiation thermal gradient heating experiments. The studies were conducted to obtain descriptions of migration rates that will be used in core design studies to evaluate the impact of ThO2 migration on fertile fuel performance in an operating HTGR and to define characteristics needed by any comprehensive model describing ThO2 kernel migration. The kinetics data generated in these postirradiation studies are consistent with in-pile data collected by investigators at Oak Ridge National Laboratory, which supports use of the more precise postirradiation heating results in HTGR core design studies. Observations of intergranular carbon deposits on the cool side of migrating kernels support the assumption that the kinetics of kernel migration are controlled by solid-state diffusion within irradiated ThO2 kernels. The migration is characterized by a period of no migration (incubation period) followed by migration at the equilibrium rate for ThO2. The incubation period decreases with increasing temperature and kernel burnup. The improved understanding of the kinetics of ThO2 kernel migration provided by this work will contribute to an optimization of HTGR core design and an increased confidence in fuel performance predictions.
An Inverse Eigenvalue Problem for a Vibrating String with Two Dirichlet Spectra
Rundell, William
2013-04-23
A classical inverse problem is "can you hear the density of a string clamped at both ends?" The mathematical model gives rise to an inverse Sturm-Liouville problem for the unknown density ρ, and it is well known that the answer is negative: the Dirichlet spectrum from the clamped end-point conditions is insufficient. There are many known ways to add additional information to gain a positive answer, and these include changing one of the boundary conditions and recomputing the spectrum, or giving the energy in each eigenmode, the so-called norming constants. We make the assumption that neither of these changes is possible. Instead we will add known mass densities to the string in a way we can prescribe and remeasure the Dirichlet spectrum. We will not be able to answer the uniqueness question in its most general form, but will give some insight into what "added masses" should be chosen and how this can lead to a reconstruction of the original string density. © 2013 Society for Industrial and Applied Mathematics.
Validation of Born Traveltime Kernels
Baig, A. M.; Dahlen, F. A.; Hung, S.
2001-12-01
Most inversions for Earth structure using seismic traveltimes rely on linear ray theory to translate observed traveltime anomalies into seismic velocity anomalies distributed throughout the mantle. However, ray theory is not an appropriate tool to use when velocity anomalies have scale lengths less than the width of the Fresnel zone. In the presence of these structures, we need to turn to a scattering theory in order to adequately describe all of the features observed in the waveform. By coupling the Born approximation to ray theory, the first-order dependence of the cross-correlated traveltimes on heterogeneity (described by the Fréchet derivative or, more colourfully, the banana-doughnut kernel) may be determined. To determine for what range of parameters these banana-doughnut kernels outperform linear ray theory, we generate several random media specified by their statistical properties, namely the RMS slowness perturbation and the scale length of the heterogeneity. Acoustic waves are numerically generated from a point source using a 3-D pseudo-spectral wave propagation code. These waves are then recorded at a variety of propagation distances from the source, introducing a third parameter to the problem: the number of wavelengths traversed by the wave. When all of the heterogeneity has scale lengths larger than the width of the Fresnel zone, ray theory does as good a job at predicting the cross-correlated traveltime as the banana-doughnut kernels do. Below this limit, wavefront healing becomes a significant effect and ray theory ceases to be effective even though the kernels remain relatively accurate provided the heterogeneity is weak. The study of wave propagation in random media is of more general interest, and we will also show how our measurements of the velocity shift and the variance of traveltime compare with various theoretical predictions in a given regime.
RKRD: Runtime Kernel Rootkit Detection
Grover, Satyajit; Khosravi, Hormuzd; Kolar, Divya; Moffat, Samuel; Kounavis, Michael E.
In this paper we address the problem of protecting computer systems against stealth malware. The problem is important because the number of known types of stealth malware increases exponentially. Existing approaches have some advantages for ensuring system integrity but sophisticated techniques utilized by stealthy malware can thwart them. We propose Runtime Kernel Rootkit Detection (RKRD), a hardware-based, event-driven, secure and inclusionary approach to kernel integrity that addresses some of the limitations of the state of the art. Our solution is based on the principles of using virtualization hardware for isolation, verifying signatures coming from trusted code as opposed to malware for scalability and performing system checks driven by events. Our RKRD implementation is guided by our goals of strong isolation, no modifications to target guest OS kernels, easy deployment, minimal infrastructure impact, and minimal performance overhead. We developed a system prototype and conducted a number of experiments which show that the performance impact of our solution is negligible.
Kernel Bayesian ART and ARTMAP.
Masuyama, Naoki; Loo, Chu Kiong; Dawood, Farhan
2018-02-01
Adaptive Resonance Theory (ART) is one of the successful approaches to resolving "the plasticity-stability dilemma" in neural networks, and its supervised learning model, ARTMAP, is a powerful tool for classification. Among several improvements, such as Fuzzy- or Gaussian-based models, the state-of-the-art model is the Bayesian-based one, which resolves the drawbacks of the others. However, it is known that the Bayesian approach incurs high computational cost for high-dimensional data and large datasets, and that the covariance matrix in the likelihood becomes unstable. This paper introduces Kernel Bayesian ART (KBA) and ARTMAP (KBAM) by integrating Kernel Bayes' Rule (KBR) and the Correntropy Induced Metric (CIM) into Bayesian ART (BA) and ARTMAP (BAM), respectively, while maintaining the properties of BA and BAM. The kernel frameworks in KBA and KBAM are able to avoid the curse of dimensionality. In addition, the covariance-free Bayesian computation by KBR provides efficient and stable computational capability to KBA and KBAM. Furthermore, the correntropy-based similarity measurement improves the noise reduction ability even in high-dimensional space. The simulation experiments show that KBA exhibits a superior self-organizing capability compared to BA, and that KBAM provides superior classification ability compared to BAM. Copyright © 2017 Elsevier Ltd. All rights reserved.
Energy Technology Data Exchange (ETDEWEB)
Jamet, P [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires
1967-07-01
This report gives a general presentation of barrier theory for finite difference operators, with its applications to certain classes of linear elliptic boundary value problems of Dirichlet type. (author)
Theory of reproducing kernels and applications
Saitoh, Saburou
2016-01-01
This book provides a large extension of the general theory of reproducing kernels published by N. Aronszajn in 1950, with many concrete applications. In Chapter 1, many concrete reproducing kernels are first introduced with detailed information. Chapter 2 presents a general and global theory of reproducing kernels with basic applications in a self-contained way. Many fundamental operations among reproducing kernel Hilbert spaces are dealt with. Chapter 2 is the heart of this book. Chapter 3 is devoted to the Tikhonov regularization using the theory of reproducing kernels with applications to numerical and practical solutions of bounded linear operator equations. In Chapter 4, the numerical real inversion formulas of the Laplace transform are presented by applying the Tikhonov regularization, where the reproducing kernels play a key role in the results. Chapter 5 deals with ordinary differential equations; Chapter 6 includes many concrete results for various fundamental partial differential equations. In Chapt...
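The Tikhonov regularization theme of Chapter 3 can be sketched in its most common concrete form, kernel ridge regression in a reproducing kernel Hilbert space. This is our own toy example (data, kernel width, and regularization parameter are all invented), not material from the book:

```python
import numpy as np

def gauss_kernel(a, b, sigma=0.2):
    # Gaussian reproducing kernel k(a, b) = exp(-(a-b)^2 / (2 sigma^2)).
    return np.exp(-(a[:, None] - b[None, :])**2 / (2 * sigma**2))

def rkhs_tikhonov_fit(x, y, lam=1e-6, sigma=0.2):
    # By the representer theorem the Tikhonov minimiser is a finite
    # kernel expansion f = sum_i c_i k(., x_i), with coefficients
    # solving the regularized linear system (K + lam I) c = y.
    K = gauss_kernel(x, x, sigma)
    c = np.linalg.solve(K + lam * np.eye(len(x)), y)
    return lambda t: gauss_kernel(t, x, sigma) @ c

x = np.linspace(0.0, 1.0, 30)
y = np.sin(2 * np.pi * x)          # smooth target sampled at 30 nodes
f = rkhs_tikhonov_fit(x, y)
t = np.linspace(0.05, 0.95, 50)    # evaluation points inside the node range
```

The regularization parameter lam trades data fit against RKHS norm; the book develops this machinery in far greater generality for bounded linear operator equations.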
On the 2m-th power mean of Dirichlet L-functions with the ...
Indian Academy of Sciences (India)
The main purpose is to use the analytic method to study the 2m-th power mean of Dirichlet L-functions with the weight of the general trigonometric sums and give ... School of Science, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, People's Republic of China; The School of Electronic and Information Engineering, Xi'an ...
B.J. Meulenbroek (Bernard); U. M. Ebert (Ute); L. Schäfer
2005-01-01
The dynamics of ionization fronts that generate a conducting body are, in the simplest approximation, equivalent to viscous fingering without regularization. Going beyond this approximation, we suggest that ionization fronts can be modeled by a mixed Dirichlet-Neumann boundary condition. We ...
Meulenbroek, B.; Ebert, U.; Schäfer, L.
2005-01-01
The dynamics of ionization fronts that generate a conducting body are, in the simplest approximation, equivalent to viscous fingering without regularization. Going beyond this approximation, we suggest that ionization fronts can be modeled by a mixed Dirichlet-Neumann boundary condition. We derive ...
International Nuclear Information System (INIS)
Lu Junguo
2008-01-01
In this paper, the global exponential stability and periodicity of a class of reaction-diffusion delayed recurrent neural networks with Dirichlet boundary conditions are addressed by constructing suitable Lyapunov functionals and utilizing some inequality techniques. We first prove that the difference between any two solutions of the original reaction-diffusion delayed recurrent neural networks with Dirichlet boundary conditions converges exponentially to 0; the existence and uniqueness of the equilibrium are direct results of this procedure. This approach differs from the usual one, in which the existence and uniqueness of the equilibrium and its stability are proved in two separate steps. Furthermore, we prove the periodicity of the reaction-diffusion delayed recurrent neural networks with Dirichlet boundary conditions. Sufficient conditions ensuring the global exponential stability and the existence of periodic oscillatory solutions are given. These conditions are easy to check and are of leading significance in the design and application of reaction-diffusion recurrent neural networks with delays. Finally, two numerical examples are given to show the effectiveness of the obtained results.
Energy Technology Data Exchange (ETDEWEB)
Kaikina, Elena I., E-mail: ekaikina@matmor.unam.mx [Centro de Ciencias Matemáticas, UNAM Campus Morelia, AP 61-3 (Xangari), Morelia CP 58089, Michoacán (Mexico)
2013-11-15
We consider the inhomogeneous Dirichlet initial-boundary value problem for the nonlinear Schrödinger equation, formulated on a half-line. We study traditionally important problems of the theory of nonlinear partial differential equations, such as global in time existence of solutions to the initial-boundary value problem and the asymptotic behavior of solutions for large time.
International Nuclear Information System (INIS)
Kaikina, Elena I.
2013-01-01
We consider the inhomogeneous Dirichlet initial-boundary value problem for the nonlinear Schrödinger equation, formulated on a half-line. We study traditionally important problems of the theory of nonlinear partial differential equations, such as global in time existence of solutions to the initial-boundary value problem and the asymptotic behavior of solutions for large time
Directory of Open Access Journals (Sweden)
Khaleghi Moghadam Mohsen
2017-08-01
Triple solutions are obtained for a discrete problem involving a nonlinearly perturbed one-dimensional p(k)-Laplacian operator and satisfying Dirichlet boundary conditions. The methods for existence rely on a Ricceri local minimum theorem for differentiable functionals. Several examples are included to illustrate the main results.
Directory of Open Access Journals (Sweden)
Tomasz S. Zabawa
2005-01-01
The Dirichlet problem for an infinite weakly coupled system of semilinear differential-functional equations of elliptic type is considered. The existence of solutions to this problem is shown. The result is based on Chaplygin's method of lower and upper functions.
Convergence of barycentric coordinates to barycentric kernels
Kosinka, Jiří
2016-02-12
We investigate the close correspondence between barycentric coordinates and barycentric kernels from the point of view of the limit process when finer and finer polygons converge to a smooth convex domain. We show that any barycentric kernel is the limit of a set of barycentric coordinates and prove that the convergence rate is quadratic. Our convergence analysis extends naturally to barycentric interpolants and mappings induced by barycentric coordinates and kernels. We verify our theoretical convergence results numerically on several examples.
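The barycentric-coordinate side of the correspondence studied above can be illustrated with a small sketch (our own construction, not the paper's): mean value coordinates on a polygon sum to one, are positive inside a convex polygon, and reproduce linear functions exactly, the properties that persist in the limit as barycentric kernels on the smooth domain.

```python
import numpy as np

def mean_value_coords(poly, x):
    """Mean value coordinates of a point x strictly inside polygon poly ((n,2), CCW)."""
    d = poly - x                          # vectors from x to the vertices
    r = np.linalg.norm(d, axis=1)
    n = len(poly)
    t = np.zeros(n)                       # t[i] = tan(angle_i / 2) at x
    for i in range(n):
        j = (i + 1) % n
        cross = d[i, 0] * d[j, 1] - d[i, 1] * d[j, 0]
        # tan(a/2) = sin(a) / (1 + cos(a)), written via cross and dot products.
        t[i] = cross / (r[i] * r[j] + d[i] @ d[j])
    w = np.array([(t[i - 1] + t[i]) / r[i] for i in range(n)])
    return w / w.sum()

# Hexagon inscribed in the unit circle; refining such polygons toward the
# circle is the limit process the paper analyzes.
poly = np.array([[np.cos(a), np.sin(a)] for a in 2 * np.pi * np.arange(6) / 6])
x = np.array([0.2, 0.1])
lam = mean_value_coords(poly, x)
```

Linear precision (lam @ poly == x) is the invariant that survives the polygon-to-domain limit in the paper's convergence analysis.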
Kernel principal component analysis for change detection
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg; Morton, J.C.
2008-01-01
region acquired at two different time points. If change over time does not dominate the scene, the projection of the original two bands onto the second eigenvector will show change over time. In this paper a kernel version of PCA is used to carry out the analysis. Unlike ordinary PCA, kernel PCA with a Gaussian kernel successfully finds the change observations in a case where nonlinearities are introduced artificially.
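The Gaussian-kernel PCA used in the record above can be sketched in a few lines of numpy (our own minimal version with synthetic data, not the paper's imagery): build the kernel matrix, centre it in feature space, and project onto the leading eigenvectors. A ring-versus-cluster toy stands in for nonlinear "change" structure that linear PCA cannot separate.

```python
import numpy as np

def rbf_kernel_matrix(X, sigma):
    # Gaussian (RBF) kernel matrix K_ij = exp(-||x_i - x_j||^2 / (2 sigma^2)).
    d2 = np.sum((X[:, None, :] - X[None, :, :])**2, axis=-1)
    return np.exp(-d2 / (2 * sigma**2))

def kernel_pca(X, sigma, n_components=2):
    n = X.shape[0]
    K = rbf_kernel_matrix(X, sigma)
    J = np.eye(n) - np.ones((n, n)) / n        # centring matrix
    Kc = J @ K @ J                             # centre in feature space
    vals, vecs = np.linalg.eigh(Kc)
    order = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[order], vecs[:, order]
    # Project the data onto the leading kernel principal axes.
    return Kc @ (vecs / np.sqrt(np.maximum(vals, 1e-12)))

rng = np.random.default_rng(1)
inner = rng.normal(0.0, 0.3, size=(50, 2))              # toy "no change" cluster
angle = rng.uniform(0, 2 * np.pi, 50)
outer = np.c_[3 * np.cos(angle), 3 * np.sin(angle)]     # toy "change" ring
X = np.vstack([inner, outer])
Z = kernel_pca(X, sigma=1.0, n_components=1)
```

The first kernel principal component separates the two groups, which no linear projection of this geometry can do; the kernel width sigma plays the same tuning role the records above discuss.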
The Hardy inequality and the heat flow in curved wedges
Czech Academy of Sciences Publication Activity Database
Krejčiřík, David
2016-01-01
Roč. 73, č. 2 (2016), s. 91-113 ISSN 0032-5155 R&D Projects: GA ČR(CZ) GA14-06818S Institutional support: RVO:61389005 Keywords : Hardy inequality * heat equation * large-time behaviour * curved wedges * Dirichlet Laplacian * conical singularities * Brownian motion * subcriticality Subject RIV: BE - Theoretical Physics Impact factor: 0.735, year: 2016
Partial Deconvolution with Inaccurate Blur Kernel.
Ren, Dongwei; Zuo, Wangmeng; Zhang, David; Xu, Jun; Zhang, Lei
2017-10-17
Most non-blind deconvolution methods are developed under the error-free kernel assumption, and are not robust to inaccurate blur kernel. Unfortunately, despite the great progress in blind deconvolution, estimation error remains inevitable during blur kernel estimation. Consequently, severe artifacts such as ringing effects and distortions are likely to be introduced in the non-blind deconvolution stage. In this paper, we tackle this issue by suggesting: (i) a partial map in the Fourier domain for modeling kernel estimation error, and (ii) a partial deconvolution model for robust deblurring with inaccurate blur kernel. The partial map is constructed by detecting the reliable Fourier entries of estimated blur kernel. And partial deconvolution is applied to wavelet-based and learning-based models to suppress the adverse effect of kernel estimation error. Furthermore, an E-M algorithm is developed for estimating the partial map and recovering the latent sharp image alternatively. Experimental results show that our partial deconvolution model is effective in relieving artifacts caused by inaccurate blur kernel, and can achieve favorable deblurring quality on synthetic and real blurry images.
Investigating brand loyalty using Dirichlet benchmarks: The case of light dairy products
DEFF Research Database (Denmark)
Krystallis, Athanasios; Chrysochou, Polymeros
constitutes an indication of this success. The present work aims to investigate consumer loyalty to light dairy (milk and yoghurt) brands. First, basic Brand Performance Measures (BPMs) are empirically estimated to describe the market structure of the dairy categories under investigation. Then, the Dirichlet model (Ehrenberg et al., 2004) is fitted to the empirical data, pointing to theoretical category loyalty measures. Grouping of the dairy categories under investigation according to their purchase frequency and brand penetration then follows. The work concludes with the overall estimation of consumer loyalty to the light dairy sub-category compared to other sub-categories that exist within the wider dairy categories under investigation. The total market share of light brands is found to be directly comparable with that of full-fat brands. The importance of the light sub-category is indicated...
Directory of Open Access Journals (Sweden)
Marco Biroli
2007-12-01
We consider a measure-valued map α(u) defined on D, where D is a subspace of L^p(X,m), with X a locally compact Hausdorff topological space equipped with a distance under which it is a space of homogeneous type. Under assumptions of convexity, Gateaux differentiability, and other assumptions on α which generalize the properties of the energy measure of a Dirichlet form, we prove the Hölder continuity of the local solution u of the problem ∫_X µ(u,v)(dx) = 0 for each v belonging to a suitable space of test functions, where µ(u,v) = ⟨α'(u), v⟩.
A three-dimensional Dirichlet-to-Neumann operator for water waves over topography
Andrade, D.; Nachbin, A.
2018-06-01
Surface water waves are considered propagating over highly variable, non-smooth topographies. For this three-dimensional problem a Dirichlet-to-Neumann (DtN) operator is constructed, reducing the numerical modeling and evolution to the two-dimensional free surface. The corresponding Fourier-type operator is defined through a matrix decomposition. The topographic component of the decomposition requires special care, and a Galerkin method is provided accordingly. One-dimensional numerical simulations, along the free surface, validate the DtN formulation in the presence of a large-amplitude, rapidly varying topography. An alternative, conformal-mapping-based method is used for benchmarking. A two-dimensional simulation in the presence of a Luneburg lens (a particular submerged mound) illustrates the accurate performance of the three-dimensional DtN operator.
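The Fourier-type structure of the DtN operator is easiest to see in the flat-bottom special case, where the operator reduces to the classical multiplier k ↦ |k| tanh(h|k|) and can be applied by FFT. This sketch is ours and does not reproduce the paper's topographic Galerkin construction:

```python
import numpy as np

def dtn_flat(phi, L, h):
    # Flat-bottom water-wave DtN operator applied via FFT on a periodic
    # domain of length L and depth h: Fourier multiplier |k| tanh(h |k|).
    n = phi.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers
    symbol = np.abs(k) * np.tanh(h * np.abs(k))
    return np.real(np.fft.ifft(symbol * np.fft.fft(phi)))

L, h, n = 2 * np.pi, 1.0, 128
x = np.linspace(0, L, n, endpoint=False)
phi = np.cos(3 * x)                      # single Fourier mode, k = 3
out = dtn_flat(phi, L, h)
# Analytically, G[cos(3x)] = 3 tanh(3h) cos(3x) for a flat bottom.
expected = 3 * np.tanh(3 * h) * phi
```

For variable topography the symbol is no longer diagonal in Fourier space, which is exactly why the paper's matrix decomposition and Galerkin treatment of the topographic component are needed.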
Tangent unit-vector fields: Nonabelian homotopy invariants and the Dirichlet energy
Majumdar, Apala
2009-10-01
Let O be a closed geodesic polygon in S². Maps from O into S² are said to satisfy tangent boundary conditions if the edges of O are mapped into the geodesics which contain them. Taking O to be an octant of S², we evaluate the infimum Dirichlet energy, E(H), for continuous tangent maps of arbitrary homotopy type H. The expression for E(H) involves a topological invariant, the spelling length, associated with the (nonabelian) fundamental group of the n-times punctured two-sphere, π₁(S² - {s₁, ..., sₙ}, *). These results have applications for the theoretical modelling of nematic liquid crystal devices. To cite this article: A. Majumdar et al., C. R. Acad. Sci. Paris, Ser. I 347 (2009). © 2009 Académie des sciences.
Imitation learning of Non-Linear Point-to-Point Robot Motions using Dirichlet Processes
DEFF Research Database (Denmark)
Krüger, Volker; Tikhanoff, Vadim; Natale, Lorenzo
2012-01-01
In this paper we discuss the use of the infinite Gaussian mixture model and Dirichlet processes for learning robot movements from demonstrations. The starting point of this work is an earlier paper where the authors learn a non-linear dynamic robot movement model from a small number of observations. The model in that work is learned using a classical finite Gaussian mixture model (FGMM) where the Gaussian mixtures are appropriately constrained. The problem with this approach is that one needs to make a good guess for how many mixtures the FGMM should use. In this work, we generalize this approach to an infinite Gaussian mixture model using Dirichlet processes. We evaluate our algorithm on the same data that was used in [5], where the authors use motion capture devices to record the demonstrations. As further validation we test our approach on novel data acquired on our iCub in a different demonstration scenario in which the robot is physically driven by the human.
Directory of Open Access Journals (Sweden)
I. C. Ramos
2015-10-01
We present the adaptation to non-free boundary conditions of a pseudospectral method based on the (complex) Fourier transform. The method is applied to the numerical integration of the Oberbeck-Boussinesq equations in a Rayleigh-Bénard cell with no-slip boundary conditions for velocity and Dirichlet boundary conditions for temperature. We show the first results of a 2D numerical simulation of dry air convection at high Rayleigh number. These results are the basis for a later study, by the same method, of wet convection in a solar still. Received: 20 November 2014, Accepted: 15 September 2015; Edited by: C. A. Condat, G. J. Sibona; DOI: http://dx.doi.org/10.4279/PIP.070015 Cite as: I C Ramos, C B Briozzo, Papers in Physics 7, 070015 (2015).
Latent Dirichlet Allocation (LDA) Model and kNN Algorithm to Classify Research Project Selection
Safi’ie, M. A.; Utami, E.; Fatta, H. A.
2018-03-01
Universitas Sebelas Maret has a teaching staff of more than 1,500 people, one of whose tasks is to carry out research. On the other hand, the funding support for research and community service (P2M) is limited, so proposal submissions need to be evaluated for selection. At the selection stage, research proposal documents are collected as unstructured data, and the volume of stored data is very large. Extracting the information contained in these documents requires text mining technology, which gains knowledge from the documents by automating information extraction. In this article we apply Latent Dirichlet Allocation (LDA) to the documents as a model in the feature extraction process, to obtain terms that represent each document. We then use the k-Nearest Neighbour (kNN) algorithm to classify the documents based on these terms.
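The LDA-then-kNN pipeline described above can be sketched end to end on a toy corpus (our own compact collapsed Gibbs sampler and invented two-class vocabulary, standing in for the proposal documents):

```python
import numpy as np

def lda_gibbs(docs, n_topics, n_vocab, iters=100, alpha=0.1, beta=0.01, seed=0):
    """Collapsed Gibbs sampling for LDA; returns per-document topic proportions."""
    rng = np.random.default_rng(seed)
    z = [rng.integers(n_topics, size=len(doc)) for doc in docs]
    ndk = np.zeros((len(docs), n_topics))   # document-topic counts
    nkw = np.zeros((n_topics, n_vocab))     # topic-word counts
    nk = np.zeros(n_topics)                 # topic totals
    for d, (doc, zd) in enumerate(zip(docs, z)):
        for w, t in zip(doc, zd):
            ndk[d, t] += 1; nkw[t, w] += 1; nk[t] += 1
    for _ in range(iters):
        for d, (doc, zd) in enumerate(zip(docs, z)):
            for i, w in enumerate(doc):
                t = zd[i]
                ndk[d, t] -= 1; nkw[t, w] -= 1; nk[t] -= 1
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + n_vocab * beta)
                t = rng.choice(n_topics, p=p / p.sum())
                zd[i] = t
                ndk[d, t] += 1; nkw[t, w] += 1; nk[t] += 1
    theta = ndk + alpha
    return theta / theta.sum(axis=1, keepdims=True)

def knn_predict(theta, labels, query, k=3):
    """Majority vote among the k nearest documents in topic space."""
    nearest = np.argsort(np.linalg.norm(theta - query, axis=1))[:k]
    votes = [labels[i] for i in nearest]
    return max(set(votes), key=votes.count)

# Toy corpus: words 0-4 form one topical vocabulary, words 5-9 another.
rng = np.random.default_rng(1)
docs = [list(rng.integers(0, 5, 30)) for _ in range(6)] + \
       [list(rng.integers(5, 10, 30)) for _ in range(6)]
labels = [0] * 6 + [1] * 6
theta = lda_gibbs(docs, n_topics=2, n_vocab=10)
preds = [knn_predict(theta, labels, theta[d]) for d in range(12)]
```

Each document is reduced to its topic-proportion vector theta, and kNN classifies in that low-dimensional topic space rather than over the raw term counts.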
Energy Technology Data Exchange (ETDEWEB)
Andrzejewski, D; Zhu, X; Craven, M; Recht, B
2011-01-18
Topic models have been used successfully for a variety of problems, often in the form of application-specific extensions of the basic Latent Dirichlet Allocation (LDA) model. Because deriving these new models in order to encode domain knowledge can be difficult and time-consuming, we propose the Fold-all model, which allows the user to specify general domain knowledge in First-Order Logic (FOL). However, combining topic modeling with FOL can result in inference problems beyond the capabilities of existing techniques. We have therefore developed a scalable inference technique using stochastic gradient descent which may also be useful to the Markov Logic Network (MLN) research community. Experiments demonstrate the expressive power of Fold-all, as well as the scalability of our proposed inference method.
Predictive Distribution of the Dirichlet Mixture Model by the Local Variational Inference Method
DEFF Research Database (Denmark)
Ma, Zhanyu; Leijon, Arne; Tan, Zheng-Hua
2014-01-01
the predictive likelihood of the new upcoming data, especially when the amount of training data is small. The Bayesian estimation of a Dirichlet mixture model (DMM) is, in general, not analytically tractable. In our previous work, we have proposed a global variational inference-based method for approximately...... calculating the posterior distributions of the parameters in the DMM analytically. In this paper, we extend our previous study for the DMM and propose an algorithm to calculate the predictive distribution of the DMM with the local variational inference (LVI) method. The true predictive distribution of the DMM...... is analytically intractable. By considering the concave property of the multivariate inverse beta function, we introduce an upper-bound to the true predictive distribution. As the global minimum of this upper-bound exists, the problem is reduced to seek an approximation to the true predictive distribution...
Filippi, Sarah; Holmes, Chris C; Nieto-Barajas, Luis E
2016-11-16
In this article we propose novel Bayesian nonparametric methods using Dirichlet Process Mixture (DPM) models for detecting pairwise dependence between random variables while accounting for uncertainty in the form of the underlying distributions. A key criterion is that the procedures should scale to large data sets. In this regard we find that the formal calculation of the Bayes factor for a dependent-vs.-independent DPM joint probability measure is not computationally feasible. To address this we present Bayesian diagnostic measures for characterising evidence against a "null model" of pairwise independence. In simulation studies, as well as in a real data analysis, we show that our approach provides a useful tool for the exploratory nonparametric Bayesian analysis of large multivariate data sets.
Dynamic classification of fetal heart rates by hierarchical Dirichlet process mixture models.
Directory of Open Access Journals (Sweden)
Kezi Yu
Full Text Available In this paper, we propose an application of non-parametric Bayesian (NPB) models for classification of fetal heart rate (FHR) recordings. More specifically, we propose models that are used to differentiate between FHR recordings from fetuses with and without adverse outcomes. In our work, we rely on models based on hierarchical Dirichlet processes (HDP) and the Chinese restaurant process with finite capacity (CRFC). Two mixture models were inferred from real recordings, one representing healthy and the other non-healthy fetuses. The models were then used to classify new recordings and provide the probability of the fetus being healthy. First, we compared the classification performance of the HDP models with that of support vector machines on real data and concluded that the HDP models achieved better performance. Then we demonstrated the use of mixture models based on CRFC for dynamic classification of FHR recordings in a real-time setting.
Ding, Xiao-Li; Nieto, Juan J.
2017-11-01
In this paper, we consider the analytical solutions of coupled fractional partial differential equations (FPDEs) with Dirichlet boundary conditions on a finite domain. Firstly, the method of successive approximations is used to obtain the analytical solutions of coupled multi-term time-fractional ordinary differential equations. Then, the technique of spectral representation of the fractional Laplacian operator is used to convert the coupled FPDEs into coupled multi-term time-fractional ordinary differential equations. By applying the obtained analytical solutions to the resulting multi-term time-fractional ordinary differential equations, the desired analytical solutions of the coupled FPDEs are given. Our results are applied to derive the analytical solutions of some special cases to demonstrate their applicability.
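The spectral representation of the fractional Laplacian mentioned above can be illustrated numerically. This is a sketch under simple assumptions, not the authors' method: on (0, L) with Dirichlet boundary conditions the Laplacian has eigenpairs sin(nπx/L), (nπ/L)², so a fractional power acts by scaling sine coefficients by λ_n^α. The test function and truncation level are arbitrary choices.

```python
# Spectral action of (-Laplacian)^alpha on (0, L) with Dirichlet boundary
# conditions: expand f in the sine eigenbasis, scale coefficients by
# lambda_n^alpha, and resum. Illustrative sketch only.
import numpy as np

L, N, alpha = 1.0, 200, 0.5
x = np.linspace(0.0, L, 401)
dx = x[1] - x[0]
f = x * (L - x)                           # test function, vanishes at boundary

n = np.arange(1, N + 1)
phi = np.sqrt(2.0 / L) * np.sin(np.outer(n, np.pi * x / L))  # eigenfunctions
c = phi @ f * dx                          # sine coefficients (simple quadrature)
lam = (n * np.pi / L) ** 2                # Dirichlet eigenvalues

frac_lap_f = (lam ** alpha * c) @ phi     # (-Laplacian)^alpha f on the grid
```

Setting alpha = 1 recovers the ordinary Laplacian, which for f = x(L - x) gives -f'' = 2 in the interior, a convenient sanity check on the construction.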
On the Shape Sensitivity of the First Dirichlet Eigenvalue for Two-Phase Problems
International Nuclear Information System (INIS)
Dambrine, M.; Kateb, D.
2011-01-01
We consider a two-phase problem in thermal conductivity: inclusions filled with a material of conductivity σ_1 are layered in a body of conductivity σ_2. We address the shape sensitivity of the first eigenvalue associated with Dirichlet boundary conditions when both the boundaries of the inclusions and the body can be modified. We prove a differentiability result and provide the expressions of the first and second order derivatives. We apply the results to the optimal design of an insulated body. We prove the stability of the optimal design thanks to a second order analysis. We also continue the study of an extremal eigenvalue problem for a two-phase conductor in a ball initiated by Conca et al. (Appl. Math. Optim. 60(2):173-184, 2009) and pursued in Conca et al. (CANUM 2008, ESAIM Proc., vol. 27, pp. 311-321, EDP Sci., Les Ulis, 2009).
Process for producing metal oxide kernels and kernels so obtained
International Nuclear Information System (INIS)
Lelievre, Bernard; Feugier, Andre.
1974-01-01
The process described is for producing fissile or fertile metal oxide kernels used in the fabrication of fuels for high temperature nuclear reactors. This process consists in adding to an aqueous solution of at least one metallic salt, particularly actinide nitrates, at least one chemical compound capable of releasing ammonia, and in dispersing drop by drop the solution thus obtained into a hot organic phase to gel the drops and transform them into solid particles. These particles are then washed, dried and treated to turn them into oxide kernels. The organic phase used for the gel reaction is formed of a mixture of two organic liquids, one acting as solvent and the other being a product capable of extracting the anions from the metallic salt of the drop at the time of gelling. Preferably an amine is used as the extracting product. Additionally, an alcohol that causes a partial dehydration of the drops can be employed as solvent, thus helping to increase the resistance of the particles. [fr]
Directory of Open Access Journals (Sweden)
Daniel Ting
2010-04-01
Full Text Available Distributions of the backbone dihedral angles of proteins have been studied for over 40 years. While many statistical analyses have been presented, only a handful of probability densities are publicly available for use in structure validation and structure prediction methods. The available distributions differ in a number of important ways, which determine their usefulness for various purposes. These include: (1) input data size and criteria for structure inclusion (resolution, R-factor, etc.); (2) filtering of suspect conformations and outliers using B-factors or other features; (3) secondary structure of input data (e.g., whether helix and sheet are included; whether beta turns are included); (4) the method used for determining probability densities, ranging from simple histograms to modern nonparametric density estimation; and (5) whether they include nearest-neighbor effects on the distribution of conformations in different regions of the Ramachandran map. In this work, Ramachandran probability distributions are presented for residues in protein loops from a high-resolution data set with filtering based on calculated electron densities. Distributions for all 20 amino acids (with cis and trans proline treated separately) have been determined, as well as 420 left-neighbor and 420 right-neighbor dependent distributions. The neighbor-independent and neighbor-dependent probability densities have been accurately estimated using Bayesian nonparametric statistical analysis based on the Dirichlet process. In particular, we used hierarchical Dirichlet process priors, which allow sharing of information between densities for a particular residue type and different neighbor residue types. The resulting distributions are tested in a loop modeling benchmark with the program Rosetta, and are shown to improve protein loop conformation prediction significantly. The distributions are available at http://dunbrack.fccc.edu/hdp.
Hilbertian kernels and spline functions
Atteia, M
1992-01-01
In this monograph, which is an extensive study of Hilbertian approximation, the emphasis is placed on spline function theory. The origin of the book was an effort to show that spline theory parallels Hilbertian kernel theory, not only for splines derived from the minimization of a quadratic functional but, more generally, for splines viewed as a type of piecewise function. Being as far as possible self-contained, the book may be used as a reference, with information about developments in linear approximation, convex optimization, mechanics and partial differential equations.
Dense Medium Machine Processing Method for Palm Kernel/ Shell ...
African Journals Online (AJOL)
ADOWIE PERE
Cracked palm kernel is a mixture of kernels, broken shells, dusts and other impurities. In ... machine processing method using dense medium, a separator, a shell collector and a kernel .... efficiency, ease of maintenance and uniformity of.
Mitigation of artifacts in rtm with migration kernel decomposition
Zhan, Ge; Schuster, Gerard T.
2012-01-01
The migration kernel for reverse-time migration (RTM) can be decomposed into four component kernels using Born scattering and migration theory. Each component kernel has a unique physical interpretation and can be interpreted differently
Ranking Support Vector Machine with Kernel Approximation
Directory of Open Access Journals (Sweden)
Kai Chen
2017-01-01
Full Text Available Learning to rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms.
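The two kernel-approximation routes explored above can be sketched with scikit-learn's off-the-shelf implementations. This is an illustration on synthetic data under assumed parameter values, not the authors' ranking code: both methods build explicit feature maps whose inner products approximate the RBF kernel, so the kernel matrix never needs to be formed.

```python
# Nystroem and random-Fourier-feature approximations of an RBF kernel matrix,
# compared against the exact kernel. Synthetic data; parameters are assumptions.
import numpy as np
from sklearn.kernel_approximation import Nystroem, RBFSampler
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))

K = rbf_kernel(X, gamma=0.5)                      # exact RBF kernel matrix

# Nystroem: low-rank approximation built from a subset of landmark points.
Z_ny = Nystroem(gamma=0.5, n_components=100, random_state=0).fit_transform(X)
# Random Fourier features: Monte Carlo approximation of the same kernel.
Z_rf = RBFSampler(gamma=0.5, n_components=500, random_state=0).fit_transform(X)

# Inner products of the explicit features approximate the kernel entries.
err_ny = np.abs(Z_ny @ Z_ny.T - K).mean()
err_rf = np.abs(Z_rf @ Z_rf.T - K).mean()
```

A linear model (such as the truncated-Newton RankSVM solver mentioned above) trained on `Z_ny` or `Z_rf` then behaves approximately like the corresponding kernel machine at a fraction of the cost.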
Sentiment classification with interpolated information diffusion kernels
Raaijmakers, S.
2007-01-01
Information diffusion kernels - similarity metrics in non-Euclidean information spaces - have been found to produce state of the art results for document classification. In this paper, we present a novel approach to global sentiment classification using these kernels. We carry out a large array of
Evolution kernel for the Dirac field
International Nuclear Information System (INIS)
Baaquie, B.E.
1982-06-01
The evolution kernel for the free Dirac field is calculated using Wilson lattice fermions. We discuss the difficulties that have prevented this calculation from being performed previously in the continuum theory. The continuum limit is taken, and the complete energy eigenfunctions as well as the propagator are then evaluated in a new manner using the kernel. (author)
Panel data specifications in nonparametric kernel regression
DEFF Research Database (Denmark)
Czekaj, Tomasz Gerard; Henningsen, Arne
parametric panel data estimators to analyse the production technology of Polish crop farms. The results of our nonparametric kernel regressions generally differ from the estimates of the parametric models but they only slightly depend on the choice of the kernel functions. Based on economic reasoning, we...
Improving the Bandwidth Selection in Kernel Equating
Andersson, Björn; von Davier, Alina A.
2014-01-01
We investigate the current bandwidth selection methods in kernel equating and propose a method based on Silverman's rule of thumb for selecting the bandwidth parameters. In kernel equating, the bandwidth parameters have previously been obtained by minimizing a penalty function. This minimization process has been criticized by practitioners…
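Silverman's rule of thumb, which the abstract above proposes adapting for kernel equating, can be sketched in a few lines. This is the generic Gaussian-kernel formula, not the equating-specific variant from the paper; the score sample below is simulated.

```python
# Silverman's rule of thumb for Gaussian kernel density estimation:
# h = 0.9 * min(sample std, IQR/1.349) * n^(-1/5). Illustrative sketch.
import numpy as np

def silverman_bandwidth(x):
    x = np.asarray(x, dtype=float)
    n = x.size
    iqr = np.subtract(*np.percentile(x, [75, 25]))   # p75 - p25
    sigma = min(x.std(ddof=1), iqr / 1.349)          # robust spread estimate
    return 0.9 * sigma * n ** (-1 / 5)

# Simulated test scores (mean 50, sd 10) as a stand-in for equating data.
scores = np.random.default_rng(1).normal(50.0, 10.0, size=1000)
h = silverman_bandwidth(scores)
```

Because the rule is a closed-form expression, it avoids the iterative penalty-function minimization that the abstract notes has been criticized by practitioners.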
Kernel Korner : The Linux keyboard driver
Brouwer, A.E.
1995-01-01
Our Kernel Korner series continues with an article describing the Linux keyboard driver. This article is not for "Kernel Hackers" only--in fact, it will be most useful to those who wish to use their own keyboard to its fullest potential, and those who want to write programs to take advantage of the
Metabolic network prediction through pairwise rational kernels.
Roche-Lima, Abiel; Domaratzki, Michael; Fristensky, Brian
2014-09-26
Metabolic networks are represented by the set of metabolic pathways. Metabolic pathways are a series of biochemical reactions, in which the product (output) from one reaction serves as the substrate (input) to another reaction. Many pathways remain incompletely characterized. One of the major challenges of computational biology is to obtain better models of metabolic pathways. Existing models depend on the annotation of the genes, so errors accumulate when pathways are predicted from incorrectly annotated genes. Pairwise classification methods are supervised learning methods used to classify new pairs of entities. Some of these classification methods, e.g., Pairwise Support Vector Machines (SVMs), use pairwise kernels. Pairwise kernels describe similarity measures between two pairs of entities. Using pairwise kernels to handle sequence data requires long processing times and large storage. Rational kernels are kernels based on weighted finite-state transducers that represent similarity measures between sequences or automata. They have been effectively used in problems that handle large amounts of sequence information, such as protein essentiality, natural language processing and machine translation. We create a new family of pairwise kernels using weighted finite-state transducers (called Pairwise Rational Kernels (PRK)) to predict metabolic pathways from a variety of biological data. PRKs take advantage of the simpler representations and faster algorithms of transducers. Because raw sequence data can be used, the predictor model avoids the errors introduced by incorrect gene annotations. We then developed several experiments with PRKs and Pairwise SVM to validate our methods using the metabolic network of Saccharomyces cerevisiae. As a result, when PRKs are used, our method executes faster in comparison with other pairwise kernels. Also, when we use PRKs combined with other simple kernels that include evolutionary information, the accuracy
DIMM-SC: a Dirichlet mixture model for clustering droplet-based single cell transcriptomic data.
Sun, Zhe; Wang, Ting; Deng, Ke; Wang, Xiao-Feng; Lafyatis, Robert; Ding, Ying; Hu, Ming; Chen, Wei
2018-01-01
Single cell transcriptome sequencing (scRNA-Seq) has become a revolutionary tool to study cellular and molecular processes at single cell resolution. Among existing technologies, the recently developed droplet-based platform enables efficient parallel processing of thousands of single cells with direct counting of transcript copies using Unique Molecular Identifier (UMI). Despite the technology advances, statistical methods and computational tools are still lacking for analyzing droplet-based scRNA-Seq data. Particularly, model-based approaches for clustering large-scale single cell transcriptomic data are still under-explored. We developed DIMM-SC, a Dirichlet Mixture Model for clustering droplet-based Single Cell transcriptomic data. This approach explicitly models UMI count data from scRNA-Seq experiments and characterizes variations across different cell clusters via a Dirichlet mixture prior. We performed comprehensive simulations to evaluate DIMM-SC and compared it with existing clustering methods such as K-means, CellTree and Seurat. In addition, we analyzed public scRNA-Seq datasets with known cluster labels and in-house scRNA-Seq datasets from a study of systemic sclerosis with prior biological knowledge to benchmark and validate DIMM-SC. Both simulation studies and real data applications demonstrated that overall, DIMM-SC achieves substantially improved clustering accuracy and much lower clustering variability compared to other existing clustering methods. More importantly, as a model-based approach, DIMM-SC is able to quantify the clustering uncertainty for each single cell, facilitating rigorous statistical inference and biological interpretations, which are typically unavailable from existing clustering methods. DIMM-SC has been implemented in a user-friendly R package with a detailed tutorial available on www.pitt.edu/∼wec47/singlecell.html. wei.chen@chp.edu or hum@ccf.org. Supplementary data are available at Bioinformatics online. © The Author
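The model-based clustering idea behind DIMM-SC can be illustrated with a toy EM algorithm for a mixture of multinomials on UMI-like count data. This is a simplified stand-in, not the published method: DIMM-SC places a Dirichlet prior on the cluster profiles and quantifies per-cell uncertainty, while the sketch below uses add-one smoothing and an assumed cluster count K = 2 on synthetic counts.

```python
# Toy EM for a mixture of multinomials on synthetic UMI-like counts, as a
# simplified illustration of Dirichlet-mixture clustering of scRNA-Seq data.
import numpy as np

rng = np.random.default_rng(0)
# Two synthetic "cell clusters" with different gene-expression profiles.
p1, p2 = np.array([0.7, 0.2, 0.1]), np.array([0.1, 0.2, 0.7])
X = np.vstack([rng.multinomial(100, p1, size=50),
               rng.multinomial(100, p2, size=50)])

K = 2
n_cells, n_genes = X.shape
pi = np.full(K, 1.0 / K)                          # mixing weights
theta = rng.dirichlet(np.ones(n_genes), size=K)   # per-cluster gene profiles

for _ in range(50):
    # E-step: responsibilities from multinomial log-likelihoods.
    logr = np.log(pi) + X @ np.log(theta).T
    logr -= logr.max(axis=1, keepdims=True)
    r = np.exp(logr)
    r /= r.sum(axis=1, keepdims=True)
    # M-step: update weights and profiles (add-one smoothing as a crude prior).
    pi = r.mean(axis=0)
    counts = r.T @ X + 1.0
    theta = counts / counts.sum(axis=1, keepdims=True)

labels = r.argmax(axis=1)                         # hard cluster assignments
```

The responsibilities `r` are the per-cell soft assignments; a model-based method reports them directly, which is the clustering-uncertainty advantage the abstract emphasizes over K-means-style hard clustering.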
Dielectric properties of almond kernels associated with radio frequency and microwave pasteurization
Li, Rui; Zhang, Shuang; Kou, Xiaoxi; Ling, Bo; Wang, Shaojin
2017-02-01
To develop advanced pasteurization treatments based on radio frequency (RF) or microwave (MW) energy, dielectric properties of almond kernels were measured using an open-ended coaxial-line probe and impedance analyzer at frequencies between 10 and 3000 MHz, moisture contents from 4.2% to 19.6% w.b. and temperatures between 20 and 90 °C. The results showed that both the dielectric constant and the loss factor of the almond kernels decreased sharply with increasing frequency over the RF range (10-300 MHz), but gradually over the measured MW range (300-3000 MHz). Both dielectric constant and loss factor increased with increasing temperature and moisture content, with the effect most pronounced at higher temperature and moisture levels. Quadratic polynomial equations were developed to best fit the relationship between dielectric constant or loss factor at 27, 40, 915 or 2450 MHz and sample temperature/moisture content, with R2 greater than 0.967. Penetration depth of the electromagnetic wave into the samples decreased with increasing frequency (27-2450 MHz), moisture content (4.2-19.6% w.b.) and temperature (20-90 °C). Temperature profiles of RF-heated almond kernels at three moisture levels were obtained by experiment and by computer simulation based on the measured dielectric properties. Based on the results of this study, RF treatment has potential to be practically used for pasteurization of almond kernels with acceptable heating uniformity.
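A quadratic polynomial fit of the kind reported above (property value as a quadratic in temperature T and moisture M) can be set up as an ordinary least-squares problem. The sample values and coefficients below are fabricated for illustration, not the measured almond data.

```python
# Least-squares fit of a full quadratic surface in (T, M), with R^2 computed
# the usual way. All numbers are fabricated for the sketch.
import numpy as np

T = np.repeat([20.0, 40.0, 60.0, 80.0, 90.0], 4)   # temperatures, deg C
M = np.tile([4.2, 9.0, 14.0, 19.6], 5)             # moisture contents, % w.b.
# Synthetic "loss factor" generated from an assumed quadratic law.
eps = 0.8 + 0.012 * T + 2e-4 * T**2 + 0.05 * M + 1e-3 * M**2 + 5e-4 * T * M

# Design matrix for the quadratic model: 1, T, M, T^2, M^2, T*M.
A = np.column_stack([np.ones_like(T), T, M, T**2, M**2, T * M])
coef, *_ = np.linalg.lstsq(A, eps, rcond=None)

pred = A @ coef
r2 = 1 - ((eps - pred) ** 2).sum() / ((eps - eps.mean()) ** 2).sum()
```

With real measurements the fitted `coef` would play the role of the paper's reported polynomial coefficients, and `r2` the goodness-of-fit statistic (> 0.967 in the study).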
Putting Priors in Mixture Density Mercer Kernels
Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd
2004-01-01
This paper presents a new methodology for automatic knowledge-driven data mining based on the theory of Mercer kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn kernel functions directly from data, rather than using predefined kernels. These data-adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing for physical information to be encoded in the model. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS). The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code. The results show that the Mixture Density Mercer Kernel described here outperforms tree-based classification in distinguishing high-redshift galaxies from low-redshift galaxies by approximately 16% on test data, bagged trees by approximately 7%, and bagged trees built on a much larger sample of data by approximately 2%.
Anisotropic hydrodynamics with a scalar collisional kernel
Almaalol, Dekrayat; Strickland, Michael
2018-04-01
Prior studies of nonequilibrium dynamics using anisotropic hydrodynamics have used the relativistic Anderson-Witting scattering kernel or some variant thereof. In this paper, we make the first study of the impact of using a more realistic scattering kernel. For this purpose, we consider a conformal system undergoing transversally homogeneous and boost-invariant Bjorken expansion and take the collisional kernel to be given by the leading-order 2 ↔ 2 scattering kernel in scalar λϕ⁴ theory. We consider both classical and quantum statistics to assess the impact of Bose enhancement on the dynamics. We also determine the anisotropic nonequilibrium attractor of a system subject to this collisional kernel. We find that, when the near-equilibrium relaxation times in the Anderson-Witting and scalar collisional kernels are matched, the scalar kernel results in a higher degree of momentum-space anisotropy during the system's evolution, given the same initial conditions. Additionally, we find that taking into account Bose enhancement further increases the dynamically generated momentum-space anisotropy.
Vlad, Marcel Ovidiu; Tsuchiya, Masa; Oefner, Peter; Ross, John
2002-01-01
We investigate the statistical properties of systems with random chemical composition and try to obtain a theoretical derivation of the self-similar Dirichlet distribution, which is used empirically in molecular biology, environmental chemistry, and geochemistry. We consider a system made up of many chemical species and assume that the statistical distribution of the abundance of each chemical species in the system is the result of a succession of a variable number of random dilution events, which can be described by using the renormalization-group theory. A Bayesian approach is used for evaluating the probability density of the chemical composition of the system in terms of the probability densities of the abundances of the different chemical species. We show that for large cascades of dilution events, the probability density of the composition vector of the system is given by a self-similar probability density of the Dirichlet type. We also give an alternative formal derivation for the Dirichlet law based on the maximum entropy approach, by assuming that the average values of the chemical potentials of different species, expressed in terms of molar fractions, are constant. Although the maximum entropy approach leads formally to the Dirichlet distribution, it does not clarify the physical origin of the Dirichlet statistics and has serious limitations. The random theory of dilution provides a physical picture for the emergence of Dirichlet statistics and makes it possible to investigate its validity range. We discuss the implications of our theory in molecular biology, geochemistry, and environmental science.
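A standard fact underlying the Dirichlet composition statistics discussed above is that normalizing independent Gamma-distributed abundances yields a Dirichlet-distributed composition vector. The short check below verifies this numerically; the concentration parameters are arbitrary illustrative values, not quantities from the paper.

```python
# Numerical check: normalizing independent Gamma(a_i) "abundances" gives a
# composition vector distributed as Dirichlet(a), whose mean is a / sum(a).
import numpy as np

rng = np.random.default_rng(0)
a = np.array([2.0, 3.0, 5.0])                 # concentration parameters
g = rng.gamma(a, size=(100_000, 3))           # independent species abundances
comp = g / g.sum(axis=1, keepdims=True)       # composition vectors on the simplex

empirical_mean = comp.mean(axis=0)
dirichlet_mean = a / a.sum()                  # theoretical Dirichlet mean
```

This gamma-normalization view is also why the Dirichlet family is closed under the kind of multiplicative dilution cascades the renormalization-group argument considers.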
Partition function zeros for the one-dimensional ordered plasma in Dirichlet boundary conditions
International Nuclear Information System (INIS)
Roumeliotis, J.; Smith, E.R.
1992-01-01
The authors consider the grand canonical partition function for the ordered one-dimensional, two-component plasma at fugacity ζ in an applied electric field E with Dirichlet boundary conditions. The system has a phase transition from a low-coupling phase with equally spaced particles to a high-coupling phase with particles clustered into dipolar pairs. An exact expression for the partition function is developed. In zero applied field the zeros in the ζ plane occupy the imaginary axis from -i∞ to -iζ_c and from iζ_c to i∞ for some ζ_c. They also occupy the diamond shape of four straight lines from ±iζ_c to ζ_c and from ±iζ_c to -ζ_c. The fugacity ζ acts like a temperature or coupling variable. The symmetry-breaking field is the applied electric field E. A finite-size scaling representation for the partition function in scaled coupling and scaled electric field is developed. It has standard mean-field form. When the scaled coupling is real, the zeros in the scaled field lie on the imaginary axis and pinch the real scaled-field axis as the scaled coupling increases. The scaled partition function, considered as a function of two complex variables, scaled coupling and scaled field, has zeros on a two-dimensional surface in a domain of four real variables. A numerical discussion of some of the properties of this surface is presented
Smith, Keith; Ricaud, Benjamin; Shahid, Nauman; Rhodes, Stephen; Starr, John M.; Ibáñez, Augustin; Parra, Mario A.; Escudero, Javier; Vandergheynst, Pierre
2017-02-01
Visual short-term memory binding tasks are a promising early marker for Alzheimer’s disease (AD). To uncover functional deficits of AD in these tasks it is meaningful to first study unimpaired brain function. Electroencephalogram recordings were obtained from encoding and maintenance periods of tasks performed by healthy young volunteers. We probe the task’s transient physiological underpinnings by contrasting shape only (Shape) and shape-colour binding (Bind) conditions, displayed in the left and right sides of the screen, separately. Particularly, we introduce and implement a novel technique named Modular Dirichlet Energy (MDE) which allows robust and flexible analysis of the functional network with unprecedented temporal precision. We find that connectivity in the Bind condition is less integrated with the global network than in the Shape condition in occipital and frontal modules during the encoding period of the right screen condition. Using MDE we are able to discern driving effects in the occipital module between 100-140 ms, coinciding with the P100 visually evoked potential, followed by a driving effect in the frontal module between 140-180 ms, suggesting that the differences found constitute an information processing difference between these modules. This provides temporally precise information over a heterogeneous population in promising tasks for the detection of AD.
Latent Dirichlet Allocation (LDA) for Sentiment Analysis Toward Tourism Review in Indonesia
Putri, IR; Kusumaningrum, R.
2017-01-01
The tourism industry is a foreign-exchange sector with considerable development potential in Indonesia. Compared to other Southeast Asian countries such as Malaysia, with 18 million tourists, and Singapore, with 20 million, Indonesia, the largest country in Southeast Asia, has failed to attract comparable tourist numbers: it managed to attract only 8.8 million foreign tourists in 2013, and the value generated per year is likely to decrease. Apart from infrastructure problems, marketing and management also pose obstacles to tourism growth. Stakeholders should carry out evaluation and self-analysis to respond to this problem and capture the opportunities related to tourism satisfaction contained in tourist reviews. Until recently, the available technology for this relied only on subjective statistical data collected by random user voting or grading, so the results are not accountable. We therefore propose sentiment analysis with a probabilistic topic model, using the Latent Dirichlet Allocation (LDA) method, to read the general tendency of tourist reviews into topics that can be classified as positive or negative sentiment.
Hermitian boundary conditions at a Dirichlet singularity: the Marletta--Rozenblum model
International Nuclear Information System (INIS)
Berry, M V
2009-01-01
In domains B with smoothly varying boundary conditions, points where wavefunctions are required to vanish were recently identified as 'Dirichlet singularities' (D points), where the Hamiltonian H does not define discrete eigenvalues and a scattering phase is undetermined (Berry and Dennis 2008 J. Phys. A: Math. Theor. 41 135203). This is explained (Marletta and Rozenblum 2009 J. Phys. A: Math. Theor. 42 125204) by the observation, illustrated with an exactly solvable separable model, that a D point requires the specification of an additional parameter defining a family of self-adjoint extensions of H. Here the underlying theory is presented in an elementary way, and a D point is identified as a leak, through which current can flow into or out of B. Hermiticity seals the leak, ensuring that no current flows through the D point (as well as across the boundary of B). The solvable model is examined in detail for bound states, where B is a semidisk, and for wave reflections, where B is a half-plane. The quantization condition for a nonseparable billiard is obtained explicitly
International Nuclear Information System (INIS)
Coolen, F.P.A.
1997-01-01
This paper is intended to make researchers in reliability theory aware of a recently introduced Bayesian model with imprecise prior distributions for statistical inference on failure data, that can also be considered as a robust Bayesian model. The model consists of a multinomial distribution with Dirichlet priors, making the approach basically nonparametric. New results for the model are presented, related to right-censored observations, where estimation based on this model is closely related to the product-limit estimator, which is an important statistical method to deal with reliability or survival data including right-censored observations. As for the product-limit estimator, the model considered in this paper aims at not using any information other than that provided by observed data, but our model fits into the robust Bayesian context which has the advantage that all inferences can be based on probabilities or expectations, or bounds for probabilities or expectations. The model uses a finite partition of the time-axis, and as such it is also related to life-tables
Fan, Wentao; Bouguila, Nizar
2013-11-01
A large class of problems can be formulated in terms of the clustering process. Mixture models are an increasingly important tool in statistical pattern recognition and for analyzing and clustering complex data. Two challenging aspects that should be addressed when considering mixture models are how to choose between a set of plausible models and how to estimate the model's parameters. In this paper, we address both problems simultaneously within a unified online nonparametric Bayesian framework that we develop to learn a Dirichlet process mixture of Beta-Liouville distributions (i.e., an infinite Beta-Liouville mixture model). The proposed infinite model is used for the online modeling and clustering of proportional data, for which the Beta-Liouville mixture has been shown to be effective. We propose a principled approach for approximating the model's intractable posterior distribution by a tractable one-which we develop-such that all the involved mixture's parameters can be estimated simultaneously and effectively in a closed form. This is done through variational inference that enjoys important advantages, such as handling of unobserved attributes and preventing under- or overfitting; we explain that in detail. The effectiveness of the proposed work is evaluated on three challenging real applications, namely facial expression recognition, behavior modeling and recognition, and dynamic textures clustering.
Higher-Order Hybrid Gaussian Kernel in Meshsize Boosting Algorithm
African Journals Online (AJOL)
In this paper, we shall use higher-order hybrid Gaussian kernel in a meshsize boosting algorithm in kernel density estimation. Bias reduction is guaranteed in this scheme like other existing schemes but uses the higher-order hybrid Gaussian kernel instead of the regular fixed kernels. A numerical verification of this scheme ...
NLO corrections to the Kernel of the BKP-equations
Energy Technology Data Exchange (ETDEWEB)
Bartels, J. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Fadin, V.S. [Budker Institute of Nuclear Physics, Novosibirsk (Russian Federation); Novosibirskij Gosudarstvennyj Univ., Novosibirsk (Russian Federation); Lipatov, L.N. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Petersburg Nuclear Physics Institute, Gatchina, St. Petersburg (Russian Federation); Vacca, G.P. [INFN, Sezione di Bologna (Italy)
2012-10-02
We present results for the NLO kernel of the BKP equations for composite states of three reggeized gluons in the Odderon channel, both in QCD and in N=4 SYM. The NLO kernel consists of the NLO BFKL kernel in the color octet representation and the connected 3→3 kernel, computed in the tree approximation.
Adaptive Kernel in Meshsize Boosting Algorithm in KDE ...
African Journals Online (AJOL)
This paper proposes the use of adaptive kernel in a meshsize boosting algorithm in kernel density estimation. The algorithm is a bias reduction scheme like other existing schemes but uses adaptive kernel instead of the regular fixed kernels. An empirical study for this scheme is conducted and the findings are comparatively ...
Adaptive Kernel In The Bootstrap Boosting Algorithm In KDE ...
African Journals Online (AJOL)
This paper proposes the use of adaptive kernel in a bootstrap boosting algorithm in kernel density estimation. The algorithm is a bias reduction scheme like other existing schemes but uses adaptive kernel instead of the regular fixed kernels. An empirical study for this scheme is conducted and the findings are comparatively ...
Kernel maximum autocorrelation factor and minimum noise fraction transformations
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg
2010-01-01
in hyperspectral HyMap scanner data covering a small agricultural area, and 3) maize kernel inspection. In the cases shown, the kernel MAF/MNF transformation performs better than its linear counterpart as well as linear and kernel PCA. The leading kernel MAF/MNF variates seem to possess the ability to adapt...
2010-01-01
7 CFR 51.1441 - Half-kernel. Section 51.1441, United States Standards for Grades of Shelled Pecans, Definitions: Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume missing...
7 CFR 51.2296 - Three-fourths half kernel.
2010-01-01
Section 51.2296, Regulations of the Department of Agriculture, Agricultural Marketing Service (Standards): Three-fourths half kernel means a portion of a half of a kernel which has more than...
7 CFR 981.401 - Adjusted kernel weight.
2010-01-01
Section 981.401, Administrative Rules and Regulations, § 981.401 Adjusted kernel weight. (a) Definition. Adjusted kernel weight... kernels in excess of five percent; less shells, if applicable; less processing loss of one percent for...
7 CFR 51.1403 - Kernel color classification.
2010-01-01
Section 51.1403, United States Standards for Grades of Pecans in the Shell, Kernel Color Classification: (a) The skin color of pecan kernels may be described in terms of the color...
The Linux kernel as flexible product-line architecture
M. de Jonge (Merijn)
2002-01-01
The Linux kernel source tree is huge (> 125 MB) and inflexible (because it is difficult to add new kernel components). We propose to make this architecture more flexible by assembling kernel source trees dynamically from individual kernel components. Users can then select what
Digital signal processing with kernel methods
Rojo-Alvarez, José Luis; Muñoz-Marí, Jordi; Camps-Valls, Gustavo
2018-01-01
A realistic and comprehensive review of joint approaches to machine learning and signal processing algorithms, with application to communications, multimedia, and biomedical engineering systems Digital Signal Processing with Kernel Methods reviews the milestones in the mixing of classical digital signal processing models and advanced kernel machines statistical learning tools. It explains the fundamental concepts from both fields of machine learning and signal processing so that readers can quickly get up to speed in order to begin developing the concepts and application software in their own research. Digital Signal Processing with Kernel Methods provides a comprehensive overview of kernel methods in signal processing, without restriction to any application field. It also offers example applications and detailed benchmarking experiments with real and synthetic datasets throughout. Readers can find further worked examples with Matlab source code on a website developed by the authors. * Presents the necess...
Parsimonious Wavelet Kernel Extreme Learning Machine
Directory of Open Access Journals (Sweden)
Wang Qin
2015-11-01
Full Text Available In this study, a parsimonious scheme for wavelet kernel extreme learning machine (named PWKELM) was introduced by combining wavelet theory and a parsimonious algorithm into kernel extreme learning machine (KELM). Wavelet analysis uses bases that are localized in time and frequency to represent various signals effectively. Wavelet kernel extreme learning machine (WELM) thus maximizes the capability to capture the essential features in "frequency-rich" signals. The proposed parsimonious algorithm also incorporated significant wavelet kernel functions via iteration by virtue of the Householder matrix, thus producing a sparse solution that eased the computational burden and improved numerical stability. The experimental results achieved on a synthetic dataset and a gas furnace instance demonstrated that the proposed PWKELM is efficient and feasible in terms of improving generalization accuracy and real-time performance.
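The closed-form KELM training step that PWKELM builds on can be sketched as follows. This is a generic kernel ELM with a Gaussian kernel, not the paper's wavelet kernel or parsimonious Householder iteration; all function names and parameter values are illustrative:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian (RBF) kernel matrix between row-sample matrices A and B
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kelm_fit(X, T, C=100.0, gamma=1.0):
    # Closed-form KELM output weights: beta = (K + I/C)^(-1) T,
    # where C is the regularization constant
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + np.eye(len(X)) / C, T)

def kelm_predict(Xtrain, beta, Xnew, gamma=1.0):
    # Prediction is a kernel expansion: f(x) = k(x, Xtrain) @ beta
    return rbf_kernel(Xnew, Xtrain, gamma) @ beta

# Tiny regression demo: learn y = x^2 from nine points on [-1, 1]
X = np.linspace(-1, 1, 9).reshape(-1, 1)
T = (X ** 2).ravel()
beta = kelm_fit(X, T, C=1e4, gamma=5.0)
pred = kelm_predict(X, beta, np.array([[0.5]]), gamma=5.0)
```

With light regularization the fit nearly interpolates the training data, so the prediction at x = 0.5 is close to 0.25.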
Ensemble Approach to Building Mercer Kernels
National Aeronautics and Space Administration — This paper presents a new methodology for automatic knowledge driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive...
Control Transfer in Operating System Kernels
1994-05-13
microkernel system that runs less code in the kernel address space. To realize the performance benefit of allocating stacks in unmapped kseg0 memory, the...review how I modified the Mach 3.0 kernel to use continuations. Because of Mach's message-passing microkernel structure, interprocess communication was...critical control transfer paths, deeply-nested call chains are undesirable in any case because of the function call overhead. 4.1.3 Microkernel Operating
Uranium kernel formation via internal gelation
International Nuclear Information System (INIS)
Hunt, R.D.; Collins, J.L.
2004-01-01
In the 1970s and 1980s, the U.S. Department of Energy (DOE) conducted numerous studies on the fabrication of nuclear fuel particles using the internal gelation process. These amorphous kernels were prone to flaking or breaking when gases tried to escape from the kernels during calcination and sintering. These earlier kernels would not meet today's proposed specifications for reactor fuel. In the interim, the internal gelation process has been used to create hydrous metal oxide microspheres for the treatment of nuclear waste. With the renewed interest in advanced nuclear fuel by the DOE, the lessons learned from the nuclear waste studies were recently applied to the fabrication of uranium kernels, which will become TRi-structural ISOtropic (TRISO) fuel particles. These process improvements included equipment modifications, small changes to the feed formulations, and a new temperature profile for the calcination and sintering. The modifications to the laboratory-scale equipment and its operation, as well as small changes to the feed composition, increased the product yield from 60% to 80%-99%. The new kernels were substantially less glassy, and no evidence of flaking was found. Finally, key process parameters were identified, and their effects on the uranium microspheres and kernels are discussed. (orig.)
Quantum tomography, phase-space observables and generalized Markov kernels
International Nuclear Information System (INIS)
Pellonpää, Juha-Pekka
2009-01-01
We construct a generalized Markov kernel which transforms the observable associated with the homodyne tomography into a covariant phase-space observable with a regular kernel state. Illustrative examples are given in the cases of a 'Schrödinger cat' kernel state and the Cahill-Glauber s-parametrized distributions. Also we consider an example of a kernel state when the generalized Markov kernel cannot be constructed.
Sitompul, Monica Angelina
2015-01-01
The iodine value was determined by titration for samples of Hydrogenated Palm Kernel Oil (HPKO) and Refined Bleached Deodorized Palm Kernel Oil (RBDPKO). The analysis gave iodine values of 0.16 g I2/100 g for Hydrogenated Palm Kernel Oil (A), 0.20 g I2/100 g for Hydrogenated Palm Kernel Oil (B), and 0.24 g I2/100 g for Hydrogenated Palm Kernel Oil (C); and, for Refined Bleached Deodorized Palm Kernel Oil (A), 17.51 g I2/100 g, Refined Bleached Deodorized Palm Kernel ...
An Experimental Study of the Growth of Laser Spark and Electric Spark Ignited Flame Kernels.
Ho, Chi Ming
1995-01-01
Better ignition sources are constantly in demand for enhancing spark ignition in practical applications such as automotive and liquid rocket engines. In response to this practical challenge, the present experimental study was conducted with the major objective of obtaining a better understanding of how spark formation, and hence spark characteristics, affect flame kernel growth. Two laser sparks and one electric spark were studied in air, propane-air, propane-air-nitrogen, methane-air, and methane-oxygen mixtures that were initially at ambient pressure and temperature. The growth of the kernels was monitored by imaging the kernels with shadowgraph systems, and by imaging the planar laser-induced fluorescence of the hydroxyl radicals inside the kernels. Characteristic dimensions and kernel structures were obtained from these images. Since different energy transfer mechanisms are involved in the formation of a laser spark as compared to an electric spark, a laser spark is insensitive to changes in mixture ratio and mixture type, while an electric spark is sensitive to changes in both. The detailed structures of the kernels in air and propane-air mixtures primarily depend on the spark characteristics, but the combustion heat released rapidly in methane-oxygen mixtures significantly modifies the kernel structure. Uneven spark energy distribution causes remarkably asymmetric kernel structure. The breakdown energy of a spark creates a blast wave that shows good agreement with the numerical point blast solution, and a succeeding complex spark-induced flow that agrees reasonably well with a simple puff model. The transient growth rates of the propane-air, propane-air-nitrogen, and methane-air flame kernels can be interpreted in terms of spark effects, flame stretch, and preferential diffusion. For a given mixture, a spark with higher breakdown energy produces a greater and longer-lasting enhancing effect on the kernel growth rate. By comparing the growth
3D variational brain tumor segmentation using Dirichlet priors on a clustered feature set.
Popuri, Karteek; Cobzas, Dana; Murtha, Albert; Jägersand, Martin
2012-07-01
Brain tumor segmentation is a required step before any radiation treatment or surgery. When performed manually, segmentation is time consuming and prone to human errors. Therefore, there have been significant efforts to automate the process. But, automatic tumor segmentation from MRI data is a particularly challenging task. Tumors have a large diversity in shape and appearance with intensities overlapping the normal brain tissues. In addition, an expanding tumor can also deflect and deform nearby tissue. In our work, we propose an automatic brain tumor segmentation method that addresses these last two difficult problems. We use the available MRI modalities (T1, T1c, T2) and their texture characteristics to construct a multidimensional feature set. Then, we extract clusters which provide a compact representation of the essential information in these features. The main idea in this work is to incorporate these clustered features into the 3D variational segmentation framework. In contrast to previous variational approaches, we propose a segmentation method that evolves the contour in a supervised fashion. The segmentation boundary is driven by the learned region statistics in the cluster space. We incorporate prior knowledge about the normal brain tissue appearance during the estimation of these region statistics. In particular, we use a Dirichlet prior that discourages the clusters from the normal brain region to be in the tumor region. This leads to a better disambiguation of the tumor from brain tissue. We evaluated the performance of our automatic segmentation method on 15 real MRI scans of brain tumor patients, with tumors that are inhomogeneous in appearance, small in size and in proximity to the major structures in the brain. Validation with the expert segmentation labels yielded encouraging results: Jaccard (58%), Precision (81%), Recall (67%), Hausdorff distance (24 mm). Using priors on the brain/tumor appearance, our proposed automatic 3D variational
Multimodal Hierarchical Dirichlet Process-Based Active Perception by a Robot
Directory of Open Access Journals (Sweden)
Tadahiro Taniguchi
2018-05-01
Full Text Available In this paper, we propose an active perception method for recognizing object categories based on the multimodal hierarchical Dirichlet process (MHDP. The MHDP enables a robot to form object categories using multimodal information, e.g., visual, auditory, and haptic information, which can be observed by performing actions on an object. However, performing many actions on a target object requires a long time. In a real-time scenario, i.e., when the time is limited, the robot has to determine the set of actions that is most effective for recognizing a target object. We propose an active perception for MHDP method that uses the information gain (IG maximization criterion and lazy greedy algorithm. We show that the IG maximization criterion is optimal in the sense that the criterion is equivalent to a minimization of the expected Kullback–Leibler divergence between a final recognition state and the recognition state after the next set of actions. However, a straightforward calculation of IG is practically impossible. Therefore, we derive a Monte Carlo approximation method for IG by making use of a property of the MHDP. We also show that the IG has submodular and non-decreasing properties as a set function because of the structure of the graphical model of the MHDP. Therefore, the IG maximization problem is reduced to a submodular maximization problem. This means that greedy and lazy greedy algorithms are effective and have a theoretical justification for their performance. We conducted an experiment using an upper-torso humanoid robot and a second one using synthetic data. The experimental results show that the method enables the robot to select a set of actions that allow it to recognize target objects quickly and accurately. The numerical experiment using the synthetic data shows that the proposed method can work appropriately even when the number of actions is large and a set of target objects involves objects categorized into multiple classes
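The lazy greedy idea the abstract relies on can be illustrated on a toy coverage function. This is a generic sketch of lazy greedy selection for a monotone submodular set function, not the MHDP-specific information-gain computation; all names are ours:

```python
import heapq

def lazy_greedy(items, f, k):
    """Lazy greedy selection of k items maximizing a monotone submodular
    set function f (called as f(frozenset) -> float).  Submodularity means
    a cached marginal gain can only shrink as the selection grows, so only
    the top heap entry ever needs recomputation."""
    selected = set()
    base = f(frozenset())
    # max-heap (via negated gains) of initial singleton gains
    heap = [(-(f(frozenset([x])) - base), x) for x in items]
    heapq.heapify(heap)
    while len(selected) < k and heap:
        neg_gain, x = heapq.heappop(heap)
        cur = f(frozenset(selected))
        gain = f(frozenset(selected | {x})) - cur  # refresh stale gain
        # if the refreshed gain still beats the next cached gain, accept x
        if not heap or gain >= -heap[0][0]:
            selected.add(x)
        else:
            heapq.heappush(heap, (-gain, x))
    return selected

# Coverage function: f(S) = number of elements covered by the chosen sets
sets = {'a': {1, 2, 3}, 'b': {3, 4}, 'c': {4, 5, 6, 7}}
f = lambda S: len(set().union(*(sets[s] for s in S)) if S else set())
chosen = lazy_greedy(sets.keys(), f, 2)
```

Here the greedy choice picks 'c' (covers 4 elements) and then 'a' (adds 3 more), matching the optimal 2-set cover.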
Solution of the two-dimensional heat equation for a rectangular plate
Directory of Open Access Journals (Sweden)
Nurcan BAYKUŞ SAVAŞANERİL
2015-11-01
Full Text Available The Laplace equation is a fundamental equation of applied mathematics. Important phenomena in engineering and physics, such as steady-state temperature distribution, electrostatic potential and fluid flow, are modeled by means of this equation. The Laplace equation subject to prescribed boundary values is known as the Dirichlet problem. The solutions to the Dirichlet problem form one of the most celebrated topics in the area of applied mathematics. In this study, a novel method is presented for the solution of the two-dimensional heat equation for a rectangular plate. In this alternative method, the solution function of the problem is based on the Green function, and therefore on elliptic functions.
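For a concrete instance of a Dirichlet problem on a rectangle, the classical separation-of-variables series can be evaluated directly. This is the standard textbook solution sketch, not the Green-function method of the article; function names and parameter values are illustrative:

```python
import math

def dirichlet_rectangle(f, a, b, x, y, terms=50, quad_pts=400):
    """Series solution of Laplace's equation on [0,a]x[0,b] with u = f(x)
    on the edge y = b and u = 0 on the other three edges:
        u(x,y) = sum_n c_n sin(n pi x/a) sinh(n pi y/a) / sinh(n pi b/a),
    where c_n are the Fourier sine coefficients of f."""
    u = 0.0
    for n in range(1, terms + 1):
        k = n * math.pi / a
        # Fourier sine coefficient c_n by the midpoint rule
        h = a / quad_pts
        cn = (2.0 / a) * sum(f((j + 0.5) * h) * math.sin(k * (j + 0.5) * h) * h
                             for j in range(quad_pts))
        u += cn * math.sin(k * x) * math.sinh(k * y) / math.sinh(k * b)
    return u

# With boundary data f(x) = sin(pi x) on the unit square, only the n = 1
# mode survives, so u(0.5, 1.0) should recover f(0.5) = 1
val = dirichlet_rectangle(lambda x: math.sin(math.pi * x), 1.0, 1.0, 0.5, 1.0)
```

The sinh ratio damps each mode away from the y = b edge, which is why the series converges rapidly in the interior.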
Selection and properties of alternative forming fluids for TRISO fuel kernel production
Energy Technology Data Exchange (ETDEWEB)
Baker, M.P. [Colorado School of Mines, 1500 Illinois St., Golden, CO 80401 (United States); King, J.C., E-mail: kingjc@mines.edu [Colorado School of Mines, 1500 Illinois St., Golden, CO 80401 (United States); Gorman, B.P. [Colorado School of Mines, 1500 Illinois St., Golden, CO 80401 (United States); Marshall, D.W. [Idaho National Laboratory, 2525 N. Fremont Avenue, P.O. Box 1625, Idaho Falls, ID 83415 (United States)
2013-01-15
Highlights: • Forming fluid selection criteria developed for TRISO kernel production. • Ten candidates selected for further study. • Density, viscosity, and surface tension measured for first time. • Settling velocity and heat transfer rates calculated. • Three fluids recommended for kernel production testing. - Abstract: Current Very High Temperature Reactor (VHTR) designs incorporate TRi-structural ISOtropic (TRISO) fuel, which consists of a spherical fissile fuel kernel surrounded by layers of pyrolytic carbon and silicon carbide. An internal sol-gel process forms the fuel kernel using wet chemistry to produce uranium oxyhydroxide gel spheres by dropping a cold precursor solution into a hot column of trichloroethylene (TCE). Over time, gelation byproducts inhibit complete gelation, and the TCE must be purified or discarded. The resulting TCE waste stream contains both radioactive and hazardous materials and is thus considered a mixed hazardous waste. Changing the forming fluid to a non-hazardous alternative could greatly improve the economics of TRISO fuel kernel production. Selection criteria for a replacement forming fluid narrowed a list of ~10,800 chemicals to yield ten potential replacement forming fluids: 1-bromododecane, 1-bromotetradecane, 1-bromoundecane, 1-chlorooctadecane, 1-chlorotetradecane, 1-iododecane, 1-iodododecane, 1-iodohexadecane, 1-iodooctadecane, and squalane. The density, viscosity, and surface tension for each potential replacement forming fluid were measured as a function of temperature between 25 °C and 80 °C. Calculated settling velocities and heat transfer rates give an overall column height approximation. 1-bromotetradecane, 1-chlorooctadecane, and 1-iodododecane show the greatest promise as replacements, and future tests will verify their ability to form satisfactory
Ngono Mbarga, M. C.; Bup Nde, D.; Mohagir, A.; Kapseu, C.; Elambo Nkeng, G.
2017-01-01
The neem tree, which grows abundantly in India as well as in some regions of Asia and Africa, gives fruits whose kernels contain about 40-50% oil. This oil has high therapeutic and cosmetic value and has recently been projected as an important raw material for the production of biodiesel. The seed is harvested at high moisture contents, which leads to high post-harvest losses. In this paper, the sorption isotherms are determined by the static gravimetric method at 40, 50, and 60°C to establish a database useful in defining drying and storage conditions of neem kernels. Five different equations are validated for modeling the sorption isotherms of neem kernels. The properties of sorbed water, such as the monolayer moisture content, surface area of adsorbent, number of adsorbed monolayers, and the percent of bound water are also defined. The critical moisture content necessary for the safe storage of dried neem kernels is shown to range from 5 to 10% dry basis, which can be obtained at a relative humidity less than 65%. The isosteric heats of sorption at 5% moisture content are 7.40 and 22.5 kJ/kg for the adsorption and desorption processes, respectively. This work is the first, to the best of our knowledge, to give the important parameters necessary for drying and storage of neem kernels, a potential raw material for the production of oil to be used in pharmaceutics, cosmetics, and biodiesel manufacturing.
Aflatoxin contamination of developing corn kernels.
Amer, M A
2005-01-01
Preharvest contamination of corn with aflatoxin is a serious problem. Some environmental and cultural factors responsible for infection and subsequent aflatoxin production were investigated in this study. The stage of growth and the location of kernels on corn ears were found to be among the important factors in the process of kernel infection with A. flavus & A. parasiticus. The results showed a positive correlation between the stage of growth and kernel infection. Treatment of corn with aflatoxin reduced germination, protein, and total nitrogen contents. Total and reducing soluble sugars increased in corn kernels in response to infection. Sucrose and protein content were reduced in the case of both pathogens. Shoot system length, seedling fresh weight, and seedling dry weight were also affected. Both pathogens induced a reduction of starch content. Healthy corn seedlings treated with aflatoxin solution were badly affected: their leaves became yellow and then turned brown with further incubation. Moreover, their total chlorophyll and protein contents showed a pronounced decrease. On the other hand, total phenolic compounds increased. Histopathological studies indicated that A. flavus & A. parasiticus could colonize corn silks and invade developing kernels. Germination of A. flavus spores occurred, and hyphae spread rapidly across the silk, producing extensive growth and lateral branching. Conidiophores and conidia formed in and on the corn silk. Temperature and relative humidity greatly influenced the growth of A. flavus & A. parasiticus and aflatoxin production.
Analog forecasting with dynamics-adapted kernels
Zhao, Zhizhen; Giannakis, Dimitrios
2016-09-01
Analog forecasting is a nonparametric technique introduced by Lorenz in 1969 which predicts the evolution of states of a dynamical system (or observables defined on the states) by following the evolution of the sample in a historical record of observations which most closely resembles the current initial data. Here, we introduce a suite of forecasting methods which improve traditional analog forecasting by combining ideas from kernel methods developed in harmonic analysis and machine learning and state-space reconstruction for dynamical systems. A key ingredient of our approach is to replace single-analog forecasting with weighted ensembles of analogs constructed using local similarity kernels. The kernels used here employ a number of dynamics-dependent features designed to improve forecast skill, including Takens’ delay-coordinate maps (to recover information in the initial data lost through partial observations) and a directional dependence on the dynamical vector field generating the data. Mathematically, our approach is closely related to kernel methods for out-of-sample extension of functions, and we discuss alternative strategies based on the Nyström method and the multiscale Laplacian pyramids technique. We illustrate these techniques in applications to forecasting in a low-order deterministic model for atmospheric dynamics with chaotic metastability, and interannual-scale forecasting in the North Pacific sector of a comprehensive climate model. We find that forecasts based on kernel-weighted ensembles have significantly higher skill than the conventional approach following a single analog.
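The kernel-weighted ensemble idea can be sketched generically. This is a minimal illustration with a plain Gaussian kernel on raw states, without the delay-coordinate maps or directional dependence the paper adds; all names and parameter values are ours:

```python
import numpy as np

def kernel_analog_forecast(history, x0, lead=1, epsilon=1.0):
    """Kernel-weighted ensemble analog forecast (illustrative sketch).
    history: (T, d) array of time-ordered states from a historical record.
    Each historical state is weighted by a Gaussian similarity kernel to
    the initial data x0; the forecast is the weighted average of the
    states `lead` steps later."""
    X = history[:-lead]                 # candidate analogs
    Y = history[lead:]                  # their successors, `lead` steps on
    d2 = ((X - x0) ** 2).sum(axis=1)    # squared distances to initial data
    w = np.exp(-d2 / epsilon)           # Gaussian similarity weights
    w /= w.sum()
    return (w[:, None] * Y).sum(axis=0)

# Demo on a noiseless linear system x_{t+1} = 0.9 x_t: the one-step
# forecast from x0 = 0.5 should approximate 0.9 * 0.5 = 0.45
hist = np.array([[0.9 ** t] for t in range(200)])
pred = kernel_analog_forecast(hist, np.array([0.5]), lead=1, epsilon=0.01)
```

Shrinking epsilon concentrates the ensemble on the closest analogs, recovering single-analog (Lorenz-style) forecasting as a limiting case.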
Reimer, Ashton S.; Cheviakov, Alexei F.
2013-03-01
A Matlab-based finite-difference numerical solver for the Poisson equation for a rectangle and a disk in two dimensions, and a spherical domain in three dimensions, is presented. The solver is optimized for handling an arbitrary combination of Dirichlet and Neumann boundary conditions, and allows for full user control of mesh refinement. The solver routines utilize effective and parallelized sparse vector and matrix operations. Computations exhibit high speeds, numerical stability with respect to mesh size and mesh refinement, and acceptable error values even on desktop computers. Catalogue identifier: AENQ_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AENQ_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: GNU General Public License v3.0 No. of lines in distributed program, including test data, etc.: 102793 No. of bytes in distributed program, including test data, etc.: 369378 Distribution format: tar.gz Programming language: Matlab 2010a. Computer: PC, Macintosh. Operating system: Windows, OSX, Linux. RAM: 8 GB (8, 589, 934, 592 bytes) Classification: 4.3. Nature of problem: To solve the Poisson problem in a standard domain with “patchy surface”-type (strongly heterogeneous) Neumann/Dirichlet boundary conditions. Solution method: Finite difference with mesh refinement. Restrictions: Spherical domain in 3D; rectangular domain or a disk in 2D. Unusual features: Choice between mldivide/iterative solver for the solution of large system of linear algebraic equations that arise. Full user control of Neumann/Dirichlet boundary conditions and mesh refinement. Running time: Depending on the number of points taken and the geometry of the domain, the routine may take from less than a second to several hours to execute.
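A minimal finite-difference sketch of the kind of problem this solver targets can be written in a few lines. This toy version handles only homogeneous Dirichlet conditions with Jacobi iteration in Python, rather than the package's Matlab routines with mixed Neumann/Dirichlet patches and mesh refinement; all names are illustrative:

```python
import numpy as np

def poisson_dirichlet(f, n=33, iters=4000):
    """Toy 5-point finite-difference solver for -Laplace(u) = f on the
    unit square with u = 0 on all four edges, using Jacobi iteration.
    The vectorized update reads the old array before assignment, so each
    sweep is a true Jacobi step."""
    h = 1.0 / (n - 1)
    x = np.linspace(0, 1, n)
    X, Y = np.meshgrid(x, x, indexing='ij')
    F = f(X, Y)
    u = np.zeros((n, n))                 # boundary rows/cols stay zero
    for _ in range(iters):
        u[1:-1, 1:-1] = 0.25 * (u[2:, 1:-1] + u[:-2, 1:-1] +
                                u[1:-1, 2:] + u[1:-1, :-2] +
                                h * h * F[1:-1, 1:-1])
    return u

# Manufactured solution: u = sin(pi x) sin(pi y), so f = 2 pi^2 u;
# the center value u(0.5, 0.5) should be close to 1
f = lambda X, Y: 2 * np.pi ** 2 * np.sin(np.pi * X) * np.sin(np.pi * Y)
u = poisson_dirichlet(f, n=33, iters=4000)
```

Jacobi converges slowly (which is why production solvers use direct or preconditioned iterative methods), but on a 33x33 grid a few thousand sweeps suffice.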
OS X and iOS Kernel Programming
Halvorsen, Ole Henry
2011-01-01
OS X and iOS Kernel Programming combines essential operating system and kernel architecture knowledge with a highly practical approach that will help you write effective kernel-level code. You'll learn fundamental concepts such as memory management and thread synchronization, as well as the I/O Kit framework. You'll also learn how to write your own kernel-level extensions, such as device drivers for USB and Thunderbolt devices, including networking, storage and audio drivers. OS X and iOS Kernel Programming provides an incisive and complete introduction to the XNU kernel, which runs iPhones, i
The Classification of Diabetes Mellitus Using Kernel k-means
Alamsyah, M.; Nafisah, Z.; Prayitno, E.; Afida, A. M.; Imah, E. M.
2018-01-01
Diabetes mellitus is a metabolic disorder characterized by chronic hyperglycemia. Automatic detection of diabetes mellitus is still challenging. This study detected diabetes mellitus using the kernel k-means algorithm. Kernel k-means is an algorithm developed from the k-means algorithm: it uses kernel learning and is therefore able to handle data that are not linearly separable, which distinguishes it from common k-means. The performance of kernel k-means in detecting diabetes mellitus is also compared with SOM algorithms. The experimental results show that kernel k-means performs well, and much better than SOM.
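The kernel trick underlying kernel k-means can be sketched directly from a precomputed kernel matrix. This is a generic illustration, not the paper's implementation; the simple deterministic initialization and the demo data are ours:

```python
import numpy as np

def kernel_kmeans(K, k, iters=50):
    """Kernel k-means (illustrative sketch).  K is an n x n kernel matrix.
    Feature-space distances to each cluster mean come purely from K:
      ||phi(x_i) - m_c||^2 = K_ii - (2/|c|) sum_{j in c} K_ij
                             + (1/|c|^2) sum_{j,l in c} K_jl"""
    n = K.shape[0]
    labels = np.arange(n) % k            # simple deterministic init
    for _ in range(iters):
        dist = np.zeros((n, k))
        for c in range(k):
            idx = np.flatnonzero(labels == c)
            if len(idx) == 0:
                dist[:, c] = np.inf      # empty cluster: never chosen
                continue
            Kc = K[np.ix_(idx, idx)].sum() / len(idx) ** 2
            dist[:, c] = (np.diag(K)
                          - 2 * K[:, idx].sum(axis=1) / len(idx) + Kc)
        new = dist.argmin(axis=1)
        if np.array_equal(new, labels):
            break
        labels = new
    return labels

# Demo: two tight, well-separated groups on a line, RBF kernel
X = np.array([[0.0], [0.1], [0.2], [10.0], [10.1], [10.2]])
K = np.exp(-(X - X.T) ** 2)
labels = kernel_kmeans(K, 2)
```

Because only kernel entries are used, the same routine clusters in any implicit feature space; swapping in a nonlinear kernel is what lets it separate data that plain k-means cannot.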
Object classification and detection with context kernel descriptors
DEFF Research Database (Denmark)
Pan, Hong; Olsen, Søren Ingvor; Zhu, Yaping
2014-01-01
Context information is important in object representation. By embedding context cue of image attributes into kernel descriptors, we propose a set of novel kernel descriptors called Context Kernel Descriptors (CKD) for object classification and detection. The motivation of CKD is to use spatial...... consistency of image attributes or features defined within a neighboring region to improve the robustness of descriptor matching in kernel space. For feature selection, Kernel Entropy Component Analysis (KECA) is exploited to learn a subset of discriminative CKD. Different from Kernel Principal Component...
Non-separable pairing interaction kernels applied to superconducting cuprates
International Nuclear Information System (INIS)
Haley, Stephen B.; Fink, Herman J.
2014-01-01
Highlights: • Non-separable interaction kernels with weak interactions produce HTS. • A probabilistic approach is used in filling the electronic states in the unit cell. • A set of coupled equations is derived which describes the energy gap. • SC properties of separable and non-separable interactions are compared. • There is agreement with measured properties of the SC and normal states. - Abstract: A pairing Hamiltonian H(Γ) with a non-separable interaction kernel Γ produces HTS for relatively weak interactions. The doping and temperature dependence of Γ(x,T) and the chemical potential μ(x) is determined by a probabilistic filling of the electronic states in the cuprate unit cell. A diverse set of HTS and normal state properties is examined, including the SC phase transition boundary T_C(x), SC gap Δ(x,T), entropy S(x,T), specific heat C(x,T), and spin susceptibility χ_s(x,T). Detailed x,T agreement with cuprate experiment is obtained for all properties.
Wang, Shunfang; Nie, Bing; Yue, Kun; Fei, Yu; Li, Wenjia; Xu, Dongshu
2017-12-15
Kernel discriminant analysis (KDA) is a dimension reduction and classification algorithm based on the nonlinear kernel trick, which can be used to treat high-dimensional and complex biological data before classification processes such as protein subcellular localization. Kernel parameters have a great impact on the performance of the KDA model. Specifically, for KDA with the popular Gaussian kernel, selecting the scale parameter is still a challenging problem. This paper therefore introduces the KDA method and proposes a new method for Gaussian kernel parameter selection, based on the fact that the differences between the reconstruction errors of edge normal samples and those of interior normal samples should be maximized for suitable kernel parameters. Experiments with various standard data sets for protein subcellular localization show that the overall accuracy of protein classification prediction with KDA is much higher than without it. Meanwhile, the kernel parameter of KDA has a great impact on efficiency, and the proposed method produces an optimum parameter, which makes the new algorithm not only perform as effectively as the traditional ones, but also reduce the computational time and thus improve efficiency.
Kernel abortion in maize. II. Distribution of 14C among kernel carbohydrates
International Nuclear Information System (INIS)
Hanft, J.M.; Jones, R.J.
1986-01-01
This study was designed to compare the uptake and distribution of 14C among fructose, glucose, sucrose, and starch in the cob, pedicel, and endosperm tissues of maize (Zea mays L.) kernels induced to abort by high temperature with those that develop normally. Kernels cultured in vitro at 30 and 35°C were transferred to [14C]sucrose media 10 days after pollination. Kernels cultured at 35°C aborted prior to the onset of linear dry matter accumulation. Significant uptake into the cob, pedicel, and endosperm of radioactivity associated with the soluble and starch fractions of the tissues was detected after 24 hours in culture on labeled media. After 8 days in culture on [14C]sucrose media, 48 and 40% of the radioactivity associated with the cob carbohydrates was found in the reducing sugars at 30 and 35°C, respectively. Of the total carbohydrates, a higher percentage of label was associated with sucrose and a lower percentage with fructose and glucose in pedicel tissue of kernels cultured at 35°C compared to kernels cultured at 30°C. These results indicate that sucrose was not cleaved to fructose and glucose as rapidly during the unloading process in the pedicel of kernels induced to abort by high temperature. Kernels cultured at 35°C had a much lower proportion of label associated with endosperm starch (29%) than did kernels cultured at 30°C (89%). Kernels cultured at 35°C had a correspondingly higher proportion of 14C in endosperm fructose, glucose, and sucrose
Fluidization calculation on nuclear fuel kernel coating
International Nuclear Information System (INIS)
Sukarsono; Wardaya; Indra-Suryawan
1996-01-01
The fluidization of nuclear fuel kernel coating was calculated. The bottom of the reactor was in the form of a cone; on top of the cone there was a cylinder. The diameter of the cylinder for fluidization was 2 cm, and the diameter at the upper part of the cylinder was 3 cm. Fluidization took place in the cone and the first cylinder. The maximum and minimum gas velocities for varied kernel diameters, and the porosity and bed height for varied stream gas velocities, were calculated. The calculation was done with a BASIC program
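The report does not state which correlations were used; as an illustrative sketch, the minimum fluidization velocity of coated-particle kernels is commonly estimated from the Wen and Yu (1966) correlation. All parameter values below are hypothetical:

```python
import math

def u_mf_wen_yu(d_p, rho_p, rho_g, mu, g=9.81):
    """Minimum fluidization velocity [m/s] via the Wen & Yu (1966) correlation.
    d_p: particle diameter [m]; rho_p, rho_g: particle and gas density [kg/m^3];
    mu: gas dynamic viscosity [Pa.s]."""
    Ar = d_p**3 * rho_g * (rho_p - rho_g) * g / mu**2    # Archimedes number
    Re_mf = math.sqrt(33.7**2 + 0.0408 * Ar) - 33.7      # Reynolds number at minimum fluidization
    return Re_mf * mu / (rho_g * d_p)

# Example: 500-um dense kernels in a light carrier gas (illustrative values only)
u = u_mf_wen_yu(d_p=500e-6, rho_p=10000.0, rho_g=1.6, mu=2.2e-5)
```

The maximum velocity is typically bounded by the particle terminal velocity, computed from a drag correlation in the same spirit.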
Reduced multiple empirical kernel learning machine.
Wang, Zhe; Lu, MingZhe; Gao, Daqi
2015-02-01
Multiple kernel learning (MKL) has been demonstrated to be flexible and effective in depicting heterogeneous data sources, since MKL can introduce multiple kernels rather than a single fixed kernel into applications. However, MKL incurs a high time and space complexity in contrast to single kernel learning, which is undesirable in real-world applications. Meanwhile, it is known that the kernel mappings of MKL generally take two forms, implicit kernel mapping and empirical kernel mapping (EKM), of which the latter has attracted less attention. In this paper, we focus on MKL with EKM, and propose a reduced multiple empirical kernel learning machine, named RMEKLM for short. To the best of our knowledge, this is the first work to reduce both the time and space complexity of MKL with EKM. Different from existing MKL, the proposed RMEKLM adopts Gauss elimination to extract a set of feature vectors, and we validate that doing so loses little information from the original feature space. RMEKLM then uses the extracted feature vectors to span a reduced orthonormal subspace of the feature space, which is visualized in terms of its geometric structure. It can be demonstrated that the spanned subspace is isomorphic to the original feature space, meaning that the dot product of two vectors in the original feature space equals that of the two corresponding vectors in the generated orthonormal subspace. More importantly, the proposed RMEKLM requires simpler computation and less storage space, especially during testing. Finally, the experimental results show that RMEKLM achieves an efficient and effective performance in terms of both complexity and classification. The contributions of this paper are as follows: (1) by mapping the input space into an orthonormal subspace, the geometry of the generated subspace is visualized; (2) this paper is the first to reduce both the time and space complexity of EKM-based MKL; (3
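The empirical kernel mapping mentioned above can be sketched in a few lines: factor the training Gram matrix and map each sample to a finite-dimensional vector whose dot products reproduce the kernel values. This is a generic EKM sketch (using an eigendecomposition rather than the paper's Gauss-elimination reduction):

```python
import numpy as np

def empirical_kernel_map(K, tol=1e-10):
    """Empirical kernel mapping: factor the training Gram matrix K = U L U^T and
    map the i-th training sample to L^(-1/2) U^T K[:, i]. Dot products of the
    mapped vectors reproduce the original kernel values."""
    w, U = np.linalg.eigh(K)
    keep = w > tol                      # drop numerically null directions
    w, U = w[keep], U[:, keep]
    Phi = (U / np.sqrt(w)).T @ K        # columns are the mapped samples
    return Phi

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 2))
d2 = np.sum((X[:, None, :] - X[None, :, :])**2, axis=-1)
K = np.exp(-0.5 * d2)                   # Gaussian Gram matrix on the training set
Phi = empirical_kernel_map(K)           # Phi.T @ Phi reproduces K
```

Because the mapped vectors are explicit, ordinary linear algorithms (and reductions such as the one proposed here) can be applied to them directly.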
Comparative Analysis of Kernel Methods for Statistical Shape Learning
National Research Council Canada - National Science Library
Rathi, Yogesh; Dambreville, Samuel; Tannenbaum, Allen
2006-01-01
.... In this work, we perform a comparative analysis of shape learning techniques such as linear PCA, kernel PCA, locally linear embedding and propose a new method, kernelized locally linear embedding...
Variable kernel density estimation in high-dimensional feature spaces
CSIR Research Space (South Africa)
Van der Walt, Christiaan M
2017-02-01
Full Text Available Estimating the joint probability density function of a dataset is a central task in many machine learning applications. In this work we address the fundamental problem of kernel bandwidth estimation for variable kernel density estimation in high...
Influence of differently processed mango seed kernel meal on ...
African Journals Online (AJOL)
Influence of differently processed mango seed kernel meal on performance response of west African ... and TD( consisted spear grass and parboiled mango seed kernel meal with concentrate diet in a ratio of 35:30:35). ...
On methods to increase the security of the Linux kernel
International Nuclear Information System (INIS)
Matvejchikov, I.V.
2014-01-01
Methods to increase the security of the Linux kernel for the implementation of imposed protection tools have been examined. The methods of incorporation into various subsystems of the kernel on the x86 architecture have been described.
Linear and kernel methods for multi- and hypervariate change detection
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg; Canty, Morton J.
2010-01-01
. Principal component analysis (PCA) as well as maximum autocorrelation factor (MAF) and minimum noise fraction (MNF) analyses of IR-MAD images, both linear and kernel-based (which are nonlinear), may further enhance change signals relative to no-change background. The kernel versions are based on a dual...... formulation, also termed Q-mode analysis, in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products of the original data are replaced by inner products between nonlinear mappings into higher dimensional feature space. Via kernel substitution......, also known as the kernel trick, these inner products between the mappings are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of the kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel principal component...
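The dual (Q-mode) formulation described above can be illustrated with a minimal kernel PCA sketch, in which every computation goes through the Gram matrix and the nonlinear mapping never appears explicitly:

```python
import numpy as np

def kernel_pca_scores(K, n_components=2):
    """Kernel PCA via the kernel trick: eigendecompose the centered Gram matrix.
    Returns projections of the samples onto the leading nonlinear components."""
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    Kc = J @ K @ J                             # center the data in feature space
    w, V = np.linalg.eigh(Kc)
    idx = np.argsort(w)[::-1][:n_components]   # leading eigenpairs
    w, V = w[idx], V[:, idx]
    return V * np.sqrt(np.maximum(w, 0.0))     # scores = sqrt(lambda) * eigenvectors

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 3))                   # stand-in for image-band samples
d2 = np.sum((X[:, None, :] - X[None, :, :])**2, axis=-1)
K = np.exp(-d2 / 2.0)                          # kernel replaces the inner products
scores = kernel_pca_scores(K, n_components=2)
```

Kernel MAF/MNF analyses follow the same pattern, replacing the eigenproblem on the centered Gram matrix with the appropriate generalized eigenproblem.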
Kernel methods in orthogonalization of multi- and hypervariate data
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg
2009-01-01
A kernel version of maximum autocorrelation factor (MAF) analysis is described very briefly and applied to change detection in remotely sensed hyperspectral image (HyMap) data. The kernel version is based on a dual formulation also termed Q-mode analysis in which the data enter into the analysis...... via inner products in the Gram matrix only. In the kernel version the inner products are replaced by inner products between nonlinear mappings into higher dimensional feature space of the original data. Via kernel substitution also known as the kernel trick these inner products between the mappings...... are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of this kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel PCA and MAF analysis handle nonlinearities by implicitly transforming data into high (even infinite...
Del Pozzo, W.; Berry, C. P. L.; Ghosh, A.; Haines, T. S. F.; Singer, L. P.; Vecchio, A.
2018-06-01
We reconstruct posterior distributions for the position (sky area and distance) of a simulated set of binary neutron-star gravitational-wave signals observed with Advanced LIGO and Advanced Virgo. We use a Dirichlet process Gaussian-mixture model, a fully Bayesian non-parametric method that can be used to estimate probability density functions with a flexible set of assumptions. The ability to reliably reconstruct the source position is important for multimessenger astronomy, as recently demonstrated with GW170817. We show that for detector networks comparable to the early operation of Advanced LIGO and Advanced Virgo, typical localization volumes are ˜10⁴-10⁵ Mpc³, corresponding to ˜10²-10³ potential host galaxies. The localization volume is a strong function of the network signal-to-noise ratio, scaling roughly as ∝ ϱ_net^(-6). Fractional localizations improve with the addition of further detectors to the network. Our Dirichlet process Gaussian-mixture model can be adopted for localizing events detected during future gravitational-wave observing runs, and used to facilitate prompt multimessenger follow-up.
Heat kernel for Newton-Cartan trace anomalies
Energy Technology Data Exchange (ETDEWEB)
Auzzi, Roberto [Dipartimento di Matematica e Fisica, Università Cattolica del Sacro Cuore, Via Musei 41, Brescia, 25121 (Italy); INFN Sezione di Perugia, Via A. Pascoli, Perugia, 06123 (Italy); Nardelli, Giuseppe [Dipartimento di Matematica e Fisica, Università Cattolica del Sacro Cuore, Via Musei 41, Brescia, 25121 (Italy); TIFPA - INFN, Università di Trento,c/o Dipartimento di Fisica, Povo, TN, 38123 (Italy)
2016-07-11
We compute the leading part of the trace anomaly for a free non-relativistic scalar in 2+1 dimensions coupled to a background Newton-Cartan metric. The anomaly is proportional to 1/m, where m is the mass of the scalar. We comment on the implications of a conjectured a-theorem for non-relativistic theories with boost invariance.
Generalized heat kernel coefficients for a new asymptotic expansion
International Nuclear Information System (INIS)
Osipov, Alexander A.; Hiller, Brigitte
2003-01-01
A method which allows for the asymptotic expansion of the one-loop effective action W = ln det A is formulated. The positive-definite elliptic operator A = U + M² depends on the external classical fields taking values in the Lie algebra of the internal symmetry group G. Unlike the standard Schwinger-DeWitt method, the more general case with the nondegenerate mass matrix M = diag(m1, m2, ...) is considered. The first coefficients of the new asymptotic series are calculated and their relationship with the Seeley-DeWitt coefficients is clarified
One Point Isometric Matching with the Heat Kernel
Ovsjanikov, Maks
2010-09-21
A common operation in many geometry processing algorithms consists of finding correspondences between pairs of shapes by finding structure-preserving maps between them. A particularly useful case of such maps is isometries, which preserve geodesic distances between points on each shape. Although several algorithms have been proposed to find approximately isometric maps between a pair of shapes, the structure of the space of isometries is not well understood. In this paper, we show that under mild genericity conditions, a single correspondence can be used to recover an isometry defined on entire shapes, and thus the space of all isometries can be parameterized by one correspondence between a pair of points. Perhaps surprisingly, this result is general, and does not depend on the dimensionality or the genus, and is valid for compact manifolds in any dimension. Moreover, we show that both the initial correspondence and the isometry can be recovered efficiently in practice. This allows us to devise an algorithm to find intrinsic symmetries of shapes, match shapes undergoing isometric deformations, as well as match partial and incomplete models efficiently. Journal compilation © 2010 The Eurographics Association and Blackwell Publishing Ltd.
Mitigation of artifacts in RTM with migration kernel decomposition
Zhan, Ge
2012-01-01
The migration kernel for reverse-time migration (RTM) can be decomposed into four component kernels using Born scattering and migration theory. Each component kernel has a unique physical interpretation and can be handled differently. In this paper, we present a generalized diffraction-stack migration approach for reducing RTM artifacts via decomposition of the migration kernel. The decomposition leads to an improved understanding of migration artifacts and therefore presents opportunities for improving the quality of RTM images.
Sparse Event Modeling with Hierarchical Bayesian Kernel Methods
2016-01-05
The research objective of this proposal was to develop a predictive Bayesian kernel approach to model count data based on ... several predictive variables. Such an approach, which we refer to as the Poisson Bayesian kernel model, is able to model the rate of occurrence of ... kernel methods made use of: (i) the Bayesian property of improving predictive accuracy as data are dynamically obtained, and (ii) the kernel function
Relationship between attenuation coefficients and dose-spread kernels
International Nuclear Information System (INIS)
Boyer, A.L.
1988-01-01
Dose-spread kernels can be used to calculate the dose distribution in a photon beam by convolving the kernel with the primary fluence distribution. The theoretical relationships between various types and components of dose-spread kernels relative to photon attenuation coefficients are explored. These relations can be valuable as checks on the conservation of energy by dose-spread kernels calculated by analytic or Monte Carlo methods
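The convolution relation described above can be sketched numerically. The Gaussian kernel here is a hypothetical stand-in for a Monte Carlo-generated dose-spread kernel; normalizing it to unit sum encodes the energy-conservation check discussed in the abstract:

```python
import numpy as np
from scipy.signal import fftconvolve

# Toy 2D primary-fluence map: a uniform square field
fluence = np.zeros((64, 64))
fluence[24:40, 24:40] = 1.0

# Illustrative isotropic dose-spread kernel (Gaussian stand-in; real kernels
# come from analytic or Monte Carlo photon/electron transport). Normalizing
# to unit sum makes the convolution conserve total energy.
y, x = np.mgrid[-15:16, -15:16]
kernel = np.exp(-(x**2 + y**2) / (2.0 * 3.0**2))
kernel /= kernel.sum()

dose = fftconvolve(fluence, kernel, mode="same")   # dose = kernel (*) fluence
```

Because the kernel integrates to one, the total dose equals the total primary fluence, which is exactly the conservation property the abstract proposes as a check on computed kernels.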
Fabrication of Uranium Oxycarbide Kernels for HTR Fuel
International Nuclear Information System (INIS)
Barnes, Charles; Richardson, Clay; Nagley, Scott; Hunn, John; Shaber, Eric
2010-01-01
Babcock and Wilcox (B and W) has been producing high quality uranium oxycarbide (UCO) kernels for Advanced Gas Reactor (AGR) fuel tests at the Idaho National Laboratory. In 2005, 350-(micro)m, 19.7% 235U-enriched UCO kernels were produced for the AGR-1 test fuel. Following coating of these kernels and forming the coated-particles into compacts, this fuel was irradiated in the Advanced Test Reactor (ATR) from December 2006 until November 2009. B and W produced 425-(micro)m, 14% enriched UCO kernels in 2008, and these kernels were used to produce fuel for the AGR-2 experiment that was inserted in ATR in 2010. B and W also produced 500-(micro)m, 9.6% enriched UO2 kernels for the AGR-2 experiments. Kernels of the same size and enrichment as AGR-1 were also produced for the AGR-3/4 experiment. In addition to fabricating enriched UCO and UO2 kernels, B and W has produced more than 100 kg of natural uranium UCO kernels which are being used in coating development tests. Successive lots of kernels have demonstrated consistent high quality and also allowed for fabrication process improvements. Improvements in kernel forming were made subsequent to AGR-1 kernel production. Following fabrication of AGR-2 kernels, incremental increases in sintering furnace charge size have been demonstrated. Recently small scale sintering tests using a small development furnace equipped with a residual gas analyzer (RGA) has increased understanding of how kernel sintering parameters affect sintered kernel properties. The steps taken to increase throughput and process knowledge have reduced kernel production costs. Studies have been performed of additional modifications toward the goal of increasing capacity of the current fabrication line to use for production of first core fuel for the Next Generation Nuclear Plant (NGNP) and providing a basis for the design of a full scale fuel fabrication facility.
Consistent Estimation of Pricing Kernels from Noisy Price Data
Vladislav Kargin
2003-01-01
If pricing kernels are assumed non-negative, then the inverse problem of finding the pricing kernel is well-posed. The constrained least squares method provides a consistent estimate of the pricing kernel. When the data are limited, a new method is suggested: relaxed maximization of the relative entropy. This estimator is also consistent. Keywords: $\epsilon$-entropy, non-parametric estimation, pricing kernel, inverse problems.
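The constrained least squares estimate under the non-negativity assumption can be sketched for a toy discrete-state market (all numbers below are hypothetical; this is not the paper's estimator, just the generic non-negative least squares formulation):

```python
import numpy as np
from scipy.optimize import nnls

# Toy discrete-state market: rows are assets, columns are states
payoffs = np.array([[1.0, 1.0, 1.0],    # risk-free bond
                    [0.5, 1.0, 2.0]])   # risky asset
probs = np.array([0.3, 0.4, 0.3])       # state probabilities
prices = np.array([0.95, 1.05])         # observed (noisy) prices

# Price p_i = sum_s probs[s] * m[s] * payoffs[i, s]; solve for m >= 0
# by non-negative least squares on the probability-weighted payoff matrix.
A = payoffs * probs
m, residual = nnls(A, prices)           # estimated pricing kernel per state
```

The non-negativity constraint is what regularizes the inverse problem: without it, many sign-mixing kernels fit the prices equally well.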
Introduction to heat potential theory
Watson, Neil A
2012-01-01
This book is the first to be devoted entirely to the potential theory of the heat equation, and thus deals with time dependent potential theory. Its purpose is to give a logical, mathematically precise introduction to a subject where previously many proofs were not written in detail, due to their similarity with those of the potential theory of Laplace's equation. The approach to subtemperatures is a recent one, based on the Poisson integral representation of temperatures on a circular cylinder. Characterizations of subtemperatures in terms of heat balls and modified heat balls are proved, and thermal capacity is studied in detail. The generalized Dirichlet problem on arbitrary open sets is given a treatment that reflects its distinctive nature for an equation of parabolic type. Also included is some new material on caloric measure for arbitrary open sets. Each chapter concludes with bibliographical notes and open questions. The reader should have a good background in the calculus of functions of several vari...
Quantum logic in dagger kernel categories
Heunen, C.; Jacobs, B.P.F.
2009-01-01
This paper investigates quantum logic from the perspective of categorical logic, and starts from minimal assumptions, namely the existence of involutions/daggers and kernels. The resulting structures turn out to (1) encompass many examples of interest, such as categories of relations, partial
Quantum logic in dagger kernel categories
Heunen, C.; Jacobs, B.P.F.; Coecke, B.; Panangaden, P.; Selinger, P.
2011-01-01
This paper investigates quantum logic from the perspective of categorical logic, and starts from minimal assumptions, namely the existence of involutions/daggers and kernels. The resulting structures turn out to (1) encompass many examples of interest, such as categories of relations, partial
Symbol recognition with kernel density matching.
Zhang, Wan; Wenyin, Liu; Zhang, Kun
2006-12-01
We propose a novel approach to similarity assessment for graphic symbols. Symbols are represented as 2D kernel densities and their similarity is measured by the Kullback-Leibler divergence. Symbol orientation is found by gradient-based angle searching or independent component analysis. Experimental results show the outstanding performance of this approach in various situations.
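A minimal sketch of the idea: represent each symbol's points as a 2D kernel density evaluated on a grid, and compare densities with the Kullback-Leibler divergence (the orientation search is omitted; point sets and bandwidth are hypothetical):

```python
import numpy as np

def kde_grid(points, grid, bw=0.1):
    """Evaluate a Gaussian kernel density for a 2D point set on a fixed grid,
    then normalize to a discrete probability distribution."""
    d2 = np.sum((grid[:, None, :] - points[None, :, :])**2, axis=-1)
    dens = np.exp(-d2 / (2.0 * bw**2)).sum(axis=1)
    return dens / dens.sum()

def kl_divergence(p, q, eps=1e-12):
    """Discrete Kullback-Leibler divergence D(p || q), smoothed to avoid log 0."""
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q)))

xs = np.linspace(0.0, 1.0, 20)
grid = np.stack(np.meshgrid(xs, xs), axis=-1).reshape(-1, 2)

symbol_a = np.array([[0.2, 0.2], [0.8, 0.8]])
symbol_b = np.array([[0.21, 0.19], [0.79, 0.81]])   # near-duplicate of symbol_a
symbol_c = np.array([[0.2, 0.8], [0.8, 0.2]])       # a different symbol

p, q, r = (kde_grid(s, grid) for s in (symbol_a, symbol_b, symbol_c))
# Similar symbols yield a much smaller divergence than dissimilar ones
```

Real symbols would contribute many more sample points, but the matching criterion is the same.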
Flexible Scheduling in Multimedia Kernels: An Overview
Jansen, P.G.; Scholten, Johan; Laan, Rene; Chow, W.S.
1999-01-01
Current Hard Real-Time (HRT) kernels have their timely behaviour guaranteed at the cost of a rather restrictive use of the available resources. This makes current HRT scheduling techniques inadequate for use in a multimedia environment, where we can make a considerable profit by a better and more
Reproducing kernel Hilbert spaces of Gaussian priors
Vaart, van der A.W.; Zanten, van J.H.; Clarke, B.; Ghosal, S.
2008-01-01
We review definitions and properties of reproducing kernel Hilbert spaces attached to Gaussian variables and processes, with a view to applications in nonparametric Bayesian statistics using Gaussian priors. The rate of contraction of posterior distributions based on Gaussian priors can be described
A synthesis of empirical plant dispersal kernels
Czech Academy of Sciences Publication Activity Database
Bullock, J. M.; González, L. M.; Tamme, R.; Götzenberger, Lars; White, S. M.; Pärtel, M.; Hooftman, D. A. P.
2017-01-01
Roč. 105, č. 1 (2017), s. 6-19 ISSN 0022-0477 Institutional support: RVO:67985939 Keywords : dispersal kernel * dispersal mode * probability density function Subject RIV: EH - Ecology, Behaviour OBOR OECD: Ecology Impact factor: 5.813, year: 2016
Analytic continuation of weighted Bergman kernels
Czech Academy of Sciences Publication Activity Database
Engliš, Miroslav
2010-01-01
Roč. 94, č. 6 (2010), s. 622-650 ISSN 0021-7824 R&D Projects: GA AV ČR IAA100190802 Keywords : Bergman kernel * analytic continuation * Toeplitz operator Subject RIV: BA - General Mathematics Impact factor: 1.450, year: 2010 http://www.sciencedirect.com/science/article/pii/S0021782410000942
On convergence of kernel learning estimators
Norkin, V.I.; Keyzer, M.A.
2009-01-01
The paper studies convex stochastic optimization problems in a reproducing kernel Hilbert space (RKHS). The objective (risk) functional depends on functions from this RKHS and takes the form of a mathematical expectation (integral) of a nonnegative integrand (loss function) over a probability
Analytic properties of the Virasoro modular kernel
Energy Technology Data Exchange (ETDEWEB)
Nemkov, Nikita [Moscow Institute of Physics and Technology (MIPT), Dolgoprudny (Russian Federation); Institute for Theoretical and Experimental Physics (ITEP), Moscow (Russian Federation); National University of Science and Technology MISIS, The Laboratory of Superconducting metamaterials, Moscow (Russian Federation)
2017-06-15
On the space of generic conformal blocks the modular transformation of the underlying surface is realized as a linear integral transformation. We show that the analytic properties of conformal block implied by Zamolodchikov's formula are shared by the kernel of the modular transformation and illustrate this by explicit computation in the case of the one-point toric conformal block. (orig.)
Kernel based subspace projection of hyperspectral images
DEFF Research Database (Denmark)
Larsen, Rasmus; Nielsen, Allan Aasbjerg; Arngren, Morten
In hyperspectral image analysis an exploratory approach to analyse the image data is to conduct subspace projections. As linear projections often fail to capture the underlying structure of the data, we present kernel based subspace projections of PCA and Maximum Autocorrelation Factors (MAF...
Kernel Temporal Differences for Neural Decoding
Bae, Jihye; Sanchez Giraldo, Luis G.; Pohlmeyer, Eric A.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.
2015-01-01
We study the feasibility and capability of the kernel temporal difference (KTD)(λ) algorithm for neural decoding. KTD(λ) is an online, kernel-based learning algorithm, which has been introduced to estimate value functions in reinforcement learning. This algorithm combines kernel-based representations with the temporal difference approach to learning. One of our key observations is that by using strictly positive definite kernels, the algorithm's convergence can be guaranteed for policy evaluation. The algorithm's nonlinear functional approximation capabilities are shown in both simulations of policy evaluation and neural decoding problems (policy improvement). KTD can handle high-dimensional neural states containing spatial-temporal information at a reasonable computational complexity, allowing real-time applications. When the algorithm seeks a proper mapping between a monkey's neural states and desired positions of a computer cursor or a robot arm, in both open-loop and closed-loop experiments, it can effectively learn the neural state to action mapping. Finally, a visualization of the coadaptation process between the decoder and the subject shows the algorithm's capabilities in reinforcement learning brain machine interfaces. PMID:25866504
Scattering kernels and cross sections working group
International Nuclear Information System (INIS)
Russell, G.; MacFarlane, B.; Brun, T.
1998-01-01
Topics addressed by this working group are: (1) immediate needs of the cold-moderator community and how to fill them; (2) synthetic scattering kernels; (3) very simple synthetic scattering functions; (4) measurements of interest; and (5) general issues. Brief summaries are given for each of these topics
Enhanced gluten properties in soft kernel durum wheat
Soft kernel durum wheat is a relatively recent development (Morris et al. 2011 Crop Sci. 51:114). The soft kernel trait exerts profound effects on kernel texture, flour milling including break flour yield, milling energy, and starch damage, and dough water absorption (DWA). With the caveat of reduce...
Predictive Model Equations for Palm Kernel (Elaeis guneensis J ...
African Journals Online (AJOL)
Estimated errors of ±0.18 and ±0.2 are envisaged while applying the models for predicting palm kernel and sesame oil colours, respectively. Keywords: Palm kernel, Sesame, Oil Colour, Process Parameters, Model. Journal of Applied Science, Engineering and Technology Vol. 6 (1) 2006 pp. 34-38 ...
Stable Kernel Representations as Nonlinear Left Coprime Factorizations
Paice, A.D.B.; Schaft, A.J. van der
1994-01-01
A representation of nonlinear systems based on the idea of representing the input-output pairs of the system as elements of the kernel of a stable operator has been recently introduced. This has been denoted the kernel representation of the system. In this paper it is demonstrated that the kernel
7 CFR 981.60 - Determination of kernel weight.
2010-01-01
... 7 Agriculture 8 2010-01-01 2010-01-01 false Determination of kernel weight. 981.60 Section 981.60... Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which settlement...
21 CFR 176.350 - Tamarind seed kernel powder.
2010-04-01
... 21 Food and Drugs 3 2010-04-01 2009-04-01 true Tamarind seed kernel powder. 176.350 Section 176... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in producing...
End-use quality of soft kernel durum wheat
Kernel texture is a major determinant of end-use quality of wheat. Durum wheat has very hard kernels. We developed soft kernel durum wheat via Ph1b-mediated homoeologous recombination. The Hardness locus was transferred from Chinese Spring to Svevo durum wheat via back-crossing. ‘Soft Svevo’ had SKC...
A Fast and Simple Graph Kernel for RDF
de Vries, G.K.D.; de Rooij, S.
2013-01-01
In this paper we study a graph kernel for RDF based on constructing a tree for each instance and counting the number of paths in that tree. In our experiments this kernel shows comparable classification performance to the previously introduced intersection subtree kernel, but is significantly faster
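The path-counting idea can be sketched generically (an illustrative toy, not the paper's exact kernel): represent each instance tree by its multiset of downward label paths and take the dot product of the path counts:

```python
from collections import Counter

def label_paths(tree, node, prefix=()):
    """Enumerate all downward label paths starting below `node` in an instance
    tree given as a dict: node -> list of child nodes (labels are node names)."""
    paths = []
    for child in tree.get(node, []):
        p = prefix + (child,)
        paths.append(p)
        paths.extend(label_paths(tree, child, p))
    return paths

def path_count_kernel(tree_a, root_a, tree_b, root_b):
    """Kernel value = dot product of the two path-count feature vectors."""
    ca = Counter(label_paths(tree_a, root_a))
    cb = Counter(label_paths(tree_b, root_b))
    return sum(ca[p] * cb[p] for p in ca)

# Hypothetical RDF-style instance trees rooted at the instance node "r"
t1 = {"r": ["type", "name"], "type": ["Person"]}
t2 = {"r": ["type"], "type": ["Person"]}
k12 = path_count_kernel(t1, "r", t2, "r")
```

Counting paths is cheap relative to computing subtree intersections, which is the source of the speed-up claimed above.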
7 CFR 981.61 - Redetermination of kernel weight.
2010-01-01
... 7 Agriculture 8 2010-01-01 2010-01-01 false Redetermination of kernel weight. 981.61 Section 981... GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.61 Redetermination of kernel weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds...
Single pass kernel k-means clustering method
Indian Academy of Sciences (India)
paper proposes a simple and faster version of the kernel k-means clustering ... It has been considered as an important tool ... On the other hand, kernel-based clustering methods, like kernel k-means clus- ..... able at the UCI machine learning repository (Murphy 1994). ... All the data sets have only numeric valued features.
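A minimal kernel k-means sketch (the batch algorithm, not the single-pass variant proposed in the record above): distances to cluster means in feature space are computed entirely through the Gram matrix, never through explicit feature vectors:

```python
import numpy as np

def kernel_kmeans(K, k, init, n_iter=50):
    """Kernel k-means using only the Gram matrix K. The squared distance of
    point i to the mean of cluster c, up to the constant K_ii term, equals
    -2*mean_j(K_ij) + mean_jl(K_jl) over j, l in cluster c."""
    labels = np.asarray(init).copy()
    n = K.shape[0]
    for _ in range(n_iter):
        dist = np.full((n, k), np.inf)
        for c in range(k):
            mask = labels == c
            m = mask.sum()
            if m == 0:
                continue
            dist[:, c] = (-2.0 * K[:, mask].sum(axis=1) / m
                          + K[np.ix_(mask, mask)].sum() / m**2)
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels

# Two well-separated blobs; a linear kernel suffices for this illustration
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0.0, 0.1, (10, 2)), rng.normal(5.0, 0.1, (10, 2))])
K = X @ X.T
init = np.zeros(20, dtype=int)
init[-1] = 1                      # seed the second cluster with one point
labels = kernel_kmeans(K, 2, init)
```

A nonlinear kernel (e.g. Gaussian) in place of `X @ X.T` lets the same code separate clusters that are not linearly separable.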
Scuba: scalable kernel-based gene prioritization.
Zampieri, Guido; Tran, Dinh Van; Donini, Michele; Navarin, Nicolò; Aiolli, Fabio; Sperduti, Alessandro; Valle, Giorgio
2018-01-25
The uncovering of genes linked to human diseases is a pressing challenge in molecular biology and precision medicine. This task is often hindered by the large number of candidate genes and by the heterogeneity of the available information. Computational methods for the prioritization of candidate genes can help to cope with these problems. In particular, kernel-based methods are a powerful resource for the integration of heterogeneous biological knowledge; however, their practical implementation is often precluded by their limited scalability. We propose Scuba, a scalable kernel-based method for gene prioritization. It implements a novel multiple kernel learning approach, based on a semi-supervised perspective and on the optimization of the margin distribution. Scuba is optimized to cope with strongly unbalanced settings where known disease genes are few and large-scale predictions are required. Importantly, it is able to deal efficiently both with a large number of candidate genes and with an arbitrary number of data sources. As a direct consequence of scalability, Scuba also integrates a new efficient strategy to select optimal kernel parameters for each data source. We performed cross-validation experiments and simulated a realistic usage setting, showing that Scuba outperforms a wide range of state-of-the-art methods. Scuba achieves state-of-the-art performance and has enhanced scalability compared to existing kernel-based approaches for genomic data. This method can be useful to prioritize candidate genes, particularly when their number is large or when input data is highly heterogeneous. The code is freely available at https://github.com/gzampieri/Scuba.
Directory of Open Access Journals (Sweden)
Maryam Asnaashari
2015-01-01
Full Text Available In this study, in order to introduce a natural antioxidative vegetable oil to the food industry, kolkhoung hull oil and kernel oil were extracted. To evaluate their antioxidant efficiency, gas chromatography analysis of the fatty acid composition of kolkhoung hull and kernel oil and high-performance liquid chromatography analysis of tocopherols were done. Also, the oxidative stability of the oil was assessed based on the peroxide value and anisidine value during heating at 100, 110 and 120 °C. Gas chromatography analysis showed that oleic acid was the major fatty acid of both types of oil (hull and kernel) and, based on a low content of saturated fatty acids, a high content of monounsaturated fatty acids, and the ratio of ω-6 to ω-3 polyunsaturated fatty acids, they were nutritionally well balanced. Moreover, both hull and kernel oil showed high oxidative stability during heating, which can be attributed to a high content of tocotrienols. Based on the results, kolkhoung hull oil performed slightly better than its kernel oil. However, both of them can be added to oxidation-sensitive oils to improve their shelf life.
Kernel based orthogonalization for change detection in hyperspectral images
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg
function and all quantities needed in the analysis are expressed in terms of this kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel PCA and MNF analyses handle nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via...... analysis all 126 spectral bands of the HyMap are included. Changes on the ground are most likely due to harvest having taken place between the two acquisitions and solar effects (both solar elevation and azimuth have changed). Both types of kernel analysis emphasize change and unlike kernel PCA, kernel MNF...
A laser optical method for detecting corn kernel defects
Energy Technology Data Exchange (ETDEWEB)
Gunasekaran, S.; Paulsen, M. R.; Shove, G. C.
1984-01-01
An opto-electronic instrument was developed to examine individual corn kernels and detect various kernel defects according to reflectance differences. A low power helium-neon (He-Ne) laser (632.8 nm, red light) was used as the light source in the instrument. Reflectance from good and defective parts of corn kernel surfaces differed by approximately 40%. Broken, chipped, and starch-cracked kernels were detected with nearly 100% accuracy; while surface-split kernels were detected with about 80% accuracy. (author)
Generalization Performance of Regularized Ranking With Multiscale Kernels.
Zhou, Yicong; Chen, Hong; Lan, Rushi; Pan, Zhibin
2016-05-01
The regularized kernel method for the ranking problem has attracted increasing attention in machine learning. Previous regularized ranking algorithms are usually based on reproducing kernel Hilbert spaces with a single kernel. In this paper, we go beyond this framework by investigating the generalization performance of regularized ranking with multiscale kernels. A novel ranking algorithm with multiscale kernels is proposed and its representer theorem is proved. We establish an upper bound on the generalization error in terms of the complexity of the hypothesis spaces. It shows that the multiscale ranking algorithm can achieve satisfactory learning rates under mild conditions. Experiments demonstrate the effectiveness of the proposed method for drug discovery and recommendation tasks.
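A multiscale kernel can be as simple as a sum of Gaussian kernels at several widths, which is again a valid (positive semi-definite) kernel. The sketch below uses such a kernel in plain kernel ridge regression rather than the paper's ranking setting; the widths, regularization and data are illustrative, not the authors' choices.

```python
import numpy as np

def multiscale_gram(X, Z, sigmas=(0.1, 1.0, 10.0)):
    # A sum of Gaussian kernels at several widths is again a positive
    # semi-definite kernel, capturing structure at multiple scales.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return sum(np.exp(-d2 / (2 * s ** 2)) for s in sigmas)

def fit_ridge(X, y, lam=1e-2):
    # Regularized least squares in the RKHS of the multiscale kernel;
    # by the representer theorem the solution is a kernel expansion.
    K = multiscale_gram(X, X)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def predict(Xtr, alpha, Xte):
    return multiscale_gram(Xte, Xtr) @ alpha
```

The same Gram-matrix construction would drop into a pairwise ranking loss; only the fitting step changes.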
Windows Vista Kernel-Mode: Functions, Security Enhancements and Flaws
Directory of Open Access Journals (Sweden)
Mohammed D. ABDULMALIK
2008-06-01
Full Text Available Microsoft has made substantial enhancements to the kernel of the Microsoft Windows Vista operating system. Kernel improvements are significant because the kernel provides low-level operating system functions, including thread scheduling, interrupt and exception dispatching, multiprocessor synchronization, and a set of routines and basic objects. This paper describes some of the kernel security enhancements in the 64-bit edition of Windows Vista. We also point out some areas of weakness (flaws) that can be attacked by malicious software, leading to compromise of the kernel.
Difference between standard and quasi-conformal BFKL kernels
International Nuclear Information System (INIS)
Fadin, V.S.; Fiore, R.; Papa, A.
2012-01-01
As was recently shown, the colour singlet BFKL kernel, taken in the Möbius representation in the space of impact parameters, can be written in a quasi-conformal shape, which is remarkably simple compared with the conventional form of the BFKL kernel in momentum space. It was also proved that the total kernel is completely defined by its Möbius representation. In this paper we calculate the difference between the standard and quasi-conformal BFKL kernels in momentum space and find that it is rather simple. We therefore conclude that the simplicity of the quasi-conformal kernel is caused mainly by the use of the impact parameter space.
UO{sub 2} Kernel Preparation by M-EG Process and Its Irradiation Test
Energy Technology Data Exchange (ETDEWEB)
Jeong, K. C.; Eom, S. H.; Kim, Y. K.; Yeo, S. H.; Kim, Y. M.; Kim, B. G.; Cho, M. S. [KAERI, Daejeon (Korea, Republic of)
2016-05-15
Kernels of KAERI TRISO fuels are prepared in the following steps: (1) preparation of a raw material solution (UN solution) by dissolving UO{sub 3} (or U{sub 3}O{sub 8}) powder in concentrated HNO{sub 3}; (2) broth preparation and physical property control by mixing UN, THFA, PVA, and H{sub 2}O; (3) preparation of spherical liquid gel droplets and dried ADU gels in sequence through a reaction between uranyl ions and ammonia ions in a gelation column; (4) ageing, washing, and drying of the ADU gel using AWD equipment; (5) UO{sub 3} calcination by thermal decomposition of the dried ADU gel in air; (6) fabrication of the UO{sub 2} kernel by reducing the UO{sub 3} and sintering in H{sub 2}. In this study, improved KAERI processes for UO{sub 2} kernel preparation are presented. The ADU gel washing procedure in the AWD process and the heating mode in the sintering process were modified, and the resulting internal structures of the UO{sub 2} kernels are presented.
Gesztesy, Fritz; Mitrea, Marius
2008-01-01
We study generalized Robin boundary conditions, Robin-to-Dirichlet maps, and Krein-type resolvent formulas for Schr\\"odinger operators on bounded Lipschitz domains in $\\bbR^n$, $n\\ge 2$. We also discuss the case of bounded $C^{1,r}$-domains, $(1/2)
Directory of Open Access Journals (Sweden)
Luisa Toscano
2016-01-01
Full Text Available A new result on the solvability of a wide class of systems of variational equations depending on parameters and governed by nonmonotone operators is found in a real reflexive Banach space, with applications to Dirichlet and Neumann problems related to nonlinear elliptic systems.
International Nuclear Information System (INIS)
Lu Junguo; Lu Linji
2009-01-01
In this paper, the global exponential stability and periodicity of a class of reaction-diffusion recurrent neural networks with distributed delays and Dirichlet boundary conditions are studied by constructing suitable Lyapunov functionals and utilizing some inequality techniques. We first prove global exponential convergence to 0 of the difference between any two solutions of the original neural networks; the existence and uniqueness of the equilibrium are direct results of this procedure. This approach differs from the usual one, in which existence, uniqueness of the equilibrium and stability are proved in two separate steps. Secondly, we prove periodicity. Sufficient conditions ensuring the existence, uniqueness, and global exponential stability of the equilibrium and periodic solution are given. These conditions are easy to verify, and our results play an important role in the design and application of globally exponentially stable neural circuits and periodic oscillatory neural circuits.
Bouleau, Nicolas
2015-01-01
A simplified approach to Malliavin calculus adapted to Poisson random measures is developed and applied in this book. Called the “lent particle method” it is based on perturbation of the position of particles. Poisson random measures describe phenomena involving random jumps (for instance in mathematical finance) or the random distribution of particles (as in statistical physics). Thanks to the theory of Dirichlet forms, the authors develop a mathematical tool for a quite general class of random Poisson measures and significantly simplify computations of Malliavin matrices of Poisson functionals. The method gives rise to a new explicit calculus that they illustrate on various examples: it consists in adding a particle and then removing it after computing the gradient. Using this method, one can establish absolute continuity of Poisson functionals such as Lévy areas, solutions of SDEs driven by Poisson measure and, by iteration, obtain regularity of laws. The authors also give applications to error calcul...
Analytic scattering kernels for neutron thermalization studies
International Nuclear Information System (INIS)
Sears, V.F.
1990-01-01
Current plans call for the inclusion of a liquid hydrogen or deuterium cold source in the NRU replacement vessel. This report is part of an ongoing study of neutron thermalization in such a cold source. Here, we develop a simple analytical model for the scattering kernel of monatomic and diatomic liquids. We also present the results of extensive numerical calculations based on this model for liquid hydrogen, liquid deuterium, and mixtures of the two. These calculations demonstrate the dependence of the scattering kernel on the incident and scattered-neutron energies, the behavior near rotational thresholds, the dependence on the centre-of-mass pair correlations, the dependence on the ortho concentration, and the dependence on the deuterium concentration in H 2 /D 2 mixtures. The total scattering cross sections are also calculated and compared with available experimental results.
Quantized kernel least mean square algorithm.
Chen, Badong; Zhao, Songlin; Zhu, Pingping; Príncipe, José C
2012-01-01
In this paper, we propose a quantization approach, as an alternative to sparsification, to curb the growth of the radial basis function structure in kernel adaptive filtering. The basic idea behind this method is to quantize and hence compress the input (or feature) space. Different from sparsification, the new approach uses the "redundant" data to update the coefficient of the closest center. In particular, a quantized kernel least mean square (QKLMS) algorithm is developed, which is based on a simple online vector quantization method. An analytical study of the mean square convergence has been carried out. The energy conservation relation for QKLMS is established, and on this basis we arrive at a sufficient condition for mean square convergence, and lower and upper bounds on the theoretical value of the steady-state excess mean square error. Static function estimation and short-term chaotic time-series prediction examples are presented to demonstrate the excellent performance.
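The quantization idea can be sketched directly from the abstract: an input within a distance ε of an existing center updates that center's coefficient instead of growing the radial-basis network. A minimal numpy version follows; the step size, kernel width and quantization radius are illustrative, not the paper's analysed values.

```python
import numpy as np

def gauss(x, c, sigma=1.0):
    # Gaussian kernel between an input x and a stored center c.
    return np.exp(-np.sum((x - c) ** 2) / (2 * sigma ** 2))

class QKLMS:
    # Quantized kernel LMS: inputs close to an existing center merge
    # into it, curbing network growth (the core idea of the abstract).
    def __init__(self, eta=0.5, eps=0.3, sigma=0.5):
        self.eta, self.eps, self.sigma = eta, eps, sigma
        self.centers, self.alphas = [], []

    def predict(self, x):
        return sum(a * gauss(x, c, self.sigma)
                   for a, c in zip(self.alphas, self.centers))

    def update(self, x, y):
        e = y - self.predict(x)           # prediction error
        if self.centers:
            d = [np.linalg.norm(x - c) for c in self.centers]
            j = int(np.argmin(d))
            if d[j] <= self.eps:          # quantize: update closest center
                self.alphas[j] += self.eta * e
                return e
        self.centers.append(np.array(x, dtype=float))  # else grow network
        self.alphas.append(self.eta * e)
        return e
```

With ε = 0 this reduces to ordinary KLMS, where every sample adds a center.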
Kernel-based tests for joint independence
DEFF Research Database (Denmark)
Pfister, Niklas; Bühlmann, Peter; Schölkopf, Bernhard
2018-01-01
if the $d$ variables are jointly independent, as long as the kernel is characteristic. Based on an empirical estimate of dHSIC, we define three different non-parametric hypothesis tests: a permutation test, a bootstrap test and a test based on a Gamma approximation. We prove that the permutation test......We investigate the problem of testing whether $d$ random variables, which may or may not be continuous, are jointly (or mutually) independent. Our method builds on ideas of the two variable Hilbert-Schmidt independence criterion (HSIC) but allows for an arbitrary number of variables. We embed...... the $d$-dimensional joint distribution and the product of the marginals into a reproducing kernel Hilbert space and define the $d$-variable Hilbert-Schmidt independence criterion (dHSIC) as the squared distance between the embeddings. In the population case, the value of dHSIC is zero if and only...
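The two-variable Hilbert-Schmidt independence criterion that dHSIC generalizes can be sketched in a few lines of numpy. This is the biased empirical estimator with Gaussian kernels; the kernel width and data are illustrative, not from the paper.

```python
import numpy as np

def gram(x, sigma=1.0):
    # Gaussian Gram matrix of a 1-D sample.
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))

def hsic(x, y, sigma=1.0):
    # Biased empirical HSIC: squared RKHS distance between the joint
    # embedding and the product of the marginal embeddings.
    n = len(x)
    K, L = gram(x, sigma), gram(y, sigma)
    H = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    return np.trace(K @ H @ L @ H) / n ** 2
```

dHSIC replaces the two Gram matrices with one per variable; the permutation, bootstrap and Gamma tests of the paper then calibrate this statistic.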
Wilson Dslash Kernel From Lattice QCD Optimization
Energy Technology Data Exchange (ETDEWEB)
Joo, Balint [Jefferson Lab, Newport News, VA; Smelyanskiy, Mikhail [Parallel Computing Lab, Intel Corporation, California, USA; Kalamkar, Dhiraj D. [Parallel Computing Lab, Intel Corporation, India; Vaidyanathan, Karthikeyan [Parallel Computing Lab, Intel Corporation, India
2015-07-01
Lattice Quantum Chromodynamics (LQCD) is a numerical technique used for calculations in theoretical nuclear and high energy physics. LQCD is traditionally one of the first applications ported to many new high performance computing architectures, and indeed LQCD practitioners have been known to design and build custom LQCD computers. Lattice QCD kernels are frequently used as benchmarks (e.g. 168.wupwise in the SPEC suite) and are generally well understood, and as such are ideal to illustrate several optimization techniques. In this chapter we detail our work in optimizing the Wilson-Dslash kernels for the Intel Xeon Phi; however, as we will show, the techniques give excellent performance on the regular Xeon architecture as well.
Gosses, Moritz; Nowak, Wolfgang; Wöhling, Thomas
2018-05-01
In recent years, proper orthogonal decomposition (POD) has become a popular model reduction method in the field of groundwater modeling. It is used to mitigate the problem of long run times that are often associated with physically-based modeling of natural systems, especially for parameter estimation and uncertainty analysis. POD-based techniques reproduce groundwater head fields sufficiently accurately for a variety of applications. However, no study has investigated how POD techniques affect the accuracy of the different boundary conditions found in groundwater models. We show that the current treatment of boundary conditions in POD causes inaccuracies for these boundaries in the reduced models. We provide an improved method that splits the POD projection space into a subspace orthogonal to the boundary conditions and a separate subspace that enforces the boundary conditions. To test the method for Dirichlet, Neumann and Cauchy boundary conditions, four simple transient 1D groundwater models, as well as a more complex 3D model, are set up and reduced both by standard POD and by POD with the new extension. We show that, in contrast to standard POD, the new method satisfies both Dirichlet and Neumann boundary conditions. It can also be applied to Cauchy boundaries, where the flux error of standard POD is reduced by its head-independent contribution. The extension essentially shifts the focus of the projection towards the boundary conditions. Therefore, we see a slight trade-off between errors at model boundaries and overall accuracy of the reduced model. The proposed POD extension is recommended where exact treatment of boundary conditions is required.
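The baseline that the proposed extension improves on, standard POD, amounts to an SVD of a snapshot matrix followed by projection onto the leading left singular vectors. A minimal numpy sketch of that baseline follows; the paper's boundary-splitting subspaces are not reproduced here, and the snapshot data in the test is synthetic.

```python
import numpy as np

def pod_basis(snapshots, r):
    # Columns of `snapshots` are full-order head fields at different
    # times/parameters; the leading r left singular vectors form the
    # reduced basis.
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :r]

def reduce_and_reconstruct(field, basis):
    # Project a full-order field onto the POD subspace and lift it back;
    # boundary values are only approximated, which is the inaccuracy
    # the paper's extension addresses.
    return basis @ (basis.T @ field)
```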
A Kernel for Protein Secondary Structure Prediction
Guermeur , Yann; Lifchitz , Alain; Vert , Régis
2004-01-01
http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=10338&mode=toc; International audience; Multi-class support vector machines have already proved efficient in protein secondary structure prediction as ensemble methods, to combine the outputs of sets of classifiers based on different principles. In this chapter, their implementation as basic prediction methods, processing the primary structure or the profile of multiple alignments, is investigated. A kernel devoted to the task is in...
Scalar contribution to the BFKL kernel
International Nuclear Information System (INIS)
Gerasimov, R. E.; Fadin, V. S.
2010-01-01
The contribution of scalar particles to the kernel of the Balitsky-Fadin-Kuraev-Lipatov (BFKL) equation is calculated. A large cancellation between the virtual and real parts of this contribution, analogous to the cancellation in the quark contribution in QCD, is observed. The reason for this cancellation is identified; it has a common nature for particles of any spin. Understanding this reason permits one to obtain the total contribution without the complicated calculations that would be necessary to find the separate pieces.
Weighted Bergman Kernels for Logarithmic Weights
Czech Academy of Sciences Publication Activity Database
Engliš, Miroslav
2010-01-01
Roč. 6, č. 3 (2010), s. 781-813 ISSN 1558-8599 R&D Projects: GA AV ČR IAA100190802 Keywords : Bergman kernel * Toeplitz operator * logarithmic weight * pseudodifferential operator Subject RIV: BA - General Mathematics Impact factor: 0.462, year: 2010 http://www.intlpress.com/site/pub/pages/journals/items/pamq/content/vols/0006/0003/a008/
Exploiting graph kernels for high performance biomedical relation extraction.
Panyam, Nagesh C; Verspoor, Karin; Cohn, Trevor; Ramamohanarao, Kotagiri
2018-01-30
Relation extraction from biomedical publications is an important task in the area of semantic mining of text. Kernel methods for supervised relation extraction are often preferred over manual feature engineering methods when classifying highly ordered structures such as trees and graphs obtained from syntactic parsing of a sentence. Tree kernels such as the Subset Tree Kernel and Partial Tree Kernel have been shown to be effective for classifying constituency parse trees and basic dependency parse graphs of a sentence. Graph kernels such as the All Path Graph kernel (APG) and Approximate Subgraph Matching (ASM) kernel have been shown to be suitable for classifying general graphs with cycles, such as the enhanced dependency parse graph of a sentence. In this work, we present a high performance Chemical-Induced Disease (CID) relation extraction system. We present a comparative study of kernel methods for the CID task and also extend our study to the Protein-Protein Interaction (PPI) extraction task, an important biomedical relation extraction task. We discuss novel modifications to the ASM kernel to boost its performance and a method to apply graph kernels for extracting relations expressed in multiple sentences. Our system for CID relation extraction attains an F-score of 60%, without using external knowledge sources or task-specific heuristics or rules. In comparison, the state-of-the-art Chemical-Disease Relation Extraction system achieves an F-score of 56% using an ensemble of multiple machine learning methods, which is then boosted to 61% with a rule-based system employing task-specific post-processing rules. For the CID task, graph kernels outperform tree kernels substantially, and the best performance is obtained with the APG kernel, which attains an F-score of 60%, followed by the ASM kernel at 57%. The performance difference between the ASM and APG kernels for CID sentence-level relation extraction is not significant. In our evaluation of ASM for the PPI task, ASM
Identification of Fusarium damaged wheat kernels using image analysis
Directory of Open Access Journals (Sweden)
Ondřej Jirsa
2011-01-01
Full Text Available Visual evaluation of kernels damaged by Fusarium spp. pathogens is labour intensive and, due to its subjective approach, can lead to inconsistencies. Digital imaging technology combined with appropriate statistical methods can provide a much faster and more accurate evaluation of the proportion of visually scabby kernels. The aim of the present study was to develop a discrimination model to identify wheat kernels infected by Fusarium spp. using digital image analysis and statistical methods. Winter wheat kernels from field experiments were evaluated visually as healthy or damaged. Deoxynivalenol (DON) content was determined in individual kernels using an ELISA method. Images of individual kernels were produced using a digital camera on a dark background. Colour and shape descriptors were obtained by image analysis from the area representing the kernel. Healthy and damaged kernels differed significantly in DON content and kernel weight. Various combinations of individual shape and colour descriptors were examined during the development of the model using linear discriminant analysis. In addition to the basic descriptors of the RGB colour model (red, green, blue), very good classification was also obtained using hue from the HSL colour model (hue, saturation, luminance). The accuracy of classification using the developed discrimination model based on RGBH descriptors was 85 %. The shape descriptors themselves were not specific enough to distinguish individual kernels.
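A two-class Fisher linear discriminant of the kind underlying the healthy/damaged decision can be sketched as follows. The descriptor vectors in the test are synthetic stand-ins (think R, G, B, hue), not the study's colour and shape data, and the ridge term and threshold rule are our own simplifications.

```python
import numpy as np

def fisher_lda(X0, X1):
    # Two-class Fisher discriminant: w = Sw^{-1} (m1 - m0), with a small
    # ridge term so the pooled within-class scatter is invertible.
    m0, m1 = X0.mean(0), X1.mean(0)
    Sw = np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)
    w = np.linalg.solve(Sw + 1e-6 * np.eye(len(m0)), m1 - m0)
    thresh = w @ (m0 + m1) / 2       # midpoint between projected means
    return w, thresh

def classify(X, w, thresh):
    # 0 = "healthy", 1 = "damaged" in this sketch.
    return (X @ w > thresh).astype(int)
```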
Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.
Kwak, Nojun
2016-05-20
Recently, the nonlinear projection trick (NPT) was introduced, enabling direct computation of the coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so-called kernel trick. However, NPT is inherently difficult to implement incrementally, because an ever-growing kernel matrix must be treated as additional training samples are introduced. In this paper, an incremental version of the NPT (INPT) is proposed, based on the observation that the centerization step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can directly be used in any incremental method to implement a kernel version of that method. The effectiveness of the INPT is shown by applying it to implement incremental versions of kernel methods such as kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis, which are utilized for problems of kernel matrix reconstruction, letter classification, and face image retrieval, respectively.
Kernel based subspace projection of near infrared hyperspectral images of maize kernels
DEFF Research Database (Denmark)
Larsen, Rasmus; Arngren, Morten; Hansen, Per Waaben
2009-01-01
In this paper we present an exploratory analysis of hyper- spectral 900-1700 nm images of maize kernels. The imaging device is a line scanning hyper spectral camera using a broadband NIR illumi- nation. In order to explore the hyperspectral data we compare a series of subspace projection methods ......- tor transform outperform the linear methods as well as kernel principal components in producing interesting projections of the data.......In this paper we present an exploratory analysis of hyper- spectral 900-1700 nm images of maize kernels. The imaging device is a line scanning hyper spectral camera using a broadband NIR illumi- nation. In order to explore the hyperspectral data we compare a series of subspace projection methods...... including principal component analysis and maximum autocorrelation factor analysis. The latter utilizes the fact that interesting phenomena in images exhibit spatial autocorrelation. However, linear projections often fail to grasp the underlying variability on the data. Therefore we propose to use so...
International Nuclear Information System (INIS)
Damunir; Sukarsono
2007-01-01
The influence of carbonization of the sol solution in the external gelation process on the quality of uranium oxide kernels was studied. The variables observed were the influence of carbon and of the temperature and time of the reduction process on U 3 O 8 kernels resulting from carbonization of the sol solution. First, uranyl nitrate was reacted with 1 M NH 4 OH solution, producing a colloid of UO 3 . Then, with mixing and heating up to a temperature of 60-80 °C, the colloid solution was reacted with PVA, mono sorbitol oleate and paraffin, producing a uranium-PVA sol. The sol solution was then carbonized with carbon black at a mol ratio of carbon to uranium of 2.32-6.62, producing a carbide gel. The gel was then washed, dried and calcined at 800 °C for 4 hours to produce U 3 O 8 kernels containing carbon. The kernels were then reduced by H 2 gas in a medium of N 2 gas at 500-800 °C and 50 mmHg pressure for 3 hours. The process was repeated at 700 °C and 50 mmHg pressure for 1-4 hours. The chemical properties of the gel grains and uranium oxide kernels were characterized using FTIR, covering the analysis of the infrared absorption bands of the UO 3 , C-OH, NH 3 , C-C, C-H and OH functional groups. The physical properties of the uranium oxide, covering specific surface area, void volume and mean diameter, were determined using a Nova-1000 surface area meter with N 2 gas as the adsorbent, and the O/U ratio of the uranium dioxide kernels was determined by a gravimetric method. The results of the experiment showed that carbonization of the sol solution in the external gelation process influences the quality of the uranium oxide kernels. (author)
CHARACTERIZATION OF BIO-OIL FROM PALM KERNEL SHELL PYROLYSIS
Directory of Open Access Journals (Sweden)
R. Ahmad
2014-12-01
Full Text Available Pyrolysis of palm kernel shell in a fixed-bed reactor was studied in this paper. The objectives were to investigate the effect of pyrolysis temperature and particle size on the product yields and to characterize the bio-oil product. In order to find the optimum pyrolysis parameters for bio-oil yield, temperatures of 350, 400, 450, 500 and 550 °C and particle sizes of 212–300 µm, 300–600 µm, 600 µm–1.18 mm and 1.18–2.36 mm under a heating rate of 50 °C min-1 were investigated. The maximum bio-oil yield was 38.40% at 450 °C with a heating rate of 50 °C min-1 and a nitrogen sweep gas flow rate of 50 ml min-1. The bio-oil products were analysed by Fourier transform infrared spectroscopy (FTIR) and gas chromatography–mass spectroscopy (GC-MS). The FTIR analysis showed that the bio-oil was dominated by oxygenated species. Phenol, 2-methoxyphenol and furfural, identified by GC-MS analysis, are highly suitable for extraction from the bio-oil as value-added chemicals. The highly oxygenated oils need to be upgraded in order to be used in other applications such as transportation fuels.
Filatov, Gleb; Bauwens, Bruno; Kertész-Farkas, Attila
2018-05-07
Bioinformatics studies often rely on similarity measures between sequence pairs, which often pose a bottleneck in large-scale sequence analysis. Here, we present a new convolutional kernel function for protein sequences called the LZW-Kernel. It is based on code words identified with the Lempel-Ziv-Welch (LZW) universal text compressor. The LZW-Kernel is an alignment-free method, it is always symmetric, is positive, always provides 1.0 for self-similarity and it can directly be used with Support Vector Machines (SVMs) in classification problems, contrary to normalized compression distance (NCD), which often violates the distance metric properties in practice and requires further techniques to be used with SVMs. The LZW-Kernel is a one-pass algorithm, which makes it particularly plausible for big data applications. Our experimental studies on remote protein homology detection and protein classification tasks reveal that the LZW-Kernel closely approaches the performance of the Local Alignment Kernel (LAK) and the SVM-pairwise method combined with Smith-Waterman (SW) scoring at a fraction of the time. Moreover, the LZW-Kernel outperforms the SVM-pairwise method when combined with BLAST scores, which indicates that the LZW code words might be a better basis for similarity measures than local alignment approximations found with BLAST. In addition, the LZW-Kernel outperforms n-gram based mismatch kernels, hidden Markov model based SAM and Fisher kernel, and protein family based PSI-BLAST, among others. Further advantages include the LZW-Kernel's reliance on a simple idea, its ease of implementation, and its high speed, three times faster than BLAST and several magnitudes faster than SW or LAK in our tests. LZW-Kernel is implemented as a standalone C code and is a free open-source program distributed under GPLv3 license and can be downloaded from https://github.com/kfattila/LZW-Kernel. akerteszfarkas@hse.ru. Supplementary data are available at Bioinformatics Online.
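The idea of building similarity from LZW code words can be illustrated in a few lines. The set-overlap similarity below is a simplified stand-in for the published LZW-Kernel (whose exact formula we do not reproduce), though it shares the stated properties of symmetry and self-similarity equal to 1.0.

```python
def lzw_codewords(s):
    # Code words inserted into the dictionary while LZW-compressing s.
    dictionary = set(s)          # start with the single characters
    words, w = set(), ""
    for c in s:
        wc = w + c
        if wc in dictionary:
            w = wc               # extend the current match
        else:
            dictionary.add(wc)   # emit a new code word
            words.add(wc)
            w = c
    return words

def lzw_sim(a, b):
    # Set-overlap (Jaccard) similarity over LZW code words; symmetric,
    # and equal to 1.0 for identical sequences.
    A, B = lzw_codewords(a), lzw_codewords(b)
    return len(A & B) / len(A | B) if A | B else 1.0
```

Because the dictionary is built in one pass over each sequence, the cost is linear in sequence length, which is the speed advantage the abstract emphasizes.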
Kernel based eigenvalue-decomposition methods for analysing ham
DEFF Research Database (Denmark)
Christiansen, Asger Nyman; Nielsen, Allan Aasbjerg; Møller, Flemming
2010-01-01
methods, such as PCA, MAF or MNF. We therefore investigated the applicability of kernel based versions of these transformation. This meant implementing the kernel based methods and developing new theory, since kernel based MAF and MNF is not described in the literature yet. The traditional methods only...... have two factors that are useful for segmentation and none of them can be used to segment the two types of meat. The kernel based methods have a lot of useful factors and they are able to capture the subtle differences in the images. This is illustrated in Figure 1. You can see a comparison of the most...... useful factor of PCA and kernel based PCA respectively in Figure 2. The factor of the kernel based PCA turned out to be able to segment the two types of meat and in general that factor is much more distinct, compared to the traditional factor. After the orthogonal transformation a simple thresholding...
Classification of maize kernels using NIR hyperspectral imaging
DEFF Research Database (Denmark)
Williams, Paul; Kucheryavskiy, Sergey V.
2016-01-01
NIR hyperspectral imaging was evaluated to classify maize kernels of three hardness categories: hard, medium and soft. Two approaches, pixel-wise and object-wise, were investigated to group kernels according to hardness. The pixel-wise classification assigned a class to every pixel from individual...... and specificity of 0.95 and 0.93). Both feature extraction methods can be recommended for classification of maize kernels on production scale....
Ideal gas scattering kernel for energy dependent cross-sections
International Nuclear Information System (INIS)
Rothenstein, W.; Dagan, R.
1998-01-01
A third, and final, paper on the calculation of the joint kernel for neutron scattering by an ideal gas in thermal agitation is presented, when the scattering cross-section is energy dependent. The kernel is a function of the neutron energy after scattering, and of the cosine of the scattering angle, as in the case of the ideal gas kernel for a constant bound atom scattering cross-section. The final expression is suitable for numerical calculations
Bazargan, Alireza; Rough, Sarah L; McKay, Gordon
2018-04-01
Palm kernel shell biochars (PKSB) ejected as residues from a gasifier have been used for solid fuel briquette production. With this approach, palm kernel shells can be used for energy production twice: first, by producing rich syngas during gasification; second, by compacting the leftover residues from gasification into high calorific value briquettes. Herein, the process parameters for the manufacture of PKSB biomass briquettes via compaction are optimized. Two possible optimum process scenarios are considered. In the first, the compaction speed is increased from 0.5 to 10 mm/s, the compaction pressure is decreased from 80 MPa to 40 MPa, the retention time is reduced from 10 s to zero, and the starch binder content of the briquette is halved from 0.1 to 0.05 kg/kg. With these adjustments, the briquette production rate increases by more than 20-fold; hence capital and operational costs can be reduced and the service life of compaction equipment can be increased. The resulting product satisfactorily passes tensile (compressive) crushing strength and impact resistance tests. The second scenario involves reducing the starch weight content to 0.03 kg/kg, while reducing the compaction pressure to a value no lower than 60 MPa. Overall, in both cases, the PKSB biomass briquettes show excellent potential as a solid fuel with calorific values on par with good-quality coal. CHNS: carbon, hydrogen, nitrogen, sulfur; FFB: fresh fruit bunch(es); HHV: higher heating value [J/kg]; LHV: lower heating value [J/kg]; PKS: palm kernel shell(s); PKSB: palm kernel shell biochar(s); POME: palm oil mill effluent; RDF: refuse-derived fuel; TGA: thermogravimetric analysis.
Deep-Burn MHR Neutronic Analysis with a SiC-Gettered TRU Kernel
International Nuclear Information System (INIS)
Jo, Chang Keun; Noh, Jae Man; Kim, Yong Hee; Venneric, F.
2010-01-01
This paper is focused on the nuclear core design of a DB-MHR (Deep Burn-Modular Helium Reactor) core loaded with SiC-gettered TRU fuel. The SiC oxygen getter is added to reduce the CO pressure in the buffer zone of the TRISO particle. In the paper, the cycle length, reactivity swing, discharged burnup, and plutonium burning rate were calculated for the DB-MHR. Impacts of uranium addition to the TRU kernel were also investigated. Recently, the decay heat of a TRU-fueled DB core was found to be highly dependent on the TRU loading: the higher the loading, the higher the decay heat. The high decay heat of TRU fuel may lead to an unacceptably high peak fuel temperature during an LPCC (Low Pressure Conduction Cooling) accident. Thus, we tried to minimize the decay heat of the core for a minimal peak fuel temperature during an LPCC accident.
Embedded real-time operating system micro kernel design
Cheng, Xiao-hui; Li, Ming-qiang; Wang, Xin-zheng
2005-12-01
Embedded systems usually require real-time behaviour. Based on an 8051 microcontroller, an embedded real-time operating system micro kernel is proposed, consisting of six parts: critical section handling, task scheduling, interrupt handling, semaphore and message mailbox communication, clock management and memory management. CPU time and other resources are distributed among tasks rationally according to their importance and urgency. The design proposed here provides the position, definition, function and principle of the micro kernel. The kernel runs on the platform of an ATMEL AT89C51 microcontroller. Simulation results prove that the designed micro kernel is stable and reliable and has quick response while operating in an application system.
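The priority-based allocation described above can be illustrated with a toy scheduler. The real kernel targets an AT89C51 and would be written in C or assembly; this Python sketch only shows the scheduling policy (highest priority first, FIFO within a priority level), and all task names are ours.

```python
import heapq

class Scheduler:
    # Toy priority scheduler: schedule() always returns the ready task
    # with the highest priority (lowest number), mirroring allocation
    # by importance and urgency.
    def __init__(self):
        self._ready = []   # heap of (priority, seq, name)
        self._seq = 0      # insertion counter: FIFO tie-break

    def make_ready(self, name, priority):
        heapq.heappush(self._ready, (priority, self._seq, name))
        self._seq += 1

    def schedule(self):
        if not self._ready:
            return None    # idle
        return heapq.heappop(self._ready)[2]
```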
An SVM model with hybrid kernels for hydrological time series
Wang, C.; Wang, H.; Zhao, X.; Xie, Q.
2017-12-01
Support Vector Machine (SVM) models have been widely applied to the forecast of climate/weather and its impact on other environmental variables such as the hydrologic response to climate/weather. When using SVM, the choice of the kernel function plays the key role. Conventional SVM models mostly use a single type of kernel function, e.g., the radial basis kernel function. Given that several featured kernel functions are available, each with its own advantages and drawbacks, a combination of these kernel functions may give more flexibility and robustness to the SVM approach, making it suitable for a wide range of application scenarios. This paper presents such a linear combination of the radial basis kernel and the polynomial kernel for the forecast of monthly flowrate at two gaging stations using the SVM approach. The results indicate a significant improvement in the accuracy of the predicted series compared to the approach with either individual kernel function, thus demonstrating the feasibility and advantages of such a hybrid kernel approach for SVM applications.
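The hybrid kernel itself is a one-liner: since a convex combination of positive semi-definite kernels is again positive semi-definite, the resulting Gram matrix can be plugged into any SVM implementation that accepts a precomputed kernel. A numpy sketch with illustrative parameters (not the paper's calibrated weights or hyperparameters):

```python
import numpy as np

def rbf(X, Z, gamma=0.5):
    # Radial basis (Gaussian) kernel matrix.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def polynomial(X, Z, degree=2, c=1.0):
    # Inhomogeneous polynomial kernel matrix.
    return (X @ Z.T + c) ** degree

def hybrid(X, Z, w=0.7, gamma=0.5, degree=2, c=1.0):
    # Convex combination of two valid kernels: itself a valid kernel,
    # blending the local RBF behaviour with global polynomial trends.
    return w * rbf(X, Z, gamma) + (1 - w) * polynomial(X, Z, degree, c)
```

In practice the mixing weight w would be tuned alongside the SVM hyperparameters, e.g. by cross-validation on the flowrate series.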
Influence of wheat kernel physical properties on the pulverizing process.
Dziki, Dariusz; Cacak-Pietrzak, Grażyna; Miś, Antoni; Jończyk, Krzysztof; Gawlik-Dziki, Urszula
2014-10-01
The physical properties of wheat kernels were determined and related to pulverizing performance by correlation analysis. Nineteen samples of wheat cultivars with similar protein content (11.2-12.8 % w.b.), obtained from an organic farming system, were used for analysis. The kernels (moisture content 10 % w.b.) were pulverized using a laboratory hammer mill equipped with a 1.0 mm round-hole screen. The specific grinding energy ranged from 120 kJ·kg⁻¹ to 159 kJ·kg⁻¹. On the basis of the data obtained, many significant correlations (p < 0.05) were found between the kernel physical properties and the pulverizing process; in particular, the wheat kernel hardness index (obtained with the Single Kernel Characterization System) and vitreousness correlated significantly and positively with the grinding energy indices and the mass fraction of coarse particles (> 0.5 mm). Among the kernel mechanical properties determined by the uniaxial compression test, only the rupture force was correlated with the impact grinding results. The results also showed positive and significant relationships between kernel ash content and grinding energy requirements. On the basis of the wheat physical properties, a multiple linear regression was proposed for predicting the average particle size of the pulverized kernel.
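The final step, predicting average particle size from kernel physical properties by multiple linear regression, can be illustrated on synthetic data. All predictor choices and coefficients below are hypothetical, not the model fitted in the study:

```python
import numpy as np

# Synthetic illustration only: coefficients and noise level are made up.
rng = np.random.default_rng(1)
n = 19                                   # the study used 19 wheat cultivars
hardness = rng.uniform(20, 80, n)        # SKCS hardness index
vitreousness = rng.uniform(30, 95, n)    # %
ash = rng.uniform(1.5, 2.1, n)           # %
d_avg = (0.2 + 0.002 * hardness + 0.001 * vitreousness + 0.05 * ash
         + 0.005 * rng.standard_normal(n))   # average particle size, mm

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), hardness, vitreousness, ash])
beta, *_ = np.linalg.lstsq(X, d_avg, rcond=None)
pred = X @ beta
r2 = 1 - ((d_avg - pred) ** 2).sum() / ((d_avg - d_avg.mean()) ** 2).sum()
print(round(float(r2), 3))
```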
Dose point kernels for beta-emitting radioisotopes
International Nuclear Information System (INIS)
Prestwich, W.V.; Chan, L.B.; Kwok, C.S.; Wilson, B.
1986-01-01
Knowledge of the dose point kernel corresponding to a specific radionuclide is required to calculate the spatial dose distribution produced in a homogeneous medium by a distributed source. Dose point kernels for commonly used radionuclides have been calculated previously using as a basis monoenergetic dose point kernels derived by numerical integration of a model transport equation. That treatment neglects fluctuations in energy deposition, an effect which was later incorporated in dose point kernels calculated using Monte Carlo methods. This work describes new calculations of dose point kernels using the Monte Carlo results as a basis. An analytic representation of the monoenergetic dose point kernels has been developed. This provides a convenient method both for calculating the dose point kernel associated with a given beta spectrum and for incorporating the effect of internal conversion. An algebraic expression for allowed beta spectra has been obtained through an extension of the Bethe-Bacher approximation and tested against the exact expression. Simplified expressions for first-forbidden shape factors have also been developed. A comparison of the calculated dose point kernel for ³²P with experimental data indicates good agreement, a significant improvement over the earlier results in this respect. An analytic representation of the dose point kernel associated with the spectrum of a single beta group has been formulated. 9 references, 16 figures, 3 tables
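The composition step, folding monoenergetic kernels with a beta spectrum, can be illustrated numerically. Both functional forms below are toy placeholders, not the paper's analytic fits; only the endpoint energy of ³²P is a real physical value:

```python
import numpy as np

E_MAX = 1.71  # MeV, beta endpoint energy of 32P

def spectrum(E):
    # Crude allowed-spectrum stand-in (hypothetical shape), normalized below.
    return np.where((E > 0) & (E < E_MAX), E * (E_MAX - E) ** 2, 0.0)

def mono_kernel(r, E):
    # Toy monoenergetic point kernel: exponential fall-off with an
    # energy-dependent range (placeholder, not a physical fit).
    return np.exp(-r / (0.5 * E + 0.05))

E = np.linspace(1e-3, E_MAX, 500)
dE = E[1] - E[0]
w = spectrum(E)
w /= w.sum() * dE                        # normalize the spectrum to unit area

# Beta-spectrum kernel = spectrum-weighted average of monoenergetic kernels.
r = np.linspace(0.0, 2.0, 50)
F_beta = (w[None, :] * mono_kernel(r[:, None], E[None, :])).sum(axis=1) * dE
print(round(float(F_beta[0]), 3))        # 1.0 at r = 0 by normalization
```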
Hadamard Kernel SVM with applications for breast cancer outcome predictions.
Jiang, Hao; Ching, Wai-Ki; Cheung, Wai-Shun; Hou, Wenpin; Yin, Hong
2017-12-21
Breast cancer is one of the leading causes of death for women, so it is of great necessity to develop effective methods for breast cancer detection and diagnosis. Recent studies have focused on gene-based signatures for outcome predictions. Kernel SVM, with its discriminative power in dealing with small-sample pattern recognition problems, has attracted a lot of attention, but how to select or construct an appropriate kernel for a specified problem still needs further investigation. Here we propose a novel kernel (the Hadamard kernel) in conjunction with Support Vector Machines (SVMs) to address the problem of breast cancer outcome prediction using gene expression data. The Hadamard kernel outperforms the classical kernels and the correlation kernel in terms of Area under the ROC Curve (AUC) values on a number of real-world data sets adopted to test the performance of the different methods. Hadamard-kernel SVM is effective for breast cancer predictions, both for prognosis and for diagnosis, and may benefit patients by guiding therapeutic options. Apart from that, it would be a valuable addition to the current SVM kernel families. We hope it will contribute to the wider biology and related communities.
Parameter optimization in the regularized kernel minimum noise fraction transformation
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg; Vestergaard, Jacob Schack
2012-01-01
Based on the original, linear minimum noise fraction (MNF) transformation and kernel principal component analysis, a kernel version of the MNF transformation was recently introduced. Inspired by we here give a simple method for finding optimal parameters in a regularized version of kernel MNF...... analysis. We consider the model signal-to-noise ratio (SNR) as a function of the kernel parameters and the regularization parameter. In 2-4 steps of increasingly refined grid searches we find the parameters that maximize the model SNR. An example based on data from the DLR 3K camera system is given....
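The 2-4 step refined grid search can be sketched generically. The `model_snr` objective below is a toy stand-in for the actual kernel-MNF model SNR, and all parameter names and ranges are assumptions:

```python
import numpy as np

def model_snr(scale, reg):
    # Toy stand-in for the model SNR as a function of a kernel scale
    # parameter and a regularization parameter; peaked (hypothetically)
    # at scale = 10**0.3, reg = 10**-2.
    return -((np.log10(scale) - 0.3) ** 2 + (np.log10(reg) + 2.0) ** 2)

def grid_search(f, c1, c2, span=3.0, steps=4, n=9):
    """Coarse-to-fine search over two log-scaled parameters."""
    best = (c1, c2)
    for _ in range(steps):
        g1 = best[0] * np.logspace(-span, span, n)
        g2 = best[1] * np.logspace(-span, span, n)
        best = max(((a, b) for a in g1 for b in g2), key=lambda p: f(*p))
        span /= n - 1          # refine the grid around the current optimum
    return best

scale, reg = grid_search(model_snr, 1.0, 1.0)
print(round(float(np.log10(scale)), 2), round(float(np.log10(reg)), 2))
```

Four refinement steps locate the optimum of this smooth toy objective to about two decimal places in log-space.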
Learning Rotation for Kernel Correlation Filter
Hamdi, Abdullah
2017-08-11
Kernel Correlation Filters (KCF) have shown a very promising scheme for visual tracking in terms of speed and accuracy on several benchmarks. However, the method suffers from problems that affect its performance, such as occlusion, rotation and scale change. This paper tackles the problem of rotation by reformulating the optimization problem for learning the correlation filter. The modification (RKCF) learns a rotation filter that exploits the circulant structure of HOG features to estimate rotation from one frame to the next and enhance the detection of KCF. It thereby gains a boost in overall accuracy on many of the OTB50 dataset videos with minimal additional computation.
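The circulant-structure trick behind correlation filters can be shown in one dimension: correlation against every cyclic shift is evaluated at once in the Fourier domain. The example below detects a translation rather than the rotation RKCF targets, but the FFT mechanics are the same idea applied over a different shift variable:

```python
import numpy as np

rng = np.random.default_rng(6)
template = rng.standard_normal(128)
shift = 17
frame = np.roll(template, shift)      # the "next frame" is a shifted copy

# Circular cross-correlation of frame with template for all shifts at once:
# the FFT turns the circulant correlation into an elementwise product.
response = np.fft.ifft(np.fft.fft(frame) * np.conj(np.fft.fft(template))).real
print(int(response.argmax()))         # peak location recovers the shift
```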
Research of Performance Linux Kernel File Systems
Directory of Open Access Journals (Sweden)
Andrey Vladimirovich Ostroukh
2015-10-01
Full Text Available The article describes the most common Linux kernel file systems. The research was carried out on a personal computer, a typical workstation running GNU/Linux, whose characteristics are given in the article. The software necessary for measuring file-system performance was installed on this machine. Based on the results, conclusions are drawn and recommendations proposed for the use of the file systems, identifying the best ways to store data.
Fixed kernel regression for voltammogram feature extraction
International Nuclear Information System (INIS)
Acevedo Rodriguez, F J; López-Sastre, R J; Gil-Jiménez, P; Maldonado Bascón, S; Ruiz-Reyes, N
2009-01-01
Cyclic voltammetry is an electroanalytical technique for obtaining information about substances under analysis without the need for complex flow systems. However, classifying the information in voltammograms obtained using this technique is difficult. In this paper, we propose the use of fixed kernel regression as a method for extracting features from these voltammograms, reducing the information to a few coefficients. The proposed approach has been applied to a wine classification problem with accuracy rates of over 98%. Although the method is described here for extracting voltammogram information, it can be used for other types of signals
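A sketch of the feature-extraction idea under stated assumptions: represent each voltammogram as a least-squares fit onto a small set of Gaussian kernels at fixed centres, and use the fitted coefficients as the feature vector. The kernel shape, count and width here are illustrative choices, not necessarily the paper's:

```python
import numpy as np

def kernel_features(signal, n_kernels=8, width=0.05):
    """Least-squares coefficients of the signal on fixed Gaussian kernels."""
    t = np.linspace(0, 1, len(signal))
    centres = np.linspace(0, 1, n_kernels)        # fixed, signal-independent
    Phi = np.exp(-((t[:, None] - centres[None, :]) ** 2) / (2 * width ** 2))
    coef, *_ = np.linalg.lstsq(Phi, signal, rcond=None)
    return coef            # a few coefficients summarize the whole curve

t = np.linspace(0, 1, 400)
voltammogram = np.exp(-((t - 0.4) ** 2) / 0.01) + 0.3 * t   # toy curve
feats = kernel_features(voltammogram)
print(feats.shape)
```

The 400-sample curve is reduced to 8 coefficients, which could then feed any standard classifier.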
Reciprocity relation for multichannel coupling kernels
International Nuclear Information System (INIS)
Cotanch, S.R.; Satchler, G.R.
1981-01-01
Assuming time-reversal invariance of the many-body Hamiltonian, it is proven that the kernels in a general coupled-channels formulation are symmetric, to within a specified spin-dependent phase, under the interchange of channel labels and coordinates. The theorem is valid for both Hermitian and suitably chosen non-Hermitian Hamiltonians which contain complex effective interactions. While of direct practical consequence for nuclear rearrangement reactions, the reciprocity relation is also appropriate for other areas of physics which involve coupled-channels analysis
Wheat kernel dimensions: how do they contribute to kernel weight at ...
Indian Academy of Sciences (India)
2011-12-02
Dec 2, 2011 ... yield components, is greatly influenced by kernel dimensions (KD), such as ... six linkage gaps, and it covered 3010.70 cM of the whole genome with an ... Ersoz E. et al. 2009 The genetic architecture of maize flowering.
DEFF Research Database (Denmark)
Arenas-Garcia, J.; Petersen, K.; Camps-Valls, G.
2013-01-01
correlation analysis (CCA), and orthonormalized PLS (OPLS), as well as their nonlinear extensions derived by means of the theory of reproducing kernel Hilbert spaces (RKHSs). We also review their connections to other methods for classification and statistical dependence estimation and introduce some recent...
Design analysis and performance test of reduction furnace of kernel U3O8
International Nuclear Information System (INIS)
Moch Setyadji; Triyono; Dedy Husnurrofiq
2015-01-01
High Temperature Reactor (HTR) with coated-particle fuel is a future (Generation IV) reactor because it not only has high efficiency but also releases no fission products into the environment; it has passive safety principles and negative reactivity. Coated-particle fuel for the high temperature reactor is made through a sol-gel process followed by aging, drying, calcination, reduction, sintering and coating. Design analysis of the heating, electrical and insulation systems, followed by construction and performance testing of the reduction furnace, has been carried out. The aim of this research was to obtain a reduction furnace with adequate performance that can be used to reduce U₃O₈ kernels into UO₂. The results of the performance test of the reduction furnace showed that heating zones 1-3 can generate heat to a temperature of 900°C in an operation time of 144 minutes, a heating rate of 5.979°C/min. The convection coefficient at the outer wall of the 7 cm thick fireproof stone was around 30 W/m²·°C. The UO₂ kernels resulting from the reduction process have diameters between 0.850 and 0.992 mm and qualify as feed for the sintering process. (author)
Kernel learning at the first level of inference.
Cawley, Gavin C; Talbot, Nicola L C
2014-05-01
Kernel learning methods, whether Bayesian or frequentist, typically involve multiple levels of inference, with the coefficients of the kernel expansion being determined at the first level and the kernel and regularisation parameters carefully tuned at the second level, a process known as model selection. Model selection for kernel machines is commonly performed via optimisation of a suitable model selection criterion, often based on cross-validation or theoretical performance bounds. However, if there are a large number of kernel parameters, as for instance in the case of automatic relevance determination (ARD), there is a substantial risk of over-fitting the model selection criterion, resulting in poor generalisation performance. In this paper we investigate the possibility of learning the kernel, for the Least-Squares Support Vector Machine (LS-SVM) classifier, at the first level of inference, i.e. parameter optimisation. The kernel parameters and the coefficients of the kernel expansion are jointly optimised at the first level of inference, minimising a training criterion with an additional regularisation term acting on the kernel parameters. The key advantage of this approach is that the values of only two regularisation parameters need be determined in model selection, substantially alleviating the problem of over-fitting the model selection criterion. The benefits of this approach are demonstrated using a suite of synthetic and real-world binary classification benchmark problems, where kernel learning at the first level of inference is shown to be statistically superior to the conventional approach, improves on our previous work (Cawley and Talbot, 2007) and is competitive with Multiple Kernel Learning approaches, but with reduced computational expense. Copyright © 2014 Elsevier Ltd. All rights reserved.
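A minimal sketch of the first-level idea, assuming a squared-error training criterion with regularizers on both the expansion coefficients and the ARD kernel widths; the paper's exact criterion differs, and `LAM_A`/`LAM_T` are hypothetical regularization parameters (the only two left for model selection):

```python
import numpy as np
from scipy.optimize import minimize

# Toy data: only feature 0 is relevant.
rng = np.random.default_rng(2)
X = rng.standard_normal((40, 3))
y = np.sin(X[:, 0])
n, d = X.shape
LAM_A, LAM_T = 1e-3, 1e-3    # the two remaining regularization parameters

def gram(X, log_w):
    w = np.exp(log_w)        # one ARD width per input dimension
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2 / w ** 2).sum(-1)
    return np.exp(-0.5 * d2)

def objective(p):
    # Jointly over expansion coefficients alpha and kernel widths log_w.
    alpha, log_w = p[:n], p[n:]
    K = gram(X, log_w)
    resid = y - K @ alpha
    return (resid ** 2).sum() + LAM_A * alpha @ K @ alpha + LAM_T * (log_w ** 2).sum()

res = minimize(objective, np.zeros(n + d), method="L-BFGS-B")
alpha, log_w = res.x[:n], res.x[n:]
mse = ((y - gram(X, log_w) @ alpha) ** 2).mean()
print(round(float(mse), 4))
```

The point of the construction is visible in the parameter count: all n + d quantities are fitted at the first level, and only LAM_A and LAM_T remain for model selection.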
The Kernel Estimation in Biosystems Engineering
Directory of Open Access Journals (Sweden)
Esperanza Ayuga Téllez
2008-04-01
Full Text Available In many fields of biosystems engineering it is common to find works in which the statistical information analysed violates the basic hypotheses required by conventional forecasting methods. For those situations it is necessary to find alternative methods that allow statistical analysis despite those infringements. Non-parametric function estimation includes methods that fit a target function locally, using data from a small neighbourhood of the point. Weak assumptions, such as continuity and differentiability of the target function, are used rather than "a priori" assumptions about the global shape of the target function (e.g., linear or quadratic). In this paper a few basic decision rules are enunciated for the application of the non-parametric estimation method. These statistical rules set up the first step in building a user-method interface for the consistent application of kernel estimation by non-expert users. To reach this aim, univariate and multivariate estimation methods and density functions were analysed, as well as regression estimators. In some cases the models to be applied in different situations, based on simulations, were defined. Different biosystems engineering applications of kernel estimation are also analysed in this review.
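As a concrete starting point for the univariate case, a standard Gaussian kernel density estimate via SciPy (a generic example, not the specific decision rules proposed in the paper):

```python
import numpy as np
from scipy.stats import gaussian_kde

# Bimodal toy sample: no single global parametric shape fits it well.
rng = np.random.default_rng(3)
sample = np.concatenate([rng.normal(0, 1, 300), rng.normal(5, 0.5, 200)])

kde = gaussian_kde(sample)           # bandwidth chosen by Scott's rule
grid = np.linspace(-4, 8, 500)
density = kde(grid)
dx = grid[1] - grid[0]
print(round(float(density.sum() * dx), 3))   # integrates to about 1
```

The estimate recovers both modes without any prior assumption about the density's global form, which is exactly the appeal of kernel estimation described above.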
Consistent Valuation across Curves Using Pricing Kernels
Directory of Open Access Journals (Sweden)
Andrea Macrina
2018-03-01
Full Text Available The general problem of asset pricing when the discount rate differs from the rate at which an asset’s cash flows accrue is considered. A pricing kernel framework is used to model an economy that is segmented into distinct markets, each identified by a yield curve having its own market, credit and liquidity risk characteristics. The proposed framework precludes arbitrage within each market, while the definition of a curve-conversion factor process links all markets in a consistent arbitrage-free manner. A pricing formula is then derived, referred to as the across-curve pricing formula, which enables consistent valuation and hedging of financial instruments across curves (and markets. As a natural application, a consistent multi-curve framework is formulated for emerging and developed inter-bank swap markets, which highlights an important dual feature of the curve-conversion factor process. Given this multi-curve framework, existing multi-curve approaches based on HJM and rational pricing kernel models are recovered, reviewed and generalised and single-curve models extended. In another application, inflation-linked, currency-based and fixed-income hybrid securities are shown to be consistently valued using the across-curve valuation method.
Aligning Biomolecular Networks Using Modular Graph Kernels
Towfic, Fadi; Greenlee, M. Heather West; Honavar, Vasant
Comparative analysis of biomolecular networks constructed using measurements from different conditions, tissues, and organisms offers a powerful approach to understanding the structure, function, dynamics, and evolution of complex biological systems. We explore a class of algorithms for aligning large biomolecular networks by breaking down such networks into subgraphs and computing the alignment of the networks based on the alignment of their subgraphs. The resulting subnetworks are compared using graph kernels as scoring functions. We provide implementations of the resulting algorithms as part of BiNA, an open source biomolecular network alignment toolkit. Our experiments using Drosophila melanogaster, Saccharomyces cerevisiae, Mus musculus and Homo sapiens protein-protein interaction networks extracted from the DIP repository of protein-protein interaction data demonstrate that the performance of the proposed algorithms (as measured by % GO term enrichment of the subnetworks identified by the alignment) is competitive with some of the state-of-the-art algorithms for pair-wise alignment of large protein-protein interaction networks. Our results also show that the inter-species similarity scores computed based on graph kernels can be used to cluster the species into a species tree that is consistent with the known phylogenetic relationships among the species.
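The subgraph-scoring idea can be illustrated with the simplest possible graph kernel, a node-label histogram kernel. BiNA's actual kernels are much richer, and the toy networks below are made up:

```python
from collections import Counter

def label_histogram_kernel(g1, g2):
    """Dot product of the two graphs' node-label histograms."""
    h1 = Counter(label for _, label in g1["nodes"])
    h2 = Counter(label for _, label in g2["nodes"])
    return sum(h1[k] * h2[k] for k in h1)

# Toy labelled interaction networks (hypothetical).
net_a = {"nodes": [(1, "kinase"), (2, "kinase"), (3, "receptor")],
         "edges": [(1, 2), (2, 3)]}
net_b = {"nodes": [(1, "kinase"), (2, "receptor"), (3, "receptor")],
         "edges": [(1, 3)]}
print(label_histogram_kernel(net_a, net_b))  # 2*1 + 1*2 = 4
```

A real alignment pipeline would score subgraph pairs with such a kernel and keep the best-matching pairs; richer kernels (e.g. over shortest paths or walks) also use the edge structure, which this toy kernel ignores.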
Pareto-path multitask multiple kernel learning.
Li, Cong; Georgiopoulos, Michael; Anagnostopoulos, Georgios C
2015-01-01
A traditional and intuitively appealing Multitask Multiple Kernel Learning (MT-MKL) method is to optimize the sum (thus, the average) of objective functions with (partially) shared kernel function, which allows information sharing among the tasks. We point out that the obtained solution corresponds to a single point on the Pareto Front (PF) of a multiobjective optimization problem, which considers the concurrent optimization of all task objectives involved in the Multitask Learning (MTL) problem. Motivated by this last observation and arguing that the former approach is heuristic, we propose a novel support vector machine MT-MKL framework that considers an implicitly defined set of conic combinations of task objectives. We show that solving our framework produces solutions along a path on the aforementioned PF and that it subsumes the optimization of the average of objective functions as a special case. Using the algorithms we derived, we demonstrate through a series of experimental results that the framework is capable of achieving a better classification performance, when compared with other similar MTL approaches.
Formal truncations of connected kernel equations
International Nuclear Information System (INIS)
Dixon, R.M.
1977-01-01
The Connected Kernel Equations (CKE) of Alt, Grassberger and Sandhas (AGS); Kouri, Levin and Tobocman (KLT); and Bencze, Redish and Sloan (BRS) are compared against reaction theory criteria after formal channel space and/or operator truncations have been introduced. The Channel Coupling Class concept is used to study the structure of these CKEs. The related wave function formalisms of Sandhas; of L'Huillier, Redish and Tandy; and of Kouri, Krueger and Levin are also presented. New N-body connected kernel equations which are generalizations of the Lovelace three-body equations are derived. A method for systematically constructing fewer-body models from the N-body BRS and generalized Lovelace (GL) equations is developed. The formally truncated AGS, BRS, KLT and GL equations are analyzed by employing the criteria of reciprocity and two-cluster unitarity. Reciprocity considerations suggest that formal truncations of the BRS, KLT and GL equations can lead to reciprocity-violating results. This study suggests that atomic problems should employ three-cluster connected truncations and that two-cluster connected truncations should be a useful starting point for nuclear systems
Scientific Computing Kernels on the Cell Processor
Energy Technology Data Exchange (ETDEWEB)
Williams, Samuel W.; Shalf, John; Oliker, Leonid; Kamil, Shoaib; Husbands, Parry; Yelick, Katherine
2007-04-04
The slowing pace of commodity microprocessor performance improvements combined with ever-increasing chip power demands has become of utmost concern to computational scientists. As a result, the high performance computing community is examining alternative architectures that address the limitations of modern cache-based designs. In this work, we examine the potential of using the recently-released STI Cell processor as a building block for future high-end computing systems. Our work contains several novel contributions. First, we introduce a performance model for Cell and apply it to several key scientific computing kernels: dense matrix multiply, sparse matrix vector multiply, stencil computations, and 1D/2D FFTs. The difficulty of programming Cell, which requires assembly level intrinsics for the best performance, makes this model useful as an initial step in algorithm design and evaluation. Next, we validate the accuracy of our model by comparing results against published hardware results, as well as our own implementations on a 3.2GHz Cell blade. Additionally, we compare Cell performance to benchmarks run on leading superscalar (AMD Opteron), VLIW (Intel Itanium2), and vector (Cray X1E) architectures. Our work also explores several different mappings of the kernels and demonstrates a simple and effective programming model for Cell's unique architecture. Finally, we propose modest microarchitectural modifications that could significantly increase the efficiency of double-precision calculations. Overall results demonstrate the tremendous potential of the Cell architecture for scientific computations in terms of both raw performance and power efficiency.
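The performance-model idea, bounding a kernel's runtime by compute throughput or memory bandwidth, whichever resource is slower, can be sketched roughly. The peak figures and per-nonzero byte/flop counts below are illustrative assumptions, not measured Cell numbers from the paper:

```python
# Hypothetical machine balance (illustrative, not measured Cell figures).
PEAK_GFLOPS = 14.6      # double-precision peak, GFLOP/s
PEAK_GBS = 25.6         # memory bandwidth, GB/s

def predicted_seconds(flops, bytes_moved):
    """Simple bound: runtime is limited by the slower of compute and memory."""
    compute_time = flops / (PEAK_GFLOPS * 1e9)
    memory_time = bytes_moved / (PEAK_GBS * 1e9)
    return max(compute_time, memory_time)

# Sparse matrix-vector multiply, modelled as ~2 flops and ~12 bytes per
# nonzero (assumed index + value traffic).
nnz = 10_000_000
t = predicted_seconds(2 * nnz, 12 * nnz)
print(f"{t * 1e3:.2f} ms")   # memory-bound: the bandwidth term dominates
```

Such a model is crude but, as the paper argues for Cell, useful as a first step in deciding whether a kernel is worth optimizing for compute or for data movement.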
Delimiting areas of endemism through kernel interpolation.
Oliveira, Ubirajara; Brescovit, Antonio D; Santos, Adalberto J
2015-01-01
We propose a new approach for identification of areas of endemism, the Geographical Interpolation of Endemism (GIE), based on kernel spatial interpolation. This method differs from others in being independent of grid cells. The approach estimates the overlap between species distributions through a kernel interpolation of the centroids of the species distributions, with areas of influence defined from the distance between the centroid and the farthest point of occurrence of each species. We used this method to delimit areas of endemism of spiders from Brazil. To assess the effectiveness of GIE, we analyzed the same data using Parsimony Analysis of Endemism and NDM and compared the areas identified through each method. The analyses using GIE identified 101 areas of endemism of spiders in Brazil. GIE proved effective in identifying areas of endemism at multiple scales, with fuzzy edges, supported by more synendemic species than in the other methods. The areas of endemism identified with GIE were generally congruent with those identified for other taxonomic groups, suggesting that common processes can be responsible for the origin and maintenance of these biogeographic units.
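The two geometric ingredients described above, the centroid of each species' occurrences and the radius to its farthest occurrence, can be sketched as follows. The coordinates and species names are made up, and the full method additionally performs the kernel interpolation over these centroids:

```python
import numpy as np

def centroid_and_radius(points):
    """Centroid of occurrence points and distance to the farthest one."""
    pts = np.asarray(points, dtype=float)     # (lon, lat) pairs
    c = pts.mean(axis=0)
    radius = np.linalg.norm(pts - c, axis=1).max()
    return c, radius

# Hypothetical occurrence records for two species.
occurrences = {
    "sp_a": [(-47.1, -22.9), (-46.6, -23.5), (-47.0, -23.1)],
    "sp_b": [(-43.2, -22.9), (-43.1, -22.8)],
}
for name, pts in occurrences.items():
    c, r = centroid_and_radius(pts)
    print(name, np.round(c, 2), round(float(r), 2))
```

(For real data, great-circle rather than planar distances would be the safer choice at large extents.)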
Sedghi, Aliasghar; Rezaei, Behrooz
2016-11-20
Using the Dirichlet-to-Neumann map method, we have calculated the photonic band structure of two-dimensional metallodielectric photonic crystals having the square and triangular lattices of circular metal rods in a dielectric background. We have selected the transverse electric mode of electromagnetic waves, and the resulting band structures showed the existence of photonic bandgap in these structures. We theoretically study the effect of background dielectric on the photonic bandgap.
Czech Academy of Sciences Publication Activity Database
Hakl, Robert; Zamora, M.
2013-01-01
Roč. 20, č. 3 (2013), s. 469-491 ISSN 1072-947X Institutional support: RVO:67985840 Keywords: second-order singular equation * Dirichlet problem * solvability Subject RIV: BA - General Mathematics Impact factor: 0.340, year: 2013 http://www.degruyter.com/view/j/gmj.2013.20.issue-3/gmj-2013-0030/gmj-2013-0030.xml?format=INT
Extracting Feature Model Changes from the Linux Kernel Using FMDiff
Dintzner, N.J.R.; Van Deursen, A.; Pinzger, M.
2014-01-01
The Linux kernel feature model has been studied as an example of a large-scale evolving feature model, yet the details of its evolution are not known. We present here a classification of feature changes occurring on the Linux kernel feature model, as well as a tool, FMDiff, designed to automatically
Replacement Value of Palm Kernel Meal for Maize on Carcass ...
African Journals Online (AJOL)
This study was conducted to evaluate the effect of replacing maize with palm kernel meal on nutrient composition, fatty acid profile and sensory qualities of the meat of turkeys fed the dietary treatments. Six dietary treatments were formulated using palm kernel meal to replace maize at 0, 20, 40, 60, 80 and 100 percent.
Effect of Palm Kernel Cake Replacement and Enzyme ...
African Journals Online (AJOL)
A feeding trial which lasted for twelve weeks was conducted to study the performance of finisher pigs fed five different levels of palm kernel cake replacement for maize (0%, 40%, 40%, 60%, 60%) in a maize-palm kernel cake based ration with or without enzyme supplementation. It was a completely randomized design ...
Capturing option anomalies with a variance-dependent pricing kernel
Christoffersen, P.; Heston, S.; Jacobs, K.
2013-01-01
We develop a GARCH option model with a variance premium by combining the Heston-Nandi (2000) dynamic with a new pricing kernel that nests Rubinstein (1976) and Brennan (1979). While the pricing kernel is monotonic in the stock return and in variance, its projection onto the stock return is
Nonlinear Forecasting With Many Predictors Using Kernel Ridge Regression
DEFF Research Database (Denmark)
Exterkate, Peter; Groenen, Patrick J.F.; Heij, Christiaan
This paper puts forward kernel ridge regression as an approach for forecasting with many predictors that are related nonlinearly to the target variable. In kernel ridge regression, the observed predictor variables are mapped nonlinearly into a high-dimensional space, where estimation of the predi...
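A minimal from-scratch sketch of kernel ridge regression as described above, on toy data; the Gaussian kernel choice and the `gamma` and `lam` values are assumptions:

```python
import numpy as np

def rbf(A, B, gamma=0.5):
    """Gaussian kernel: implicit nonlinear map into a high-dimensional space."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Toy data with several predictors related nonlinearly to the target.
rng = np.random.default_rng(4)
X = rng.uniform(-2, 2, (100, 5))
y = np.sin(X[:, 0]) * X[:, 1] + 0.1 * rng.standard_normal(100)

lam = 0.1                                   # ridge penalty
K = rbf(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)   # dual coefficients

def predict(X_new):
    return rbf(X_new, X) @ alpha

r2 = 1 - ((y - predict(X)) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(round(float(r2), 3))                  # in-sample fit
```

Estimation happens entirely through the n-by-n kernel matrix, so the cost depends on the number of observations, not on the dimension of the implicit feature space.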
Commutators of Integral Operators with Variable Kernels on Hardy ...
Indian Academy of Sciences (India)
Home; Journals; Proceedings – Mathematical Sciences; Volume 115; Issue 4. Commutators of Integral Operators with Variable Kernels on Hardy Spaces. Pu Zhang Kai Zhao. Volume 115 Issue 4 November 2005 pp 399-410 ... Keywords. Singular and fractional integrals; variable kernel; commutator; Hardy space.
Discrete non-parametric kernel estimation for global sensitivity analysis
International Nuclear Information System (INIS)
Senga Kiessé, Tristan; Ventura, Anne
2016-01-01
This work investigates the discrete kernel approach for evaluating the contribution of the variance of discrete input variables to the variance of the model output, via analysis of variance (ANOVA) decomposition. Until recently only the continuous kernel approach had been applied as a metamodeling approach within the sensitivity analysis framework, for both discrete and continuous input variables, yet discrete kernel estimation is known to be suitable for smoothing discrete functions. We present a discrete non-parametric kernel estimator of the ANOVA decomposition of a given model. An estimator of the sensitivity indices is also presented, with its asymptotic convergence rate. Simulations on a test function and a real case study from agriculture have shown that the discrete kernel approach outperforms the continuous kernel one for evaluating the contribution of moderate or most influential discrete parameters to the model output. - Highlights: • We study discrete kernel estimation for sensitivity analysis of a model. • A discrete kernel estimator of the ANOVA decomposition of the model is presented. • Sensitivity indices are calculated for discrete input parameters. • An estimator of sensitivity indices is also presented with its convergence rate. • An application is realized for improving the reliability of environmental models.
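As a crude, grouped (unsmoothed) counterpart of the discrete-kernel estimator discussed above, a plug-in first-order sensitivity index for a discrete input can be computed as Var(E[Y | Xi]) / Var(Y); the test model below is made up:

```python
import numpy as np

# Toy model with two discrete inputs: x1 is influential, x2 is weak.
rng = np.random.default_rng(5)
n = 20_000
x1 = rng.integers(0, 4, n)
x2 = rng.integers(0, 4, n)
y = 2.0 * x1 + 0.1 * x2 + rng.standard_normal(n)

def first_order_index(x, y):
    """Plug-in estimate of Var(E[Y | X]) / Var(Y) for a discrete input X."""
    levels = np.unique(x)
    cond_means = np.array([y[x == v].mean() for v in levels])
    weights = np.array([(x == v).mean() for v in levels])
    var_cond = (weights * (cond_means - y.mean()) ** 2).sum()
    return var_cond / y.var()

print(round(first_order_index(x1, y), 2), round(first_order_index(x2, y), 2))
```

The grouped estimate identifies x1 as the dominant input; the discrete kernel estimator of the paper replaces the raw per-level means with smoothed ones, which matters most for sparse levels.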
Kernel Function Tuning for Single-Layer Neural Networks
Czech Academy of Sciences Publication Activity Database
Vidnerová, Petra; Neruda, Roman
-, accepted 28.11.2017 (2018) ISSN 2278-0149 R&D Projects: GA ČR GA15-18108S Institutional support: RVO:67985807 Keywords: single-layer neural networks * kernel methods * kernel function * optimisation Subject RIV: IN - Informatics, Computer Science http://www.ijmerr.com/
Geodesic exponential kernels: When Curvature and Linearity Conflict
DEFF Research Database (Denmark)
Feragen, Aase; Lauze, François; Hauberg, Søren
2015-01-01
manifold, the geodesic Gaussian kernel is only positive definite if the Riemannian manifold is Euclidean. This implies that any attempt to design geodesic Gaussian kernels on curved Riemannian manifolds is futile. However, we show that for spaces with conditionally negative definite distances the geodesic...
Denoising by semi-supervised kernel PCA preimaging
DEFF Research Database (Denmark)
Hansen, Toke Jansen; Abrahamsen, Trine Julie; Hansen, Lars Kai
2014-01-01
Kernel Principal Component Analysis (PCA) has proven a powerful tool for nonlinear feature extraction, and is often applied as a pre-processing step for classification algorithms. In denoising applications Kernel PCA provides the basis for dimensionality reduction, prior to the so-called pre-imag...
Design and construction of palm kernel cracking and separation ...
African Journals Online (AJOL)
Design and construction of palm kernel cracking and separation machines. ... Username, Password, Remember me, or Register. DOWNLOAD FULL TEXT Open Access DOWNLOAD FULL TEXT Subscription or Fee Access. Design and construction of palm kernel cracking and separation machines. JO Nordiana, K ...
Kernel Methods for Machine Learning with Life Science Applications
DEFF Research Database (Denmark)
Abrahamsen, Trine Julie
Kernel methods refer to a family of widely used nonlinear algorithms for machine learning tasks like classification, regression, and feature extraction. By exploiting the so-called kernel trick straightforward extensions of classical linear algorithms are enabled as long as the data only appear a...
Genetic relationship between plant growth, shoot and kernel sizes in ...
African Journals Online (AJOL)
Maize (Zea mays L.) ear vascular tissue transports nutrients that contribute to grain yield. To assess kernel heritabilities that govern ear development and plant growth, field studies were conducted to determine the combining abilities of parents that differed for kernel-size, grain-filling rates and shoot-size. Thirty two hybrids ...
A relationship between Gel'fand-Levitan and Marchenko kernels
International Nuclear Information System (INIS)
Kirst, T.; Von Geramb, H.V.; Amos, K.A.
1989-01-01
An integral equation which relates the output kernels of the Gel'fand-Levitan and Marchenko inverse scattering equations is specified. Structural details of this integral equation are studied when the S-matrix is a rational function, and the output kernels are separable in terms of Bessel, Hankel and Jost solutions. 4 refs
Boundary singularity of Poisson and harmonic Bergman kernels
Czech Academy of Sciences Publication Activity Database
Engliš, Miroslav
2015-01-01
Vol. 429, No. 1 (2015), pp. 233-272 ISSN 0022-247X R&D Projects: GA AV ČR IAA100190802 Institutional support: RVO:67985840 Keywords: harmonic Bergman kernel * Poisson kernel * pseudodifferential boundary operators Subject RIV: BA - General Mathematics Impact factor: 1.014, year: 2015 http://www.sciencedirect.com/science/article/pii/S0022247X15003170
Oven-drying reduces ruminal starch degradation in maize kernels
Ali, M.; Cone, J.W.; Hendriks, W.H.; Struik, P.C.
2014-01-01
The degradation of starch largely determines the feeding value of maize (Zea mays L.) for dairy cows. Normally, maize kernels are dried and ground before chemical analysis and determining degradation characteristics, whereas cows eat and digest fresh material. Drying the moist maize kernels
Real time kernel performance monitoring with SystemTap
CERN. Geneva
2018-01-01
SystemTap is a dynamic method of monitoring and tracing the operation of a running Linux kernel. In this talk I will present a few practical use cases where SystemTap allowed me to turn otherwise complex userland monitoring tasks into simple kernel probes.
Resolvent kernel for the Kohn Laplacian on Heisenberg groups
Directory of Open Access Journals (Sweden)
Neur Eddine Askour
2002-07-01
Full Text Available We present a formula that relates the Kohn Laplacian on Heisenberg groups and the magnetic Laplacian. Then we obtain the resolvent kernel for the Kohn Laplacian and find its spectral density. We conclude by obtaining the Green kernel for fractional powers of the Kohn Laplacian.
Reproducing Kernels and Coherent States on Julia Sets
Energy Technology Data Exchange (ETDEWEB)
Thirulogasanthar, K., E-mail: santhar@cs.concordia.ca; Krzyzak, A. [Concordia University, Department of Computer Science and Software Engineering (Canada)], E-mail: krzyzak@cs.concordia.ca; Honnouvo, G. [Concordia University, Department of Mathematics and Statistics (Canada)], E-mail: g_honnouvo@yahoo.fr
2007-11-15
We construct classes of coherent states on domains arising from dynamical systems. An orthonormal family of vectors associated to the generating transformation of a Julia set is found as a family of square integrable vectors, and, thereby, reproducing kernels and reproducing kernel Hilbert spaces are associated to Julia sets. We also present analogous results on domains arising from iterated function systems.
Reproducing Kernels and Coherent States on Julia Sets
International Nuclear Information System (INIS)
Thirulogasanthar, K.; Krzyzak, A.; Honnouvo, G.
2007-01-01
We construct classes of coherent states on domains arising from dynamical systems. An orthonormal family of vectors associated to the generating transformation of a Julia set is found as a family of square integrable vectors, and, thereby, reproducing kernels and reproducing kernel Hilbert spaces are associated to Julia sets. We also present analogous results on domains arising from iterated function systems
A multi-scale kernel bundle for LDDMM
DEFF Research Database (Denmark)
Sommer, Stefan Horst; Nielsen, Mads; Lauze, Francois Bernard
2011-01-01
The Large Deformation Diffeomorphic Metric Mapping framework constitutes a widely used and mathematically well-founded setup for registration in medical imaging. At its heart lies the notion of the regularization kernel, and the choice of kernel greatly affects the results of registrations...
Comparison of Kernel Equating and Item Response Theory Equating Methods
Meng, Yu
2012-01-01
The kernel method of test equating is a unified approach to test equating with some advantages over traditional equating methods. Therefore, it is important to evaluate in a comprehensive way the usefulness and appropriateness of the Kernel equating (KE) method, as well as its advantages and disadvantages compared with several popular item…
An analysis of 1-D smoothed particle hydrodynamics kernels
International Nuclear Information System (INIS)
Fulk, D.A.; Quinn, D.W.
1996-01-01
In this paper, the smoothed particle hydrodynamics (SPH) kernel is analyzed, resulting in measures of merit for one-dimensional SPH. Various methods of obtaining an objective measure of the quality and accuracy of the SPH kernel are addressed. Since the kernel is the key element in the SPH methodology, this should be of primary concern to any user of SPH. The results of this work are two measures of merit, one for smooth data and one near shocks. The measure of merit for smooth data is shown to be quite accurate and a useful delineator of better and poorer kernels. The measure of merit for non-smooth data is not quite as accurate, but results indicate the kernel is much less important for these types of problems. In addition to the theory, 20 kernels are analyzed using the measures of merit, demonstrating the general usefulness of the measures and characterizing the individual kernels. In general, bell-shaped kernels were found to perform better than other shapes. 12 refs., 16 figs., 7 tabs
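The normalization property that any valid SPH kernel must satisfy can be checked numerically. Below is a sketch of the standard 1-D cubic-spline kernel, one of the bell-shaped kernels of the kind favored above; the normalization constant 2/(3h) is the textbook one-dimensional value.

```python
import numpy as np

def cubic_spline_kernel(x, h=1.0):
    """Standard 1-D cubic-spline SPH kernel (a bell-shaped kernel),
    with support |x| < 2h and 1-D normalization constant 2/(3h)."""
    q = np.abs(x) / h
    sigma = 2.0 / (3.0 * h)
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w

# A basic sanity check on any SPH kernel: it must integrate to one.
x = np.linspace(-2.0, 2.0, 100001)
dx = x[1] - x[0]
integral = cubic_spline_kernel(x).sum() * dx
print(round(integral, 4))  # 1.0
```

The same numerical check applies to any of the 20 kernels the paper compares; only the piecewise definition and the normalization constant change.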
Optimal Bandwidth Selection in Observed-Score Kernel Equating
Häggström, Jenny; Wiberg, Marie
2014-01-01
The selection of bandwidth in kernel equating is important because it has a direct impact on the equated test scores. The aim of this article is to examine the use of double smoothing when selecting bandwidths in kernel equating and to compare double smoothing with the commonly used penalty method. This comparison was made using both an equivalent…
Computing an element in the lexicographic kernel of a game
Faigle, U.; Kern, Walter; Kuipers, Jeroen
The lexicographic kernel of a game lexicographically maximizes the surpluses $s_{ij}$ (rather than the excesses, as the nucleolus would). We show that an element in the lexicographic kernel can be computed efficiently, provided we can efficiently compute the surpluses $s_{ij}(x)$ corresponding to a
Computing an element in the lexicographic kernel of a game
Faigle, U.; Kern, Walter; Kuipers, J.
2002-01-01
The lexicographic kernel of a game lexicographically maximizes the surpluses $s_{ij}$ (rather than the excesses, as the nucleolus would). We show that an element in the lexicographic kernel can be computed efficiently, provided we can efficiently compute the surpluses $s_{ij}(x)$ corresponding to a
International Nuclear Information System (INIS)
Li Fenglian; Wang Yuesheng; Zhang Chuanzeng
2011-01-01
A numerical method based on the Dirichlet-to-Neumann (DtN) map is presented to compute the bandgaps of two-dimensional phononic crystals, which are composed of square or triangular lattices of circular solid cylinders in a fluid matrix. The DtN map is constructed using the cylindrical wave expansion in a unit cell. A linear eigenvalue problem, which depends on the Bloch wave vector and involves relatively small matrices, is formulated. Numerical calculations are performed for typical systems with various acoustic impedance ratios of the solid inclusions and the fluid matrix. The results indicate that the DtN-map based method can provide accurate results for various systems efficiently. In particular it takes into account the fluid-solid interface conditions and the transverse wave mode in the solid component, which has been proven to be significant when the acoustic impedance of the solid inclusions is close to or smaller than that of the fluid matrix. For systems with an acoustic impedance of the inclusion much less than that of the matrix, physical flat bands appear in the band structures, which will be missed if the transverse wave mode in the solid inclusions is neglected.
Hu, Weiming; Tian, Guodong; Kang, Yongxin; Yuan, Chunfeng; Maybank, Stephen
2017-09-25
In this paper, a new nonparametric Bayesian model called the dual sticky hierarchical Dirichlet process hidden Markov model (HDP-HMM) is proposed for mining activities from a collection of time series data such as trajectories. All the time series data are clustered. Each cluster of time series data, corresponding to a motion pattern, is modeled by an HMM. Our model postulates a set of HMMs that share a common set of states (topics in an analogy with topic models for document processing), but have unique transition distributions. For the application to motion trajectory modeling, topics correspond to motion activities. The learnt topics are clustered into atomic activities which are assigned predicates. We propose a Bayesian inference method to decompose a given trajectory into a sequence of atomic activities. On combining the learnt sources and sinks, semantic motion regions, and the learnt sequence of atomic activities, the action represented by the trajectory can be described in natural language in as automatic a way as possible. The effectiveness of our dual sticky HDP-HMM is validated on several trajectory datasets. The effectiveness of the natural language descriptions for motions is demonstrated on the vehicle trajectories extracted from a traffic scene.
International Nuclear Information System (INIS)
Jiang Bin; Zhang Yejing; Wang Yufei; Liu Anjin; Zheng Wanhua
2012-01-01
We present the extended Dirichlet-to-Neumann wave vector eigenvalue equation (DtN-WVEE) method to calculate the equi-frequency contour (EFC) of square lattice photonic crystals (PhCs). With the extended DtN-WVEE method and Snell's law, the effective refractive index of the mode with a circular EFC can be obtained, which is further validated with the refractive index weighted by the electric field or magnetic field. To further verify the EFC calculated by the DtN-WVEE method, the finite-difference time-domain method is also used. Compared with other wave vector eigenvalue equation methods that calculate EFC directly, the size of the eigenmatrix used in the DtN-WVEE method is much smaller, and the computation time is significantly reduced. Since the DtN-WVEE method solves wave vectors for given arbitrary frequencies, it can also find applications in studying the optical properties of a PhC with dispersive, lossy and magnetic materials. (paper)
3-D waveform tomography sensitivity kernels for anisotropic media
Djebbi, Ramzi
2014-01-01
The complications in anisotropic multi-parameter inversion lie in the trade-off between the different anisotropy parameters. We compute the tomographic waveform sensitivity kernels for a VTI acoustic medium perturbation as a tool to investigate this ambiguity between the different parameters. We use dynamic ray tracing to efficiently handle the expensive computational cost for 3-D anisotropic models. Ray tracing also provides the ray direction information necessary for conditioning the sensitivity kernels to handle anisotropy. The NMO velocity and η parameter kernels showed a maximum sensitivity for diving waves, which makes those parameters a relevant choice in wave equation tomography. The δ parameter kernel showed zero sensitivity; therefore it can serve as a secondary parameter to fit the amplitude in the acoustic anisotropic inversion. Considering the limited penetration depth of diving waves, migration velocity analysis based kernels are introduced to fix the depth ambiguity with reflections and compute sensitivity maps in the deeper parts of the model.
Anatomically-aided PET reconstruction using the kernel method.
Hutchcroft, Will; Wang, Guobao; Chen, Kevin T; Catana, Ciprian; Qi, Jinyi
2016-09-21
This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.
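The kernel method for PET reconstruction represents the image as x = Kα, with the kernel matrix K built from anatomical features, and runs ML-EM on the coefficients α. The following toy numpy sketch illustrates the multiplicative update; the 4-pixel image, random system matrix, and hand-built kernel matrix are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def kernel_em(y, A, K, n_iter=200):
    """ML-EM for y ~ Poisson(A K a), with the image parameterized as
    x = K a through an anatomy-derived kernel matrix K (sketch)."""
    a = np.ones(K.shape[1])
    sens = K.T @ (A.T @ np.ones(len(y)))      # sensitivity (normalization) term
    for _ in range(n_iter):
        ybar = A @ (K @ a) + 1e-12            # current forward projection
        a *= (K.T @ (A.T @ (y / ybar))) / sens
    return K @ a                              # reconstructed image

rng = np.random.default_rng(7)
x_true = np.array([1.0, 1.0, 5.0, 5.0])       # piecewise-constant toy image
A = rng.uniform(0.1, 1.0, size=(12, 4))       # toy system matrix (assumed)
# Kernel from "anatomy": anatomically similar neighbors are averaged.
K = np.array([[.8, .2, 0, 0], [.2, .8, 0, 0],
              [0, 0, .8, .2], [0, 0, .2, .8]])
y = A @ x_true                                # noise-free toy data
x_hat = kernel_em(y, A, K)
print(np.round(x_hat, 2))
```

Because the regularization lives entirely inside K, the update above is the plain ML-EM iteration on α, which is what makes the approach amenable to ordered subsets.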
Open Problem: Kernel methods on manifolds and metric spaces
DEFF Research Database (Denmark)
Feragen, Aasa; Hauberg, Søren
2016-01-01
Radial kernels are well-suited for machine learning over general geodesic metric spaces, where pairwise distances are often the only computable quantity available. We have recently shown that geodesic exponential kernels are only positive definite for all bandwidths when the input space has strong linear properties. This negative result hints that radial kernels are perhaps not suitable over geodesic metric spaces after all. Here, however, we present evidence that large intervals of bandwidths exist where geodesic exponential kernels have high probability of being positive definite over finite datasets, while still having significant predictive power. From this we formulate conjectures on the probability of a positive definite kernel matrix for a finite random sample, depending on the geometry of the data space and the spread of the sample.
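The failure of positive definiteness for geodesic exponential kernels on curved spaces can be seen in a tiny concrete example: for four equally spaced points on a circle with the arc-length metric, the geodesic Gaussian kernel acquires a negative eigenvalue at large bandwidth (the bandwidth value is chosen for illustration).

```python
import numpy as np

# Four equally spaced points on the unit circle, with geodesic
# (arc-length) distances between them.
theta = np.array([0.0, 0.5, 1.0, 1.5]) * np.pi
d = np.abs(theta[:, None] - theta[None, :])
d = np.minimum(d, 2 * np.pi - d)          # arc-length metric on the circle

sigma2 = 10.0                              # large bandwidth (illustrative)
K = np.exp(-d**2 / sigma2)                 # geodesic Gaussian kernel matrix
print(np.linalg.eigvalsh(K).min() < 0)     # not positive definite -> True
```

With a small bandwidth the same Gram matrix is diagonally dominant and positive definite, which matches the paper's observation that positive definiteness holds over large bandwidth intervals rather than all of them.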
Compactly Supported Basis Functions as Support Vector Kernels for Classification.
Wittek, Peter; Tan, Chew Lim
2011-10-01
Wavelet kernels have been introduced for both support vector regression and classification. Most of these wavelet kernels do not use the inner product of the embedding space, but use wavelets in a similar fashion to radial basis function kernels. Wavelet analysis is typically carried out on data with a temporal or spatial relation between consecutive data points. We argue that it is possible to order the features of a general data set so that consecutive features are statistically related to each other, thus enabling us to interpret the vector representation of an object as a series of equally or randomly spaced observations of a hypothetical continuous signal. By approximating the signal with compactly supported basis functions and employing the inner product of the embedding L2 space, we gain a new family of wavelet kernels. Empirical results show a clear advantage in favor of these kernels.
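A minimal example of a compactly supported kernel in this spirit is a product of 1-D triangular (tent) kernels, which vanishes once any feature gap exceeds the support width c and still yields a positive semidefinite Gram matrix. The data and support width below are illustrative, not from the paper.

```python
import numpy as np

def triangular_kernel(X, Y, c=2.0):
    """Compactly supported kernel: the product over features of
    max(0, 1 - |x_i - y_i| / c). It is exactly zero once any single
    feature gap exceeds c, unlike a Gaussian RBF kernel."""
    diff = np.abs(X[:, None, :] - Y[None, :, :]) / c
    return np.clip(1.0 - diff, 0.0, None).prod(-1)

rng = np.random.default_rng(1)
X = rng.uniform(size=(20, 4))
K = triangular_kernel(X, X)
eigs = np.linalg.eigvalsh(K)
print(eigs.min() > -1e-8)   # Gram matrix is positive semidefinite -> True
```

The 1-D tent function is positive definite (its Fourier transform is the nonnegative sinc-squared), and products of positive definite kernels remain positive definite, so this is a valid SVM kernel.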
Evaluation of palm kernel fibers (PKFs) for production of asbestos-free automotive brake pads
Directory of Open Access Journals (Sweden)
K.K. Ikpambese
2016-01-01
Full Text Available In this study, asbestos-free automotive brake pads produced from palm kernel fibers with an epoxy-resin binder were evaluated. The resin formulations were varied, and properties such as friction coefficient, wear rate, hardness, porosity, noise level, temperature, specific gravity, stopping time, moisture effects, surface roughness, and oil and water absorption rates were investigated, together with a microstructure examination. Other basic engineering properties, namely mechanical overload, thermal deformation, fading behaviour, shear strength, cracking resistance, over-heat recovery, effect on the rotor disc, caliper pressure, pad grip and pad dusting, were also investigated. The results obtained indicated that the wear rate, coefficient of friction, noise level, temperature, and stopping time of the produced brake pads increased as the speed increased. The results also show that porosity, hardness, moisture content, specific gravity, surface roughness, and oil and water absorption rates remained constant with increasing speed. The microstructure examination revealed that worn surfaces were characterized by abrasion wear in which the asperities were ploughed, thereby exposing the white region of the palm kernel fibers and increasing the smoothness of the friction materials. Sample S6, with a composition of 40% epoxy-resin, 10% palm wastes, 6% Al2O3, 29% graphite, and 15% calcium carbonate, gave the best properties of the samples tested. The results indicate that palm kernel fibers can be effectively used as a replacement for asbestos in brake pad production.
Improved modeling of clinical data with kernel methods.
Daemen, Anneleen; Timmerman, Dirk; Van den Bosch, Thierry; Bottomley, Cecilia; Kirk, Emma; Van Holsbeke, Caroline; Valentin, Lil; Bourne, Tom; De Moor, Bart
2012-02-01
Despite the rise of high-throughput technologies, clinical data such as age, gender and medical history guide clinical management for most diseases and examinations. To improve clinical management, available patient information should be fully exploited. This requires appropriate modeling of relevant parameters. When kernel methods are used, traditional kernel functions such as the linear kernel are often applied to the set of clinical parameters. These kernel functions, however, have their disadvantages due to the specific characteristics of clinical data, which are a mix of variable types, each variable having its own range. We propose a new kernel function specifically adapted to the characteristics of clinical data. The clinical kernel function provides a better representation of patients' similarity by equalizing the influence of all variables and taking into account the range r of each variable. Moreover, it is robust with respect to changes in r. Incorporated in a least squares support vector machine, the new kernel function results in significantly improved diagnosis, prognosis and prediction of therapy response. This is illustrated on four clinical data sets within gynecology, with an average increase in test area under the ROC curve (AUC) of 0.023, 0.021, 0.122 and 0.019, respectively. Moreover, when combining clinical parameters and expression data in three case studies on breast cancer, results improved overall with use of the new kernel function and when considering both data types in a weighted fashion, with a larger weight assigned to the clinical parameters. The increase in AUC with respect to a standard kernel function and/or unweighted data combination was at most 0.127, 0.042 and 0.118 for the three case studies. For clinical data consisting of variables of different types, the proposed kernel function, which takes into account the type and range of each variable, has been shown to be a better alternative for linear and non-linear classification problems.
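The clinical kernel described above scores each variable on its own range and averages the per-variable similarities, so a 40-year age gap and a 0/1 history flag contribute on the same scale. A small numpy sketch follows; the patient data, per-variable ranges, and the exact averaging form are illustrative assumptions based on the abstract.

```python
import numpy as np

def clinical_kernel(X, Y, r):
    """Clinical kernel: per-variable similarity (r_i - |x_i - y_i|) / r_i,
    averaged over variables, so every clinical parameter contributes on
    the same [0, 1] scale regardless of its units or range."""
    diff = np.abs(X[:, None, :] - Y[None, :, :])
    sim = (r - diff) / r          # r holds the observed range of each variable
    return sim.mean(axis=-1)

# Toy patients: age in years, a 0/1 history flag, a lab value.
X = np.array([[65.0, 1.0, 4.2],
              [40.0, 0.0, 5.1]])
r = np.array([60.0, 1.0, 6.0])    # assumed per-variable ranges
K = clinical_kernel(X, X, r)
print(np.allclose(np.diag(K), 1.0))  # self-similarity is 1 -> True
```

A linear kernel on the same raw data would be dominated by the age column; dividing each variable's contribution by its range is precisely the equalization the abstract describes.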
A method for manufacturing kernels of metallic oxides and the thus obtained kernels
International Nuclear Information System (INIS)
Lelievre Bernard; Feugier, Andre.
1973-01-01
A method is described for manufacturing fissile or fertile metal oxide kernels, consisting of adding at least one chemical compound capable of releasing ammonia to an aqueous solution of actinide nitrates, dispersing the thus obtained solution dropwise in a hot organic phase so as to gelify the drops and transform them into solid particles, then washing, drying and treating said particles so as to transform them into oxide kernels. Such a method is characterized in that the organic phase used in the gel-forming reactions comprises a mixture of two organic liquids, one of which acts as a solvent, whereas the other is a product capable of extracting the metal-salt anions from the drops while the gel-forming reaction is taking place. This can be applied to the so-called high-temperature nuclear reactors [fr
Production and detailed characterization of bio-oil from fast pyrolysis of palm kernel shell
International Nuclear Information System (INIS)
Asadullah, Mohammad; Ab Rasid, Nurul Suhada; Kadir, Sharifah Aishah Syed A.; Azdarpour, Amin
2013-01-01
Bio-oil has been produced from palm kernel shell in a fluidized bed reactor. The process conditions were optimized and the detailed characteristics of the bio-oil were determined. Higher feeding rates and higher gas flow rates led to higher bio-oil yields. The maximum mass fraction of biomass (57%) was converted to bio-oil at 550 °C when 2 L min−1 of gas and 10 g min−1 of biomass were fed. The bio-oil produced up to 500 °C existed in two distinct phases, while it formed one homogeneous phase when produced above 500 °C. The higher heating value of the bio-oil produced at 550 °C was found to be 23.48 MJ kg−1. As the GC–MS data show, the area ratio of phenol is the highest among the identified compounds in the 550 °C bio-oil. The UV–Fluorescence absorption, which is an indication of aromatic content, is also the highest in the 550 °C bio-oil. -- Highlights: • Maximum 56 wt% yield of bio-oil was obtained at 550 °C from palm kernel shell. • Two layers of bio-oil were observed up to 500 °C, while one layer formed above 500 °C. • Bio-oil from palm kernel shell provides more than 40% area ratio of phenol in GC–MS analysis. • The calorific value of palm kernel shell bio-oil is higher than that of other bio-oils
Learning molecular energies using localized graph kernels
Ferré, Grégoire; Haut, Terry; Barros, Kipton
2017-03-01
Recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. We benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.
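The similarity measure at the heart of GRAPE can be sketched with the classical geometric random-walk graph kernel, which counts common walks via the direct (Kronecker) product of the two adjacency matrices; comparing adjacency matrices this way is automatically invariant to atom permutations. The decay factor `lam` below is an assumed convergence parameter, not a value from the paper.

```python
import numpy as np

def random_walk_kernel(A1, A2, lam=0.05):
    """Geometric random-walk graph kernel: counts common walks in the
    direct-product graph, k = 1^T (I - lam * A1 (x) A2)^{-1} 1,
    valid when lam is below 1 over the product graph's spectral radius."""
    W = np.kron(A1, A2)                       # adjacency of the product graph
    n = W.shape[0]
    ones = np.ones(n)
    return ones @ np.linalg.solve(np.eye(n) - lam * W, ones)

# Two tiny "local atomic environments" as adjacency matrices.
triangle = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], float)
path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
k_tt = random_walk_kernel(triangle, triangle)
k_tp = random_walk_kernel(triangle, path)
print(k_tt > k_tp)  # the triangle is more similar to itself -> True
```

In a GRAPE-like model, such per-environment similarities would be summed over atoms and fed into a kernel regressor for the total energy.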
Asnaashari, Maryam; Hashemi, Seyed Mohammad Bagher; Mehr, Hamed Mahdavian; Yousefabad, Seyed Hossein Asadi
2015-03-01
In this study, in order to introduce a natural antioxidative vegetable oil to the food industry, kolkhoung hull oil and kernel oil were extracted. To evaluate their antioxidant efficiency, gas chromatography analysis of the fatty acid composition of kolkhoung hull and kernel oil and high-performance liquid chromatography analysis of tocopherols were done. Also, the oxidative stability of the oil was assessed based on the peroxide value and anisidine value during heating at 100, 110 and 120 °C. Gas chromatography analysis showed that oleic acid was the major fatty acid of both types of oil (hull and kernel) and, based on a low content of saturated fatty acids, a high content of monounsaturated fatty acids, and the ratio of ω-6 to ω-3 polyunsaturated fatty acids, they were nutritionally well-balanced. Moreover, both hull and kernel oil showed high oxidative stability during heating, which can be attributed to their high content of tocotrienols. Based on the results, kolkhoung hull oil performed slightly better than its kernel oil. However, both of them can be added to oxidation-sensitive oils to improve their shelf life.
Stochastic subset selection for learning with kernel machines.
Rhinelander, Jason; Liu, Xiaoping P
2012-06-01
Kernel machines have gained much popularity in applications of machine learning. Support vector machines (SVMs) are a subset of kernel machines and generalize well for classification, regression, and anomaly detection tasks. The training procedure for traditional SVMs involves solving a quadratic programming (QP) problem. The QP problem scales superlinearly in computational effort with the number of training samples and is often used for the offline batch processing of data. Kernel machines operate by retaining a subset of observed data during training. The data vectors contained within this subset are referred to as support vectors (SVs). The work presented in this paper introduces a subset selection method for the use of kernel machines in online, changing environments. Our algorithm works by using a stochastic indexing technique when selecting a subset of SVs for computing the kernel expansion. The work described here is novel because it separates the selection of kernel basis functions from the training algorithm used. The subset selection algorithm presented here can be used in conjunction with any online training technique. It is important for online kernel machines to be computationally efficient due to the real-time requirements of online environments. Our algorithm is an important contribution because it scales linearly with the number of training samples and is compatible with current training techniques. Our algorithm outperforms standard techniques in terms of computational efficiency and provides increased recognition accuracy in our experiments. We provide results from experiments using both simulated and real-world data sets to verify our algorithm.
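Evaluating a kernel expansion over a stochastically selected subset of the stored support vectors can be sketched as follows. The uniform sampling and the unbiased rescaling are illustrative assumptions, not the paper's exact indexing scheme.

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    """Gaussian kernel between two vectors."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

def subset_kernel_expansion(x, support_vectors, alphas, subset_size, rng):
    """Evaluate f(x) = sum_i alpha_i k(sv_i, x) using only a random
    subset of the support vectors, rescaled to keep the estimate
    unbiased for the full expansion (illustrative sketch)."""
    n = len(support_vectors)
    idx = rng.choice(n, size=min(subset_size, n), replace=False)
    scale = n / len(idx)
    return scale * sum(alphas[i] * rbf(support_vectors[i], x) for i in idx)

rng = np.random.default_rng(42)
SV = rng.normal(size=(50, 2))
alpha = rng.normal(size=50)
x = np.zeros(2)
full = subset_kernel_expansion(x, SV, alpha, 50, rng)   # whole set: exact
approx = subset_kernel_expansion(x, SV, alpha, 10, rng) # cheap stochastic estimate
print(np.isfinite(full) and np.isfinite(approx))  # True
```

With `subset_size` equal to the full set the evaluation is exact; smaller subsets trade a little accuracy for a proportional reduction in per-sample cost, which is what matters in the online setting.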
Multiple kernel boosting framework based on information measure for classification
International Nuclear Information System (INIS)
Qi, Chengming; Wang, Yuping; Tian, Wenjie; Wang, Qun
2016-01-01
The performance of a kernel-based method, such as a support vector machine (SVM), is greatly affected by the choice of kernel function. Multiple kernel learning (MKL) is a promising family of machine learning algorithms and has attracted much attention in recent years. MKL combines multiple sub-kernels to seek better results compared to single kernel learning. In order to improve the efficiency of SVM and MKL, in this paper the Kullback–Leibler kernel function is derived to develop SVM. The proposed method employs an improved ensemble learning framework, named KLMKB, which applies Adaboost to learning multiple kernel-based classifiers. In the experiment on hyperspectral remote sensing image classification, we employ features selected through the Optional Index Factor (OIF) to classify the satellite image. We extensively examine the performance of our approach in comparison to some relevant and state-of-the-art algorithms on a number of benchmark classification data sets and a hyperspectral remote sensing image data set. Experimental results show that our method has stable behavior and noticeable accuracy for different data sets.
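The building block shared by MKL methods such as KLMKB is a nonnegative combination of base Gram matrices, which is again a valid kernel. A minimal sketch follows; the linear and Gaussian sub-kernels and the weights are illustrative, not the Kullback-Leibler kernel of the paper.

```python
import numpy as np

def combined_kernel(kernels, beta):
    """Convex combination of base Gram matrices, K = sum_m beta_m * K_m.
    A nonnegative combination of PSD kernels is again a valid kernel."""
    beta = np.asarray(beta, float)
    beta = beta / beta.sum()              # normalize to a convex combination
    return sum(b * K for b, K in zip(beta, kernels))

rng = np.random.default_rng(3)
X = rng.normal(size=(15, 4))
K_lin = X @ X.T                                        # linear sub-kernel
d2 = ((X[:, None] - X[None, :]) ** 2).sum(-1)
K_rbf = np.exp(-0.5 * d2)                              # Gaussian sub-kernel
K = combined_kernel([K_lin, K_rbf], [0.3, 0.7])
print(np.linalg.eigvalsh(K).min() > -1e-8)  # still PSD -> True
```

A boosting-style MKL framework then learns the weights (and per-kernel classifiers) rather than fixing them as above.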
Per-Sample Multiple Kernel Approach for Visual Concept Learning
Directory of Open Access Journals (Sweden)
Ling-Yu Duan
2010-01-01
Full Text Available Learning visual concepts from images is an important yet challenging problem in computer vision and multimedia research areas. Multiple kernel learning (MKL) methods have shown great advantages in visual concept learning. As a visual concept often exhibits great appearance variance, a canonical MKL approach may not generate satisfactory results when a uniform kernel combination is applied over the input space. In this paper, we propose a per-sample multiple kernel learning (PS-MKL) approach to take into account intraclass diversity for improving discrimination. PS-MKL determines sample-wise kernel weights according to kernel functions and training samples. Kernel weights as well as kernel-based classifiers are jointly learned. For efficient learning, PS-MKL employs a sample selection strategy. Extensive experiments are carried out over three benchmarking datasets of different characteristics including Caltech101, WikipediaMM, and Pascal VOC'07. PS-MKL has achieved encouraging performance, comparable to the state of the art, which has outperformed a canonical MKL.
Per-Sample Multiple Kernel Approach for Visual Concept Learning
Directory of Open Access Journals (Sweden)
Tian Yonghong
2010-01-01
Full Text Available Learning visual concepts from images is an important yet challenging problem in computer vision and multimedia research areas. Multiple kernel learning (MKL) methods have shown great advantages in visual concept learning. As a visual concept often exhibits great appearance variance, a canonical MKL approach may not generate satisfactory results when a uniform kernel combination is applied over the input space. In this paper, we propose a per-sample multiple kernel learning (PS-MKL) approach to take into account intraclass diversity for improving discrimination. PS-MKL determines sample-wise kernel weights according to kernel functions and training samples. Kernel weights as well as kernel-based classifiers are jointly learned. For efficient learning, PS-MKL employs a sample selection strategy. Extensive experiments are carried out over three benchmarking datasets of different characteristics including Caltech101, WikipediaMM, and Pascal VOC'07. PS-MKL has achieved encouraging performance, comparable to the state of the art, which has outperformed a canonical MKL.
Localized Multiple Kernel Learning Via Sample-Wise Alternating Optimization.
Han, Yina; Yang, Kunde; Ma, Yuanliang; Liu, Guizhong
2014-01-01
Our objective is to train support vector machines (SVM)-based localized multiple kernel learning (LMKL), using the alternating optimization between the standard SVM solvers with the local combination of base kernels and the sample-specific kernel weights. The advantage of alternating optimization developed from the state-of-the-art MKL is the SVM-tied overall complexity and the simultaneous optimization on both the kernel weights and the classifier. Unfortunately, in LMKL, the sample-specific character makes the updating of kernel weights a difficult quadratic nonconvex problem. In this paper, starting from a new primal-dual equivalence, the canonical objective on which state-of-the-art methods are based is first decomposed into an ensemble of objectives corresponding to each sample, namely, sample-wise objectives. Then, the associated sample-wise alternating optimization method is conducted, in which the localized kernel weights can be independently obtained by solving their exclusive sample-wise objectives, either linear programming (for l1-norm) or with closed-form solutions (for lp-norm). At test time, the learnt kernel weights for the training data are deployed based on the nearest-neighbor rule. Hence, to guarantee their generality among the test part, we introduce the neighborhood information and incorporate it into the empirical loss when deriving the sample-wise objectives. Extensive experiments on four benchmark machine learning datasets and two real-world computer vision datasets demonstrate the effectiveness and efficiency of the proposed algorithm.
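The sample-specific weighting in LMKL can be sketched with a softmax gating function over the input, giving K_ij = sum_m eta_m(x_i) * K_m[i, j] * eta_m(x_j); by the Schur product theorem this locally combined kernel remains positive semidefinite. The gating parameterization and the data below are illustrative assumptions, not the paper's trained model.

```python
import numpy as np

def softmax(Z):
    """Row-wise softmax."""
    Z = Z - Z.max(axis=1, keepdims=True)
    E = np.exp(Z)
    return E / E.sum(axis=1, keepdims=True)

def localized_kernel(X, V, base_kernels):
    """Locally combined kernel of gating-based LMKL (sketch):
    K_ij = sum_m eta_m(x_i) * K_m[i, j] * eta_m(x_j), eta(x) = softmax(x V).
    Each term is D_m K_m D_m with D_m diagonal, hence PSD."""
    eta = softmax(X @ V)                       # (n, M) sample-specific weights
    return sum(np.outer(eta[:, m], eta[:, m]) * Km
               for m, Km in enumerate(base_kernels))

rng = np.random.default_rng(5)
X = rng.normal(size=(12, 3))
V = rng.normal(size=(3, 2))                    # gating parameters (assumed)
K_lin = X @ X.T
K_rbf = np.exp(-0.5 * ((X[:, None] - X[None, :]) ** 2).sum(-1))
K = localized_kernel(X, V, [K_lin, K_rbf])
print(np.allclose(K, K.T))                     # valid symmetric kernel -> True
```

In the sample-wise alternating scheme of the paper, the per-sample weights are instead optimized directly (and carried to test points by the nearest-neighbor rule) rather than produced by a parametric gate.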
Deep Restricted Kernel Machines Using Conjugate Feature Duality.
Suykens, Johan A K
2017-08-01
The aim of this letter is to propose a theory of deep restricted kernel machines offering new foundations for deep learning with kernel machines. From the viewpoint of deep learning, it is partially related to restricted Boltzmann machines, which are characterized by visible and hidden units in a bipartite graph without hidden-to-hidden connections and deep learning extensions as deep belief networks and deep Boltzmann machines. From the viewpoint of kernel machines, it includes least squares support vector machines for classification and regression, kernel principal component analysis (PCA), matrix singular value decomposition, and Parzen-type models. A key element is to first characterize these kernel machines in terms of so-called conjugate feature duality, yielding a representation with visible and hidden units. It is shown how this is related to the energy form in restricted Boltzmann machines, with continuous variables in a nonprobabilistic setting. In this new framework of so-called restricted kernel machine (RKM) representations, the dual variables correspond to hidden features. Deep RKM are obtained by coupling the RKMs. The method is illustrated for deep RKM, consisting of three levels with a least squares support vector machine regression level and two kernel PCA levels. In its primal form also deep feedforward neural networks can be trained within this framework.
Training Lp norm multiple kernel learning in the primal.
Liang, Zhizheng; Xia, Shixiong; Zhou, Yong; Zhang, Lei
2013-10-01
Some multiple kernel learning (MKL) models are usually solved by utilizing the alternating optimization method where one alternately solves SVMs in the dual and updates kernel weights. Since dual and primal optimization can achieve the same aim, it is worthwhile to explore how to perform Lp norm MKL in the primal. In this paper, we propose an Lp norm multiple kernel learning algorithm in the primal where we resort to the alternating optimization method: one cycle for solving SVMs in the primal by using the preconditioned conjugate gradient method, and the other cycle for learning the kernel weights. It is interesting to note that the kernel weights in our method admit analytical solutions. Most importantly, the proposed method is well suited for the manifold regularization framework in the primal since solving LapSVMs in the primal is much more effective than solving LapSVMs in the dual. In addition, we also carry out theoretical analysis for multiple kernel learning in the primal in terms of the empirical Rademacher complexity. It is found that optimizing the empirical Rademacher complexity may yield a particular form of kernel weights. The experiments on some datasets are carried out to demonstrate the feasibility and effectiveness of the proposed method. Copyright © 2013 Elsevier Ltd. All rights reserved.
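The analytical kernel-weight update mentioned above can be sketched as follows (a hedged illustration of the well-known closed-form lp-norm MKL update; the function name is ours and the paper's exact normalization may differ):

```python
import numpy as np

def update_kernel_weights(w_norms, p):
    """Closed-form l_p-norm MKL kernel-weight update (sketch).

    Given per-kernel weight-vector norms ||w_m||, the weights under the
    constraint sum_m beta_m^p <= 1 are
        beta_m = ||w_m||^{2/(p+1)} / (sum_k ||w_k||^{2p/(p+1)})^{1/p},
    so no inner optimization is needed for the weight cycle.
    """
    w_norms = np.asarray(w_norms, dtype=float)
    num = w_norms ** (2.0 / (p + 1.0))
    den = np.sum(w_norms ** (2.0 * p / (p + 1.0))) ** (1.0 / p)
    return num / den
```

By construction the returned weights satisfy the lp-norm constraint with equality, which is why the alternating scheme only ever has to re-solve the SVM step.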
Energy Technology Data Exchange (ETDEWEB)
Sarler, B [Institut Jozef Stefan, Ljubljana (Yugoslavia)
1987-07-01
The basic principles of the boundary element method numerical treatment of the radial flow heat diffusion equation are presented. The algorithm copes with time-dependent Dirichlet and Neumann boundary conditions, temperature-dependent material properties, and regions of different materials in thermal contact. It is verified on several analytically obtained test cases. The developed method is used for modelling unsteady radial heat flow in a pressurized water reactor fuel rod. (author)
Gradient-based adaptation of general gaussian kernels.
Glasmachers, Tobias; Igel, Christian
2005-10-01
Gradient-based optimization of Gaussian kernel functions is considered. The gradient for the adaptation of scaling and rotation of the input space is computed to achieve invariance against linear transformations. This is done by using the exponential map as a parameterization of the kernel parameter manifold. By restricting the optimization to a constant-trace subspace, the kernel size can be controlled. This is, for example, useful to prevent overfitting when minimizing radius-margin generalization performance measures. The concepts are demonstrated by training hard margin support vector machines on toy data.
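The exponential-map parameterization can be sketched like this (our own minimal reading, not the authors' code; function and variable names are assumptions):

```python
import numpy as np

def general_gaussian_kernel(x, y, S):
    """General Gaussian kernel k(x, y) = exp(-(x-y)^T M (x-y)) with
    M = expm(S) for a symmetric parameter matrix S (a sketch of the
    exponential-map parameterization).

    Going through the matrix exponential keeps M symmetric positive
    definite for any symmetric S, so gradient steps on S never leave
    the manifold of valid kernel parameters; fixing trace(S) restricts
    the search to a constant-determinant ("constant size") subspace.
    """
    S = 0.5 * (S + S.T)                        # enforce symmetry
    vals, vecs = np.linalg.eigh(S)             # S = V diag(vals) V^T
    M = vecs @ np.diag(np.exp(vals)) @ vecs.T  # matrix exponential of S
    d = np.asarray(x) - np.asarray(y)
    return float(np.exp(-d @ M @ d))
```

With S = 0 this reduces to the ordinary isotropic Gaussian kernel with unit width.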
On weights which admit the reproducing kernel of Bergman type
Directory of Open Access Journals (Sweden)
Zbigniew Pasternak-Winiarski
1992-01-01
In this paper we consider (1) the weights of integration for which the reproducing kernel of the Bergman type can be defined, i.e., the admissible weights, and (2) the kernels defined by such weights. It is verified that the weighted Bergman kernel has properties analogous to the classical one. We prove several sufficient conditions and necessary and sufficient conditions for a weight to be an admissible weight. We also give an example of a weight which is not of this class. As a positive example we consider the weight μ(z) = (Im z)² defined on the unit disk in ℂ.
Visualization of nonlinear kernel models in neuroimaging by sensitivity maps
DEFF Research Database (Denmark)
Rasmussen, Peter Mondrup; Hansen, Lars Kai; Madsen, Kristoffer Hougaard
There is significant current interest in decoding mental states from neuroimages. In this context kernel methods, e.g., support vector machines (SVM), are frequently adopted to learn statistical relations between patterns of brain activation and experimental conditions. In this paper we focus on visualization of such nonlinear kernel models. Specifically, we investigate the sensitivity map as a technique for generation of global summary maps of kernel classification methods. We illustrate the performance of the sensitivity map on functional magnetic resonance imaging (fMRI) data based on visual stimuli.
Flour quality and kernel hardness connection in winter wheat
Directory of Open Access Journals (Sweden)
Szabó B. P.
2016-12-01
Kernel hardness is controlled by friabilin protein and it depends on the relation between the protein matrix and starch granules. Friabilin is present in high concentration in soft grain varieties and in low concentration in hard grain varieties. The high-gluten, hard wheat flour generally contains about 12.0–13.0% crude protein under Mid-European conditions. The relationship between wheat protein content and kernel texture is usually positive, and kernel texture influences the power consumption during milling. Hard-textured wheat grains require more grinding energy than soft-textured grains.
Deep kernel learning method for SAR image target recognition
Chen, Xiuyuan; Peng, Xiyuan; Duan, Ran; Li, Junbao
2017-10-01
With the development of deep learning, research on image target recognition has made great progress in recent years. Remote sensing detection urgently requires target recognition for military, geographic, and other scientific research. This paper aims to solve the synthetic aperture radar image target recognition problem by combining deep and kernel learning. The model, which has a multilayer multiple kernel structure, is optimized layer by layer with the parameters of a Support Vector Machine and a gradient descent algorithm. This new deep kernel learning method improves accuracy and achieves competitive recognition results compared with other learning methods.
Explicit signal to noise ratio in reproducing kernel Hilbert spaces
DEFF Research Database (Denmark)
Gomez-Chova, Luis; Nielsen, Allan Aasbjerg; Camps-Valls, Gustavo
2011-01-01
This paper introduces a nonlinear feature extraction method based on kernels for remote sensing data analysis. The proposed approach is based on the minimum noise fraction (MNF) transform, which maximizes the signal variance while also minimizing the estimated noise variance. We here propose...... an alternative kernel MNF (KMNF) in which the noise is explicitly estimated in the reproducing kernel Hilbert space. This enables KMNF dealing with non-linear relations between the noise and the signal features jointly. Results show that the proposed KMNF provides the most noise-free features when confronted...
Cid, Jaime A; von Davier, Alina A
2015-05-01
Test equating is a method of making the test scores from different test forms of the same assessment comparable. In the equating process, an important step involves continuizing the discrete score distributions. In traditional observed-score equating, this step is achieved using linear interpolation (or an unscaled uniform kernel). In the kernel equating (KE) process, this continuization process involves Gaussian kernel smoothing. It has been suggested that the choice of bandwidth in kernel smoothing controls the trade-off between variance and bias. In the literature on estimating density functions using kernels, it has also been suggested that the weight of the kernel depends on the sample size, and therefore, the resulting continuous distribution exhibits bias at the endpoints, where the samples are usually smaller. The purpose of this article is (a) to explore the potential effects of atypical scores (spikes) at the extreme ends (high and low) on the KE method in distributions with different degrees of asymmetry using the randomly equivalent groups equating design (Study I), and (b) to introduce the Epanechnikov and adaptive kernels as potential alternative approaches to reducing boundary bias in smoothing (Study II). The beta-binomial model is used to simulate observed scores reflecting a range of different skewed shapes.
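The Epanechnikov alternative explored in Study II can be sketched as a continuization step (our own minimal implementation, not the article's code; function and variable names are ours):

```python
import numpy as np

def epanechnikov_smooth(scores, probs, grid, h):
    """Continuize a discrete score distribution with an Epanechnikov
    kernel (sketch of the boundary-bias-reducing alternative).

    K(u) = 0.75 * (1 - u^2) on |u| <= 1 has bounded support, so the
    smoothed density places no mass far beyond the minimum and maximum
    observable scores, unlike a Gaussian kernel.
    """
    u = (grid[:, None] - scores[None, :]) / h
    K = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)
    return (K @ probs) / h   # density estimate on the grid
```

Because each kernel bump integrates to one, the continuized distribution preserves the total probability of the discrete score distribution.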
Urrutia, Eugene; Lee, Seunggeun; Maity, Arnab; Zhao, Ni; Shen, Judong; Li, Yun; Wu, Michael C
Analysis of rare genetic variants has focused on region-based analysis wherein a subset of the variants within a genomic region is tested for association with a complex trait. Two important practical challenges have emerged. First, it is difficult to choose which test to use. Second, it is unclear which group of variants within a region should be tested. Both depend on the unknown true state of nature. Therefore, we develop the Multi-Kernel SKAT (MK-SKAT), which tests across a range of rare variant tests and groupings. Specifically, we demonstrate that several popular rare variant tests are special cases of the sequence kernel association test, which compares pair-wise similarity in trait value to similarity in the rare variant genotypes between subjects as measured through a kernel function. Choosing a particular test is equivalent to choosing a kernel. Similarly, choosing which group of variants to test also reduces to choosing a kernel. Thus, MK-SKAT uses perturbation to test across a range of kernels. Simulations and real data analyses show that our framework controls type I error while maintaining high power across settings: MK-SKAT loses some power compared with the best kernel for a particular scenario but has much greater power than poor choices.
Efficient Online Subspace Learning With an Indefinite Kernel for Visual Tracking and Recognition
Liwicki, Stephan; Zafeiriou, Stefanos; Tzimiropoulos, Georgios; Pantic, Maja
2012-01-01
We propose an exact framework for online learning with a family of indefinite (not positive definite) kernels. As we study the case of nonpositive kernels, we first show how to extend kernel principal component analysis (KPCA) from a reproducing kernel Hilbert space to Krein space. We then formulate an
International Nuclear Information System (INIS)
Drozdowicz, K.
1995-01-01
A comprehensive unified description of the application of Granada's Synthetic Model to slow-neutron scattering by molecular systems is continued. Detailed formulae for the zero-order energy transfer kernel are presented, based on the general formalism of the model. An explicit analytical formula for the total scattering cross section as a function of the incident neutron energy is also obtained. Expressions of the free gas model for the zero-order scattering kernel and for the total scattering kernel are considered as a sub-case of the Synthetic Model. (author). 10 refs
A kernel adaptive algorithm for quaternion-valued inputs.
Paul, Thomas K; Ogunfunmi, Tokunbo
2015-10-01
The use of quaternion data can provide benefit in applications like robotics and image recognition, and particularly for performing transforms in 3-D space. Here, we describe a kernel adaptive algorithm for quaternions. A least mean square (LMS)-based method was used, resulting in the derivation of the quaternion kernel LMS (Quat-KLMS) algorithm. Deriving this algorithm required describing the idea of a quaternion reproducing kernel Hilbert space (RKHS), as well as kernel functions suitable for quaternions. A modified HR calculus for Hilbert spaces was used to find the gradient of cost functions defined on a quaternion RKHS. In addition, the use of widely linear (or augmented) filtering is proposed to improve performance. The benefit of the Quat-KLMS and widely linear forms in learning nonlinear transformations of quaternion data is illustrated with simulations.
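A real-valued kernel LMS sketch conveys the structure of the algorithm (the paper's Quat-KLMS works in a quaternion RKHS with the HR calculus; this simplified real-valued version uses our own names):

```python
import numpy as np

def klms_predict(centers, alphas, x, sigma):
    """Prediction of a Gaussian-kernel LMS filter: f(x) = sum_i a_i k(c_i, x)."""
    if not centers:
        return 0.0
    C = np.array(centers)
    k = np.exp(-np.sum((C - x) ** 2, axis=1) / (2 * sigma ** 2))
    return float(np.dot(alphas, k))

def klms_train(X, d, eta=0.5, sigma=1.0):
    """Real-valued kernel LMS (sketch).

    Each step stores the current input as a kernel center with
    coefficient eta * error: the functional-gradient form of LMS in
    the RKHS, so the filter grows by one kernel unit per sample.
    """
    centers, alphas = [], []
    for x, target in zip(X, d):
        err = target - klms_predict(centers, alphas, x, sigma)
        centers.append(np.asarray(x, dtype=float))
        alphas.append(eta * err)
    return centers, alphas
```

On a repeated input with a constant target, the prediction converges geometrically to the target, which is the scalar analogue of LMS convergence.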
Bioconversion of palm kernel meal for aquaculture: Experiences ...
African Journals Online (AJOL)
2008-04-17
Apr 17, 2008 … Contamination of palm kernel meal with Aspergillus … Sciences, Universiti Sains Malaysia, Penang 11800, Malaysia. Aquacult. Res.
The effect of apricot kernel flour incorporation on the ...
African Journals Online (AJOL)
2009-01-05
Jan 5, 2009 … Department of Food Engineering, Erciyes University, 38039 Kayseri, Turkey. Key words: noodle, apricot kernel, flour, cooking, sensory properties.
3-D waveform tomography sensitivity kernels for anisotropic media
Djebbi, Ramzi; Alkhalifah, Tariq Ali
2014-01-01
The complications in anisotropic multi-parameter inversion lie in the trade-off between the different anisotropy parameters. We compute the tomographic waveform sensitivity kernels for a VTI acoustic medium perturbation as a tool to investigate
Kernel-based noise filtering of neutron detector signals
International Nuclear Information System (INIS)
Park, Moon Ghu; Shin, Ho Cheol; Lee, Eun Ki
2007-01-01
This paper describes recently developed techniques for effective filtering of neutron detector signal noise. In this paper, three kinds of noise filters are proposed and their performance is demonstrated for the estimation of reactivity. The tested filters are based on the unilateral kernel filter, the unilateral kernel filter with adaptive bandwidth, and the bilateral filter, chosen to show their effectiveness in edge preservation. Filtering performance is compared with conventional low-pass and wavelet filters. The bilateral filter shows a remarkable improvement compared with unilateral kernel and wavelet filters. The effectiveness and simplicity of the unilateral kernel filter with adaptive bandwidth is also demonstrated by applying it to the reactivity measurement performed during reactor start-up physics tests.
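The edge-preserving behavior of the bilateral filter can be illustrated on a 1-D signal (a generic sketch, not the authors' implementation; parameter names are ours):

```python
import numpy as np

def bilateral_filter_1d(signal, radius, sigma_t, sigma_v):
    """1-D bilateral filter (sketch).

    Each output sample is a weighted average of its neighbours, where
    the weight combines closeness in time (sigma_t) with closeness in
    value (sigma_v).  Samples across a large value jump get nearly zero
    weight, so sharp edges (e.g. reactivity steps) are preserved while
    small-amplitude noise within a level is averaged away.
    """
    x = np.asarray(signal, dtype=float)
    out = np.empty_like(x)
    for i in range(len(x)):
        lo, hi = max(0, i - radius), min(len(x), i + radius + 1)
        t = np.arange(lo, hi)
        w = (np.exp(-((t - i) ** 2) / (2 * sigma_t ** 2))
             * np.exp(-((x[t] - x[i]) ** 2) / (2 * sigma_v ** 2)))
        out[i] = np.sum(w * x[t]) / np.sum(w)
    return out
```

Setting sigma_v very large removes the value term and recovers a plain Gaussian low-pass filter, which blurs the edge — the contrast the paper reports.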
Linear and kernel methods for multivariate change detection
DEFF Research Database (Denmark)
Canty, Morton J.; Nielsen, Allan Aasbjerg
2012-01-01
), as well as maximum autocorrelation factor (MAF) and minimum noise fraction (MNF) analyses of IR-MAD images, both linear and kernel-based (nonlinear), may further enhance change signals relative to no-change background. IDL (Interactive Data Language) implementations of IR-MAD, automatic radiometric normalization, and kernel PCA/MAF/MNF transformations are presented that function as transparent and fully integrated extensions of the ENVI remote sensing image analysis environment. The train/test approach to kernel PCA is evaluated against a Hebbian learning procedure. Matlab code is also available that allows fast data exploration and experimentation with smaller datasets. New, multiresolution versions of IR-MAD that accelerate convergence and that further reduce no-change background noise are introduced. Computationally expensive matrix diagonalization and kernel image projections are programmed
Resummed memory kernels in generalized system-bath master equations
International Nuclear Information System (INIS)
Mavros, Michael G.; Van Voorhis, Troy
2014-01-01
Generalized master equations provide a concise formalism for studying reduced population dynamics. Usually, these master equations require a perturbative expansion of the memory kernels governing the dynamics; in order to prevent divergences, these expansions must be resummed. Resummation techniques of perturbation series are ubiquitous in physics, but they have not been readily studied for the time-dependent memory kernels used in generalized master equations. In this paper, we present a comparison of different resummation techniques for such memory kernels up to fourth order. We study specifically the spin-boson Hamiltonian as a model system-bath Hamiltonian, treating the diabatic coupling between the two states as a perturbation. A novel derivation of the fourth-order memory kernel for the spin-boson problem is presented; then, the second- and fourth-order kernels are evaluated numerically for a variety of spin-boson parameter regimes. We find that resumming the kernels through fourth order using a Padé approximant results in divergent populations in the strong electronic coupling regime due to a singularity introduced by the nature of the resummation, and thus recommend a non-divergent exponential resummation (the “Landau-Zener resummation” of previous work). The inclusion of fourth-order effects in a Landau-Zener-resummed kernel is shown to improve both the dephasing rate and the obedience of detailed balance over simpler prescriptions like the non-interacting blip approximation, showing a relatively quick convergence on the exact answer. The results suggest that including higher-order contributions to the memory kernel of a generalized master equation and performing an appropriate resummation can provide a numerically-exact solution to system-bath dynamics for a general spectral density, opening the way to a new class of methods for treating system-bath dynamics.
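A scalar toy example shows both the idea of Padé resummation and the singularity it can introduce (a minimal sketch; the paper resums time-dependent memory kernels, not a scalar series):

```python
def pade_1_1(c0, c1, c2):
    """[1/1] Padé approximant of the series c0 + c1*x + c2*x^2 + ...
    Returns a callable r(x) = (a0 + a1*x) / (1 + b1*x).

    Matching the expansion of the rational form against the series:
      constant term: a0 = c0
      x term:        a1 = c1 + c0*b1
      x^2 term:      b1 = -c2 / c1
    The denominator zero at x = -1/b1 is the kind of resummation-induced
    singularity the paper observes in the strong-coupling regime.
    """
    b1 = -c2 / c1
    a0 = c0
    a1 = c1 + c0 * b1
    return lambda x: (a0 + a1 * x) / (1.0 + b1 * x)
```

For the exponential series (c0, c1, c2) = (1, 1, 0.5) the approximant is (1 + x/2)/(1 - x/2), which beats the truncated series for small x but blows up at x = 2.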
On Improving Convergence Rates for Nonnegative Kernel Density Estimators
Terrell, George R.; Scott, David W.
1980-01-01
To improve the rate of decrease of integrated mean square error for nonparametric kernel density estimators beyond $O(n^{-4/5})$, we must relax the constraint that the density estimate be a bona fide density function, that is, be nonnegative and integrate to one. All current methods for kernel (and orthogonal series) estimators relax the nonnegativity constraint. In this paper we show how to achieve similar improvement by relaxing the integral constraint only. This is important in appl...
Improved Variable Window Kernel Estimates of Probability Densities
Hall, Peter; Hu, Tien Chung; Marron, J. S.
1995-01-01
Variable window width kernel density estimators, with the width varying proportionally to the square root of the density, have been thought to have superior asymptotic properties. The rate of convergence has been claimed to be as good as those typical for higher-order kernels, which makes the variable width estimators more attractive because no adjustment is needed to handle the negativity usually entailed by the latter. However, in a recent paper, Terrell and Scott show that these results ca...
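A two-stage variable-bandwidth estimator of the kind analyzed here can be sketched as follows (our own minimal implementation; the pilot-and-rescale scheme is the standard construction, with names assumed):

```python
import numpy as np

def variable_width_kde(data, grid, h0):
    """Two-stage variable-bandwidth KDE (sketch).

    A fixed-width pilot estimate f_pilot is computed at the data points
    first; each point then receives a local bandwidth
        h_i = h0 * sqrt(g / f_pilot(X_i)),
    i.e. width proportional to the inverse square root of the density
    (g, the geometric mean of the pilot values, keeps the overall scale
    comparable to h0).
    """
    data = np.asarray(data, dtype=float)
    # pilot estimate at the data points (fixed bandwidth h0)
    u = (data[:, None] - data[None, :]) / h0
    pilot = np.mean(np.exp(-0.5 * u ** 2), axis=1) / (h0 * np.sqrt(2 * np.pi))
    g = np.exp(np.mean(np.log(pilot)))   # geometric mean, for scale
    h = h0 * np.sqrt(g / pilot)          # h_i proportional to pilot^(-1/2)
    v = (grid[:, None] - data[None, :]) / h[None, :]
    return np.mean(np.exp(-0.5 * v ** 2) / (h[None, :] * np.sqrt(2 * np.pi)),
                   axis=1)
```

The square-root rule is exactly the "width varying proportionally to the square root of the density" choice whose claimed higher-order behavior this paper re-examines.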
Graphical analyses of connected-kernel scattering equations
International Nuclear Information System (INIS)
Picklesimer, A.
1982-10-01
Simple graphical techniques are employed to obtain a new (simultaneous) derivation of a large class of connected-kernel scattering equations. This class includes the Rosenberg, Bencze-Redish-Sloan, and connected-kernel multiple scattering equations as well as a host of generalizations of these and other equations. The graphical method also leads to a new, simplified form for some members of the class and elucidates the general structural features of the entire class
MULTITASKER, Multitasking Kernel for C and FORTRAN Under UNIX
International Nuclear Information System (INIS)
Brooks, E.D. III
1988-01-01
1 - Description of program or function: MULTITASKER implements a multitasking kernel for the C and FORTRAN programming languages that runs under UNIX. The kernel provides a multitasking environment which serves two purposes. The first is to provide an efficient portable environment for the development, debugging, and execution of production multiprocessor programs. The second is to provide a means of evaluating the performance of a multitasking program on model multiprocessor hardware. The performance evaluation features require no changes in the application program source and are implemented as a set of compile- and run-time options in the kernel. 2 - Method of solution: The FORTRAN interface to the kernel is identical in function to the CRI multitasking package provided for the Cray XMP. This provides a migration path to high speed (but small N) multiprocessors once the application has been coded and debugged. With use of the UNIX m4 macro preprocessor, source compatibility can be achieved between the UNIX code development system and the target Cray multiprocessor. The kernel also provides a means of evaluating a program's performance on model multiprocessors. Execution traces may be obtained which allow the user to determine kernel overhead, memory conflicts between various tasks, and the average concurrency being exploited. The kernel may also be made to switch tasks every cpu instruction with a random execution ordering. This allows the user to look for unprotected critical regions in the program. These features, implemented as a set of compile- and run-time options, cause extra execution overhead which is not present in the standard production version of the kernel
The Flux OSKit: A Substrate for Kernel and Language Research
1997-10-01
Our own microkernel-based OS, Fluke [17], puts almost all of the OSKit to use ... kernels distance the language from the hardware; even microkernels and other extensible kernels enforce some default policy which often conflicts with a ... be particularly useful in these research projects. 6.1.1 The Fluke OS: In 1996 we developed an entirely new microkernel-based system called Fluke
Salus: Kernel Support for Secure Process Compartments
Directory of Open Access Journals (Sweden)
Raoul Strackx
2015-01-01
Consumer devices are increasingly being used to perform security and privacy critical tasks. The software used to perform these tasks is often vulnerable to attacks, due to bugs in the application itself or in included software libraries. Recent work proposes the isolation of security-sensitive parts of applications into protected modules, each of which can be accessed only through a predefined public interface. But most parts of an application can be considered security-sensitive at some level, and an attacker who is able to gain in-application level access may be able to abuse services from protected modules. We propose Salus, a Linux kernel modification that provides a novel approach for partitioning processes into isolated compartments sharing the same address space. Salus significantly reduces the impact of insecure interfaces and vulnerable compartments by enabling compartments (1) to restrict the system calls they are allowed to perform, (2) to authenticate their callers and callees, and (3) to enforce that they can only be accessed via unforgeable references. We describe the design of Salus, report on a prototype implementation and evaluate it in terms of security and performance. We show that Salus provides a significant security improvement with a low performance overhead, without relying on any non-standard hardware support.
Local Kernel for Brains Classification in Schizophrenia
Castellani, U.; Rossato, E.; Murino, V.; Bellani, M.; Rambaldelli, G.; Tansella, M.; Brambilla, P.
In this paper a novel framework for brain classification is proposed in the context of mental health research. A learning-by-example method is introduced by combining local measurements with a nonlinear Support Vector Machine. Instead of considering a voxel-by-voxel comparison between patients and controls, we focus on landmark points which are characterized by local region descriptors, namely the Scale Invariant Feature Transform (SIFT). Then, matching is obtained by introducing the local kernel, for which the samples are represented by unordered sets of features. Moreover, a new weighting approach is proposed to take into account the discriminative relevance of the detected groups of features. Experiments have been performed on a set of 54 patients with schizophrenia and 54 normal controls on which regions of interest (ROI) have been manually traced by experts. Preliminary results on the Dorso-lateral PreFrontal Cortex (DLPFC) region are promising, since a successful classification rate of up to 75% has been obtained with this technique, and the performance improved up to 85% when the subjects were stratified by sex.
KERNEL MAD ALGORITHM FOR RELATIVE RADIOMETRIC NORMALIZATION
Directory of Open Access Journals (Sweden)
Y. Bai
2016-06-01
The multivariate alteration detection (MAD) algorithm is commonly used in relative radiometric normalization. This algorithm is based on linear canonical correlation analysis (CCA), which can analyze only linear relationships among bands. Therefore, we first introduce a new version of MAD in this study based on the established method known as kernel canonical correlation analysis (KCCA). The proposed method effectively extracts the non-linear and complex relationships among variables. We then conduct relative radiometric normalization experiments on both the linear CCA and KCCA versions of the MAD algorithm with the use of Landsat-8 data of Beijing, China, and Gaofen-1 (GF-1) data derived from South China. Finally, we analyze the difference between the two methods. Results show that the KCCA-based MAD can be satisfactorily applied to relative radiometric normalization, as this algorithm describes well the nonlinear relationship between multi-temporal images. This work is the first attempt to apply a KCCA-based MAD algorithm to relative radiometric normalization.
An Ensemble Approach to Building Mercer Kernels with Prior Information
Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd
2005-01-01
This paper presents a new methodology for automatic knowledge-driven data mining based on the theory of Mercer kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite-dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn the kernel function directly from data, rather than using pre-defined kernels. These data-adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing for physical information to be encoded in the model. Specifically, we demonstrate the use of the algorithm in situations with extremely small samples of data. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS) and demonstrate the method's superior performance against standard methods. The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code.
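One way such a mixture-density kernel can be built is from model posteriors (a hedged sketch of our reading of the construction, not the paper's exact formulation; the fixed centers stand in for a fitted density model):

```python
import numpy as np

def responsibilities(x, centers, sigma=1.0):
    """Posterior p(c|x) under an equal-weight Gaussian mixture with
    fixed centers (hypothetical stand-in for a fitted density model)."""
    d2 = np.sum((np.asarray(centers) - x) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / sigma ** 2)
    return w / np.sum(w)

def mixture_kernel(x, y, centers, sigma=1.0):
    """k(x, y) = phi(x) . phi(y) with phi(x) = [p(c_1|x), ..., p(c_K|x)].

    Because k is an explicit inner product of feature vectors it is
    automatically symmetric positive semi-definite (a valid Mercer
    kernel), and prior knowledge enters through the mixture densities.
    """
    return float(np.dot(responsibilities(x, centers, sigma),
                        responsibilities(y, centers, sigma)))
```

Two points assigned to the same mixture component with high confidence get a kernel value near one; points in different components get a value near zero.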
A new discrete dipole kernel for quantitative susceptibility mapping.
Milovic, Carlos; Acosta-Cabronero, Julio; Pinto, José Miguel; Mattern, Hendrik; Andia, Marcelo; Uribe, Sergio; Tejos, Cristian
2018-09-01
Most approaches for quantitative susceptibility mapping (QSM) are based on a forward model approximation that employs a continuous Fourier transform operator to solve a differential equation system. Such formulation, however, is prone to high-frequency aliasing. The aim of this study was to reduce such errors using an alternative dipole kernel formulation based on the discrete Fourier transform and discrete operators. The impact of such an approach on forward model calculation and susceptibility inversion was evaluated in contrast to the continuous formulation both with synthetic phantoms and in vivo MRI data. The discrete kernel demonstrated systematically better fits to analytic field solutions, and showed less over-oscillations and aliasing artifacts while preserving low- and medium-frequency responses relative to those obtained with the continuous kernel. In the context of QSM estimation, the use of the proposed discrete kernel resulted in error reduction and increased sharpness. This proof-of-concept study demonstrated that discretizing the dipole kernel is advantageous for QSM. The impact on small or narrow structures such as the venous vasculature might be particularly relevant to high-resolution QSM applications with ultra-high field MRI, a topic for future investigations. The proposed dipole kernel has a straightforward implementation to existing QSM routines. Copyright © 2018 Elsevier Inc. All rights reserved.
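The contrast between the continuous and discrete formulations can be sketched in the Fourier domain (a minimal illustration under unit voxels with B0 along z; replacing squared frequencies by the second-difference eigenvalues is our reading of the discretization, not the paper's exact operator):

```python
import numpy as np

def dipole_kernel_z(shape, discrete=False):
    """Fourier-domain dipole kernel D = 1/3 - k_z^2 / |k|^2 (sketch).

    With discrete=True the squared continuous frequencies (2*pi*f)^2 are
    replaced by the eigenvalues of the discrete second-difference
    operator, 2 - 2*cos(2*pi*f).  The two agree at low spatial frequency
    but diverge toward Nyquist, which is where the continuous kernel
    aliases.
    """
    fx, fy, fz = (np.fft.fftfreq(n) for n in shape)
    KX, KY, KZ = np.meshgrid(fx, fy, fz, indexing="ij")
    if discrete:
        kx2, ky2, kz2 = (2 - 2 * np.cos(2 * np.pi * K) for K in (KX, KY, KZ))
    else:
        kx2, ky2, kz2 = ((2 * np.pi * K) ** 2 for K in (KX, KY, KZ))
    k2 = kx2 + ky2 + kz2
    with np.errstate(invalid="ignore", divide="ignore"):
        D = 1.0 / 3.0 - kz2 / k2
    D[0, 0, 0] = 0.0   # set the undefined DC term to zero
    return D
```

Multiplying the FFT of a susceptibility map by D gives the forward field model; the discrete variant simply swaps the frequency weighting.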
Exploration of Shorea robusta (Sal seeds, kernels and its oil
Directory of Open Access Journals (Sweden)
Shashi Kumar C.
2016-12-01
Physical, mechanical, and chemical properties of Shorea robusta seed with wing, seed without wing, and kernel were investigated in the present work. The physico-chemical composition of sal oil was also analyzed. The physico-mechanical properties and proximate composition of seed with wing, seed without wing, and kernel at three moisture contents of 9.50% (w.b.), 9.54% (w.b.), and 12.14% (w.b.), respectively, were studied. The results show that the moisture content of the kernel was highest as compared to seed with wing and seed without wing. The sphericity of the kernel was closer to that of a sphere as compared to seed with wing and seed without wing. The hardness of the seed with wing (32.32 N/mm) and seed without wing (42.49 N/mm) was lower than that of the kernels (72.14 N/mm). The proximate composition such as moisture, protein, carbohydrate, oil, crude fiber, and ash content was also determined. The kernel (30.20% w/w) contains a higher oil percentage as compared to seed with wing and seed without wing. The scientific data from this work are important for the design of equipment and processes for post-harvest value addition of sal seeds.
Omnibus risk assessment via accelerated failure time kernel machine modeling.
Sinnott, Jennifer A; Cai, Tianxi
2013-12-01
Integrating genomic information with traditional clinical risk factors to improve the prediction of disease outcomes could profoundly change the practice of medicine. However, the large number of potential markers and possible complexity of the relationship between markers and disease make it difficult to construct accurate risk prediction models. Standard approaches for identifying important markers often rely on marginal associations or linearity assumptions and may not capture non-linear or interactive effects. In recent years, much work has been done to group genes into pathways and networks. Integrating such biological knowledge into statistical learning could potentially improve model interpretability and reliability. One effective approach is to employ a kernel machine (KM) framework, which can capture nonlinear effects if nonlinear kernels are used (Scholkopf and Smola, 2002; Liu et al., 2007, 2008). For survival outcomes, KM regression modeling and testing procedures have been derived under a proportional hazards (PH) assumption (Li and Luan, 2003; Cai, Tonini, and Lin, 2011). In this article, we derive testing and prediction methods for KM regression under the accelerated failure time (AFT) model, a useful alternative to the PH model. We approximate the null distribution of our test statistic using resampling procedures. When multiple kernels are of potential interest, it may be unclear in advance which kernel to use for testing and estimation. We propose a robust Omnibus Test that combines information across kernels, and an approach for selecting the best kernel for estimation. The methods are illustrated with an application in breast cancer. © 2013, The International Biometric Society.
Ideal Gas Resonance Scattering Kernel Routine for the NJOY Code
International Nuclear Information System (INIS)
Rothenstein, W.
1999-01-01
In a recent publication an expression for the temperature-dependent double-differential ideal gas scattering kernel is derived for the case of scattering cross sections that are energy dependent. Some tabulations and graphical representations of the characteristics of these kernels are presented in Ref. 2. They demonstrate the increased probability that neutron scattering by a heavy nuclide near one of its pronounced resonances will bring the neutron energy nearer to the resonance peak. This enhances upscattering, when a neutron with energy just below that of the resonance peak collides with such a nuclide. A routine for using the new kernel has now been introduced into the NJOY code. Here, its principal features are described, followed by comparisons between scattering data obtained by the new kernel and the standard ideal gas kernel, when such comparisons are meaningful (i.e., for constant values of the scattering cross section at 0 K). The new ideal gas kernel for a variable cross section σ_s^0(E) at 0 K leads to the correct Doppler-broadened cross section σ_s^T(E) at temperature T
Proteome analysis of the almond kernel (Prunus dulcis).
Li, Shugang; Geng, Fang; Wang, Ping; Lu, Jiankang; Ma, Meihu
2016-08-01
Almond (Prunus dulcis) is a popular tree nut worldwide and offers many benefits to human health. However, the nutritional importance and functional roles of almond kernel proteins in human health require further evaluation. The present study presents a systematic evaluation of the proteins in the almond kernel using proteomic analysis. The nutrient and amino acid content in almond kernels from Xinjiang is similar to that of American varieties; however, Xinjiang varieties have a higher protein content. Two-dimensional electrophoresis analysis demonstrated a wide distribution of molecular weights and isoelectric points of almond kernel proteins. A total of 434 proteins were identified by LC-MS/MS, and most were experimentally confirmed for the first time. Gene ontology (GO) analysis of the 434 proteins indicated that they are mainly involved in metabolic processes (67.5%), cellular processes (54.1%), and single-organism processes (43.4%); their main molecular functions are catalytic activity (48.0%), binding (45.4%), and structural molecule activity (11.9%); and they are primarily distributed in the cell (59.9%), organelle (44.9%), and membrane (22.8%). The almond kernel is a source of a wide variety of proteins. This study provides important information contributing to the screening and identification of almond proteins, the understanding of almond protein function, and the development of almond protein products. © 2015 Society of Chemical Industry.
Probabilistic Representations of Solutions to the Heat Equation
Indian Academy of Sciences (India)
In this paper we provide a new (probabilistic) proof of a classical result in partial differential equations, viz. if is a tempered distribution, then the solution of the heat equation for the Laplacian, with initial condition , is given by the convolution of with the heat kernel (Gaussian density). Our results also extend the ...
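The representation stated above, the solution as the convolution of the initial data with the Gaussian heat kernel, is easy to check numerically. The sketch below uses an arbitrary grid and a Gaussian initial condition (for which the convolution has a closed form: a Gaussian whose variance grows by 2t under u_t = u_xx); these choices are for illustration only.

```python
import numpy as np

def heat_kernel(x, t):
    # fundamental solution of u_t = u_xx on the real line (a Gaussian density)
    return np.exp(-x**2 / (4 * t)) / np.sqrt(4 * np.pi * t)

# grid and a Gaussian initial condition with variance s0
x = np.linspace(-20, 20, 4001)
dx = x[1] - x[0]
s0 = 1.0
u0 = np.exp(-x**2 / (2 * s0)) / np.sqrt(2 * np.pi * s0)

# the solution at time t is the convolution of u0 with the heat kernel
t = 0.5
u = np.convolve(u0, heat_kernel(x, t), mode="same") * dx

# Gaussian convolved with Gaussian: again Gaussian, variance s0 + 2t
s = s0 + 2 * t
exact = np.exp(-x**2 / (2 * s)) / np.sqrt(2 * np.pi * s)
print(np.max(np.abs(u - exact)))  # small discretisation error
```

The discrete convolution times dx approximates the convolution integral, and the result matches the analytic Gaussian solution to within discretisation error.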
Moghadam, Maryam Khazaee; Asl, Alireza Kamali; Geramifar, Parham; Zaidi, Habib
2016-01-01
Purpose: The aim of this work is to evaluate the application of tissue-specific dose kernels instead of water dose kernels to improve the accuracy of patient-specific dosimetry by taking tissue heterogeneities into consideration. Materials and Methods: Tissue-specific dose point kernels (DPKs) and
DEFF Research Database (Denmark)
Petersen, Annette
of kernels promoted (10 and 60 kernels/day for the general population and cancer patients, respectively), exposures exceeded the ARfD 17–413 and 3–71 times in toddlers and adults, respectively. The estimated maximum quantity of apricot kernels (or raw apricot material) that can be consumed without exceeding...
Local coding based matching kernel method for image classification.
Directory of Open Access Journals (Sweden)
Yan Song
Full Text Available This paper mainly focuses on how to effectively and efficiently measure visual similarity for local feature based representation. Among existing methods, metrics based on Bag of Visual Words (BoV) techniques are efficient and conceptually simple, at the expense of effectiveness. By contrast, kernel based metrics are more effective, but at the cost of greater computational complexity and increased storage requirements. We show that a unified visual matching framework can be developed to encompass both BoV and kernel based metrics, in which the local kernel plays an important role between feature pairs or between features and their reconstruction. Generally, local kernels are defined using Euclidean distance or its derivatives, based either explicitly or implicitly on an assumption of Gaussian noise. However, local features such as SIFT and HoG often follow a heavy-tailed distribution, which tends to undermine the motivation behind Euclidean metrics. Motivated by recent advances in feature coding techniques, a novel efficient local coding based matching kernel (LCMK) method is proposed. This exploits the manifold structures in Hilbert space derived from local kernels. The proposed method combines the advantages of both BoV and kernel based metrics, and achieves linear computational complexity. This enables efficient and scalable visual matching to be performed on large-scale image sets. To evaluate the effectiveness of the proposed LCMK method, we conduct extensive experiments with widely used benchmark datasets, including 15-Scenes, Caltech101/256, and the PASCAL VOC 2007 and 2011 datasets. Experimental results confirm the effectiveness of the relatively efficient LCMK method.
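The two families of metrics contrasted above can be sketched on toy data: a cheap BoV histogram similarity versus a more expensive matching kernel that compares every pair of local descriptors. The descriptors, codebook, and Gaussian local kernel below are synthetic stand-ins, not the proposed LCMK method.

```python
import numpy as np

def bov_histogram(F, codebook):
    # BoV: quantise each local descriptor to its nearest visual word
    d2 = ((F[:, None, :] - codebook[None, :, :])**2).sum(-1)
    h = np.bincount(d2.argmin(1), minlength=len(codebook)).astype(float)
    return h / h.sum()

def sum_match_kernel(F, G, gamma=1.0):
    # kernel metric: average Gaussian local kernel over all descriptor pairs
    d2 = (F**2).sum(1)[:, None] + (G**2).sum(1)[None, :] - 2 * F @ G.T
    return np.exp(-gamma * d2).mean()

rng = np.random.default_rng(0)
codebook = rng.normal(size=(16, 8))                     # toy visual vocabulary
# two "images" drawn around the same half of the vocabulary, one around the other half
img1 = codebook[rng.integers(0, 8, 40)] + 0.1 * rng.normal(size=(40, 8))
img2 = codebook[rng.integers(0, 8, 40)] + 0.1 * rng.normal(size=(40, 8))
img3 = codebook[rng.integers(8, 16, 40)] + 0.1 * rng.normal(size=(40, 8))

h1, h2, h3 = (bov_histogram(f, codebook) for f in (img1, img2, img3))
print(h1 @ h2, h1 @ h3)                                 # cheap: O(vocabulary)
print(sum_match_kernel(img1, img2), sum_match_kernel(img1, img3))  # costly: O(pairs)
```

Both metrics rank img2 (similar content) above img3 (different content); the matching kernel compares all descriptor pairs, which is exactly the cost/effectiveness trade-off the abstract describes.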
Protein fold recognition using geometric kernel data fusion.
Zakeri, Pooya; Jeuris, Ben; Vandebril, Raf; Moreau, Yves
2014-07-01
Various approaches based on features extracted from protein sequences and often machine learning methods have been used in the prediction of protein folds. Finding an efficient technique for integrating these different protein features has received increasing attention. In particular, kernel methods are an interesting class of techniques for integrating heterogeneous data. Various methods have been proposed to fuse multiple kernels. Most techniques for multiple kernel learning focus on learning a convex linear combination of base kernels. In addition to the limitation of linear combinations, working with such approaches could cause a loss of potentially useful information. We design several techniques to combine kernel matrices by taking more involved, geometry-inspired means of these matrices instead of convex linear combinations. We consider various sequence-based protein features including information extracted directly from position-specific scoring matrices and local sequence alignment. We evaluate our methods for classification on the SCOP PDB-40D benchmark dataset for protein fold recognition. The best overall accuracy on the protein fold recognition test set obtained by our methods is ∼86.7%. This is an improvement over the results of the best existing approach. Moreover, our computational model has been developed by incorporating the functional domain composition of proteins through a hybridization model. It is observed that by using our proposed hybridization model, the protein fold recognition accuracy is further improved to 89.30%. Furthermore, we investigate the performance of our approach on the protein remote homology detection problem by fusing multiple string kernels. The MATLAB code used for our proposed geometric kernel fusion frameworks is publicly available at http://people.cs.kuleuven.be/∼raf.vandebril/homepage/software/geomean.php?menu=5/. © The Author 2014. Published by Oxford University Press.
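One of the geometry-inspired matrix means alluded to above, the log-Euclidean mean, can be sketched in a few lines of numpy. The two base kernels are toy examples, and the small ridge added to keep a rank-deficient Gram matrix inside the SPD cone is an implementation choice of this sketch, not part of the paper.

```python
import numpy as np

def logm_spd(A):
    # matrix logarithm of a symmetric positive-definite matrix via eigendecomposition
    w, V = np.linalg.eigh(A)
    return (V * np.log(w)) @ V.T

def expm_sym(A):
    # matrix exponential of a symmetric matrix
    w, V = np.linalg.eigh(A)
    return (V * np.exp(w)) @ V.T

def log_euclidean_mean(kernels, eps=1e-6):
    # geometric (log-Euclidean) mean: exp of the average of the matrix logs
    n = kernels[0].shape[0]
    logs = [logm_spd(K + eps * np.eye(n)) for K in kernels]
    return expm_sym(sum(logs) / len(logs))

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 4))
K1 = X @ X.T                                          # linear kernel
d2 = ((X[:, None, :] - X[None, :, :])**2).sum(-1)
K2 = np.exp(-0.5 * d2)                                # Gaussian kernel
G = log_euclidean_mean([K1, K2])
print(np.allclose(G, G.T), np.linalg.eigvalsh(G).min() > 0)  # fused kernel stays SPD
```

Unlike a convex linear combination, this mean lives on the manifold of SPD matrices, which is the kind of "more involved" fusion the abstract refers to; the paper considers several such means.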
Generalized synthetic kernel approximation for elastic moderation of fast neutrons
International Nuclear Information System (INIS)
Yamamoto, Koji; Sekiya, Tamotsu; Yamamura, Yasunori.
1975-01-01
A method of synthetic kernel approximation is examined in some detail with a view to simplifying the treatment of the elastic moderation of fast neutrons. A sequence of unified kernels (f_N) is introduced, which is then divided into two subsequences (W_n) and (G_n) according to whether N is odd (W_n = f_{2n-1}, n = 1, 2, ...) or even (G_n = f_{2n}, n = 0, 1, ...). The W_1 and G_1 kernels correspond to the usual Wigner and GG kernels, respectively, and the W_n and G_n kernels for n >= 2 represent generalizations thereof. It is shown that the W_n kernel solution with a relatively small n (>= 2) is superior on the whole to the G_n kernel solution for the same index n, while both converge to the exact values with increasing n. To evaluate the collision density numerically and rapidly, a simple recurrence formula is derived. In the asymptotic region (except near resonances), this recurrence formula allows calculation with a relatively coarse mesh width whenever h_a <= 0.05 at least. For calculations in the transient lethargy region, a mesh width of order ε/10 is small enough to evaluate the approximate collision density ψ_N with an accuracy comparable to that obtained analytically. It is shown that, with the present method, an order of approximation of about n = 7 should yield a practically correct solution deviating by no more than 1% in collision density. (auth.)
Unsupervised multiple kernel learning for heterogeneous data integration.
Mariette, Jérôme; Villa-Vialaneix, Nathalie
2018-03-15
Recent high-throughput sequencing advances have expanded the breadth of available omics datasets, and the integrated analysis of multiple datasets obtained on the same samples has yielded important insights in a wide range of applications. However, the integration of various sources of information remains a challenge for systems biology, since the produced datasets are often of heterogeneous types, creating a need for generic methods that take their different specificities into account. We propose a multiple kernel framework that integrates multiple datasets of various types into a single exploratory analysis. Several solutions are provided to learn either a consensus meta-kernel or a meta-kernel that preserves the original topology of the datasets. We applied our framework to analyse two public multi-omics datasets. First, multiple metagenomic datasets collected during the TARA Oceans expedition were explored to demonstrate that our method is able to retrieve previous findings in a single kernel PCA as well as to provide a new image of the sample structures when a larger number of datasets are included in the analysis. To perform this analysis, a generic procedure is also proposed to improve the interpretability of the kernel PCA with respect to the original data. Second, multi-omics breast cancer datasets provided by The Cancer Genome Atlas are analysed using kernel self-organizing maps with both single- and multi-omics strategies. The comparison of these two approaches demonstrates the benefit of our integration method in improving the representation of the studied biological system. The proposed methods are available in the R package mixKernel, released on CRAN. It is fully compatible with the mixOmics package, and a tutorial describing the approach can be found on the mixOmics web site http://mixomics.org/mixkernel/. jerome.mariette@inra.fr or nathalie.villa-vialaneix@inra.fr. Supplementary data are available at Bioinformatics online.
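The consensus meta-kernel idea can be sketched in plain numpy: build one kernel per data table on the same samples, rescale each so that no table dominates, average them, and run kernel PCA on the result. The toy tables, the trace normalization, and the eigendecomposition-based kernel PCA below are illustrative choices and not the mixKernel implementation.

```python
import numpy as np

def center_kernel(K):
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    return J @ K @ J

def kernel_pca(K, n_components=2):
    # sample coordinates from the top eigenvectors of the centred kernel
    Kc = center_kernel(K)
    w, V = np.linalg.eigh(Kc)
    idx = np.argsort(w)[::-1][:n_components]
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 10))          # "omics" table 1 (50 shared samples)
B = rng.normal(size=(50, 5))           # "omics" table 2, same samples
K_A, K_B = A @ A.T, B @ B.T

# scale each kernel to unit mean diagonal so neither table dominates
K_A /= np.trace(K_A) / len(K_A)
K_B /= np.trace(K_B) / len(K_B)

K_meta = (K_A + K_B) / 2               # consensus meta-kernel
coords = kernel_pca(K_meta)
print(coords.shape)                     # (50, 2): one point per sample
```

Because each table only enters through its kernel, heterogeneous data types (counts, continuous measures, distances) can be mixed as long as a valid kernel exists for each.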
Collision kernels in the eikonal approximation for Lennard-Jones interaction potential
International Nuclear Information System (INIS)
Zielinska, S.
1985-03-01
Velocity-changing collisions are conveniently described by collision kernels. These kernels depend on an interaction potential, and there is a need to evaluate them for realistic interatomic potentials. Using the collision kernels, we are able to investigate the redistribution of atomic populations caused by laser light and velocity-changing collisions. In this paper we present a method for evaluating the collision kernels in the eikonal approximation. We discuss the influence of the potential parameters R_o^i and ε_o^i on the kernel width for a given atomic state. It turns out that, unlike the collision kernel for the hard-sphere model of scattering, the Lennard-Jones kernel is not as sensitive to changes of R_o^i. Contrary to the general tendency of approximating collision kernels by a Gaussian curve, kernels for the Lennard-Jones potential do not exhibit such behaviour. (author)
Digestibility of solvent-treated Jatropha curcas kernel by broiler chickens in Senegal.
Nesseim, Thierry Daniel Tamsir; Dieng, Abdoulaye; Mergeai, Guy; Ndiaye, Saliou; Hornick, Jean-Luc
2015-12-01
Jatropha curcas is a drought-resistant shrub belonging to the Euphorbiaceae family. The kernel contains approximately 60 % lipid in dry matter, and the meal obtained after oil extraction could be an exceptional source of protein for family poultry farming, in the absence of curcin and, especially, of the partially lipophilic diterpene derivatives (phorbol esters). The nutrient digestibility of J. curcas kernel meal (JKM), obtained after partial physicochemical deoiling, was thus evaluated in broiler chickens. Twenty broiler chickens, 6 weeks old, were maintained in individual metabolic cages and divided into four groups of five animals, according to a 4 × 4 Latin square design where deoiled JKM was incorporated into ground corn at 0, 4, 8, and 12 % levels (diets 0, 4, 8, and 12 J), allowing measurement of nutrient digestibility by the differential method. The dry matter (DM) and organic matter (OM) digestibility of the diets was affected to a low extent by JKM (85 and 86 % in 0 J and 81 % in 12 J, respectively), in such a way that the DM and OM digestibility of JKM was estimated to be close to 50 %. The ether extract (EE) digestibility of JKM remained high, at about 90 %, while crude protein (CP) and crude fiber (CF) digestibility were largely impacted by JKM, with values close to 40 % at the highest levels of incorporation. J. curcas kernel presents various nutrient digestibilities but has adverse effects on the CP and CF digestibility of the diet. The effects of an additional heat or biological treatment on JKM remain to be assessed.
Bivariate discrete beta Kernel graduation of mortality data.
Mazza, Angelo; Punzo, Antonio
2015-07-01
Various parametric and nonparametric techniques have been proposed in the literature to graduate mortality data as a function of age. Nonparametric approaches, such as kernel smoothing regression, are often preferred because they do not assume any particular mortality law. Among the existing kernel smoothing approaches, the recently proposed (univariate) discrete beta kernel smoother has been shown to provide some benefits. Bivariate graduation, over age and calendar years or durations, is common practice in demography and actuarial sciences. In this paper, we generalize the discrete beta kernel smoother to the bivariate case, and we introduce an adaptive bandwidth variant that may provide additional benefits when data on exposures to the risk of death are available; furthermore, we outline a cross-validation procedure for bandwidth selection. Using simulation studies, we compare the bivariate approach proposed here with its corresponding univariate formulation and with two popular nonparametric bivariate graduation techniques, based on Epanechnikov kernels and on P-splines. To make the simulations realistic, a bivariate dataset, based on probabilities of dying recorded for US males, is used. The simulations confirm the gain in performance of the new bivariate approach with respect to both the univariate and the bivariate competitors.
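A minimal univariate sketch of the discrete-beta idea is shown below: beta-shaped weights on the discrete age grid whose shape adapts near the boundaries of the age range, applied to noisy crude rates. The parameterization, the Gompertz-style test curve, and the noise model are simplifications for illustration, not the authors' estimator.

```python
import numpy as np

def discrete_beta_weights(ages, x, h):
    # beta-shaped weights on the discrete age grid with mode near age x;
    # a simplified variant of the discrete beta kernel (h is the bandwidth)
    m = ages.max()
    u = (ages + 0.5) / (m + 1)              # map ages into (0, 1)
    mode = (x + 0.5) / (m + 1)
    a, b = mode / h + 1, (1 - mode) / h + 1
    w = u**(a - 1) * (1 - u)**(b - 1)
    return w / w.sum()

def graduate(ages, raw_rates, h=0.005):
    return np.array([discrete_beta_weights(ages, x, h) @ raw_rates for x in ages])

ages = np.arange(101)
true = 1e-4 * np.exp(0.09 * ages)                     # Gompertz-like mortality curve
rng = np.random.default_rng(3)
raw = true * rng.lognormal(0.0, 0.2, size=ages.size)  # noisy crude rates
smooth = graduate(ages, raw)
print(np.mean((smooth - true)**2), np.mean((raw - true)**2))
```

Because the beta weights live on the bounded age interval, the smoother has no support outside the data range, which is the main advantage over a Gaussian kernel near ages 0 and 100.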
Structured Kernel Dictionary Learning with Correlation Constraint for Object Recognition.
Wang, Zhengjue; Wang, Yinghua; Liu, Hongwei; Zhang, Hao
2017-06-21
In this paper, we propose a new discriminative non-linear dictionary learning approach, called correlation constrained structured kernel KSVD, for object recognition. The objective function for dictionary learning contains a reconstructive term and a discriminative term. In the reconstructive term, signals are implicitly non-linearly mapped into a space where a structured kernel dictionary, each sub-dictionary of which lies in the span of the mapped signals from the corresponding class, is established. In the discriminative term, by analyzing the classification mechanism, the correlation constraint is proposed in kernel form, constraining the correlations between different discriminative codes and restricting the coefficient vectors to be transformed into a feature space where the features are highly correlated within classes and nearly independent between classes. The objective function is optimized by the proposed structured kernel KSVD. During the classification stage, the specific form of the discriminative feature need not be known; its inner product, with the kernel matrix embedded, is available and is suitable for a linear SVM classifier. Experimental results demonstrate that the proposed approach outperforms many state-of-the-art dictionary learning approaches for face, scene and synthetic aperture radar (SAR) vehicle target recognition.
Mixed kernel function support vector regression for global sensitivity analysis
Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng
2017-11-01
Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Among the many sensitivity analysis techniques in the literature, the Sobol indices have attracted much attention since they can provide accurate information for most models. In this paper, a mixed kernel function (MKF) based support vector regression (SVR) model is employed to evaluate the Sobol indices at low computational cost. By the proposed derivation, the estimation of the Sobol indices can be obtained by post-processing the coefficients of the SVR meta-model. The MKF is constituted by the orthogonal polynomials kernel function and the Gaussian radial basis kernel function, so the MKF possesses both the global characteristic of the polynomial kernel and the local characteristic of the Gaussian radial basis kernel. The proposed approach is suitable for high-dimensional and non-linear problems. Performance of the proposed approach is validated on various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.
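The mixed-kernel idea, a convex combination of a global polynomial kernel and a local Gaussian kernel, can be sketched with a precomputed-kernel SVR. The target function, kernel parameters, and mixing weight below are illustrative, and this sketch only fits the meta-model; it does not reproduce the paper's Sobol-index post-processing.

```python
import numpy as np
from sklearn.svm import SVR

def rbf(X, Y, gamma=0.5):
    d2 = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2 * X @ Y.T
    return np.exp(-gamma * d2)

def poly(X, Y, degree=2):
    return (1 + X @ Y.T) ** degree

def mixed(X, Y, w=0.5):
    # convex mix: global polynomial behaviour plus local Gaussian behaviour
    return w * poly(X, Y) + (1 - w) * rbf(X, Y)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = X[:, 0]**2 + np.sin(3 * X[:, 1])          # global trend plus local wiggles
Xtr, Xte, ytr, yte = X[:150], X[150:], y[:150], y[150:]

model = SVR(kernel="precomputed", C=10.0).fit(mixed(Xtr, Xtr), ytr)
pred = model.predict(mixed(Xte, Xtr))
print(np.sqrt(np.mean((pred - yte)**2)))      # test RMSE of the meta-model
```

A convex combination of two valid kernels is itself a valid kernel, so the mixed Gram matrix can be passed directly to any precomputed-kernel SVM solver.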
On flame kernel formation and propagation in premixed gases
Energy Technology Data Exchange (ETDEWEB)
Eisazadeh-Far, Kian; Metghalchi, Hameed [Northeastern University, Mechanical and Industrial Engineering Department, Boston, MA 02115 (United States); Parsinejad, Farzan [Chevron Oronite Company LLC, Richmond, CA 94801 (United States); Keck, James C. [Massachusetts Institute of Technology, Cambridge, MA 02139 (United States)
2010-12-15
Flame kernel formation and propagation in premixed gases have been studied experimentally and theoretically. The experiments were carried out at constant pressure and temperature in a constant volume vessel located in a high speed shadowgraph system. The formation and propagation of the hot plasma kernel have been simulated for inert gas mixtures using a thermodynamic model. The effects of various parameters, including the discharge energy, radiation losses, initial temperature and initial volume of the plasma, have been studied in detail; it is concluded that these are the most important parameters affecting plasma kernel growth. The experiments have been extended to flame kernel formation and propagation of methane/air mixtures. The effect of energy terms, including spark energy, chemical energy and energy losses, on flame kernel formation and propagation has been investigated. The inputs for this model are the initial conditions of the mixture and experimental data for flame radii. The results for laminar burning speeds have been compared with previously published results and are in good agreement. (author)
Insights from Classifying Visual Concepts with Multiple Kernel Learning
Binder, Alexander; Nakajima, Shinichi; Kloft, Marius; Müller, Christina; Samek, Wojciech; Brefeld, Ulf; Müller, Klaus-Robert; Kawanabe, Motoaki
2012-01-01
Combining information from various image features has become a standard technique in concept recognition tasks. However, the optimal way of fusing the resulting kernel functions is usually unknown in practical applications. Multiple kernel learning (MKL) techniques make it possible to determine an optimal linear combination of such similarity matrices. Classical approaches to MKL promote sparse mixtures. Unfortunately, 1-norm regularized MKL variants are often observed to be outperformed by an unweighted sum kernel. The main contributions of this paper are the following: we apply a recently developed non-sparse MKL variant to state-of-the-art concept recognition tasks from the application domain of computer vision. We provide insights on the benefits and limits of non-sparse MKL and compare it against its direct competitors, the sum-kernel SVM and sparse MKL. We report empirical results for the PASCAL VOC 2009 Classification and ImageCLEF2010 Photo Annotation challenge data sets. Data sets (kernel matrices) as well as further information are available at http://doc.ml.tu-berlin.de/image_mkl/ (Accessed 2012 Jun 25). PMID:22936970
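The unweighted sum-kernel baseline discussed above is simple enough to reproduce in a few lines: build one Gram matrix per feature set, then train a precomputed-kernel SVM on their plain sum. The synthetic two-view data and kernel parameters below are illustrative, not the paper's image features.

```python
import numpy as np
from sklearn.svm import SVC

def rbf_gram(X, gamma=0.1):
    d2 = (X**2).sum(1)[:, None] + (X**2).sum(1)[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

rng = np.random.default_rng(0)
n = 200
X1 = rng.normal(size=(n, 5))                  # one feature type (e.g. colour)
X2 = rng.normal(size=(n, 5))                  # a second feature type (e.g. texture)
y = (X1[:, 0] + X2[:, 0] > 0).astype(int)     # both views carry part of the signal

K1, K2 = rbf_gram(X1), rbf_gram(X2)
tr, te = np.arange(150), np.arange(150, n)
acc = {}
for name, K in [("view 1", K1), ("view 2", K2), ("unweighted sum", K1 + K2)]:
    clf = SVC(kernel="precomputed").fit(K[np.ix_(tr, tr)], y[tr])
    acc[name] = clf.score(K[np.ix_(te, tr)], y[te])
print(acc)
```

When every view carries complementary signal, the plain sum kernel is hard to beat, which is exactly why sparse MKL variants that zero out views can underperform it.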
Semi-supervised learning for ordinal Kernel Discriminant Analysis.
Pérez-Ortiz, M; Gutiérrez, P A; Carbonero-Ruz, M; Hervás-Martínez, C
2016-12-01
Ordinal classification considers those classification problems where the labels of the variable to predict follow a given order. Naturally, labelled data is scarce or difficult to obtain in this type of problem because, in many cases, ordinal labels are given by a user or expert (e.g. in recommendation systems). Firstly, this paper develops a new strategy for ordinal classification where both labelled and unlabelled data are used in the model construction step (a scheme referred to as semi-supervised learning). More specifically, the ordinal version of kernel discriminant learning is extended to this setting, considering the neighbourhood information of unlabelled data, which is proposed to be computed in the feature space induced by the kernel function. Secondly, a new method for semi-supervised kernel learning is devised in the context of ordinal classification, which is combined with our classification strategy to optimise the kernel parameters. The experiments conducted compare six different approaches for semi-supervised learning in the context of ordinal classification on a battery of 30 datasets, showing (1) the good synergy of the ordinal version of discriminant analysis and the use of unlabelled data and (2) the advantage of computing distances in the feature space induced by the kernel function. Copyright © 2016 Elsevier Ltd. All rights reserved.
Kernel Methods for Mining Instance Data in Ontologies
Bloehdorn, Stephan; Sure, York
The amount of ontologies and metadata available on the Web is constantly growing. The successful application of machine learning techniques for learning ontologies from textual data, i.e. mining for the Semantic Web, contributes to this trend. However, no principled approaches exist so far for mining from the Semantic Web. We investigate how machine learning algorithms can be made amenable to directly taking advantage of the rich knowledge expressed in ontologies and associated instance data. Kernel methods have been successfully employed in various learning tasks and provide a clean framework for interfacing between non-vectorial data and machine learning algorithms. In this spirit, we express the problem of mining instances in ontologies as the problem of defining valid corresponding kernels. We present a principled framework for designing such kernels by decomposing the kernel computation into specialized kernels for selected characteristics of an ontology, which can be flexibly assembled and tuned. Initial experiments on real-world Semantic Web data show promising results and the usefulness of our approach.
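The decomposition idea, one specialized kernel per ontology characteristic, assembled into a kernel on whole instances, can be sketched as below. The instance schema, the sub-kernels, and the weights are invented for illustration; a sum of valid kernels with non-negative weights is itself a valid kernel, which is what makes this assembly legal.

```python
import numpy as np

def set_kernel(a, b):
    # intersection kernel on label sets (e.g. asserted ontology classes)
    return float(len(a & b))

def gauss(x, y, gamma=0.5):
    # Gaussian kernel on a numeric data property
    return float(np.exp(-gamma * (x - y) ** 2))

def instance_kernel(p, q, w=(1.0, 1.0, 1.0)):
    # weighted sum of per-characteristic kernels
    return (w[0] * set_kernel(p["classes"], q["classes"])
            + w[1] * gauss(p["age"], q["age"])
            + w[2] * set_kernel(p["knows"], q["knows"]))

alice = {"classes": {"Person", "Researcher"}, "age": 34.0, "knows": {"bob"}}
bob   = {"classes": {"Person", "Student"},    "age": 29.0, "knows": {"alice"}}
carol = {"classes": {"Organization"},         "age": 12.0, "knows": set()}
print(instance_kernel(alice, bob), instance_kernel(alice, carol))
```

Any kernel-based learner (SVM, kernel PCA) can then consume the resulting Gram matrix without ever vectorizing the instances.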
Semisupervised kernel marginal Fisher analysis for face recognition.
Wang, Ziqiang; Sun, Xia; Sun, Lijun; Huang, Yuchun
2013-01-01
Dimensionality reduction is a key problem in face recognition due to the high dimensionality of face images. To effectively cope with this problem, a novel dimensionality reduction algorithm called semisupervised kernel marginal Fisher analysis (SKMFA) for face recognition is proposed in this paper. SKMFA can make use of both labeled and unlabeled samples to learn the projection matrix for nonlinear dimensionality reduction. Meanwhile, it can successfully avoid the singularity problem by not calculating the matrix inverse. In addition, in order to make the nonlinear structure captured by the data-dependent kernel consistent with the intrinsic manifold structure, a manifold-adaptive nonparametric kernel is incorporated into the learning process of SKMFA. Experimental results on three face image databases demonstrate the effectiveness of our proposed algorithm.
Capturing Option Anomalies with a Variance-Dependent Pricing Kernel
DEFF Research Database (Denmark)
Christoffersen, Peter; Heston, Steven; Jacobs, Kris
2013-01-01
We develop a GARCH option model with a new pricing kernel allowing for a variance premium. While the pricing kernel is monotonic in the stock return and in variance, its projection onto the stock return is nonmonotonic. A negative variance premium makes it U-shaped. We present new semiparametric evidence to confirm this U-shaped relationship between the risk-neutral and physical probability densities. The new pricing kernel substantially improves our ability to reconcile the time-series properties of stock returns with the cross-section of option prices. It provides a unified explanation for the implied volatility puzzle, the overreaction of long-term options to changes in short-term variance, and the fat tails of the risk-neutral return distribution relative to the physical distribution.
Weighted Feature Gaussian Kernel SVM for Emotion Recognition.
Wei, Wei; Jia, Qingxuan
2016-01-01
Emotion recognition with weighted features based on facial expressions is a challenging research topic that has attracted great attention in the past few years. This paper presents a novel method that utilizes sub-region recognition rates to weight the kernel function. First, we divide the facial expression image into uniform sub-regions and calculate the corresponding recognition rates and weights. Then we obtain a weighted feature Gaussian kernel function and construct a classifier based on a Support Vector Machine (SVM). Finally, the experimental results suggest that the approach based on the weighted feature Gaussian kernel function performs well in terms of recognition rate. The experiments on the extended Cohn-Kanade (CK+) dataset show that our method achieves encouraging recognition results compared to state-of-the-art methods.
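A weighted feature Gaussian kernel can be written as an ordinary RBF kernel on rescaled coordinates, which also shows why it remains a valid (positive semi-definite) kernel. The tiny example below uses made-up weights standing in for the sub-region recognition rates; it is a sketch of the kernel itself, not of the full SVM pipeline.

```python
import numpy as np

def weighted_rbf(X, Y, w, gamma=1.0):
    # Gaussian kernel with per-feature weights w; equivalent to a plain RBF
    # kernel after rescaling each coordinate by sqrt(w), hence still PSD
    Xw, Yw = X * np.sqrt(w), Y * np.sqrt(w)
    d2 = (Xw**2).sum(1)[:, None] + (Yw**2).sum(1)[None, :] - 2 * Xw @ Yw.T
    return np.exp(-gamma * d2)

X = np.array([[1.0, 0.0], [0.0, 1.0]])
w_uniform = np.array([1.0, 1.0])
w_biased = np.array([1.0, 0.0])   # ignore the second (uninformative) sub-region
print(weighted_rbf(X, X, w_uniform)[0, 1])  # exp(-2): both features count
print(weighted_rbf(X, X, w_biased)[0, 1])   # exp(-1): only feature 1 counts
```

Down-weighting a sub-region therefore shrinks its contribution to the distance in the exponent, so unreliable regions influence the classifier less.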
Rational kernels for Arabic Root Extraction and Text Classification
Directory of Open Access Journals (Sweden)
Attia Nehar
2016-04-01
Full Text Available In this paper, we address the problems of Arabic text classification and root extraction using transducers and rational kernels. We introduce a new root extraction approach based on the use of Arabic patterns (Pattern Based Stemmer). Transducers are used to model these patterns, and root extraction is done without relying on any dictionary. Using transducers for extracting roots, documents are transformed into finite state transducers. This document representation allows us to use and explore rational kernels as a framework for Arabic text classification. Root extraction experiments are conducted on three word collections and yield 75.6% accuracy. Classification experiments are done on the Saudi Press Agency dataset, and N-gram kernels are tested with different values of N. Accuracy and F1 reach 90.79% and 62.93%, respectively. These results show that our approach, when compared with other approaches, is promising, especially in terms of accuracy and F1.
Energy Technology Data Exchange (ETDEWEB)
Jeong, Kyung Chai; Eom, Sung Ho; Kim, Yeon Ku; Yeo, Seoung Hwan; Kim, Young Min; Cho, Moon Sung [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)
2016-10-15
VHTR (Very High Temperature Gas Reactor) fuel technology is being actively developed in the US, China, Japan, and Korea for a Next Generation Nuclear Plant (NGNP). The fuel concept of a VHTR is based on a spherical kernel of UO₂ or UCO with multiple coating layers that create a gas-tight particle. The fuel particle of a VHTR in the US is based on microspheres containing UCO, a mixture compound of UO₂ and UC₂, coated with multiple carbon layers and a SiC layer. This was first prepared through an internal gelation method at ORNL in the late 1970s. This study presents the following: (1) C-ADU gel particles were prepared using a modified sol-gel process. The particles fabricated with the KAERI-established gelation and AWD processes showed good sphericity, and no cracks were found on their surfaces. (2) A high-temperature rotating furnace was designed and fabricated in our laboratory, with a maximum operating temperature of about 2000 °C. The furnace is equipped with a Mo crucible and a graphite heating system and is now in operation. (3) Well-prepared C-ADU gel particles were converted into UCO compounds using the high-temperature rotating furnace, and the physical properties of the UCO kernels will be analyzed.
A multi-label learning based kernel automatic recommendation method for support vector machine.
Zhang, Xueying; Song, Qinbao
2015-01-01
Choosing an appropriate kernel is very important and critical when classifying a new problem with a Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function than to kernel selection. Furthermore, most current kernel selection methods focus on seeking the single best kernel with the highest classification accuracy via cross-validation; they are time-consuming and ignore the differences in the number of support vectors and in the CPU time of SVMs with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on the data characteristics. For each data set, a meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge data base with the multi-label classification method. Finally, appropriate kernel functions are recommended for a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (linear, polynomial, radial basis function, sigmoid, Laplace, multiquadric, rational quadratic, spherical, spline, wave, and circular), and five multi-label classification methods demonstrate that, compared with existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieves the highest classification performance.
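The accuracy/CPU-time tradeoff that motivates the recommender can be seen with a plain cross-validation sweep over the standard SVM kernels. This baseline is exactly the kind of exhaustive search the paper aims to avoid on large meta-datasets; Iris is used here purely as a small stand-in.

```python
import time
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
results = {}
for kernel in ("linear", "poly", "rbf", "sigmoid"):
    t0 = time.perf_counter()
    acc = cross_val_score(SVC(kernel=kernel), X, y, cv=5).mean()
    results[kernel] = (round(float(acc), 3), round(time.perf_counter() - t0, 4))
print(results)   # (mean CV accuracy, wall-clock seconds) per kernel
```

Several kernels often perform near-equally well, so a recommender that returns the whole set of acceptable kernels (a multi-label prediction) is a natural fit, as the abstract argues.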
Broken rice kernels and the kinetics of rice hydration and texture during cooking.
Saleh, Mohammed; Meullenet, Jean-Francois
2013-05-01
During rice milling and processing, broken kernels are inevitably present, although to date it has been unclear how the presence of broken kernels affects rice hydration and cooked rice texture. Therefore, this work studied the effect of broken kernels in a rice sample on rice hydration and texture during cooking. Two medium-grain and two long-grain rice cultivars were harvested, dried and milled, and the broken kernels were separated from unbroken kernels. Broken rice kernels were subsequently combined with unbroken rice kernels, forming treatments of 0, 40, 150, 350 or 1000 g kg(-1) broken kernel ratios. Rice samples were then cooked, and the moisture content of the cooked rice, the moisture uptake rate, and rice hardness and stickiness were measured. As the amount of broken rice kernels increased, rice sample texture became significantly softer, and hardness was negatively correlated with the percentage of broken kernels in the rice samples. Differences in the proportions of broken rice in a milled rice sample play a major role in determining the texture properties of cooked rice. Variations in the moisture migration kinetics between broken and unbroken kernels caused faster hydration of the cores of broken rice kernels, with greater starch leach-out during cooking affecting the texture of the cooked rice. The texture of cooked rice can be controlled, to some extent, by varying the proportion of broken kernels in milled rice. © 2012 Society of Chemical Industry.
Measurement of Weight of Kernels in a Simulated Cylindrical Fuel Compact for HTGR
International Nuclear Information System (INIS)
Kim, Woong Ki; Lee, Young Woo; Kim, Young Min; Kim, Yeon Ku; Eom, Sung Ho; Jeong, Kyung Chai; Cho, Moon Sung; Cho, Hyo Jin; Kim, Joo Hee
2011-01-01
The TRISO-coated fuel particle for the high temperature gas-cooled reactor (HTGR) is composed of a nuclear fuel kernel and outer coating layers. The coated particles are mixed with a graphite matrix to make the HTGR fuel element. The weight of fuel kernels in an element is generally measured by chemical analysis or a gamma-ray spectrometer. Although it is accurate to measure the weight of kernels by chemical analysis, the samples used in the analysis cannot be returned to the fabrication process. Furthermore, radioactive wastes are generated during the inspection procedure. The gamma-ray spectrometer requires an elaborate reference sample to reduce measurement errors induced by the difference in geometric shape between the test sample and the reference sample. X-ray computed tomography (CT) is an alternative for measuring the weight of kernels in a compact nondestructively. In this study, X-ray CT is applied to measure the weight of kernels in a cylindrical compact containing simulated TRISO-coated particles with ZrO2 kernels. The volume of kernels as well as the number of kernels in the simulated compact is measured from the 3-D density information. The weight of kernels was calculated from the volume of kernels or the number of kernels. Also, the weight of kernels was measured by extracting the kernels from a compact to verify the result of the X-ray CT application.
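The volume-to-weight step reduces to simple arithmetic once the kernels have been segmented from the reconstructed density map. A minimal sketch of that step; the toy CT array, the threshold, the voxel pitch and the density value are illustrative assumptions, not values from the study:

```python
import numpy as np

# Toy reconstructed CT volume: one simulated ZrO2 kernel represented as a
# block of high-density voxels (geometry and values are illustrative only).
voxel_pitch_mm = 0.05
voxel_mm3 = voxel_pitch_mm ** 3            # volume of one voxel
rho_zro2_g_mm3 = 6.05e-3                   # assumed bulk density of ZrO2, g/mm^3

ct = np.zeros((20, 20, 20))
ct[5:10, 5:10, 5:10] = 1.0                 # 5*5*5 = 125 "kernel" voxels

# Simple threshold segmentation, then volume -> weight
kernel_voxels = int(np.count_nonzero(ct > 0.5))
volume_mm3 = kernel_voxels * voxel_mm3
weight_g = volume_mm3 * rho_zro2_g_mm3
```

In practice the segmentation and a calibrated density are the hard parts; the weight itself is just volume times density, as in the paper's volume-based route.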
Theoretical developments for interpreting kernel spectral clustering from alternative viewpoints
Directory of Open Access Journals (Sweden)
Diego Peluffo-Ordóñez
2017-08-01
Full Text Available To perform an exploration process over complex structured data within unsupervised settings, the so-called kernel spectral clustering (KSC) is one of the most recommended and appealing approaches, given its versatility and elegant formulation. In this work, we explore the relationship between KSC and other well-known approaches, namely normalized cut clustering and kernel k-means. To do so, we first deduce a generic KSC model from a primal-dual formulation based on least-squares support-vector machines (LS-SVM). For experiments, KSC as well as the other considered methods are assessed on image segmentation tasks to prove their usability.
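As a hedged illustration of the family of methods the paper relates (spectral clustering, normalized cut, kernel k-means), here is scikit-learn's spectral clustering on a standard two-moons problem. This uses a nearest-neighbor affinity rather than the authors' LS-SVM formulation, and all parameter values are illustrative:

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.datasets import make_moons

# Two interleaved half-moons: a classic case where kernel/spectral methods
# recover structure that plain k-means in the input space misses.
X, y = make_moons(n_samples=200, noise=0.05, random_state=0)

sc = SpectralClustering(n_clusters=2, affinity="nearest_neighbors",
                        n_neighbors=10, assign_labels="kmeans",
                        random_state=0)
labels = sc.fit_predict(X)

# Agreement with the generating labels, up to label permutation
acc = max(np.mean(labels == y), np.mean(labels != y))
```

The final `assign_labels="kmeans"` step is precisely the kernel k-means connection the paper analyzes: clustering is performed on the spectral embedding induced by the affinity matrix.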
Graphical analyses of connected-kernel scattering equations
International Nuclear Information System (INIS)
Picklesimer, A.
1983-01-01
Simple graphical techniques are employed to obtain a new (simultaneous) derivation of a large class of connected-kernel scattering equations. This class includes the Rosenberg, Bencze-Redish-Sloan, and connected-kernel multiple scattering equations as well as a host of generalizations of these and other equations. The basic result is the application of graphical methods to the derivation of interaction-set equations. This yields a new, simplified form for some members of the class and elucidates the general structural features of the entire class.
Reproducing Kernel Method for Solving Nonlinear Differential-Difference Equations
Directory of Open Access Journals (Sweden)
Reza Mokhtari
2012-01-01
Full Text Available On the basis of reproducing kernel Hilbert space theory, an iterative algorithm for solving some nonlinear differential-difference equations (NDDEs) is presented. The analytical solution is represented as a series in a reproducing kernel space, and the approximate solution is constructed by truncating the series to finitely many terms. The convergence of the approximate solution to the analytical solution is also proved. Results obtained by the proposed method imply that it can be considered a simple and accurate method for solving such differential-difference problems.
Kernel and divergence techniques in high energy physics separations
Bouř, Petr; Kůs, Václav; Franc, Jiří
2017-10-01
Binary decision trees under the Bayesian decision technique are used for supervised classification of high-dimensional data. We demonstrate the great potential of adaptive kernel density estimation as the nested separation method of the supervised binary divergence decision tree. We also provide a proof of an alternative computing approach for kernel estimates that utilizes the Fourier transform. Further, we apply our method to a Monte Carlo data set from the DØ experiment at the Tevatron particle accelerator at Fermilab and provide final top-antitop signal separation results. We achieved up to 82% AUC while using a restricted feature selection entering the signal separation procedure.
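The Fourier-transform approach mentioned above rests on the fact that a fixed-bandwidth kernel density estimate evaluated on a uniform grid is a convolution of the binned data with the kernel. A minimal sketch of that idea (simple binning, Gaussian kernel, circular convolution; this is an illustration of the general technique, not the authors' implementation):

```python
import numpy as np

def fft_kde(samples, grid, bandwidth):
    """Gaussian KDE on a uniform grid via FFT: bin the data, then
    convolve the histogram with the kernel in the frequency domain."""
    m = len(grid)
    dx = grid[1] - grid[0]
    hist, _ = np.histogram(samples, bins=m, range=(grid[0], grid[-1]))
    # Gaussian kernel sampled at circular (wrap-around) grid distances
    d = np.arange(m) * dx
    d = np.minimum(d, m * dx - d)
    kern = np.exp(-0.5 * (d / bandwidth) ** 2) / (bandwidth * np.sqrt(2 * np.pi))
    # Circular convolution via FFT; (1/N) * sum of kernels over samples
    dens = np.real(np.fft.ifft(np.fft.fft(hist) * np.fft.fft(kern)))
    return dens / len(samples)
```

For N samples on an m-point grid this costs O(N + m log m) rather than the O(N·m) of the direct sum, which is what makes kernel estimates practical inside a nested separation loop.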
Rebootless Linux Kernel Patching with Ksplice Uptrack at BNL
International Nuclear Information System (INIS)
Hollowell, Christopher; Pryor, James; Smith, Jason
2012-01-01
Ksplice/Oracle Uptrack is a software tool and update subscription service which allows system administrators to apply security and bug-fix patches to the Linux kernel running on servers/workstations without rebooting them. The RHIC/ATLAS Computing Facility (RACF) at Brookhaven National Laboratory (BNL) has deployed Uptrack on nearly 2,000 hosts running Scientific Linux and Red Hat Enterprise Linux. The use of this software has minimized downtime and improved our security posture. In this paper, we provide an overview of Ksplice's rebootless kernel patch creation/insertion mechanism and our experiences with Uptrack.
Employment of kernel methods on wind turbine power performance assessment
DEFF Research Database (Denmark)
Skrimpas, Georgios Alexandros; Sweeney, Christian Walsted; Marhadi, Kun S.
2015-01-01
A power performance assessment technique is developed for the detection of power production discrepancies in wind turbines. The method employs a widely used nonparametric pattern recognition technique, kernel methods. The evaluation is based on the trending of an extracted feature from...... the kernel matrix, called the similarity index, which is introduced by the authors for the first time. The operation of the turbine, and consequently the computation of the similarity indexes, is classified into five power bins, offering better resolution and thus more consistent root cause analysis. The accurate...
Sparse kernel orthonormalized PLS for feature extraction in large datasets
DEFF Research Database (Denmark)
Arenas-García, Jerónimo; Petersen, Kaare Brandt; Hansen, Lars Kai
2006-01-01
In this paper we present a novel multivariate analysis method for large-scale problems. Our scheme is based on a novel kernel orthonormalized partial least squares (PLS) variant for feature extraction, imposing sparsity constraints on the solution to improve scalability. The algorithm...... is tested on a benchmark of UCI data sets, and on the analysis of integrated short-time music features for genre prediction. The upshot is that the method has strong expressive power even with rather few features, clearly outperforms ordinary kernel PLS, and therefore is an appealing method...
Directory of Open Access Journals (Sweden)
Chuang Lin
2015-01-01
Full Text Available The Kernel Locality Preserving Projection (KLPP) algorithm can effectively preserve the neighborhood structure of a database using the kernel trick. It is known that supervised KLPP (SKLPP) can preserve within-class geometric structures by using label information. However, the conventional SKLPP algorithm suffers from the kernel selection problem, which has a significant impact on its performance. In order to overcome this limitation, a method named supervised kernel optimized LPP (SKOLPP) is proposed in this paper, which can maximize the class separability in kernel learning. The proposed method maps the data from the original space to a higher dimensional kernel space using a data-dependent kernel. The adaptive parameters of the data-dependent kernel are automatically calculated by optimizing an objective function. Consequently, the nonlinear features extracted by SKOLPP have greater discriminative ability compared with SKLPP and are more adaptive to the input data. Experimental results on ORL, Yale, AR, and Palmprint databases showed the effectiveness of the proposed method.
Aspergillus flavus and Fusarium verticillioides infect maize kernels and contaminate them with the mycotoxins aflatoxin and fumonisin, respectively. Combined histological examination of fungal colonization and transcriptional changes in maize kernels at 4, 12, 24, 48, and 72 hours post inoculation (...
Directory of Open Access Journals (Sweden)
Hailun Wang
2017-01-01
Full Text Available The support vector regression algorithm is widely used in fault diagnosis of rolling bearings. A new model parameter selection method for support vector regression, based on adaptive fusion of a mixed kernel function, is proposed in this paper. We choose a mixed kernel function as the kernel function of support vector regression. The fusion coefficients of the mixed kernel function, the kernel function parameters, and the regression parameters are combined into the state vector. Thus, the model selection problem is transformed into a nonlinear system state estimation problem. We use a 5th-degree cubature Kalman filter to estimate the parameters. In this way, we realize the adaptive selection of the mixed kernel function's weighting coefficients, the kernel parameters, and the regression parameters. Compared with a single kernel function, unscented Kalman filter (UKF) support vector regression algorithms, and genetic algorithms, the decision regression function obtained by the proposed method has better generalization ability and higher prediction accuracy.
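A convex combination of two valid kernels is itself a valid kernel, which is what makes the mixed-kernel construction well posed. A minimal sketch of such a mixed kernel fed to support vector regression via a precomputed Gram matrix; the paper estimates the fusion weight and kernel parameters with a cubature Kalman filter, whereas the fixed values below are purely illustrative:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

def mixed_kernel(A, B, w=0.7, gamma=0.5, degree=2):
    # Weighted fusion of an RBF kernel and a polynomial kernel;
    # w, gamma and degree are the parameters an optimizer would tune.
    return (w * rbf_kernel(A, B, gamma=gamma)
            + (1.0 - w) * polynomial_kernel(A, B, degree=degree))

# Toy regression target: noisy sine
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, (200, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(200)

svr = SVR(kernel="precomputed", C=10.0)
svr.fit(mixed_kernel(X, X), y)
y_hat = svr.predict(mixed_kernel(X, X))
```

Wrapping the weight and kernel parameters into a state vector, as the paper does, then amounts to re-evaluating this Gram matrix at each filter update.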
International Nuclear Information System (INIS)
Atta Muhammad Arif; Irman Javed; Muhammad Abdullah; Muhammad Irman; Athar Mahmud; Muhammad Nadeem; Muhammad Ayaz
2016-01-01
Chemical characteristics of blends of palm oil and mango kernel oil for their probable use as vanaspati were studied. Crude mango kernel oil was blended with refined, bleached and deodorised palm oil at 10%, 20%, 30%, 40% and 50% (T1, T2, T3, T4 and T5); market vanaspati was used as the control. The concentration of trans fatty acids in the control was 22.7%, whereas all the vanaspati samples were virtually trans-free. Slip melting points (degree Celsius) of the control, T1, T2, T3, T4 and T5 were 37.5, 37.3, 36.4, 35.6, 34.8 and 34. Free fatty acids of the control and T5 were 0.11% and 0.12%, respectively. Polymer contents of the control, T1, T2, T3, T4 and T5, after three heating cycles (18 degree Celsius, for 8 hr) were 21.55%, 20.97%, 18.66%, 17.61% and 10.22%, respectively, with a lower solid fat index (p<0.05). Blends of mango kernel oil and palm oil can be used for the formulation of trans-free vanaspati. (author)
DEFF Research Database (Denmark)
Tvedebrink, Torben; Eriksen, Poul Svante; Morling, Niels
2015-01-01
In this paper, we discuss the construction of a multivariate generalisation of the Dirichlet-multinomial distribution. An example from forensic genetics in the statistical analysis of DNA mixtures motivates the study of this multivariate extension. In forensic genetics, adjustment of the match...... probabilities due to remote ancestry in the population is often done using the so-called θ-correction. This correction increases the probability of observing multiple copies of rare alleles in a subpopulation and thereby reduces the weight of the evidence for rare genotypes. A recent publication by Cowell et al....... (2015) showed elegantly how to use Bayesian networks for efficient computations of likelihood ratios in a forensic genetic context. However, their underlying population genetic model assumed independence of alleles, which is not realistic in real populations. We demonstrate how the so-called θ...
International Nuclear Information System (INIS)
Brown, Eric G.; Louko, Jorma
2015-01-01
We present and utilize a simple formalism for the smooth creation of boundary conditions within relativistic quantum field theory. We consider a massless scalar field in (1+1)-dimensional flat spacetime and imagine smoothly transitioning from there being no boundary condition to there being a two-sided Dirichlet mirror. The act of doing this, as expected, generates a flux of real quanta that emanates from the mirror as it is being created. We show that the local stress-energy tensor of the flux is finite only if an infrared cutoff is introduced, no matter how slowly the mirror is created, in agreement with the perturbative results of Obadia and Parentani. In the limit of instantaneous mirror creation the total energy injected into the field becomes ultraviolet divergent, but the response of an Unruh-DeWitt particle detector passing through the infinite burst of energy nevertheless remains finite. Implications for vacuum entanglement extraction and for black hole firewalls are discussed.
Biasing anisotropic scattering kernels for deep-penetration Monte Carlo calculations
International Nuclear Information System (INIS)
Carter, L.L.; Hendricks, J.S.
1983-01-01
The exponential transform is often used to improve the efficiency of deep-penetration Monte Carlo calculations. This technique is usually implemented by biasing the distance-to-collision kernel of the transport equation, but leaving the scattering kernel unchanged. Dwivedi obtained significant improvements in efficiency by biasing an isotropic scattering kernel as well as the distance-to-collision kernel. This idea is extended to anisotropic scattering, particularly the highly forward Klein-Nishina scattering of gamma rays.
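The distance-to-collision biasing that the exponential transform performs can be illustrated on a purely absorbing slab, where the uncollided transmission probability is known in closed form. The cross-sections and geometry below are arbitrary toy values, and this sketch biases only the flight distance, not the scattering kernel:

```python
import numpy as np

# Deep-penetration toy problem: probability that a particle crosses a
# purely absorbing slab of optical thickness sigma*T without colliding.
# Analog answer: exp(-sigma*T). The exponential transform samples flight
# distances from a "stretched" exponential (smaller sigma_star) so that
# many more histories reach the far side, and corrects each history with
# the likelihood ratio f(s)/g(s) of true to biased densities.
rng = np.random.default_rng(0)
sigma, T, n = 1.0, 10.0, 200_000
sigma_star = 0.3                                  # biased cross-section

s = rng.exponential(1.0 / sigma_star, n)          # biased free flights
w = (sigma / sigma_star) * np.exp(-(sigma - sigma_star) * s)
est = float(np.mean((s > T) * w))                 # unbiased estimator
exact = float(np.exp(-sigma * T))                 # ~4.54e-5
```

With analog sampling only about exp(-10) * 200,000 ≈ 9 histories would score; the stretched sampling sends roughly 5% of histories through the slab and recovers the correct mean through the weights, which is the source of the variance reduction.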
The dipole form of the gluon part of the BFKL kernel
International Nuclear Information System (INIS)
Fadin, V.S.; Fiore, R.; Grabovsky, A.V.; Papa, A.
2007-01-01
The dipole form of the gluon part of the color singlet BFKL kernel in the next-to-leading order (NLO) is obtained in the coordinate representation by direct transfer from the momentum representation, where the kernel was calculated before. With this paper the transformation of the NLO BFKL kernel to the dipole form, started a few months ago with the quark part of the kernel, is completed.
Directory of Open Access Journals (Sweden)
Hjalmar Rosengren
2006-12-01
Full Text Available We study multivariable Christoffel-Darboux kernels, which may be viewed as reproducing kernels for antisymmetric orthogonal polynomials, and also as correlation functions for products of characteristic polynomials of random Hermitian matrices. Using their interpretation as reproducing kernels, we obtain simple proofs of Pfaffian and determinant formulas, as well as Schur polynomial expansions, for such kernels. In subsequent work, these results are applied in combinatorics (enumeration of marked shifted tableaux) and number theory (representation of integers as sums of squares).
Coupling individual kernel-filling processes with source-sink interactions into GREENLAB-Maize.
Ma, Yuntao; Chen, Youjia; Zhu, Jinyu; Meng, Lei; Guo, Yan; Li, Baoguo; Hoogenboom, Gerrit
2018-02-13
Failure to account for the variation of kernel growth in a cereal crop simulation model may cause serious deviations in the estimates of crop yield. The goal of this research was to revise the GREENLAB-Maize model to incorporate source- and sink-limited allocation approaches to simulate the dry matter accumulation of individual kernels of an ear (GREENLAB-Maize-Kernel). The model used potential individual kernel growth rates to characterize the individual potential sink demand. The remobilization of non-structural carbohydrates from reserve organs to kernels was also incorporated. Two years of field experiments were conducted to determine the model parameter values and to evaluate the model using two maize hybrids with different plant densities and pollination treatments. Detailed observations were made on the dimensions and dry weights of individual kernels and other above-ground plant organs throughout the seasons. Three basic traits characterizing an individual kernel were compared on simulated and measured individual kernels: (1) final kernel size; (2) kernel growth rate; and (3) duration of kernel filling. Simulations of individual kernel growth closely corresponded to experimental data. The model was able to reproduce the observed dry weight of plant organs well. Then, the source-sink dynamics and the remobilization of carbohydrates for kernel growth were quantified to show that remobilization processes accompanied source-sink dynamics during the kernel-filling process. We conclude that the model may be used to explore options for optimizing plant kernel yield by matching maize management to the environment, taking into account responses at the level of individual kernels. © The Author(s) 2018. Published by Oxford University Press on behalf of the Annals of Botany Company. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Abdulhameed, M.; Vieru, D.; Roslan, R.
2017-10-01
This paper investigates the electro-magneto-hydrodynamic flow of a non-Newtonian biofluid, with heat transfer, through a cylindrical microchannel. The fluid is acted on by an arbitrary time-dependent pressure gradient, an external electric field and an external magnetic field. The governing equations are considered as fractional partial differential equations based on the Caputo-Fabrizio time-fractional derivatives without singular kernel. The usefulness of fractional calculus in studying fluid flows or heat and mass transfer phenomena has been proven. Several experimental measurements have led to the conclusion that, in such problems, models described by fractional differential equations are more suitable. The most common time-fractional derivative used in continuum mechanics is the Caputo derivative. However, two disadvantages appear when this derivative is used. First, the definition kernel is a singular function and, secondly, the analytical expressions of the problem solutions are expressed by generalized functions (Mittag-Leffler, Lorenzo-Hartley, Robotnov, etc.) which, generally, are not adequate for numerical calculations. The new time-fractional Caputo-Fabrizio derivative, without singular kernel, is more suitable for solving various theoretical and practical problems which involve fractional differential equations. Using the Caputo-Fabrizio derivative, calculations are simpler and the obtained solutions are expressed by elementary functions. Analytical solutions of the biofluid velocity and thermal transport are obtained by means of the Laplace and finite Hankel transforms. The influence of the fractional parameter, Eckert number and Joule heating parameter on the biofluid velocity and thermal transport are numerically analyzed and graphically presented. This can be important in biochip technology, making it possible to use this analysis technique to control bioliquid samples of nanovolumes in microfluidic devices used for biological
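The contrast the abstract draws between the two derivatives can be written out explicitly. For 0 < α < 1, the classical Caputo derivative uses a singular power-law kernel, while the Caputo-Fabrizio derivative replaces it with a nonsingular exponential, where M(α) is a normalization function with M(0) = M(1) = 1:

```latex
% Caputo derivative: singular kernel (t-s)^{-\alpha}
{}^{C}\!D_t^{\alpha} f(t) \;=\; \frac{1}{\Gamma(1-\alpha)}
  \int_0^t \frac{f'(s)}{(t-s)^{\alpha}}\,\mathrm{d}s ,
\qquad 0 < \alpha < 1 ,
% Caputo-Fabrizio derivative: nonsingular exponential kernel
{}^{CF}\!D_t^{\alpha} f(t) \;=\; \frac{M(\alpha)}{1-\alpha}
  \int_0^t f'(s)\,
  \exp\!\left(-\frac{\alpha\,(t-s)}{1-\alpha}\right)\mathrm{d}s .
```

The exponential kernel is bounded at s = t, which is why the resulting solutions come out in terms of elementary functions rather than Mittag-Leffler-type series.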
Trends and Effective Use of Energy Input in the Palm Kernel Oil Mills
Directory of Open Access Journals (Sweden)
Bamgboye, AI.
2007-01-01
Full Text Available This work aims at studying the importance and the efficiency of energy use in a few palm kernel oil mills selected for their representativeness. The pattern of energy use, the cost of energy per unit product, energy intensity and the normalized performance indicator (NPI) were determined. Results show that the medium and large mills depend largely on fossil fuel, while the small mill depends on electricity. It was found that the large mill has the most effective use of energy, with high energy intensity. Annual costs of energy per unit product of N8,360,000 ($64,307.69), N12,262,250 ($94,325) and N13,353,870 ($102,722.08) were obtained for the small, medium and large mills, respectively. The NPI results show that there was no wastage of energy through space heating in the energy supplied for production within the factory site.
International Nuclear Information System (INIS)
Anuar Kassim; Nadarajan, Rathidevi; Mohd Zaizi Desa; Atan Mohd Sharif; Dzulkefly Kuang; Mohd Jelas Haron
2007-01-01
Dihydroxystearic acid (DHSA) and octyl dihydroxystearate (DHSA-octyl ester) have previously been prepared from palm oleic acid, and preliminary results showed that these compounds are suitable for personal care and cosmetics products. The objective of this research is to study the phase behavior of the ternary system DHSA/DHSA-octyl ester/RBD Palm Kernel Olein (RBDPKOo) and MCT at 85 degree Celsius. From the ternary phase diagram, results showed that all ratios studied were completely in the two-phase region at various concentrations of RBDPKOo and MCT. The phase changes were observed under polarized light, while the formation of texture was confirmed using a polarizing microscope in combination with heating. Needle and spherulite textures were found in this system. (author)
Flexible Scheduling by Deadline Inheritance in Soft Real Time Kernels
Jansen, P.G.; Wygerink, Emiel
1996-01-01
Current Hard Real Time (HRT) kernels have their timely behaviour guaranteed at the cost of a rather restrictive use of the available resources. This makes HRT scheduling techniques inadequate for use in a Soft Real Time (SRT) environment, where we can make a considerable profit by a better and more
MARMER, a flexible point-kernel shielding code
International Nuclear Information System (INIS)
Kloosterman, J.L.; Hoogenboom, J.E.
1990-01-01
A point-kernel shielding code entitled MARMER is described. It has several options with respect to geometry input, source description and detector point description which extend the flexibility and usefulness of the code, and which are especially useful in spent fuel shielding. MARMER has been validated using the TN12 spent fuel shipping cask benchmark. (author)
Mycological deterioration of stored palm kernels recovered from oil ...
African Journals Online (AJOL)
Palm kernels obtained from Pioneer Oil Mill Ltd. were stored for eight (8) weeks and examined for their microbiological quality and proximate composition. Seven (7) different fungal species were isolated by the serial dilution plate technique. The fungal species included Aspergillus flavus Link; A. nidulans Eidem; A. niger ...