WorldWideScience

Sample records for non-diagonal energy kernels

  1. Nonlinear Spinor Field in Non-Diagonal Bianchi Type Space-Time

    Directory of Open Access Journals (Sweden)

    Saha Bijan

    2018-01-01

Full Text Available Within the scope of the non-diagonal Bianchi cosmological models we have studied the role of the spinor field in the evolution of the Universe. In the non-diagonal Bianchi models the spinor field distribution along the main axis is anisotropic and does not vanish in the absence of spinor field nonlinearity. Hence, within these models a perfect fluid, dark energy, etc. cannot be simulated by the spinor field nonlinearity. The equation for the volume scale V in the non-diagonal Bianchi models contains a term with the first derivative of V explicitly and does not allow exact solution by quadratures. As in the diagonal models, the non-diagonal Bianchi space-time becomes locally rotationally symmetric even in the presence of a spinor field. It was found that, depending on the sign of the coupling constant, the model allows either an open Universe that grows rapidly or a closed Universe that ends in a Big Crunch singularity.

  2. A Non-Local, Energy-Optimized Kernel: Recovering Second-Order Exchange and Beyond in Extended Systems

    Science.gov (United States)

    Bates, Jefferson; Laricchia, Savio; Ruzsinszky, Adrienn

    The Random Phase Approximation (RPA) is quickly becoming a standard method beyond semi-local Density Functional Theory that naturally incorporates weak interactions and eliminates self-interaction error. RPA is not perfect, however, and suffers from self-correlation error as well as an incorrect description of short-ranged correlation typically leading to underbinding. To improve upon RPA we introduce a short-ranged, exchange-like kernel that is one-electron self-correlation free for one and two electron systems in the high-density limit. By tuning the one free parameter in our model to recover an exact limit of the homogeneous electron gas correlation energy we obtain a non-local, energy-optimized kernel that reduces the errors of RPA for both homogeneous and inhomogeneous solids. To reduce the computational cost of the standard kernel-corrected RPA, we also implement RPA renormalized perturbation theory for extended systems, and demonstrate its capability to describe the dominant correlation effects with a low-order expansion in both metallic and non-metallic systems. Furthermore we stress that for norm-conserving implementations the accuracy of RPA and beyond RPA structural properties compared to experiment is inherently limited by the choice of pseudopotential. Current affiliation: King's College London.

  3. Feature fusion using kernel joint approximate diagonalization of eigen-matrices for rolling bearing fault identification

    Science.gov (United States)

    Liu, Yongbin; He, Bing; Liu, Fang; Lu, Siliang; Zhao, Yilei

    2016-12-01

Fault pattern identification is a crucial step in the intelligent fault diagnosis of the real-time health condition of a monitored mechanical system. However, many challenges exist in extracting effective features from vibration signals for fault recognition. A new feature fusion method is proposed in this study to extract new features using kernel joint approximate diagonalization of eigen-matrices (KJADE). In the method, the input space composed of the original features is mapped into a high-dimensional feature space by a nonlinear mapping. The new features are then estimated through the eigen-decomposition of the fourth-order cumulant kernel matrix obtained from the feature space. Because it is nonlinear by nature, the proposed method can reduce data redundancy while extracting the inherent pattern structure of the different fault classes. The integrated evaluation factor of between-class and within-class scatters (SS) is employed to depict the clustering performance quantitatively, and the new feature subset extracted by the proposed method is fed into a multi-class support vector machine for fault pattern identification. Finally, the effectiveness of the proposed method is verified using experimental vibration signals with different bearing fault types and severities. Results of several cases show that the KJADE algorithm is efficient in feature fusion for bearing fault identification.
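
A minimal sketch of the first stage this record describes, the nonlinear mapping into a kernel feature space followed by an eigen-decomposition (this is ordinary kernel PCA, a simplification: KJADE additionally jointly diagonalizes fourth-order cumulant matrices in that space; the Gaussian kernel and `gamma` are illustrative assumptions):

```python
import numpy as np

def kernel_pca(X, n_components, gamma=0.5):
    """Map samples into an RKHS with a Gaussian kernel and eigendecompose
    the centered kernel matrix; returns the fused feature scores."""
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))  # kernel matrix
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                               # center the implicit feature vectors
    w, V = np.linalg.eigh(Kc)                    # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:n_components]     # keep the largest components
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))
```

On well-separated classes the leading score already clusters the samples, which is the property the SS factor in the abstract quantifies.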

  4. Diagonalizing quadratic bosonic operators by non-autonomous flow equations

    CERN Document Server

    Bach, Volker

    2016-01-01

The authors study a non-autonomous, non-linear evolution equation on the space of operators on a complex Hilbert space. They specify assumptions that ensure the global existence of its solutions and allow them to derive its asymptotics at temporal infinity. They demonstrate that these assumptions are optimal in a suitable sense and more general than those used before. The evolution equation derives from the Brockett-Wegner flow that was proposed to diagonalize matrices and operators by a strongly continuous unitary flow. In fact, the solution of the non-linear flow equation leads to a diagonalization of Hamiltonian operators in boson quantum field theory which are quadratic in the field.

  5. Non-diagonal processes of singlet and ordinary quark production

    International Nuclear Information System (INIS)

    Bejlin, V.A.; Vereshkov, G.M.; Kuksa, V.I.

    1995-01-01

    Non-diagonal processes of singlet and ordinary quark production are analyzed in the model where the down singlet quark mixes with the ordinary ones. The possibility of experimental selection of h-quark effects is demonstrated

  6. Non-separable pairing interaction kernels applied to superconducting cuprates

    International Nuclear Information System (INIS)

    Haley, Stephen B.; Fink, Herman J.

    2014-01-01

Highlights: • Non-separable interaction kernels with weak interactions produce HTS. • A probabilistic approach is used in filling the electronic states in the unit cell. • A set of coupled equations is derived which describes the energy gap. • SC properties of separable and non-separable interactions are compared. • There is agreement with measured properties of the SC and normal states. - Abstract: A pairing Hamiltonian H(Γ) with a non-separable interaction kernel Γ produces HTS for relatively weak interactions. The doping and temperature dependence of Γ(x,T) and the chemical potential μ(x) are determined by a probabilistic filling of the electronic states in the cuprate unit cell. A diverse set of HTS and normal-state properties is examined, including the SC phase transition boundary T_C(x), SC gap Δ(x,T), entropy S(x,T), specific heat C(x,T), and spin susceptibility χ_s(x,T). Detailed agreement in x and T with cuprate experiment is obtained for all properties

  7. Ideal gas scattering kernel for energy dependent cross-sections

    International Nuclear Information System (INIS)

    Rothenstein, W.; Dagan, R.

    1998-01-01

    A third, and final, paper on the calculation of the joint kernel for neutron scattering by an ideal gas in thermal agitation is presented, when the scattering cross-section is energy dependent. The kernel is a function of the neutron energy after scattering, and of the cosine of the scattering angle, as in the case of the ideal gas kernel for a constant bound atom scattering cross-section. The final expression is suitable for numerical calculations

  8. Nonconformal scalar field in uniform isotropic space and the method of Hamiltonian diagonalization

    International Nuclear Information System (INIS)

    Pavlov, Yu.V.

    2001-01-01

The metric Hamiltonian of a scalar field with an arbitrary coupling to curvature in N-dimensional uniform isotropic space is diagonalized, and the energy spectrum of the corresponding quasiparticles is derived. The energy of the quasiparticles corresponding to the diagonal form of the canonical Hamiltonian is calculated. A modified energy-momentum tensor is constructed with the following properties: for a conformal scalar field it coincides with the metric energy-momentum tensor; when it is diagonalized, the energies of the corresponding particles of the nonconformal field are equal to the oscillation frequency, and the number of such particles produced in a non-stationary metric is finite. It is shown that the Hamiltonian calculated from the modified energy-momentum tensor can be derived as a canonical one for a certain choice of variables [ru

  9. An asymptotic expression for the eigenvalues of the normalization kernel of the resonating group method

    International Nuclear Information System (INIS)

    Lomnitz-Adler, J.; Brink, D.M.

    1976-01-01

A generating function for the eigenvalues of the RGM Normalization Kernel is expressed in terms of the diagonal matrix elements of the GCM Overlap Kernel. An asymptotic expression for the eigenvalues is obtained by using the Method of Steepest Descent. (Auth.)

  10. High-performance implementation of Chebyshev filter diagonalization for interior eigenvalue computations

    Energy Technology Data Exchange (ETDEWEB)

    Pieper, Andreas [Ernst-Moritz-Arndt-Universität Greifswald (Germany); Kreutzer, Moritz [Friedrich-Alexander-Universität Erlangen-Nürnberg (Germany); Alvermann, Andreas, E-mail: alvermann@physik.uni-greifswald.de [Ernst-Moritz-Arndt-Universität Greifswald (Germany); Galgon, Martin [Bergische Universität Wuppertal (Germany); Fehske, Holger [Ernst-Moritz-Arndt-Universität Greifswald (Germany); Hager, Georg [Friedrich-Alexander-Universität Erlangen-Nürnberg (Germany); Lang, Bruno [Bergische Universität Wuppertal (Germany); Wellein, Gerhard [Friedrich-Alexander-Universität Erlangen-Nürnberg (Germany)

    2016-11-15

We study Chebyshev filter diagonalization as a tool for the computation of many interior eigenvalues of very large sparse symmetric matrices. In this technique the subspace projection onto the target space of wanted eigenvectors is approximated with filter polynomials obtained from Chebyshev expansions of window functions. After the discussion of the conceptual foundations of Chebyshev filter diagonalization we analyze the impact of the choice of the damping kernel, search space size, and filter polynomial degree on the computational accuracy and effort, before we describe the necessary steps towards a parallel high-performance implementation. Because Chebyshev filter diagonalization avoids the need for matrix inversion it can deal with matrices and problem sizes that are presently not accessible with rational function methods based on direct or iterative linear solvers. To demonstrate the potential of Chebyshev filter diagonalization for large-scale problems of this kind we include as an example the computation of the 10^2 innermost eigenpairs of a topological insulator matrix with dimension 10^9 derived from quantum physics applications.
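
The core filtering step can be sketched as follows (a minimal kernel-polynomial-method version, not the authors' implementation; it assumes the spectrum has already been scaled into [-1, 1] and uses Jackson damping, one of the damping kernels the abstract mentions):

```python
import numpy as np

def chebyshev_filter(H, v, lo, hi, degree=80):
    """Apply a Jackson-damped Chebyshev approximation of the indicator
    function of the window [lo, hi] to the vector v."""
    a, b = np.arccos(hi), np.arccos(lo)  # window edges in the angle variable
    # Chebyshev moments of the window indicator function
    c = [(b - a) / np.pi] + [2 * (np.sin(k * b) - np.sin(k * a)) / (k * np.pi)
                             for k in range(1, degree + 1)]
    N = degree + 1
    # Jackson damping coefficients suppress Gibbs oscillations
    g = [((N - k + 1) * np.cos(np.pi * k / (N + 1)) +
          np.sin(np.pi * k / (N + 1)) / np.tan(np.pi / (N + 1))) / (N + 1)
         for k in range(N)]
    t_prev, t_curr = v, H @ v            # three-term Chebyshev recurrence
    out = g[0] * c[0] * t_prev + g[1] * c[1] * t_curr
    for k in range(2, N):
        t_prev, t_curr = t_curr, 2 * (H @ t_curr) - t_prev
        out += g[k] * c[k] * t_curr
    return out
```

Applied to a block of random vectors, this amplifies eigencomponents inside the window and damps the rest, which is exactly the subspace projection the record describes.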

  11. Dose calculation methods in photon beam therapy using energy deposition kernels

    International Nuclear Information System (INIS)

    Ahnesjoe, A.

    1991-01-01

The problem of calculating accurate dose distributions in treatment planning of megavoltage photon radiation therapy has been studied. New dose calculation algorithms using energy deposition kernels have been developed. The kernels describe the transfer of energy by secondary particles from a primary photon interaction site to its surroundings. Monte Carlo simulations of particle transport have been used for derivation of kernels for primary photon energies from 0.1 MeV to 50 MeV. The trade-off between accuracy and calculational speed has been addressed by the development of two algorithms: one point-oriented, with low computational overhead for interactive use, and one for fast and accurate calculation of dose distributions in a 3-dimensional lattice. The latter algorithm models secondary particle transport in heterogeneous tissue by scaling energy deposition kernels with the electron density of the tissue. The accuracy of the methods has been tested using full Monte Carlo simulations for different geometries, and found to be superior to conventional algorithms based on scaling of broad-beam dose distributions. Methods have also been developed for characterization of clinical photon beams in entities appropriate for kernel-based calculation models. By approximating the spectrum as laterally invariant, an effective spectrum and a dose distribution for contaminating charged particles are derived from depth-dose distributions measured in water, using analytical constraints. The spectrum is used to calculate kernels by superposition of monoenergetic kernels. The lateral energy fluence distribution is determined by deconvolving measured lateral dose distributions with a corresponding pencil beam kernel. Dose distributions for contaminating photons are described using two different methods: one for estimation of the dose outside the collimated beam, and the other for calibration of output factors derived from kernel-based dose calculations. (au)
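
The kernel-superposition idea can be illustrated in one dimension (a toy sketch, not the thesis's 3-D algorithm: the attenuation coefficient `mu` and the exponential deposition kernel are made-up illustrative values):

```python
import numpy as np

# Depth grid (cm) and a mono-energetic primary beam attenuating exponentially.
depth = np.arange(0, 30, 0.1)
mu = 0.05                              # 1/cm, illustrative attenuation coefficient
terma = mu * np.exp(-mu * depth)       # energy released per unit mass (arbitrary units)

# Hypothetical 1-D energy deposition kernel: energy released at a point is
# deposited downstream over a few cm (the secondary-electron range).
r = np.arange(0, 5, 0.1)
kernel = np.exp(-r / 0.8)
kernel /= kernel.sum()                 # normalize: all released energy is deposited

# Dose = superposition (convolution) of released energy with the kernel.
dose = np.convolve(terma, kernel)[:depth.size]
```

Even this crude model reproduces the characteristic dose build-up: the maximum lies below the surface because deposited energy lags the primary interactions.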

  12. Discrete non-parametric kernel estimation for global sensitivity analysis

    International Nuclear Information System (INIS)

    Senga Kiessé, Tristan; Ventura, Anne

    2016-01-01

This work investigates the discrete kernel approach for evaluating the contribution of the variance of discrete input variables to the variance of model output, via analysis of variance (ANOVA) decomposition. Until recently, only the continuous kernel approach had been applied as a metamodeling approach within the sensitivity analysis framework, for both discrete and continuous input variables. The discrete kernel estimation is now known to be suitable for smoothing discrete functions. We present a discrete non-parametric kernel estimator of the ANOVA decomposition of a given model. An estimator of the sensitivity indices is also presented, with its asymptotic convergence rate. Simulations on a test function and a real case study from agriculture have shown that the discrete kernel approach outperforms the continuous kernel one for evaluating the contribution of moderately or most influential discrete parameters to the model output. - Highlights: • We study a discrete kernel estimation for sensitivity analysis of a model. • A discrete kernel estimator of ANOVA decomposition of the model is presented. • Sensitivity indices are calculated for discrete input parameters. • An estimator of sensitivity indices is also presented with its convergence rate. • An application is realized for improving the reliability of environmental models.
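
The quantity being estimated is the first-order ANOVA sensitivity index S_i = Var(E[Y|X_i])/Var(Y). For discrete inputs it can be approximated directly by grouping on input levels (a brute-force sketch on a made-up test model, without the kernel smoothing this record contributes):

```python
import numpy as np

rng = np.random.default_rng(0)
# Two discrete inputs: x1 in {0..4} is influential, x2 in {0,1} is weak.
x1 = rng.integers(0, 5, 100_000)
x2 = rng.integers(0, 2, 100_000)
y = 3.0 * x1 + 0.5 * x2 + rng.normal(0.0, 0.1, 100_000)

def first_order_index(x, y):
    """Estimate S = Var(E[Y|X]) / Var(Y) by grouping on the levels of X."""
    levels = np.unique(x)
    means = np.array([y[x == v].mean() for v in levels])   # E[Y|X=v]
    probs = np.array([(x == v).mean() for v in levels])    # P(X=v)
    return probs @ (means - y.mean())**2 / y.var()

s1, s2 = first_order_index(x1, y), first_order_index(x2, y)
```

Here x1 should capture nearly all of the output variance (analytically S1 ≈ 18/18.07) and x2 almost none, matching the "moderate or most influential parameter" ranking task in the abstract.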

  13. Linear and kernel methods for multivariate change detection

    DEFF Research Database (Denmark)

    Canty, Morton J.; Nielsen, Allan Aasbjerg

    2012-01-01

…, as well as maximum autocorrelation factor (MAF) and minimum noise fraction (MNF) analyses of IR-MAD images, both linear and kernel-based (nonlinear), may further enhance change signals relative to no-change background. IDL (Interactive Data Language) implementations of IR-MAD, automatic radiometric normalization, and kernel PCA/MAF/MNF transformations are presented that function as transparent and fully integrated extensions of the ENVI remote sensing image analysis environment. The train/test approach to kernel PCA is evaluated against a Hebbian learning procedure. Matlab code is also available that allows fast data exploration and experimentation with smaller datasets. New, multiresolution versions of IR-MAD that accelerate convergence and that further reduce no-change background noise are introduced. Computationally expensive matrix diagonalization and kernel image projections are programmed…

  14. INFORMATIVE ENERGY METRIC FOR SIMILARITY MEASURE IN REPRODUCING KERNEL HILBERT SPACES

    Directory of Open Access Journals (Sweden)

    Songhua Liu

    2012-02-01

Full Text Available In this paper, an information energy metric (IEM) is obtained by similarity computing for high-dimensional samples in a reproducing kernel Hilbert space (RKHS). Firstly, similar/dissimilar subsets and their corresponding informative energy functions are defined. Secondly, IEM is proposed for similarity measure of those subsets, which converts the non-metric distances into metric ones. Finally, applications of this metric are introduced, such as classification problems. Experimental results validate the effectiveness of the proposed method.
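
The basic mechanism for turning a kernel similarity into a true metric can be sketched with the standard RKHS distance d(x,y) = sqrt(k(x,x) + k(y,y) - 2k(x,y)) (this is the generic construction, not the IEM of the paper; the Gaussian kernel and `gamma` are assumptions):

```python
import numpy as np

def rbf(x, y, gamma=0.5):
    """Gaussian RBF kernel: an inner product in an implicit RKHS."""
    return np.exp(-gamma * np.sum((np.asarray(x) - np.asarray(y))**2))

def rkhs_distance(x, y, k=rbf):
    """Distance between feature maps: d(x,y) = sqrt(k(x,x)+k(y,y)-2k(x,y)).
    Unlike the raw similarity k, this satisfies the metric axioms."""
    return np.sqrt(max(k(x, x) + k(y, y) - 2.0 * k(x, y), 0.0))
```

Identity, symmetry, and the triangle inequality all follow because the distance is a Euclidean distance between feature vectors in the RKHS.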

  15. Calculation of the thermal neutron scattering kernel using the synthetic model. Pt. 2. Zero-order energy transfer kernel

    International Nuclear Information System (INIS)

    Drozdowicz, K.

    1995-01-01

A comprehensive unified description of the application of Granada's Synthetic Model to slow-neutron scattering by molecular systems is continued. Detailed formulae for the zero-order energy transfer kernel are presented, based on the general formalism of the model. An explicit analytical formula for the total scattering cross section as a function of the incident neutron energy is also obtained. Expressions of the free gas model for the zero-order scattering kernel and for the total scattering kernel are considered as a sub-case of the Synthetic Model. (author). 10 refs

  16. Novel Diagonal Reloading Based Direction of Arrival Estimation in Unknown Non-Uniform Noise

    Directory of Open Access Journals (Sweden)

    Hao Zhou

    2018-01-01

Full Text Available A nested array can expand the degrees of freedom (DOF) from the difference coarray perspective, but suffers from degraded direction of arrival (DOA) estimation performance in unknown non-uniform noise. In this paper, a novel diagonal reloading (DR) based DOA estimation algorithm is proposed using a recently developed nested MIMO array. The elements in the main diagonal of the sample covariance matrix are eliminated; next, the smallest MN-K eigenvalues of the revised matrix are obtained and averaged to estimate the sum of the signal powers. The estimated sum is then filled into the main diagonal of the revised matrix to estimate the signal covariance matrix. In this case, the negative effect of noise is eliminated without losing the useful information of the signal matrix. Besides, the degrees of freedom are expanded obviously, resulting in improved performance. Several simulations are conducted to demonstrate the effectiveness of the proposed algorithm.
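
The diagonal-reloading step itself can be sketched for a plain M-element array (a simplified reading of the procedure above, not the authors' nested-MIMO version; it assumes K known, uncorrelated sources, unit-modulus steering vectors, and diagonal - possibly non-uniform - noise):

```python
import numpy as np

def diagonal_reload(R, K):
    """Strip the noisy main diagonal of the covariance R, estimate the lost
    per-sensor signal power from the smallest M-K eigenvalues, and reload it."""
    M = R.shape[0]
    R0 = R - np.diag(np.diag(R))          # remove diagonal (signal power + noise)
    w = np.sort(np.linalg.eigvalsh(R0))   # eigenvalues, ascending
    p = -w[:M - K].mean()                 # smallest M-K eigenvalues equal -(signal power)
    return R0 + p * np.eye(M)             # reloaded, noise-free covariance estimate
```

Because the noise only enters the main diagonal, stripping and reloading removes it exactly in the ideal-covariance case, even when it is non-uniform across sensors.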

  17. Performance Study of Diagonally Segmented Piezoelectric Vibration Energy Harvester

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jae Eun [Catholic Univ. of Daegu, Daegu (Korea, Republic of)

    2013-08-15

    This study proposes a piezoelectric vibration energy harvester composed of two diagonally segmented energy harvesting units. An auxiliary structural unit is attached to the tip of a host structural unit cantilevered to a vibrating base, where the two components have beam axes in opposite directions from each other and matched short-circuit resonant frequencies. Contrary to the usual observations in two resonant frequency-matched structures, the proposed structure shows little eigenfrequency separation and yields a mode sequence change between the first two modes. These lead to maximum power generation around a specific frequency. By using commercial finite element software, it is shown that the magnitude of the output power from the proposed vibration energy harvester can be substantially improved in comparison with those from conventional cantilevered energy harvesters with the same footprint area and magnitude of a tip mass.

  18. A Fourier-series-based kernel-independent fast multipole method

    International Nuclear Information System (INIS)

    Zhang Bo; Huang Jingfang; Pitsianis, Nikos P.; Sun Xiaobai

    2011-01-01

    We present in this paper a new kernel-independent fast multipole method (FMM), named as FKI-FMM, for pairwise particle interactions with translation-invariant kernel functions. FKI-FMM creates, using numerical techniques, sufficiently accurate and compressive representations of a given kernel function over multi-scale interaction regions in the form of a truncated Fourier series. It provides also economic operators for the multipole-to-multipole, multipole-to-local, and local-to-local translations that are typical and essential in the FMM algorithms. The multipole-to-local translation operator, in particular, is readily diagonal and does not dominate in arithmetic operations. FKI-FMM provides an alternative and competitive option, among other kernel-independent FMM algorithms, for an efficient application of the FMM, especially for applications where the kernel function consists of multi-physics and multi-scale components as those arising in recent studies of biological systems. We present the complexity analysis and demonstrate with experimental results the FKI-FMM performance in accuracy and efficiency.

  19. From GCM energy kernels to Weyl-Wigner Hamiltonians: a particular mapping

    International Nuclear Information System (INIS)

    Galetti, D.

    1984-01-01

    A particular mapping is established which directly connects GCM energy kernels to Weyl-Wigner Hamiltonians, under the assumption of gaussian overlap kernel. As an application of this mapping scheme the collective Hamiltonians for some giant resonances are derived. (Author) [pt

  20. Analysis of the cable equation with non-local and non-singular kernel fractional derivative

    Science.gov (United States)

    Karaagac, Berat

    2018-02-01

    Recently a new concept of differentiation was introduced in the literature where the kernel was converted from non-local singular to non-local and non-singular. One of the great advantages of this new kernel is its ability to portray fading memory and also well defined memory of the system under investigation. In this paper the cable equation which is used to develop mathematical models of signal decay in submarine or underwater telegraphic cables will be analysed using the Atangana-Baleanu fractional derivative due to the ability of the new fractional derivative to describe non-local fading memory. The existence and uniqueness of the more generalized model is presented in detail via the fixed point theorem. A new numerical scheme is used to solve the new equation. In addition, stability, convergence and numerical simulations are presented.

  1. Gradient $L^q$ theory for a class of non-diagonal nonlinear elliptic systems

    Czech Academy of Sciences Publication Activity Database

    Bulíček, M.; Kalousek, M.; Kaplický, P.; Mácha, Václav

    2018-01-01

Roč. 171, June (2018), s. 156-169 ISSN 0362-546X R&D Projects: GA ČR GA16-03230S Institutional support: RVO:67985840 Keywords : regularity * gradient estimates * non-diagonal elliptic systems Subject RIV: BA - General Mathematics OBOR OECD: Pure mathematics Impact factor: 1.192, year: 2016 https://www.sciencedirect.com/science/article/pii/S0362546X18300385

  3. Does Illumination of Non-Mature Cereal Kernels During Drying Affect the Germination Ability?

    Directory of Open Access Journals (Sweden)

    Małuszyńska Elżbieta

    2016-06-01

the germination ability of non-mature kernels depends on all studied factors: lighting during drying, term of harvesting, and the light * term interaction; non-mature kernels are more sensitive to drying conditions; lighting during seed drying can have a positive effect on the ability to germinate; for breeding practice it would be better to harvest kernels at 23 DAF and dry them at room conditions under an incandescent lamp.

  4. Chaos in non-diagonal spatially homogeneous cosmological models in spacetime dimensions <=10

    Science.gov (United States)

    Demaret, Jacques; de Rop, Yves; Henneaux, Marc

    1988-08-01

    It is shown that the chaotic oscillatory behaviour, absent in diagonal homogeneous cosmological models in spacetime dimensions between 5 and 10, can be reestablished when off-diagonal terms are included. Also at Centro de Estudios Cientificos de Santiago, Casilla 16443, Santiago 9, Chile

  5. Study on Energy Productivity Ratio (EPR) at palm kernel oil processing factory: case study on PT-X at Sumatera Utara Plantation

    Science.gov (United States)

    Haryanto, B.; Bukit, R. Br; Situmeang, E. M.; Christina, E. P.; Pandiangan, F.

    2018-02-01

The purpose of this study was to determine the performance, productivity, and feasibility of operation of a palm kernel processing plant based on the Energy Productivity Ratio (EPR). EPR is expressed as the ratio of output energy and by-products to input energy. The plant processes palm kernels into palm kernel oil. The procedure started from collecting the data needed as energy input, such as palm kernel prices, energy demand, and depreciation of the factory. The energy output and its by-products comprise the whole production price, such as the palm kernel oil price and the price of the remaining products such as shells and pulp. The energy equivalence of palm kernel oil is calculated to analyze the value of the EPR based on processing capacity per year. The investigation was carried out at the kernel oil processing plant PT-X at a Sumatera Utara plantation. The value of EPR was 1.54 (EPR > 1), which indicates that processing palm kernel into palm kernel oil is feasible to operate based on energy productivity.
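
The EPR arithmetic is simple enough to state directly (the input figures below are made up for illustration; the study reports only the final ratio, 1.54):

```python
def energy_productivity_ratio(output_main, output_byproduct, input_energy):
    """EPR = (main output + by-products) / input; EPR > 1 means the
    operation is feasible on energy-productivity grounds."""
    return (output_main + output_byproduct) / input_energy

# Illustrative (hypothetical) yearly values chosen to reproduce the ratio:
epr = energy_productivity_ratio(1200.0, 340.0, 1000.0)   # = 1.54
feasible = epr > 1.0
```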

  6. On diagonalization in map(M,G)

    International Nuclear Information System (INIS)

    Blau, M.; Thompson, G.

    1995-01-01

Motivated by some questions in the path integral approach to (topological) gauge theories, we are led to address the following question: given a smooth map from a manifold M to a compact group G, is it possible to smoothly ''diagonalize'' it, i.e. conjugate it into a map to a maximal torus T of G? We analyze the local and global obstructions and give a complete solution to the problem for regular maps. We establish that these can always be smoothly diagonalized locally and that the obstructions to doing this globally are non-trivial Weyl group and torus bundles on M. We explain the relation of the obstructions to winding numbers of maps into G/T and restrictions of the structure group of a principal G bundle to T and examine the behaviour of gauge fields under this diagonalization. We also discuss the complications that arise in the presence of non-trivial G-bundles and for non-regular maps. We use these results to justify a Weyl integral formula for functional integrals which, as a novel feature not seen in the finite-dimensional case, contains a summation over all those topological T-sectors which arise as restrictions of a trivial principal G bundle and which was used previously to solve completely Yang-Mills theory and the G/G model in two dimensions. (orig.)

  7. Low-energy moments of non-diagonal quark current correlators at four loops

    International Nuclear Information System (INIS)

    Maier, A.

    2015-06-01

    We complete the leading four physical terms in the low-energy expansions of heavy-light quark current correlators at four-loop order. As a by-product we reproduce the corresponding top-induced non-singlet correction to the electroweak ρ parameter.

  8. An Internal Data Non-hiding Type Real-time Kernel and its Application to the Mechatronics Controller

    Science.gov (United States)

    Yoshida, Toshio

For the mechatronics equipment controller that controls robots and machine tools, high-speed motion control processing is essential. The software system of the controller, like other embedded systems, is composed of three software layers on dedicated hardware: a real-time kernel layer, a middleware layer, and an application software layer. The application layer at the top is composed of many tasks, and the application function of the system is realized by cooperation between these tasks. In this paper we propose an internal data non-hiding type real-time kernel in which customizing the task control is possible only by changing the program code on the task side, without any changes in the program code of the real-time kernel. Reducing the overhead caused by the real-time kernel's task control is necessary to speed up the motion control of mechatronics equipment, and for this, customizing the task control function is needed. We developed the internal data non-hiding type real-time kernel ZRK to evaluate this method and applied it to the control of a multi-system automatic lathe. The speed-up of the task cooperation processing was confirmed by combined task control processing in the task-side program code using the internal data non-hiding type real-time kernel ZRK.

  9. Experimental evidence of off-diagonal transport term and the discrepancy between energy/particle balance and perturbation analyses

    International Nuclear Information System (INIS)

    Nagashima, Keisuke; Fukuda, Takeshi

    1991-12-01

Evidence of a temperature-gradient-driven particle flux was observed in the sawtooth-induced density propagation phenomenon in JT-60. This off-diagonal particle flux was confirmed using numerical calculation of the measured chord-integrated electron density. It was shown that the discrepancies between the thermal and particle diffusivities estimated from the perturbation method and from energy/particle balance analysis can be explained by considering flux equations with off-diagonal transport terms. These flux equations were compared with the E×B convective fluxes in an electrostatic drift wave instability, and it was found that the E×B fluxes are consistent with several experimental observations. (author)

  10. Kernel bandwidth estimation for non-parametric density estimation: a comparative study

    CSIR Research Space (South Africa)

    Van der Walt, CM

    2013-12-01

    Full Text Available We investigate the performance of conventional bandwidth estimators for non-parametric kernel density estimation on a number of representative pattern-recognition tasks, to gain a better understanding of the behaviour of these estimators in high...
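
One of the conventional bandwidth estimators such studies typically benchmark is Silverman's rule of thumb (a standard textbook estimator, shown here as a sketch; the record does not say which estimators it compares):

```python
import numpy as np

def silverman_bandwidth(x):
    """Silverman's rule of thumb for a Gaussian kernel:
    h = 0.9 * min(std, IQR/1.34) * n**(-1/5)."""
    x = np.asarray(x, dtype=float)
    iqr = np.subtract(*np.percentile(x, [75, 25]))
    scale = min(x.std(ddof=1), iqr / 1.34)
    return 0.9 * scale * x.size ** (-0.2)

def kde(points, data, h):
    """Gaussian kernel density estimate evaluated at `points`."""
    u = (np.asarray(points)[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (data.size * h * np.sqrt(2 * np.pi))
```

Rules of thumb like this are derived for roughly Gaussian data; the point of comparative studies is precisely that their performance degrades on multimodal or high-dimensional pattern-recognition tasks.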

  11. Three-dimensional photodissociation in strong laser fields: Memory-kernel effective-mode expansion

    International Nuclear Information System (INIS)

    Li Xuan; Thanopulos, Ioannis; Shapiro, Moshe

    2011-01-01

We introduce a method for the efficient computation of non-Markovian quantum dynamics for strong (and time-dependent) system-bath interactions. The past history of the system dynamics is incorporated by expanding the memory kernel in exponential functions, thereby transforming in an exact fashion the non-Markovian integrodifferential equations into a (larger) set of ''effective modes'' differential equations (EMDE). We have devised a method which easily diagonalizes the EMDE, thereby allowing for the efficient construction of an adiabatic basis and the fast propagation of the EMDE in time. We have applied this method to three-dimensional photodissociation of the H2+ molecule by strong laser fields. Our calculations properly include resonance-Raman scattering via the continuum, resulting in extensive rotational and vibrational excitations. The calculated final kinetic and angular distributions of the photofragments are in overall excellent agreement with experiments, both when transform-limited pulses and when chirped pulses are used.

  12. Proof of the formula for the ideal gas scattering kernel for nuclides with strongly energy dependent scattering cross sections

    International Nuclear Information System (INIS)

    Rothenstein, W.

    2004-01-01

    The current study is a sequel to a paper by Rothenstein and Dagan [Ann. Nucl. Energy 25 (1998) 209], where the ideal-gas-based kernel for scatterers with internal structure was introduced. This double differential kernel includes the neutron energy after scattering as well as the cosine of the scattering angle for isotopes with strong scattering resonances. A new mathematical formalism enables the inclusion of the new kernel in NJOY [MacFarlane, R.E., Muir, D.W., 1994. The NJOY Nuclear Data Processing System Version 91 (LA-12740-m)]. Moreover, the computational time of the new kernel is reduced significantly, making it feasible for practical application. The completeness of the new kernel is proven mathematically and demonstrated numerically. Modifications necessary to remove the existing inconsistency of the secondary energy distribution in NJOY are presented.

  13. Virial expansion for almost diagonal random matrices

    Science.gov (United States)

    Yevtushenko, Oleg; Kravtsov, Vladimir E.

    2003-08-01

    Energy level statistics of Hermitian random matrices Ĥ with Gaussian independent random entries H_{i≥j} is studied for a generic ensemble of almost diagonal random matrices with ⟨|H_{ii}|²⟩ ~ 1 and ⟨|H_{i≠j}|²⟩ ≪ 1.

  14. Reaction kinetics aspect of U3O8 kernel with gas H2 on the characteristics of activation energy, reaction rate constant and O/U ratio of UO2 kernel

    International Nuclear Information System (INIS)

    Damunir

    2007-01-01

    The reaction kinetics of U₃O₈ kernel with H₂ gas, in terms of the characteristics of the activation energy, the reaction rate constant and the O/U ratio of the UO₂ kernel, has been studied. U₃O₈ kernel was reacted with H₂ gas in a reduction furnace at varied reaction times and temperatures. The reaction temperature was varied at 600, 700, 750 and 850 °C at a pressure of 50 mmHg for 3 hours in an N₂ atmosphere. The reaction time was varied at 1, 2, 3 and 4 hours at a temperature of 750 °C under otherwise similar conditions. The reaction product was UO₂ kernel. The kinetic aspects of the reaction between U₃O₈ and H₂ gas comprised the minimum activation energy (ΔE), the reaction rate constant and the O/U ratio of the UO₂ kernel. The minimum activation energy was determined from the slope of the straight line ln[D_b·R_o{1 - (1 - X_b)^(1/3)}/(b·t·C_g)] = -3.9406×10³/T + 4.044. Multiplying the slope -3.9406×10³ by the ideal gas constant R = 1.985 cal/(mol·K) and the molarity difference of the reaction coefficients, 2, gives a minimum activation energy of 15.644 kcal/mol. The reaction rate constant was determined from first-order chemical reaction control and the Arrhenius equation. The O/U ratio of the UO₂ kernel was obtained by the gravimetric method. The chemical-reaction-control analysis yielded reaction rate constants of 0.745 - 1.671 s⁻¹, and the Arrhenius equation at temperatures of 650 - 850 °C yielded reaction rate constants of 0.637 - 2.914 s⁻¹. The O/U ratios of the UO₂ kernel at the respective reaction rate constants were 2.013 - 2.014, and the O/U ratios at reaction times of 1 - 4 hours were 2.04 - 2.011. The experimental results indicate that the minimum activation energy influences the first-order reaction rate constant and the O/U ratio of the UO₂ kernel. The optimum condition was obtained at a reaction rate constant of 1.43 s⁻¹ and an O/U ratio of the UO₂ kernel of 2.01, at a temperature of 750 °C and a reaction time of 3 hours.
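
The activation-energy arithmetic quoted in the abstract can be checked directly: the magnitude of the straight-line slope, times the ideal gas constant, times the coefficient difference of 2, reproduces the stated 15.644 kcal/mol.

```python
# Reproduce the activation-energy arithmetic from the abstract.
slope = 3.9406e3        # K, magnitude of the Arrhenius-type line slope
R = 1.985               # cal/(mol K), ideal gas constant
coeff_diff = 2          # molarity difference of the reaction coefficients

delta_E = slope * R * coeff_diff   # cal/mol
print(delta_E / 1000)              # ~15.64 kcal/mol
```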

  15. Chaotic diagonal recurrent neural network

    International Nuclear Information System (INIS)

    Wang Xing-Yuan; Zhang Yi

    2012-01-01

    We propose a novel neural network based on a diagonal recurrent neural network and chaos, and design its structure and learning algorithm. A multilayer feedforward neural network, a diagonal recurrent neural network, and the chaotic diagonal recurrent neural network are used to approximate the cubic symmetry map. The simulation results show that the approximation capability of the chaotic diagonal recurrent neural network is better than that of the other two neural networks. (interdisciplinary physics and related areas of science and technology)
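
The defining feature of a diagonal recurrent network is that the recurrent weight matrix is restricted to its diagonal, so each hidden unit feeds back only onto itself. A minimal forward-pass sketch (layer sizes and initialization are illustrative, not the paper's design):

```python
import numpy as np

class DiagonalRNN:
    """Minimal diagonal recurrent network: each hidden unit has a single
    self-recurrent weight, i.e. the recurrent matrix is diagonal."""
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(scale=0.5, size=(n_hidden, n_in))
        self.w_rec = rng.normal(scale=0.5, size=n_hidden)   # diagonal only
        self.W_out = rng.normal(scale=0.5, size=(n_out, n_hidden))

    def forward(self, xs):
        h = np.zeros_like(self.w_rec)
        ys = []
        for x in xs:
            # h_t = tanh(W_in x_t + diag(w_rec) h_{t-1})
            h = np.tanh(self.W_in @ x + self.w_rec * h)
            ys.append(self.W_out @ h)
        return np.array(ys)

net = DiagonalRNN(1, 8, 1)
out = net.forward([np.array([0.3]), np.array([0.7])])
```

The diagonal restriction cuts the recurrent parameter count from n_hidden² to n_hidden, which is what makes these networks cheap to train for control and approximation tasks.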

  16. Partial Deconvolution with Inaccurate Blur Kernel.

    Science.gov (United States)

    Ren, Dongwei; Zuo, Wangmeng; Zhang, David; Xu, Jun; Zhang, Lei

    2017-10-17

    Most non-blind deconvolution methods are developed under the error-free kernel assumption, and are not robust to inaccurate blur kernel. Unfortunately, despite the great progress in blind deconvolution, estimation error remains inevitable during blur kernel estimation. Consequently, severe artifacts such as ringing effects and distortions are likely to be introduced in the non-blind deconvolution stage. In this paper, we tackle this issue by suggesting: (i) a partial map in the Fourier domain for modeling kernel estimation error, and (ii) a partial deconvolution model for robust deblurring with inaccurate blur kernel. The partial map is constructed by detecting the reliable Fourier entries of estimated blur kernel. And partial deconvolution is applied to wavelet-based and learning-based models to suppress the adverse effect of kernel estimation error. Furthermore, an E-M algorithm is developed for estimating the partial map and recovering the latent sharp image alternatively. Experimental results show that our partial deconvolution model is effective in relieving artifacts caused by inaccurate blur kernel, and can achieve favorable deblurring quality on synthetic and real blurry images.

  17. Learning molecular energies using localized graph kernels

    Science.gov (United States)

    Ferré, Grégoire; Haut, Terry; Barros, Kipton

    2017-03-01

    Recent machine learning methods make it possible to model potential energy of atomic configurations with chemical-level accuracy (as calculated from ab initio calculations) and at speeds suitable for molecular dynamics simulation. Best performance is achieved when the known physical constraints are encoded in the machine learning models. For example, the atomic energy is invariant under global translations and rotations; it is also invariant to permutations of same-species atoms. Although simple to state, these symmetries are complicated to encode into machine learning algorithms. In this paper, we present a machine learning approach based on graph theory that naturally incorporates translation, rotation, and permutation symmetries. Specifically, we use a random walk graph kernel to measure the similarity of two adjacency matrices, each of which represents a local atomic environment. This Graph Approximated Energy (GRAPE) approach is flexible and admits many possible extensions. We benchmark a simple version of GRAPE by predicting atomization energies on a standard dataset of organic molecules.
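
A standard instance of the random-walk idea the abstract describes is the geometric random-walk kernel, computed on the direct (Kronecker) product of the two adjacency matrices; the exact kernel and weighting used in GRAPE may differ. A sketch:

```python
import numpy as np

def random_walk_kernel(A, B, lam=0.05):
    """Geometric random-walk kernel between two adjacency matrices,
    computed on the direct (Kronecker) product graph:
        k(A, B) = sum_ij [(I - lam * A (x) B)^(-1)]_ij
    lam must be small enough for the geometric series to converge."""
    W = np.kron(A, B)
    n = W.shape[0]
    # (I - lam W)^(-1) 1, then sum: equals 1^T (I - lam W)^(-1) 1.
    return np.linalg.solve(np.eye(n) - lam * W, np.ones(n)).sum()

# Two small local atomic environments encoded as adjacency matrices.
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
B = np.array([[0., 1., 1.], [1., 0., 1.], [1., 1., 0.]])
kAA = random_walk_kernel(A, A)
kAB = random_walk_kernel(A, B)
```

Because A ⊗ B and B ⊗ A are permutation-similar and the all-ones vector is permutation-invariant, the kernel is symmetric in its arguments, as a similarity measure must be.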

  18. Non-LTE radiative transfer with lambda-acceleration - Convergence properties using exact full and diagonal lambda-operators

    Science.gov (United States)

    Macfarlane, J. J.

    1992-01-01

    We investigate the convergence properties of Lambda-acceleration methods for non-LTE radiative transfer problems in planar and spherical geometry. Matrix elements of the 'exact' Lambda-operator are used to accelerate convergence to a solution in which both the radiative transfer and atomic rate equations are simultaneously satisfied. Convergence properties of two-level and multilevel atomic systems are investigated for methods using: (1) the complete Lambda-operator, and (2) the diagonal of the Lambda-operator. We find that the convergence properties of the method utilizing the complete Lambda-operator are significantly better than those of the diagonal Lambda-operator method, often reducing the number of iterations needed for convergence by a factor of between two and seven. However, the overall computational time required for large-scale calculations - that is, those with many atomic levels and spatial zones - is typically a factor of a few larger for the complete Lambda-operator method, suggesting that the approach is best applied to problems in which convergence is especially difficult.

  19. A progressive diagonalization scheme for the Rabi Hamiltonian

    International Nuclear Information System (INIS)

    Pan, Feng; Guan, Xin; Wang, Yin; Draayer, J P

    2010-01-01

    A diagonalization scheme for the Rabi Hamiltonian, which describes a qubit interacting with a single-mode radiation field via a dipole interaction, is proposed. It is shown that the Rabi Hamiltonian can be solved almost exactly using a progressive scheme that involves a finite set of one variable polynomial equations. The scheme is especially efficient for the lower part of the spectrum. Some low-lying energy levels of the model with several sets of parameters are calculated and compared to those provided by the recently proposed generalized rotating-wave approximation and a full matrix diagonalization.
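
The "full matrix diagonalization" baseline mentioned at the end of the abstract is straightforward to set up: build the Rabi Hamiltonian in a Fock basis truncated at some photon number and diagonalize numerically. The parameters below are illustrative, not taken from the paper:

```python
import numpy as np

def rabi_hamiltonian(omega_c, omega_q, g, n_max):
    """Rabi Hamiltonian H = w_c a†a + (w_q/2) σ_z + g σ_x (a + a†)
    in a Fock basis truncated at n_max photons (hbar = 1)."""
    a = np.diag(np.sqrt(np.arange(1, n_max)), k=1)   # annihilation operator
    n = a.T @ a                                      # photon number operator
    I_f = np.eye(n_max)
    sx = np.array([[0., 1.], [1., 0.]])
    sz = np.array([[1., 0.], [0., -1.]])
    I_q = np.eye(2)
    return (omega_c * np.kron(I_q, n)
            + 0.5 * omega_q * np.kron(sz, I_f)
            + g * np.kron(sx, a + a.T))

H = rabi_hamiltonian(omega_c=1.0, omega_q=0.8, g=0.2, n_max=40)
E = np.linalg.eigvalsh(H)[:5]     # five lowest levels
```

The truncation n_max must be large enough that the low-lying levels are converged; for weak coupling a few tens of photons suffice, which is also why the progressive scheme's focus on the lower spectrum is attractive.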

  20. Domain wall partition function of the eight-vertex model with a non-diagonal reflecting end

    International Nuclear Information System (INIS)

    Yang Wenli; Chen Xi; Feng Jun; Hao Kun; Shi Kangjie; Sun Chengyi; Yang Zhanying; Zhang Yaozhong

    2011-01-01

    With the help of the Drinfeld twist or factorizing F-matrix for the eight-vertex SOS model, we derive the recursion relations of the partition function for the eight-vertex model with a generic non-diagonal reflecting end and domain wall boundary condition. Solving the recursion relations, we obtain the explicit determinant expression of the partition function. Our result shows that, contrary to the eight-vertex model without a reflecting end, the partition function can be expressed as a single determinant.

  1. Considering a non-polynomial basis for local kernel regression problem

    Science.gov (United States)

    Silalahi, Divo Dharma; Midi, Habshah

    2017-01-01

    A commonly used solution for the local kernel nonparametric regression problem is polynomial regression. In this study, we derive the estimator and its properties using the maximum likelihood estimator for a non-polynomial basis, such as the B-spline, replacing the polynomial basis. This estimator allows flexibility in the selection of a bandwidth and a knot. The best estimator was selected by finding an optimal bandwidth and knot through minimizing the well-known generalized cross-validation function.

  2. Consistent Estimation of Pricing Kernels from Noisy Price Data

    OpenAIRE

    Vladislav Kargin

    2003-01-01

    If pricing kernels are assumed non-negative then the inverse problem of finding the pricing kernel is well-posed. The constrained least squares method provides a consistent estimate of the pricing kernel. When the data are limited, a new method is suggested: relaxed maximization of the relative entropy. This estimator is also consistent. Keywords: $\\epsilon$-entropy, non-parametric estimation, pricing kernel, inverse problems.

  3. Diagonalization of Hamiltonian

    Energy Technology Data Exchange (ETDEWEB)

    Garrido, L M; Pascual, P

    1960-07-01

    We present a general method to diagonalize the Hamiltonian of particles of arbitrary spin. In particular we study the cases of spin 0, 1/2 and 1, show that for spin 1/2 our transformation agrees with Foldy's, and obtain the expressions for different observables for particles of spin 0 and 1 in the new representation. (Author) 7 refs.

  4. Towards smart energy systems: application of kernel machine regression for medium term electricity load forecasting.

    Science.gov (United States)

    Alamaniotis, Miltiadis; Bargiotas, Dimitrios; Tsoukalas, Lefteri H

    2016-01-01

    Integration of energy systems with information technologies has facilitated the realization of smart energy systems that utilize information to optimize system operation. To that end, crucial in optimizing energy system operation is the accurate, ahead-of-time forecasting of load demand. In particular, load forecasting allows planning of system expansion, and decision making for enhancing system safety and reliability. In this paper, the application of two types of kernel machines for medium term load forecasting (MTLF) is presented and their performance is recorded based on a set of historical electricity load demand data. The two kernel machine models and more specifically Gaussian process regression (GPR) and relevance vector regression (RVR) are utilized for making predictions over future load demand. Both models, i.e., GPR and RVR, are equipped with a Gaussian kernel and are tested on daily predictions for a 30-day-ahead horizon taken from the New England Area. Furthermore, their performance is compared to the ARMA(2,2) model with respect to mean average percentage error and squared correlation coefficient. Results demonstrate the superiority of RVR over the other forecasting models in performing MTLF.
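
The posterior-mean computation at the heart of GPR with a Gaussian (RBF) kernel fits in a few lines. The sketch below uses a synthetic load-like series and the MAPE metric mentioned in the abstract, with illustrative hyperparameters rather than anything fitted to the New England data:

```python
import numpy as np

def rbf(X1, X2, length=1.0, var=1.0):
    """Gaussian (RBF) kernel matrix between two 1-D input sets."""
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return var * np.exp(-0.5 * d2 / length**2)

def gpr_predict(X, y, X_star, noise=1e-2, length=1.0):
    """Posterior mean of GP regression: K_* (K + noise I)^(-1) y."""
    K = rbf(X, X, length) + noise * np.eye(len(X))
    return rbf(X_star, X, length) @ np.linalg.solve(K, y)

def mape(y_true, y_pred):
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100

# Toy daily-load-like series: slow trend plus a weekly oscillation.
t = np.arange(60, dtype=float)
load = 100 + 10 * np.sin(2 * np.pi * t / 7) + 0.1 * t

mask = (t % 5 == 0)                  # hold out every fifth day
X_tr, y_tr = t[~mask], load[~mask]
X_te, y_te = t[mask], load[mask]
mu = y_tr.mean()                     # center targets around zero prior mean
pred = mu + gpr_predict(X_tr, y_tr - mu, X_te, length=3.0)
err = mape(y_te, pred)
```

RVR produces predictions of the same form but with a sparse weight posterior; both reduce at prediction time to kernel expansions over (a subset of) the training days.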

  5. Benchmarking GW against exact diagonalization for semiempirical models

    DEFF Research Database (Denmark)

    Kaasbjerg, Kristen; Thygesen, Kristian Sommer

    2010-01-01

    We calculate ground-state total energies and single-particle excitation energies of seven pi-conjugated molecules described with the semiempirical Pariser-Parr-Pople model using self-consistent many-body perturbation theory at the GW level and exact diagonalization. For the total energies GW capt...... (Hubbard models) where correlation effects dominate over screening/relaxation effects. Finally we illustrate the important role of the derivative discontinuity of the true exchange-correlation functional by computing the exact Kohn-Sham levels of benzene....

  6. Quantum Einstein gravity. Advancements of heat kernel-based renormalization group studies

    Energy Technology Data Exchange (ETDEWEB)

    Groh, Kai

    2012-10-15

    The asymptotic safety scenario allows one to define a consistent theory of quantized gravity within the framework of quantum field theory. The central conjecture of this scenario is the existence of a non-Gaussian fixed point of the theory's renormalization group flow, which allows one to formulate renormalization conditions that render the theory fully predictive. Investigations of this possibility use an exact functional renormalization group equation as a primary non-perturbative tool. This equation implements Wilsonian renormalization group transformations, and is demonstrated to represent a reformulation of the functional integral approach to quantum field theory. As its main result, this thesis develops an algebraic algorithm which allows one to systematically construct the renormalization group flow of gauge theories as well as gravity in arbitrary expansion schemes. In particular, it uses off-diagonal heat kernel techniques to efficiently handle the non-minimal differential operators which appear due to gauge symmetries. The central virtue of the algorithm is that no additional simplifications need to be employed, opening the possibility for more systematic investigations of the emergence of non-perturbative phenomena. As a by-product several novel results on the heat kernel expansion of the Laplace operator acting on general gauge bundles are obtained. The constructed algorithm is used to re-derive the renormalization group flow of gravity in the Einstein-Hilbert truncation, showing the manifest background independence of the results. The well-studied Einstein-Hilbert case is further advanced by taking the effect of a running ghost field renormalization on the gravitational coupling constants into account. A detailed numerical analysis reveals a further stabilization of the found non-Gaussian fixed point. Finally, the proposed algorithm is applied to the case of higher derivative gravity including all curvature squared interactions. This establishes an improvement

  7. Quantum Einstein gravity. Advancements of heat kernel-based renormalization group studies

    International Nuclear Information System (INIS)

    Groh, Kai

    2012-10-01

    The asymptotic safety scenario allows one to define a consistent theory of quantized gravity within the framework of quantum field theory. The central conjecture of this scenario is the existence of a non-Gaussian fixed point of the theory's renormalization group flow, which allows one to formulate renormalization conditions that render the theory fully predictive. Investigations of this possibility use an exact functional renormalization group equation as a primary non-perturbative tool. This equation implements Wilsonian renormalization group transformations, and is demonstrated to represent a reformulation of the functional integral approach to quantum field theory. As its main result, this thesis develops an algebraic algorithm which allows one to systematically construct the renormalization group flow of gauge theories as well as gravity in arbitrary expansion schemes. In particular, it uses off-diagonal heat kernel techniques to efficiently handle the non-minimal differential operators which appear due to gauge symmetries. The central virtue of the algorithm is that no additional simplifications need to be employed, opening the possibility for more systematic investigations of the emergence of non-perturbative phenomena. As a by-product several novel results on the heat kernel expansion of the Laplace operator acting on general gauge bundles are obtained. The constructed algorithm is used to re-derive the renormalization group flow of gravity in the Einstein-Hilbert truncation, showing the manifest background independence of the results. The well-studied Einstein-Hilbert case is further advanced by taking the effect of a running ghost field renormalization on the gravitational coupling constants into account. A detailed numerical analysis reveals a further stabilization of the found non-Gaussian fixed point. Finally, the proposed algorithm is applied to the case of higher derivative gravity including all curvature squared interactions. This establishes an improvement of

  8. On Higgs-exchange DIS, physical evolution kernels and fourth-order splitting functions at large x

    International Nuclear Information System (INIS)

    Soar, G.; Vogt, A.; Vermaseren, J.A.M.

    2009-12-01

    We present the coefficient functions for deep-inelastic scattering (DIS) via the exchange of a scalar φ directly coupling only to gluons, such as the Higgs boson in the limit of a very heavy top quark and n_f effectively massless light flavours, to the third order in perturbative QCD. The two-loop results are employed to construct the next-to-next-to-leading order physical evolution kernels for the system (F₂, F_φ) of flavour-singlet structure functions. The practical relevance of these kernels as an alternative to MS factorization is bedevilled by artificial double logarithms at small values of the scaling variable x, where the large top-mass limit ceases to be appropriate. However, they show an only single-logarithmic enhancement at large x. Conjecturing that this feature persists to the next order also in the present singlet case, the three-loop coefficient functions facilitate exact predictions (backed up by their particular colour structure) of the double-logarithmic contributions to the fourth-order singlet splitting functions, i.e., of the terms (1-x)^a ln^k(1-x) with k = 4, 5, 6 and k = 3, 4, 5, respectively, for the off-diagonal and diagonal quantities, to all powers a in (1-x). (orig.)

  9. Robust Kernel (Cross-) Covariance Operators in Reproducing Kernel Hilbert Space toward Kernel Methods

    OpenAIRE

    Alam, Md. Ashad; Fukumizu, Kenji; Wang, Yu-Ping

    2016-01-01

    To the best of our knowledge, there are no general well-founded robust methods for statistical unsupervised learning. Most of the unsupervised methods explicitly or implicitly depend on the kernel covariance operator (kernel CO) or kernel cross-covariance operator (kernel CCO). They are sensitive to contaminated data, even when using bounded positive definite kernels. First, we propose a robust kernel covariance operator (robust kernel CO) and a robust kernel cross-covariance operator (robust kern...

  10. Self-consistent cluster theories for alloys with diagonal and off-diagonal disorder

    International Nuclear Information System (INIS)

    Gonis, A.; Garland, J.W.

    1978-01-01

    The molecular coherent-potential approximation (MCPA) and other, simpler cluster approximations for disordered alloys are studied both analytically and numerically for alloys with diagonal and off-diagonal disorder (ODD). First, the MCPA for alloys with only diagonal disorder is rederived within the interactor formalism of Blackman, Esterling, and Berk. This formalism, which simplifies the numerical implementation of the MCPA, is then used to generalize the MCPA so as to take account of ODD. It is shown that the analytic properties of the MCPA are preserved under this generalization. Also, two computationally simple cluster approximations, the self-consistent central-site approximation (SCCSA) and the self-consistent boundary-site approximation (SCBSA), are generalized to include the effects of ODD. It is shown that for one-dimensional systems with only nearest-neighbor hopping the SCBSA yields Green's functions which are identical to those given by the MCPA and thus are analytic, even in the presence of ODD. Finally, the results of numerical calculations are reported for one-dimensional systems with only nearest-neighbor hopping but with both diagonal and off-diagonal disorder. These calculations were performed using the single-site approximation of Blackman, Esterling, and Berk and three different cluster approximations: the multishell method previously proposed by the authors, the SCCSA, and the SCBSA. The results of these calculations are compared with exact results and with previous results obtained using the truncated t-matrix approximation and the recent method of Kaplan and Gray. These comparisons suggest that the multishell method and the generalization of the SCBSA given in this paper are more efficient and accurate for the calculation of densities of states for systems with ODD. On the other hand, as expected, the SCCSA was found to yield severely nonanalytic results for the values of band parameters used.

  11. Nondestructive identification of the Bell diagonal state

    International Nuclear Information System (INIS)

    Jin Jiasen; Yu Changshui; Song Heshan

    2011-01-01

    We propose a scheme for identifying an unknown Bell diagonal state. In our scheme the measurements are performed on the probe qubits instead of the Bell diagonal state. The distinct advantage is that the quantum state of the evolved Bell diagonal state ensemble plus probe states will still collapse on the original Bell diagonal state ensemble after the measurement on probe states; i.e., our identification is quantum state nondestructive. How to realize our scheme in the framework of cavity electrodynamics is also shown.

  12. Simultaneous diagonal and off-diagonal order in the Bose-Hubbard Hamiltonian

    International Nuclear Information System (INIS)

    Scalettar, R.T.; Batrouni, G.G.; Kampf, A.P.; Zimanyi, G.T.

    1995-01-01

    The Bose-Hubbard model exhibits a rich phase diagram consisting both of insulating regimes where diagonal long-range (solid) order dominates as well as conducting regimes where off-diagonal long-range order (superfluidity) is present. In this paper we describe the results of quantum Monte Carlo calculations of the phase diagram, both for the hard- and soft-core cases, with a particular focus on the possibility of simultaneous superfluid and solid order. We also discuss the appearance of phase separation in the model. The simulations are compared with analytic calculations of the phase diagram and spin-wave dispersion

  13. Simulating non-Newtonian flows with the moving particle semi-implicit method with an SPH kernel

    International Nuclear Information System (INIS)

    Xiang, Hao; Chen, Bin

    2015-01-01

    The moving particle semi-implicit (MPS) method and smoothed particle hydrodynamics (SPH) are commonly used mesh-free particle methods for free surface flows. The MPS method has superiority in incompressible flow simulation and simple programming. However, the crude kernel function is not accurate enough for the discretization of the divergence of the shear stress tensor, owing to particle inconsistency, when the MPS method is extended to non-Newtonian flows. This paper presents an improved MPS method with an SPH kernel to simulate non-Newtonian flows. To improve the consistency of the partial derivative, the SPH cubic spline kernel and the Taylor series expansion are combined with the MPS method. This approach is suitable for all non-Newtonian fluids that can be described with τ = μ(|γ|)Δ (where τ is the shear stress tensor, μ is the viscosity, |γ| is the shear rate, and Δ is the strain tensor), e.g., the Casson and Cross fluids. Two examples are simulated, including Newtonian Poiseuille flow and the container filling process of the Cross fluid. The results for Poiseuille flow are more accurate than those of the traditional MPS method, and different filling processes are obtained in good agreement with previous results, which verifies the validity of the new algorithm. For the Cross fluid, the jet fracture length can be correlated with We^0.28 Fr^0.78 (We is the Weber number, Fr is the Froude number). (paper)
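
The SPH cubic spline kernel referred to in the abstract is, in Monaghan's standard form, a piecewise cubic with support radius 2h; the 1-D normalization constant 2/(3h) is used below (2-D and 3-D use different constants):

```python
import numpy as np

def cubic_spline_w(r, h):
    """Monaghan cubic spline SPH kernel in 1-D (normalization 2/(3h)),
    with compact support |r| < 2h."""
    q = np.abs(r) / h
    sigma = 2.0 / (3.0 * h)
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q) ** 3, 0.0))
    return sigma * w

h = 0.1
x = np.linspace(-2 * h, 2 * h, 4001)
total = np.trapz(cubic_spline_w(x, h), x)   # unit mass in 1-D
```

Unlike the "crude" MPS weight function, this kernel is twice continuously differentiable inside its support, which is what makes its derivatives usable in the Taylor-series-corrected discretization of the shear stress divergence.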

  14. New numerical approximation of fractional derivative with non-local and non-singular kernel: Application to chaotic models

    Science.gov (United States)

    Toufik, Mekkaoui; Atangana, Abdon

    2017-10-01

    Recently a new concept of fractional differentiation with non-local and non-singular kernel was introduced in order to extend the limitations of the conventional Riemann-Liouville and Caputo fractional derivatives. In this paper, a new numerical scheme has been developed for the newly established fractional differentiation. We present the error analysis in general. The new numerical scheme was applied to solve linear and non-linear fractional differential equations. The method does not need a predictor-corrector step to yield an efficient algorithm. The comparison of approximate and exact solutions leaves no doubt that the new numerical scheme is very efficient and converges toward the exact solution very rapidly.

  15. Off-diagonal mass generation for Yang-Mills theories in the maximal Abelian gauge

    International Nuclear Information System (INIS)

    Dudal, D.; Verschelde, H.; Sarandy, M.S.

    2007-01-01

    We investigate a dynamical mass generation mechanism for the off-diagonal gluons and ghosts in SU(N) Yang-Mills theories, quantized in the maximal Abelian gauge. Such a mass can be seen as evidence for Abelian dominance in that gauge. It originates from the condensation of a mixed gluon-ghost operator of mass dimension two, which lowers the vacuum energy. We construct an effective potential for this operator by a combined use of the local composite operators technique and algebraic renormalization, and we discuss the gauge parameter independence of the results. We also show that it is possible to connect the vacuum energy, due to the mass dimension two condensate discussed here, with the non-trivial vacuum energy originating from the condensate ⟨A_μ²⟩, which has attracted much attention in the Landau gauge. (author)

  16. On flame kernel formation and propagation in premixed gases

    Energy Technology Data Exchange (ETDEWEB)

    Eisazadeh-Far, Kian; Metghalchi, Hameed [Northeastern University, Mechanical and Industrial Engineering Department, Boston, MA 02115 (United States); Parsinejad, Farzan [Chevron Oronite Company LLC, Richmond, CA 94801 (United States); Keck, James C. [Massachusetts Institute of Technology, Cambridge, MA 02139 (United States)

    2010-12-15

    Flame kernel formation and propagation in premixed gases have been studied experimentally and theoretically. The experiments have been carried out at constant pressure and temperature in a constant volume vessel located in a high speed shadowgraph system. The formation and propagation of the hot plasma kernel has been simulated for inert gas mixtures using a thermodynamic model. The effects of various parameters including the discharge energy, radiation losses, initial temperature and initial volume of the plasma have been studied in detail. The experiments have been extended to flame kernel formation and propagation in methane/air mixtures. The effect of energy terms including spark energy, chemical energy and energy losses on flame kernel formation and propagation has been investigated. The inputs for this model are the initial conditions of the mixture and experimental data for the flame radii. It is concluded that these are the most important parameters affecting plasma kernel growth. The results for laminar burning speeds have been compared with previously published results and are in good agreement. (author)

  17. Diagonalization of complex symmetric matrices: Generalized Householder reflections, iterative deflation and implicit shifts

    Science.gov (United States)

    Noble, J. H.; Lubasch, M.; Stevens, J.; Jentschura, U. D.

    2017-12-01

    We describe a matrix diagonalization algorithm for complex symmetric (not Hermitian) matrices, A = Aᵀ, which is based on a two-step algorithm involving generalized Householder reflections based on the indefinite inner product ⟨u, v⟩* = Σᵢ uᵢvᵢ. This inner product is linear in both arguments and avoids complex conjugation. The complex symmetric input matrix is transformed to tridiagonal form using generalized Householder transformations (first step). An iterative, generalized QL decomposition of the tridiagonal matrix employing an implicit shift converges toward diagonal form (second step). The QL algorithm employs iterative deflation techniques when a machine-precision zero is encountered "prematurely" on the super-/sub-diagonal. The algorithm allows for a reliable and computationally efficient computation of resonance and antiresonance energies which emerge from complex-scaled Hamiltonians, and for the numerical determination of the real energy eigenvalues of pseudo-Hermitian and PT-symmetric Hamilton matrices. Numerical reference values are provided.
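
The indefinite bilinear form ⟨u, v⟩* = Σᵢ uᵢvᵢ is what makes the scheme work: for a complex symmetric matrix with a non-degenerate spectrum, eigenvectors are orthogonal under this form rather than under the Hermitian inner product, so A = V diag(λ) Vᵀ with Vᵀ V = I. A numerical check on a generic random matrix (using a general-purpose eigensolver, not the paper's Householder/QL algorithm):

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5))
A = B + B.T          # complex symmetric: A = A^T, but NOT Hermitian

evals, V = np.linalg.eig(A)

# Renormalize each eigenvector with the indefinite bilinear form
# <u, v>* = sum_i u_i v_i  (linear in both arguments, no conjugation).
norms = np.sqrt(np.sum(V * V, axis=0))
V = V / norms

# Complex orthogonality V^T V = I and the decomposition A = V D V^T.
ortho_err = np.abs(V.T @ V - np.eye(5)).max()
recon_err = np.abs(V @ np.diag(evals) @ V.T - A).max()
```

For special (e.g. defective or quasi-null-vector) matrices the renormalization can fail, which is exactly where the paper's deflation machinery earns its keep; a generic random matrix avoids those cases.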

  18. Non-negative Feynman-Kac kernels in Schroedinger's interpolation problem

    International Nuclear Information System (INIS)

    Blanchard, P.; Garbaczewski, P.; Olkiewicz, R.

    1997-01-01

    The local formulations of the Markovian interpolating dynamics, which is constrained by the prescribed input-output statistics data, usually utilize strictly positive Feynman-Kac kernels. This implies that the related Markov diffusion processes admit vanishing probability densities only at the boundaries of the spatial volume confining the process. We discuss an extension of the framework to encompass singular potentials and associated non-negative Feynman-Kac-type kernels. It allows us to deal with a class of continuous interpolations admitted by general non-negative solutions of the Schroedinger boundary data problem. The resulting nonstationary stochastic processes are capable of both developing and destroying nodes (zeros) of probability densities in the course of their evolution, also away from the spatial boundaries. This observation conforms with the general mathematical theory (due to M. Nagasawa and R. Aebi) that is based on the notion of multiplicative functionals, extending in turn the well-known Doob h-transformation technique. In view of emphasizing the role of the theory of non-negative solutions of parabolic partial differential equations and the link with "Wiener exclusion" techniques used to evaluate certain Wiener functionals, we give an alternative insight into the issue, one that opens a transparent route towards applications. © 1997 American Institute of Physics.

  19. Quantum theory with an energy operator defined as a quartic form of the momentum

    Energy Technology Data Exchange (ETDEWEB)

    Bezák, Viktor, E-mail: bezak@fmph.uniba.sk

    2016-09-15

    Quantum theory of the non-harmonic oscillator defined by the energy operator proposed by Yurke and Buks (2006) is presented. Although these authors considered a specific problem related to a model of transmission lines in a Kerr medium, our ambition is not to discuss the physical substantiation of their model. Instead, we consider the problem from an abstract, logically deductive, viewpoint. Using the Yurke–Buks energy operator, we focus attention on the imaginary-time propagator. We derive it as a functional of the Mehler kernel and, alternatively, as an exact series involving Hermite polynomials. For a statistical ensemble of identical oscillators defined by the Yurke–Buks energy operator, we calculate the partition function, average energy, free energy and entropy. Using the diagonal element of the canonical density matrix of this ensemble in the coordinate representation, we define a probability density, which appears to be a deformed Gaussian distribution. A peculiarity of this probability density is that it may reveal, when plotted as a function of the position variable, a shape with two peaks located symmetrically with respect to the central point.
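
The thermodynamic quantities quoted (partition function, average energy, free energy, entropy) all follow from the energy spectrum alone. As an illustrative sketch, assuming the harmonic spectrum Eₙ = (n + 1/2)ω as a stand-in for the Yurke–Buks non-harmonic spectrum (which is not reproduced here), the generic recipe can be checked against closed forms:

```python
import numpy as np

def thermodynamics(E, beta):
    """Partition function, average energy, free energy, entropy (k_B = 1)
    from a (truncated) energy spectrum E at inverse temperature beta."""
    w = np.exp(-beta * (E - E.min()))     # shift for numerical stability
    Zs = w.sum()                          # shifted partition function
    U = (E * w).sum() / Zs                # ensemble-average energy
    F = E.min() - np.log(Zs) / beta       # free energy F = -(1/beta) ln Z
    S = beta * (U - F)                    # entropy S = (U - F)/T
    return Zs * np.exp(-beta * E.min()), U, F, S

# Check against closed forms for the harmonic spectrum E_n = (n + 1/2) omega
omega, beta = 1.0, 0.7
E = (np.arange(2000) + 0.5) * omega       # truncated spectrum, tail negligible
Z, U, F, S = thermodynamics(E, beta)
assert np.isclose(Z, 1.0 / (2.0 * np.sinh(beta * omega / 2.0)))
assert np.isclose(U, 0.5 * omega / np.tanh(beta * omega / 2.0))
```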

  20. Credit scoring analysis using kernel discriminant

    Science.gov (United States)

    Widiharih, T.; Mukid, M. A.; Mustafid

    2018-05-01

    A credit scoring model is an important tool for reducing the risk of wrong decisions when granting credit facilities to applicants. This paper investigates the performance of the kernel discriminant model in assessing customer credit risk. Kernel discriminant analysis is a non-parametric method, meaning that it does not require any assumptions about the probability distribution of the input. The main ingredient is a kernel that allows an efficient computation of the Fisher discriminant. We use several kernels: normal, Epanechnikov, biweight, and triweight. The models' accuracies were compared using data from a financial institution in Indonesia. The results show that kernel discriminant analysis can be an alternative method for determining who is eligible for a credit loan. For the data we use, the normal kernel is the relevant choice for credit scoring with the kernel discriminant model. Sensitivity and specificity reach 0.5556 and 0.5488, respectively.
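
As a hedged sketch of the core idea, the following implements a two-class kernel Fisher discriminant with a Gaussian (normal) kernel on synthetic data; the dataset, kernel parameter and regularizer are illustrative assumptions, not the paper's credit data or exact procedure.

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    """Gaussian (normal) kernel matrix between rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_fisher(X, y, gamma=1.0, reg=1e-3):
    """Two-class kernel Fisher discriminant: solve for dual weights alpha."""
    K = rbf(X, X, gamma)
    n = len(y)
    m = [K[:, y == c].mean(axis=1) for c in (0, 1)]   # kernelized class means
    N = np.zeros((n, n))                               # within-class scatter
    for c in (0, 1):
        Kc = K[:, y == c]
        nc = Kc.shape[1]
        N += Kc @ (np.eye(nc) - np.full((nc, nc), 1.0 / nc)) @ Kc.T
    alpha = np.linalg.solve(N + reg * np.eye(n), m[1] - m[0])
    thresh = 0.5 * (alpha @ m[0] + alpha @ m[1])       # midpoint threshold
    return alpha, thresh

rng = np.random.default_rng(1)
# Class 0: inner cluster; class 1: surrounding ring (not linearly separable)
r0 = rng.normal(0.0, 0.3, size=(40, 2))
ang = rng.uniform(0, 2 * np.pi, 40)
r1 = np.c_[2 * np.cos(ang), 2 * np.sin(ang)] + rng.normal(0, 0.2, (40, 2))
X = np.vstack([r0, r1])
y = np.r_[np.zeros(40, int), np.ones(40, int)]

alpha, thresh = kernel_fisher(X, y)
proj = rbf(X, X) @ alpha                 # projection onto the discriminant
pred = (proj > thresh).astype(int)
```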

  1. Diagonal Cracking and Shear Strength of Reinforced Concrete Beams

    DEFF Research Database (Denmark)

    Zhang, Jin-Ping

    1997-01-01

    The shear failure of non-shear-reinforced concrete beams with normal shear span ratios is observed to be governed in general by the formation of a critical diagonal crack. Under the hypothesis that the cracking of concrete introduces potential yield lines which may be more dangerous than the ones...

  2. Relationship between attenuation coefficients and dose-spread kernels

    International Nuclear Information System (INIS)

    Boyer, A.L.

    1988-01-01

    Dose-spread kernels can be used to calculate the dose distribution in a photon beam by convolving the kernel with the primary fluence distribution. The theoretical relationships between various types and components of dose-spread kernels relative to photon attenuation coefficients are explored. These relations can be valuable as checks on the conservation of energy by dose-spread kernels calculated by analytic or Monte Carlo methods
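
The convolution and the energy-conservation check described here can be sketched in one dimension; the fluence profile and exponential kernel below are illustrative assumptions, not a Monte Carlo kernel.

```python
import numpy as np

# Primary fluence (terma) profile along an axis, and a normalized
# dose-spread kernel; dose is their convolution.
terma = np.zeros(64)
terma[20:40] = 1.0                       # uniform beam segment
x = np.arange(-15, 16)
kernel = np.exp(-np.abs(x) / 3.0)
kernel /= kernel.sum()                   # kernel sums to 1: all energy deposited

# mode="full" keeps every deposited contribution, so no energy leaks
# out of the computation window
dose = np.convolve(terma, kernel, mode="full")

# Energy-conservation check: total dose equals total released energy
assert np.isclose(dose.sum(), terma.sum() * kernel.sum())
```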

  3. Influence of wheat kernel physical properties on the pulverizing process.

    Science.gov (United States)

    Dziki, Dariusz; Cacak-Pietrzak, Grażyna; Miś, Antoni; Jończyk, Krzysztof; Gawlik-Dziki, Urszula

    2014-10-01

    The physical properties of wheat kernels were determined and related to pulverizing performance by correlation analysis. Nineteen samples of wheat cultivars with a similar level of protein content (11.2-12.8 % w.b.), obtained from an organic farming system, were used for the analysis. The kernels (moisture content 10 % w.b.) were pulverized using a laboratory hammer mill equipped with a 1.0 mm round-hole screen. The specific grinding energy ranged from 120 kJ·kg⁻¹ to 159 kJ·kg⁻¹. Many significant correlations were found between the kernel physical properties and the pulverizing process; in particular, the wheat kernel hardness index (obtained with the Single Kernel Characterization System) and vitreousness correlated significantly and positively with the grinding energy indices and the mass fraction of coarse particles (> 0.5 mm). Among the kernel mechanical properties determined by the uniaxial compression test, only the rupture force was correlated with the impact grinding results. The results also showed positive and significant relationships between kernel ash content and grinding energy requirements. On the basis of the wheat physical properties, a multiple linear regression was proposed for predicting the average particle size of the pulverized kernel.

  4. Enhanced gluten properties in soft kernel durum wheat

    Science.gov (United States)

    Soft kernel durum wheat is a relatively recent development (Morris et al. 2011 Crop Sci. 51:114). The soft kernel trait exerts profound effects on kernel texture, flour milling including break flour yield, milling energy, and starch damage, and dough water absorption (DWA). With the caveat of reduce...

  5. Thermodynamics of Rh nuclear spins calculated by exact diagonalization

    DEFF Research Database (Denmark)

    Lefmann, K.; Ipsen, J.; Rasmussen, F.B.

    2000-01-01

    We have employed the method of exact diagonalization to obtain the full-energy spectrum of a cluster of 16 Rh nuclear spins, having dipolar and RK interactions between first and second nearest neighbours only. We have used this to calculate the nuclear spin entropy, and our results at both positi...
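
The recipe, full spectrum by exact diagonalization followed by thermodynamics from the partition function, can be sketched on a toy cluster; a 4-site spin-1/2 Heisenberg ring stands in here for the 16-spin Rh cluster with dipolar and RK couplings, which is not reproduced.

```python
import numpy as np
from functools import reduce

# Spin-1/2 operators
sx = np.array([[0, 1], [1, 0]]) / 2
sy = np.array([[0, -1j], [1j, 0]]) / 2
sz = np.array([[1, 0], [0, -1]]) / 2
I2 = np.eye(2)

def site_op(op, i, n):
    """Embed a single-spin operator at site i of an n-spin cluster."""
    return reduce(np.kron, [op if j == i else I2 for j in range(n)])

n, J = 4, 1.0
H = np.zeros((2 ** n, 2 ** n), dtype=complex)
for i in range(n):                       # nearest-neighbour ring
    j = (i + 1) % n
    for s in (sx, sy, sz):
        H += J * site_op(s, i, n) @ site_op(s, j, n)

E = np.linalg.eigvalsh(H)                # full spectrum by exact diagonalization

def entropy(E, beta):
    """Nuclear-spin entropy (k_B = 1) from the full spectrum."""
    w = np.exp(-beta * (E - E.min()))
    p = w / w.sum()
    return -np.sum(p * np.log(p))

# High-temperature limit: S -> n ln 2 (one ln 2 per spin)
assert abs(entropy(E, 1e-4) - n * np.log(2)) < 1e-3
```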

  6. Functional Representation for the Born-Oppenheimer Diagonal Correction and Born-Huang Adiabatic Potential Energy Surfaces for Isotopomers of H3

    International Nuclear Information System (INIS)

    Mielke, Steven L.; Schwenke, David; Schatz, George C.; Garrett, Bruce C.; Peterson, Kirk A.

    2009-01-01

    Multireference configuration interaction (MRCI) calculations of the Born-Oppenheimer diagonal correction (BODC) for H3 were performed at 1397 symmetry-unique configurations using the Born-Huang approach; isotopic substitution leads to 4041 symmetry-unique configurations for the DH2 mass combination. These results were then fit to a functional form that permits calculation of the BODC for any combination of isotopes. Mean unsigned fitting errors on a test grid of configurations not included in the fitting process were 0.14, 0.12, and 0.65 cm⁻¹ for the H3, DH2, and MuH2 isotopomers, respectively. This representation can be combined with any Born-Oppenheimer potential energy surface (PES) to yield Born-Huang (BH) PESs; herein we choose the CCI potential energy surface, the uncertainties of which (∼0.01 kcal/mol) are much smaller than the magnitude of the BODC. FORTRAN routines to evaluate these BH surfaces are provided. Variational transition state theory calculations are presented comparing thermal rate constants for reactions on the BO and BH surfaces to provide an initial estimate of the significance of the diagonal correction for the dynamics.

  7. Diagonal Arguments

    Czech Academy of Sciences Publication Activity Database

    Peregrin, Jaroslav

    -, č. 2 (2017), s. 33-43 ISSN 0567-8293 R&D Projects: GA ČR(CZ) GA17-15645S Institutional support: RVO:67985955 Keywords : diagonalization * cardinality * Russell’s paradox * incompleteness of arithmetic Subject RIV: AA - Philosophy ; Religion OBOR OECD: Philosophy, History and Philosophy of science and technology

  8. Constructing quantum dynamics from mixed quantum-classical descriptions

    International Nuclear Information System (INIS)

    Barsegov, V.; Rossky, P.J.

    2004-01-01

    The influence of quantum bath effects on the dynamics of a quantum two-level system linearly coupled to a harmonic bath is studied when the coupling is both diagonal and off-diagonal. It is shown that the pure dephasing kernel and the non-adiabatic quantum transition rate between Born-Oppenheimer states of the subsystem can be decomposed into a contribution from thermally excited bath modes plus a zero point energy contribution. This quantum rate can be modewise factorized exactly into a product of a mixed quantum subsystem-classical bath transition rate and a quantum correction factor. This factor determines dynamics of quantum bath correlations. Quantum bath corrections to both the transition rate and the pure dephasing kernel are shown to be readily evaluated via a mixed quantum-classical simulation. Hence, quantum dynamics can be recovered from a mixed quantum-classical counterpart by incorporating the missing quantum bath corrections. Within a mixed quantum-classical framework, a simple approach for evaluating quantum bath corrections in calculation of the non-adiabatic transition rate is presented

  9. Off-Diagonal Geometric Phase in a Neutron Interferometer Experiment

    International Nuclear Information System (INIS)

    Hasegawa, Y.; Loidl, R.; Baron, M.; Badurek, G.; Rauch, H.

    2001-01-01

    Off-diagonal geometric phases acquired by an evolution of a 1/2 -spin system have been observed by means of a polarized neutron interferometer. We have successfully measured the off-diagonal phase for noncyclic evolutions even when the diagonal geometric phase is undefined. Our data confirm theoretical predictions and the results illustrate the significance of the off-diagonal phase

  10. The modified Gauss diagonalization of polynomial matrices

    International Nuclear Information System (INIS)

    Saeed, K.

    1982-10-01

    The Gauss algorithm for diagonalization of constant matrices is modified for application to polynomial matrices. Due to this modification the diagonal elements become pure polynomials rather than rational functions. (author)

  11. Direct current hopping conductance in one-dimensional diagonal disordered systems

    Institute of Scientific and Technical Information of China (English)

    Ma Song-Shan; Xu Hui; Liu Xiao-Liang; Xiao Jian-Rong

    2006-01-01

    Based on a tight-binding disordered model describing a single electron band, we establish a direct current (dc) electronic hopping transport conductance model of one-dimensional diagonal disordered systems, and also derive a dc conductance formula. By calculating the dc conductivity, the relationships between electric field and conductivity and between temperature and conductivity are analysed, and the role played by the degree of disorder in electronic transport is studied. The results indicate that the conductivity decreases as the degree of disorder increases, that diagonal disordered systems show a negative differential dependence of resistance on temperature at low temperatures, and that the conductivity decreases with increasing electric field, exhibiting non-Ohmic behaviour.

  12. The Classification of Diabetes Mellitus Using Kernel k-means

    Science.gov (United States)

    Alamsyah, M.; Nafisah, Z.; Prayitno, E.; Afida, A. M.; Imah, E. M.

    2018-01-01

    Diabetes Mellitus is a metabolic disorder characterized by chronic hyperglycemia. Automatic detection of diabetes mellitus is still challenging. This study detected diabetes mellitus using the kernel k-means algorithm. Kernel k-means was developed from the k-means algorithm; it uses kernel learning, which allows it to handle non-linearly separable data and distinguishes it from common k-means. The performance of kernel k-means in detecting diabetes mellitus is also compared with the SOM algorithm. The experimental results show that kernel k-means performs well, considerably better than SOM.
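
A minimal sketch of kernel k-means, where squared distances to cluster centres are evaluated entirely through the Gram matrix; the RBF kernel parameter, the two-blob data and the exemplar seeding are illustrative assumptions, not the study's diabetes data or settings.

```python
import numpy as np

def kernel_kmeans(K, init_labels, n_iter=20):
    """Kernel k-means: k-means with squared distances computed in the
    kernel-induced feature space (only the Gram matrix K is needed)."""
    labels = init_labels.copy()
    k = labels.max() + 1
    n = K.shape[0]
    for _ in range(n_iter):
        D = np.full((n, k), np.inf)
        for c in range(k):
            mask = labels == c
            if not mask.any():
                continue
            # ||phi(x_i) - mu_c||^2 = K_ii - 2*mean_j K_ij + mean_{j,l} K_jl
            D[:, c] = (np.diag(K)
                       - 2.0 * K[:, mask].mean(axis=1)
                       + K[np.ix_(mask, mask)].mean())
        labels = D.argmin(axis=1)
    return labels

rng = np.random.default_rng(0)
A = rng.normal([0, 0], 0.3, size=(30, 2))
B = rng.normal([5, 5], 0.3, size=(30, 2))
X = np.vstack([A, B])
d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
K = np.exp(-0.5 * d2)                    # RBF Gram matrix (gamma = 0.5, assumed)

init = np.argmax(K[:, [0, 30]], axis=1)  # seed clusters from two exemplars
labels = kernel_kmeans(K, init)
```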

  13. Proceedings – Mathematical Sciences | Indian Academy of Sciences

    Indian Academy of Sciences (India)

    We find an explicit function approximating at high energies the kernel of the scattering matrix with arbitrary accuracy. Moreover, the same function gives all diagonal singularities of the kernel of the scattering matrix in the angular variables. Author Affiliations. D Yafaev1. Department of Mathematics, University Rennes-1, ...

  14. Ideal Gas Resonance Scattering Kernel Routine for the NJOY Code

    International Nuclear Information System (INIS)

    Rothenstein, W.

    1999-01-01

    In a recent publication an expression for the temperature-dependent double-differential ideal gas scattering kernel is derived for the case of scattering cross sections that are energy dependent. Some tabulations and graphical representations of the characteristics of these kernels are presented in Ref. 2. They demonstrate the increased probability that neutron scattering by a heavy nuclide near one of its pronounced resonances will bring the neutron energy nearer to the resonance peak. This enhances upscattering, when a neutron with energy just below that of the resonance peak collides with such a nuclide. A routine for using the new kernel has now been introduced into the NJOY code. Here, its principal features are described, followed by comparisons between scattering data obtained by the new kernel, and the standard ideal gas kernel, when such comparisons are meaningful (i.e., for constant values of the scattering cross section at 0 K). The new ideal gas kernel for variable σ_s^0(E) at 0 K leads to the correct Doppler-broadened σ_s^T(E) at temperature T

  15. Algebraic techniques for diagonalization of a split quaternion matrix in split quaternionic mechanics

    International Nuclear Information System (INIS)

    Jiang, Tongsong; Jiang, Ziwu; Zhang, Zhaozhong

    2015-01-01

    In the study of the relation between complexified classical and non-Hermitian quantum mechanics, physicists found that there are links to quaternionic and split quaternionic mechanics, and this leads to the possibility of employing algebraic techniques of split quaternions to tackle some problems in complexified classical and quantum mechanics. This paper, by means of real representation of a split quaternion matrix, studies the problem of diagonalization of a split quaternion matrix and gives algebraic techniques for diagonalization of split quaternion matrices in split quaternionic mechanics

  16. Quasi-Dual-Packed-Kerneled Au49 (2,4-DMBT)27 Nanoclusters and the Influence of Kernel Packing on the Electrochemical Gap.

    Science.gov (United States)

    Liao, Lingwen; Zhuang, Shengli; Wang, Pu; Xu, Yanan; Yan, Nan; Dong, Hongwei; Wang, Chengming; Zhao, Yan; Xia, Nan; Li, Jin; Deng, Haiteng; Pei, Yong; Tian, Shi-Kai; Wu, Zhikun

    2017-10-02

    Although face-centered cubic (fcc), body-centered cubic (bcc), hexagonal close-packed (hcp), and other structured gold nanoclusters have been reported, it was unclear whether gold nanoclusters with mix-packed (fcc and non-fcc) kernels exist, and the correlation between kernel packing and the properties of gold nanoclusters is unknown. A Au49(2,4-DMBT)27 nanocluster with a shell electron count of 22 has now been synthesized and structurally resolved by single-crystal X-ray crystallography, which revealed that Au49(2,4-DMBT)27 contains a unique Au34 kernel consisting of one quasi-fcc-structured Au21 unit and one non-fcc-structured Au13 unit (where 2,4-DMBTH = 2,4-dimethylbenzenethiol). Further experiments revealed that the kernel packing greatly influences the electrochemical gap (EG) and that the fcc structure has a larger EG than the investigated non-fcc structure. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Sentiment classification with interpolated information diffusion kernels

    NARCIS (Netherlands)

    Raaijmakers, S.

    2007-01-01

    Information diffusion kernels - similarity metrics in non-Euclidean information spaces - have been found to produce state of the art results for document classification. In this paper, we present a novel approach to global sentiment classification using these kernels. We carry out a large array of

  18. Spectral properties and scaling relations in off diagonally disordered chains

    International Nuclear Information System (INIS)

    Ure, J.E.; Majlis, N.

    1987-07-01

    We obtain the localization length L as a function of the energy E and the disorder width W for an off-diagonally disordered chain. This is done by performing numerical simulations involving the continued fraction representations of the transfer matrix. The scaling relation L = W^s is obtained with values of the exponent s in agreement with calculations of other authors. We also obtain the relation L ∼ |E|^v for E → 0, and use it in the Herbert-Spencer-Thouless formula for L to describe the singularity of the density of states near E = 0. We show that the slightest diagonal disorder obliterates this singularity. A practical method is presented to calculate the Green function by exploiting its continued fraction expansion. (author). 20 refs, 4 figs
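
The central quantities, a transfer-matrix recursion and the Lyapunov exponent whose inverse is the localization length L, can be sketched directly; the uniform hopping distribution, energy and disorder widths below are illustrative assumptions, and simple renormalized iteration stands in for the authors' continued-fraction method.

```python
import numpy as np

def localization_length(E, W, N=200_000, seed=0):
    """Inverse Lyapunov exponent of a 1D tight-binding chain with
    off-diagonal (hopping) disorder, via the transfer-matrix recursion
    t_n psi_{n+1} = E psi_n - t_{n-1} psi_{n-1}."""
    rng = np.random.default_rng(seed)
    t = rng.uniform(1 - W / 2, 1 + W / 2, size=N + 1)  # random hoppings
    psi_prev, psi = 1.0, 1.0
    log_norm = 0.0
    for n in range(1, N):
        psi_next = (E * psi - t[n - 1] * psi_prev) / t[n]
        psi_prev, psi = psi, psi_next
        m = abs(psi)
        if m > 1e100 or (0 < m < 1e-100):              # avoid over/underflow
            log_norm += np.log(m)
            psi_prev, psi = psi_prev / m, psi / m
    gamma = (log_norm + np.log(max(abs(psi), 1e-300))) / N
    return 1.0 / gamma

# Localization length shrinks as the disorder width grows
L_weak = localization_length(0.5, 0.2)
L_strong = localization_length(0.5, 0.8)
assert L_strong < L_weak
```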

  19. Evolution kernel for the Dirac field

    International Nuclear Information System (INIS)

    Baaquie, B.E.

    1982-06-01

    The evolution kernel for the free Dirac field is calculated using the Wilson lattice fermions. We discuss the difficulties due to which this calculation has not been previously performed in the continuum theory. The continuum limit is taken, and the complete energy eigenfunctions as well as the propagator are then evaluated in a new manner using the kernel. (author)

  20. Numerical study of the ignition behavior of a post-discharge kernel injected into a turbulent stratified cross-flow

    Science.gov (United States)

    Jaravel, Thomas; Labahn, Jeffrey; Ihme, Matthias

    2017-11-01

    The reliable initiation of flame ignition by high-energy spark kernels is critical for the operability of aviation gas turbines. The evolution of a spark kernel ejected by an igniter into a turbulent stratified environment is investigated using detailed numerical simulations with complex chemistry. At early times post ejection, comparisons of simulation results with high-speed Schlieren data show that the initial trajectory of the kernel is well reproduced, with a significant amount of air entrainment from the surrounding flow that is induced by the kernel ejection. After transiting in a non-flammable mixture, the kernel reaches a second stream of flammable methane-air mixture, where the success of the kernel ignition was found to depend on the local flow state and operating conditions. By performing parametric studies, the probability of kernel ignition was identified and compared with experimental observations. The ignition behavior is characterized by analyzing the local chemical structure, and its stochastic variability is also investigated.

  1. Adaptive metric kernel regression

    DEFF Research Database (Denmark)

    Goutte, Cyril; Larsen, Jan

    2000-01-01

    Kernel smoothing is a widely used non-parametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this contribution, we propose an algorithm that adapts the input metric used in multivariate regression by minimising a cross-validation estimate of the generalisation error. This allows the importance of different dimensions to be adjusted automatically. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms...
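
A crude sketch of the idea: adapt per-dimension bandwidths of a Nadaraya-Watson kernel regressor by minimising a leave-one-out cross-validation error. Grid-based coordinate descent here stands in for the paper's optimisation, and the synthetic data (one relevant dimension, one pure-noise dimension) are assumptions.

```python
import numpy as np

def nw_loo_error(X, y, h):
    """Leave-one-out squared error of a Nadaraya-Watson estimator with
    per-dimension bandwidths h (the 'metric' being adapted)."""
    d2 = (((X[:, None, :] - X[None, :, :]) / h) ** 2).sum(-1)
    W = np.exp(-0.5 * d2)
    np.fill_diagonal(W, 0.0)             # leave-one-out: drop self-weight
    pred = (W @ y) / W.sum(axis=1)
    return ((pred - y) ** 2).mean()

rng = np.random.default_rng(3)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(3 * X[:, 0]) + 0.05 * rng.normal(size=200)   # dim 1 is pure noise

# Coordinate descent over a small bandwidth grid
grid = [0.05, 0.1, 0.2, 0.5, 1.0, 3.0, 10.0]
h = np.array([1.0, 1.0])
for _ in range(3):
    for d in range(2):
        errs = []
        for g in grid:
            h_try = h.copy()
            h_try[d] = g
            errs.append(nw_loo_error(X, y, h_try))
        h[d] = grid[int(np.argmin(errs))]

# The adapted metric uses a wide bandwidth on the irrelevant dimension,
# effectively ignoring it, and beats the isotropic default
assert h[1] > h[0]
```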

  2. Free energy on a cycle graph and trigonometric deformation of heat kernel traces on odd spheres

    Science.gov (United States)

    Kan, Nahomi; Shiraishi, Kiyoshi

    2018-01-01

    We consider a possible ‘deformation’ of the trace of the heat kernel on odd dimensional spheres, motivated by the calculation of the free energy of a scalar field on a discretized circle. By using an expansion in terms of the modified Bessel functions, we obtain the values of the free energies after a suitable regularization.
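
For a massless scalar on a cycle graph with N sites, the graph Laplacian eigenvalues 4 sin²(πk/N) give mode frequencies ωₖ = 2|sin(πk/N)|, and the zero-point sum has a closed form via ∑ₖ₌₁^{N-1} sin(πk/N) = cot(π/(2N)). A quick numerical check of this discretized-circle ingredient (illustrative; it does not reproduce the paper's regularized odd-sphere computation):

```python
import numpy as np

# Normal-mode frequencies of a massless scalar on a cycle graph with N
# sites: Laplacian eigenvalues 4 sin^2(pi k / N), so omega_k = 2|sin(pi k / N)|
N = 17
k = np.arange(N)
omega = 2.0 * np.abs(np.sin(np.pi * k / N))

# Zero-point energy E0 = (1/2) sum_k omega_k = cot(pi / (2N)),
# using sum_{k=1}^{N-1} sin(pi k / N) = cot(pi / (2N))
E0 = 0.5 * omega.sum()
assert np.isclose(E0, 1.0 / np.tan(np.pi / (2 * N)))
```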

  3. Photon beam convolution using polyenergetic energy deposition kernels

    International Nuclear Information System (INIS)

    Hoban, P.W.; Murray, D.C.; Round, W.H.

    1994-01-01

    In photon beam convolution calculations where polyenergetic energy deposition kernels (EDKs) are used, the primary photon energy spectrum should be correctly accounted for in Monte Carlo generation of EDKs. This requires the probability of interaction, determined by the linear attenuation coefficient, μ, to be taken into account when primary photon interactions are forced to occur at the EDK origin. The use of primary and scattered EDKs generated with a fixed photon spectrum can give rise to an error in the dose calculation due to neglecting the effects of beam hardening with depth. The proportion of primary photon energy that is transferred to secondary electrons increases with depth of interaction, due to the increase in the ratio μ_ab/μ as the beam hardens. Convolution depth-dose curves calculated using polyenergetic EDKs generated for the primary photon spectra which exist at depths of 0, 20 and 40 cm in water, show a fall-off which is too steep when compared with EGS4 Monte Carlo results. A beam hardening correction factor applied to primary and scattered 0 cm EDKs, based on the ratio of kerma to terma at each depth, gives primary, scattered and total dose in good agreement with Monte Carlo results. (Author)

  4. Measurement of off-diagonal transport coefficients in two-phase flow in porous media.

    Science.gov (United States)

    Ramakrishnan, T S; Goode, P A

    2015-07-01

    The prevalent description of low capillary number two-phase flow in porous media relies on the independence of phase transport. An extended Darcy's law with a saturation dependent effective permeability is used for each phase. The driving force for each phase is given by its pressure gradient and the body force. This diagonally dominant form neglects momentum transfer from one phase to the other. Numerical and analytical modeling in regular geometries have however shown that while this approximation is simple and acceptable in some cases, many practical problems require inclusion of momentum transfer across the interface. Its inclusion leads to a generalized form of extended Darcy's law in which both the diagonal relative permeabilities and the off-diagonal terms depend not only on saturation but also on the viscosity ratio. Analogous to application of thermodynamics to dynamical systems, any of the extended forms of Darcy's law assumes quasi-static interfaces of fluids for describing displacement problems. Despite the importance of the permeability coefficients in oil recovery, soil moisture transport, contaminant removal, etc., direct measurements to infer the magnitude of the off-diagonal coefficients have been lacking. The published data based on cocurrent and countercurrent displacement experiments are necessarily indirect. In this paper, we propose a null experiment to measure the off-diagonal term directly. For a given non-wetting phase pressure-gradient, the null method is based on measuring a counter pressure drop in the wetting phase required to maintain a zero flux. The ratio of the off-diagonal coefficient to the wetting phase diagonal coefficient (relative permeability) may then be determined. The apparatus is described in detail, along with the results obtained. We demonstrate the validity of the experimental results and conclude the paper by comparing experimental data to numerical simulation. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. Non-linear modeling of 1H NMR metabonomic data using kernel-based orthogonal projections to latent structures optimized by simulated annealing

    International Nuclear Information System (INIS)

    Fonville, Judith M.; Bylesjoe, Max; Coen, Muireann; Nicholson, Jeremy K.; Holmes, Elaine; Lindon, John C.; Rantalainen, Mattias

    2011-01-01

    Highlights: → Non-linear modeling of metabonomic data using K-OPLS. → automated optimization of the kernel parameter by simulated annealing. → K-OPLS provides improved prediction performance for exemplar spectral data sets. → software implementation available for R and Matlab under GPL v2 license. - Abstract: Linear multivariate projection methods are frequently applied for predictive modeling of spectroscopic data in metabonomic studies. The OPLS method is a commonly used computational procedure for characterizing spectral metabonomic data, largely due to its favorable model interpretation properties providing separate descriptions of predictive variation and response-orthogonal structured noise. However, when the relationship between descriptor variables and the response is non-linear, conventional linear models will perform sub-optimally. In this study we have evaluated to what extent a non-linear model, kernel-based orthogonal projections to latent structures (K-OPLS), can provide enhanced predictive performance compared to the linear OPLS model. Just like its linear counterpart, K-OPLS provides separate model components for predictive variation and response-orthogonal structured noise. The improved model interpretation by this separate modeling is a property unique to K-OPLS in comparison to other kernel-based models. Simulated annealing (SA) was used for effective and automated optimization of the kernel-function parameter in K-OPLS (SA-K-OPLS). Our results reveal that the non-linear K-OPLS model provides improved prediction performance in three separate metabonomic data sets compared to the linear OPLS model. We also demonstrate how response-orthogonal K-OPLS components provide valuable biological interpretation of model and data. The metabonomic data sets were acquired using proton Nuclear Magnetic Resonance (NMR) spectroscopy, and include a study of the liver toxin galactosamine, a study of the nephrotoxin mercuric chloride and a study of

  6. Vaidya spacetime in the diagonal coordinates

    Energy Technology Data Exchange (ETDEWEB)

    Berezin, V. A., E-mail: berezin@inr.ac.ru; Dokuchaev, V. I., E-mail: dokuchaev@inr.ac.ru; Eroshenko, Yu. N., E-mail: eroshenko@inr.ac.ru [Russian Academy of Sciences, Institute for Nuclear Research (Russian Federation)

    2017-03-15

    We have analyzed the transformation from initial coordinates (v, r) of the Vaidya metric with light coordinate v to the most physical diagonal coordinates (t, r). An exact solution has been obtained for the corresponding metric tensor in the case of a linear dependence of the mass function of the Vaidya metric on light coordinate v. In the diagonal coordinates, a narrow region (with a width proportional to the mass growth rate of a black hole) has been detected near the visibility horizon of the Vaidya accreting black hole, in which the metric differs qualitatively from the Schwarzschild metric and cannot be represented as a small perturbation. It has been shown that, in this case, a single set of diagonal coordinates (t, r) is insufficient to cover the entire range of initial coordinates (v, r) outside the visibility horizon; at least three sets of diagonal coordinates are required, the domains of which are separated by singular surfaces on which the metric components have singularities (either g₀₀ = 0 or g₀₀ = ∞). The energy–momentum tensor diverges on these surfaces; however, the tidal forces turn out to be finite, which follows from an analysis of the deviation equations for geodesics. Therefore, these singular surfaces are exclusively coordinate singularities that can be referred to as false fire-walls because there are no physical singularities on them. We have also considered the transformation from the initial coordinates to other diagonal coordinates (η, y), in which the solution is obtained in explicit form, and there is no energy–momentum tensor divergence.

  7. Proof and implementation of the stochastic formula for ideal gas, energy dependent scattering kernel

    International Nuclear Information System (INIS)

    Becker, B.; Dagan, R.; Lohnert, G.

    2009-01-01

    The ideal gas scattering kernel for heavy nuclei with pronounced resonances was developed [Rothenstein, W., Dagan, R., 1998. Ann. Nucl. Energy 25, 209-222], proved and implemented [Rothenstein, W., 2004. Ann. Nucl. Energy 31, 9-23] in the data processing code NJOY [Macfarlane, R.E., Muir, D.W., 1994. The NJOY Nuclear Data Processing System Version 91, LA-12740-M], from which the scattering probability tables were prepared [Dagan, R., 2005. Ann. Nucl. Energy 32, 367-377]. Those tables were introduced into the well known MCNP code [X-5 Monte Carlo Team. MCNP - A General Monte Carlo N-Particle Transport Code version 5, LA-UR-03-1987] via the 'mt' input cards, in the same manner as is done for light nuclei in the thermal energy range. In this study we present an alternative methodology for solving the double differential energy dependent scattering kernel which is based solely on stochastic considerations as far as the scattering probabilities are concerned. The solution scheme is based on an alternative rejection scheme suggested by Rothenstein [Rothenstein, W., ENS conference 1994, Tel Aviv]. Based on comparison with the above mentioned analytical (probability S(α,β)-tables) approach, it is confirmed that the suggested rejection scheme provides accurate results. The uncertainty concerning the magnitude of the bias due to the enhanced multiple rejections during the sampling procedure is shown to lie within 1-2 standard deviations for all practical cases that were analysed.
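
The rejection principle underlying such a sampling scheme can be illustrated generically; the enhanced rejection scheme for the resonance kernel itself is not reproduced, and the bounded target density below is an arbitrary stand-in chosen so the result can be checked against known moments.

```python
import numpy as np

def rejection_sample(pdf_unnorm, bound, n, rng):
    """Sample from an unnormalized density on [0, 1] by rejection from a
    uniform proposal; `bound` must dominate pdf_unnorm on [0, 1]."""
    out = []
    while len(out) < n:
        x = rng.uniform(size=n)          # proposals
        u = rng.uniform(size=n)          # acceptance variates
        out.extend(x[u * bound < pdf_unnorm(x)])
    return np.array(out[:n])

rng = np.random.default_rng(4)
target = lambda x: x * (1.0 - x)         # Beta(2,2) shape, max value 0.25
s = rejection_sample(target, 0.25, 200_000, rng)

# Beta(2,2): mean 1/2, variance 1/20
assert abs(s.mean() - 0.5) < 0.01
assert abs(s.var() - 0.05) < 0.01
```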

  8. Kernel bundle EPDiff

    DEFF Research Database (Denmark)

    Sommer, Stefan Horst; Lauze, Francois Bernard; Nielsen, Mads

    2011-01-01

    In the LDDMM framework, optimal warps for image registration are found as end-points of critical paths for an energy functional, and the EPDiff equations describe the evolution along such paths. The Large Deformation Diffeomorphic Kernel Bundle Mapping (LDDKBM) extension of LDDMM allows scale space...

  9. Trends and Effective Use of Energy Input in the Palm Kernel Oil Mills

    Directory of Open Access Journals (Sweden)

    Bamgboye, AI.

    2007-01-01

    This work studies the importance and the efficiency of energy use in a few palm kernel oil mills selected for their representativeness. The pattern of energy use, the cost of energy per unit product, the energy intensity and the normalized performance indicator (NPI) were determined. Results show that the medium and large mills depend largely on fossil fuel, while the small mill depends on electricity. The large mill was found to have the most effective use of energy, with high energy intensity. Annual costs of energy per unit product of N8,360,000 ($64,307.69), N12,262,250 ($94,325) and N13,353,870 ($102,722.08) were obtained for the small, medium and large mills, respectively. The NPI results show that no energy was wasted through space heating in the energy supplied for production within the factory site.

  10. Dose point kernels for beta-emitting radioisotopes

    International Nuclear Information System (INIS)

    Prestwich, W.V.; Chan, L.B.; Kwok, C.S.; Wilson, B.

    1986-01-01

    Knowledge of the dose point kernel corresponding to a specific radionuclide is required to calculate the spatial dose distribution produced in a homogeneous medium by a distributed source. Dose point kernels for commonly used radionuclides have been calculated previously using as a basis monoenergetic dose point kernels derived by numerical integration of a model transport equation. The treatment neglects fluctuations in energy deposition, an effect which has been later incorporated in dose point kernels calculated using Monte Carlo methods. This work describes new calculations of dose point kernels using the Monte Carlo results as a basis. An analytic representation of the monoenergetic dose point kernels has been developed. This provides a convenient method both for calculating the dose point kernel associated with a given beta spectrum and for incorporating the effect of internal conversion. An algebraic expression for allowed beta spectra has been accomplished through an extension of the Bethe-Bacher approximation, and tested against the exact expression. Simplified expression for first-forbidden shape factors have also been developed. A comparison of the calculated dose point kernel for ³²P with experimental data indicates good agreement with a significant improvement over the earlier results in this respect. An analytic representation of the dose point kernel associated with the spectrum of a single beta group has been formulated. 9 references, 16 figures, 3 tables
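
The basic construction, a spectrum-weighted combination of monoenergetic kernels that conserves energy, can be sketched with hypothetical analytic mono-kernels; the exponential radial profile, range-energy relation and toy beta spectrum below are assumptions for illustration, not the paper's Monte Carlo data.

```python
import numpy as np

r = np.linspace(0.001, 2.0, 4000)        # radial grid, cm
dr = r[1] - r[0]

def mono_kernel(E):
    """Hypothetical monoenergetic point kernel: energy E deposited with an
    exponential radial profile whose range scales with E."""
    R = 0.4 * E                          # assumed range-energy relation, cm
    profile = np.exp(-r / R)
    norm = 4 * np.pi * (r ** 2 * profile * dr).sum()
    return E * profile / norm            # deposits total energy E

# Toy allowed-shape beta spectrum N(E) on a grid (arbitrary units)
E_grid = np.linspace(0.05, 1.7, 100)     # MeV
N = np.sqrt(E_grid) * (1.7 - E_grid) ** 2
N /= N.sum()

# Spectrum-averaged dose point kernel: weighted sum of mono-kernels
F = sum(w * mono_kernel(E) for w, E in zip(N, E_grid))

# Check energy conservation: total deposited energy = mean beta energy
total = 4 * np.pi * (r ** 2 * F * dr).sum()
assert np.isclose(total, (N * E_grid).sum(), rtol=1e-6)
```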

  11. Diagonal Born-Oppenheimer correction for coupled-cluster wave-functions

    Science.gov (United States)

    Shamasundar, K. R.

    2018-06-01

    We examine how the geometry-dependent normalisation freedom of electronic wave-functions affects the extraction of a meaningful diagonal Born-Oppenheimer correction (DBOC) to the ground-state Born-Oppenheimer potential energy surface (PES). By viewing this freedom as a kind of gauge freedom, it is shown that the DBOC and the resulting mass-dependent adiabatic PES are gauge-invariant quantities. A sum-over-states (SOS) formula for the DBOC which explicitly exhibits this invariance is derived. A biorthogonal formulation suitable for DBOC computations using standard unnormalised coupled-cluster (CC) wave-functions is presented. This is shown to lead to a biorthogonal version of the SOS formula with similar properties. On this basis, different computational schemes for evaluating the DBOC using approximate CC wave-functions are derived. One of these agrees with the formula used in the current literature. The connection to adiabatic-to-diabatic transformations in non-adiabatic dynamics is explored, and complications arising from the biorthogonal nature of CC theory are identified.
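For orientation, the DBOC and its sum-over-states form for a real, normalized ground-state wave-function can be written in the standard textbook way (the paper's gauge-invariant and biorthogonal CC versions generalize this):

```latex
\Delta E_{\mathrm{DBOC}}(\mathbf{R})
  = \sum_{\alpha} \frac{1}{2M_\alpha}
    \left\langle \nabla_\alpha \psi_0 \middle| \nabla_\alpha \psi_0 \right\rangle
  = \sum_{\alpha} \frac{1}{2M_\alpha} \sum_{n \neq 0}
    \frac{\bigl| \langle \psi_n | \nabla_\alpha \hat{H}_{\mathrm{el}} | \psi_0 \rangle \bigr|^2}
         {\left( E_n - E_0 \right)^2}
```

Here M_α are the nuclear masses and the sum runs over excited electronic states; the second equality uses first-order perturbation theory for ∇_α ψ₀ and the fact that ⟨ψ₀|∇_α ψ₀⟩ vanishes for a real, normalized ψ₀.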

  12. Locally linear approximation for Kernel methods : the Railway Kernel

    OpenAIRE

    Muñoz, Alberto; González, Javier

    2008-01-01

    In this paper we present a new kernel, the Railway Kernel, that works properly for general (nonlinear) classification problems, with the interesting property that it acts locally as a linear kernel. In this way, we avoid potential problems due to the use of a general-purpose kernel, like the RBF kernel, such as the high dimension of the induced feature space. As a consequence, following our methodology the number of support vectors is much lower and, therefore, the generalization capab...

  13. Multivariate realised kernels

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole; Hansen, Peter Reinhard; Lunde, Asger

    We propose a multivariate realised kernel to estimate the ex-post covariation of log-prices. We show that this new consistent estimator is guaranteed to be positive semi-definite, is robust to measurement noise of certain types, and can also handle non-synchronous trading. It is the first estimator...

  14. Workshop report on large-scale matrix diagonalization methods in chemistry theory institute

    Energy Technology Data Exchange (ETDEWEB)

    Bischof, C.H.; Shepard, R.L.; Huss-Lederman, S. [eds.

    1996-10-01

    The Large-Scale Matrix Diagonalization Methods in Chemistry theory institute brought together 41 computational chemists and numerical analysts. The goal was to understand the needs of the computational chemistry community in problems that utilize matrix diagonalization techniques. This was accomplished by reviewing the current state of the art and looking toward future directions in matrix diagonalization techniques. This institute occurred about 20 years after a related meeting of similar size. During those 20 years the Davidson method continued to dominate the problem of finding a few extremal eigenvalues for many computational chemistry problems. Work on non-diagonally dominant and non-Hermitian problems as well as parallel computing has also brought new methods to bear. The changes and similarities in problems and methods over the past two decades offered an interesting viewpoint on the successes in this area. One important area covered by the talks was overviews of the source and nature of the chemistry problems. The numerical analysts were uniformly grateful for the efforts to convey a better understanding of the problems and issues faced in computational chemistry. An important outcome was an understanding of the wide range of eigenproblems encountered in computational chemistry. The workshop covered problems involving self-consistent-field (SCF), configuration interaction (CI), intramolecular vibrational relaxation (IVR), and scattering problems. In atomic structure calculations using the Hartree-Fock method (SCF), the symmetric matrices can range from order hundreds to thousands. These matrices often include large clusters of eigenvalues which can be as much as 25% of the spectrum. However, if CI methods are also used, the matrix size can be between 10⁴ and 10⁹, where only one or a few extremal eigenvalues and eigenvectors are needed. Working with very large matrices has led to the development of
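Since the Davidson method figures so centrally here, a minimal dense-matrix sketch may help orient readers. Real chemistry codes work matrix-free on far larger problems; the matrix, starting vector, and tolerances below are illustrative assumptions:

```python
import numpy as np

def davidson_lowest(A, tol=1e-8, max_iter=200):
    # Basic Davidson iteration for the lowest eigenpair of a symmetric,
    # diagonally dominant matrix A (a textbook sketch, not production code).
    n = A.shape[0]
    diag = np.diag(A).copy()
    V = np.zeros((n, 1))
    V[np.argmin(diag), 0] = 1.0          # start at the smallest diagonal entry
    theta, x = diag.min(), V[:, 0]
    for _ in range(max_iter):
        V, _ = np.linalg.qr(V)           # re-orthonormalize the subspace
        H = V.T @ A @ V                  # Rayleigh-Ritz projection
        vals, vecs = np.linalg.eigh(H)
        theta, x = vals[0], V @ vecs[:, 0]
        r = A @ x - theta * x            # residual vector
        if np.linalg.norm(r) < tol:
            break
        denom = diag - theta             # Davidson diagonal preconditioner
        denom[np.abs(denom) < 1e-12] = 1e-12
        V = np.hstack([V, (r / denom)[:, None]])
    return theta, x
```

The diagonal preconditioner is exactly why the method shines on the diagonally dominant matrices typical of CI problems, and why the workshop's interest in non-diagonally dominant cases called for new methods.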

  15. Point kernels and superposition methods for scatter dose calculations in brachytherapy

    International Nuclear Information System (INIS)

    Carlsson, A.K.

    2000-01-01

    Point kernels have been generated and applied for calculation of scatter dose distributions around monoenergetic point sources for photon energies ranging from 28 to 662 keV. Three different approaches for dose calculations have been compared: a single-kernel superposition method, a single-kernel superposition method where the point kernels are approximated as isotropic and a novel 'successive-scattering' superposition method for improved modelling of the dose from multiply scattered photons. An extended version of the EGS4 Monte Carlo code was used for generating the kernels and for benchmarking the absorbed dose distributions calculated with the superposition methods. It is shown that dose calculation by superposition at and below 100 keV can be simplified by using isotropic point kernels. Compared to the assumption of full in-scattering made by algorithms currently in clinical use, the single-kernel superposition method improves dose calculations in a half-phantom consisting of air and water. Further improvements are obtained using the successive-scattering superposition method, which reduces the overestimates of dose close to the phantom surface usually associated with kernel superposition methods at brachytherapy photon energies. It is also shown that scatter dose point kernels can be parametrized to biexponential functions, making them suitable for use with an effective implementation of the collapsed cone superposition algorithm. (author)
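The biexponential parametrization and the kernel-superposition step described above can be sketched as follows. The kernel coefficients and geometry are placeholder values for illustration, not the author's fitted parameters:

```python
import numpy as np

def biexp_kernel(r, A1=1.0, mu1=1.2, A2=0.15, mu2=0.25):
    # Hypothetical biexponential scatter-kernel parametrization k(r), in
    # arbitrary units; the coefficients are placeholders, not fitted values.
    return A1 * np.exp(-mu1 * r) + A2 * np.exp(-mu2 * r)

def superpose(source_pts, weights, field_pts):
    # Single-kernel superposition: sum the kernel contribution from each
    # source point to each calculation point.
    d = np.linalg.norm(field_pts[:, None, :] - source_pts[None, :, :], axis=-1)
    return (weights[None, :] * biexp_kernel(d)).sum(axis=1)
```

For a single unit source this reduces to the kernel itself, which is the sanity check used below; a clinical collapsed-cone implementation would trace the same parametrized kernel along discrete cone directions instead.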

  16. Behavior of Shear Link of WF Section with Diagonal Web Stiffener of Eccentrically Braced Frame (EBF of Steel Structure

    Directory of Open Access Journals (Sweden)

    Yurisman

    2010-11-01

    Full Text Available This paper presents results of a numerical and experimental study of shear link behavior, utilizing a diagonal stiffener on the web of a steel profile to increase shear link performance in an eccentrically braced frame (EBF) of a steel structure system. The specimens examine the behavior of shear links with a diagonal stiffener on the web under static monotonic and cyclic loads. The cyclic loading pattern conducted in the experiment is adjusted according to the AISC 2005 loading standards. Analysis was carried out using the non-linear finite element method with MSC/NASTRAN software. The link was modeled with CQUAD shell elements. Along the boundary of the loading area the nodes are constrained to produce loading in one direction only. The length of the link in this analysis is 400 mm for a WF 200.100 steel profile. Parameters considered to affect the performance of the shear link significantly were analyzed, namely the flange and web thicknesses, the thickness and length of the web stiffener, and the thickness and geometry of the diagonal stiffener. The behavior of shear links with a diagonal web stiffener was compared with the behavior of standard links designed to the AISC 2005 criteria. Analysis results show that a diagonal web stiffener is capable of increasing shear link performance in terms of stiffness, strength and energy dissipation in supporting lateral load. However, differences in displacement ductility between shear links with diagonal stiffeners and shear links based on the AISC standards were not significant. Analysis results also show the thickness of the diagonal stiffener and the geometric model of the stiffener to have a significant influence on the performance of shear links. To validate the numerical study, experimental work was conducted in the Structural Mechanics Laboratory, Center for Industrial Engineering, PAU-ITB. The experiments were carried out using three test

  17. An Extreme Learning Machine Based on the Mixed Kernel Function of Triangular Kernel and Generalized Hermite Dirichlet Kernel

    Directory of Open Access Journals (Sweden)

    Senyue Zhang

    2016-01-01

    Full Text Available Because the kernel function of an extreme learning machine (ELM) correlates strongly with its performance, a novel extreme learning machine based on a generalized triangular Hermitian kernel function is proposed in this paper. First, the generalized triangular Hermitian kernel function was constructed as the product of a triangular kernel and a generalized Hermite Dirichlet kernel, and the proposed kernel function was proved to be a valid kernel function for an extreme learning machine. Then, the learning methodology of the extreme learning machine based on the proposed kernel function is presented. The biggest advantage of the proposed kernel is that its kernel parameter takes values only in the natural numbers, which can greatly shorten the computational time of parameter optimization and retain more of the sample data structure information. Experiments were performed on a number of binary classification, multiclassification, and regression datasets from the UCI benchmark repository. The experimental results demonstrate that the robustness and generalization performance of the proposed method outperform those of other extreme learning machines with different kernels. Furthermore, the learning speed of the proposed method is faster than that of support vector machine (SVM) methods.
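The kernel-ELM training step behind this kind of model can be sketched as follows. The second kernel component here is a plain RBF kernel standing in for the paper's generalized Hermite Dirichlet kernel, and all parameter values are illustrative assumptions:

```python
import numpy as np

def triangular_kernel(X, Y, d=4.0):
    # Triangular kernel: max(0, 1 - ||x - y|| / d), with support radius d.
    r = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    return np.maximum(0.0, 1.0 - r / d)

def rbf_kernel(X, Y, gamma=1.0):
    # Stand-in second component (an ordinary RBF kernel); the paper's
    # generalized Hermite Dirichlet kernel is not reproduced here.
    r2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * r2)

def mixed_kernel(X, Y, w=0.5):
    # Convex combination of the two components, mirroring a mixed kernel.
    return w * triangular_kernel(X, Y) + (1.0 - w) * rbf_kernel(X, Y)

def kelm_fit(X, T, C=100.0):
    # Kernel-ELM output weights: beta = (K + I/C)^(-1) T
    K = mixed_kernel(X, X)
    return np.linalg.solve(K + np.eye(len(X)) / C, T)

def kelm_predict(X_train, beta, X_new):
    return mixed_kernel(X_new, X_train) @ beta
```

Training is a single regularized linear solve, which is the source of the learning-speed advantage over SVM training.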

  18. Option Valuation with Volatility Components, Fat Tails, and Non-Monotonic Pricing Kernels

    DEFF Research Database (Denmark)

    Babaoglu, Kadir; Christoffersen, Peter; Heston, Steven L.

    We nest multiple volatility components, fat tails and a U-shaped pricing kernel in a single option model and compare their contribution to describing returns and option data. All three features lead to statistically significant model improvements. A U-shaped pricing kernel is economically most im...

  19. A Diagonal-Steering-Based Binaural Beamforming Algorithm Incorporating a Diagonal Speech Localizer for Persons With Bilateral Hearing Impairment.

    Science.gov (United States)

    Lee, Jun Chang; Nam, Kyoung Won; Jang, Dong Pyo; Kim, In Young

    2015-12-01

    Previously suggested diagonal-steering algorithms for binaural hearing support devices have commonly assumed that the direction of the speech signal is known in advance, which is not always the case in many real circumstances. In this study, a new diagonal-steering-based binaural speech localization (BSL) algorithm is proposed, and the performances of the BSL algorithm and the binaural beamforming algorithm, which integrates the BSL and diagonal-steering algorithms, were evaluated using actual speech-in-noise signals in several simulated listening scenarios. Testing sounds were recorded in a KEMAR mannequin setup and two objective indices, improvements in signal-to-noise ratio (SNRi) and segmental SNR (segSNRi), were utilized for performance evaluation. Experimental results demonstrated that the accuracy of the BSL was in the 90-100% range when input SNR was -10 to +5 dB range. The average differences between the γ-adjusted and γ-fixed diagonal-steering algorithms (for -15 to +5 dB input SNR) in the talking in the restaurant scenario were 0.203-0.937 dB for SNRi and 0.052-0.437 dB for segSNRi, and in the listening while car driving scenario, the differences were 0.387-0.835 dB for SNRi and 0.259-1.175 dB for segSNRi. In addition, the average difference between the BSL-turned-on and the BSL-turned-off cases for the binaural beamforming algorithm in the listening while car driving scenario was 1.631-4.246 dB for SNRi and 0.574-2.784 dB for segSNRi. In all testing conditions, the γ-adjusted diagonal-steering and BSL algorithm improved the values of the indices more than the conventional algorithms. The binaural beamforming algorithm, which integrates the proposed BSL and diagonal-steering algorithm, is expected to improve the performance of the binaural hearing support devices in noisy situations. Copyright © 2015 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.

  20. Uranium kernel formation via internal gelation

    International Nuclear Information System (INIS)

    Hunt, R.D.; Collins, J.L.

    2004-01-01

    In the 1970s and 1980s, the U.S. Department of Energy (DOE) conducted numerous studies on the fabrication of nuclear fuel particles using the internal gelation process. These amorphous kernels were prone to flaking or breaking when gases tried to escape from the kernels during calcination and sintering. These earlier kernels would not meet today's proposed specifications for reactor fuel. In the interim, the internal gelation process has been used to create hydrous metal oxide microspheres for the treatment of nuclear waste. With the renewed interest in advanced nuclear fuel by the DOE, the lessons learned from the nuclear waste studies were recently applied to the fabrication of uranium kernels, which will become tri-isotropic (TRISO) fuel particles. These process improvements included equipment modifications, small changes to the feed formulations, and a new temperature profile for the calcination and sintering. The modifications to the laboratory-scale equipment and its operation as well as small changes to the feed composition increased the product yield from 60% to 80%-99%. The new kernels were substantially less glassy, and no evidence of flaking was found. Finally, key process parameters were identified, and their effects on the uranium microspheres and kernels are discussed. (orig.)

  1. Self-consistent cluster theory for systems with off-diagonal disorder

    International Nuclear Information System (INIS)

    Kaplan, T.; Leath, P.L.; Gray, L.J.; Diehl, H.W.

    1980-01-01

    A self-consistent cluster theory for elementary excitations in systems with diagonal, off-diagonal, and environmental disorder is presented. The theory is developed in augmented space where the configurational average over the disorder is replaced by a ground-state matrix element in a translationally invariant system. The analyticity of the resulting approximate Green's function is proved. Numerical results for the self-consistent single-site and pair approximations are presented for the vibrational and electronic properties of disordered linear chains with diagonal, off-diagonal, and environmental disorder

  2. Insights from Classifying Visual Concepts with Multiple Kernel Learning

    Science.gov (United States)

    Binder, Alexander; Nakajima, Shinichi; Kloft, Marius; Müller, Christina; Samek, Wojciech; Brefeld, Ulf; Müller, Klaus-Robert; Kawanabe, Motoaki

    2012-01-01

    Combining information from various image features has become a standard technique in concept recognition tasks. However, the optimal way of fusing the resulting kernel functions is usually unknown in practical applications. Multiple kernel learning (MKL) techniques allow one to determine an optimal linear combination of such similarity matrices. Classical approaches to MKL promote sparse mixtures. Unfortunately, 1-norm regularized MKL variants are often observed to be outperformed by an unweighted sum kernel. The main contributions of this paper are the following: we apply a recently developed non-sparse MKL variant to state-of-the-art concept recognition tasks from the application domain of computer vision. We provide insights on the benefits and limits of non-sparse MKL and compare it against its direct competitors, the sum-kernel SVM and sparse MKL. We report empirical results for the PASCAL VOC 2009 Classification and ImageCLEF2010 Photo Annotation challenge data sets. Data sets (kernel matrices) as well as further information are available at http://doc.ml.tu-berlin.de/image_mkl/(Accessed 2012 Jun 25). PMID:22936970

  3. Kernel and divergence techniques in high energy physics separations

    Science.gov (United States)

    Bouř, Petr; Kůs, Václav; Franc, Jiří

    2017-10-01

    Binary decision trees under the Bayesian decision technique are used for supervised classification of high-dimensional data. We present the great potential of adaptive kernel density estimation as the nested separation method of the supervised binary divergence decision tree. Also, we provide a proof of an alternative computing approach for kernel estimates utilizing the Fourier transform. Further, we apply our method to a Monte Carlo data set from the particle accelerator Tevatron at the DØ experiment in Fermilab and provide final top-antitop signal separation results. We have achieved up to 82% AUC while using the restricted feature selection entering the signal separation procedure.
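Kernel density estimation nested inside a Bayesian decision rule can be sketched in one dimension as follows. The bandwidth, class prior, and data are illustrative assumptions, and a real analysis would use adaptive bandwidths over many features:

```python
import numpy as np

def gauss_kde(x, sample, h):
    # Fixed-bandwidth Gaussian kernel density estimate at points x
    # from a 1-D training sample.
    u = (x[:, None] - sample[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(sample) * h * np.sqrt(2 * np.pi))

def bayes_separate(x, signal, background, p_sig=0.5, h=0.3):
    # Bayesian decision rule: label as signal wherever
    # p_sig * f_signal(x) > (1 - p_sig) * f_background(x).
    f_s = gauss_kde(x, signal, h)
    f_b = gauss_kde(x, background, h)
    return p_sig * f_s > (1 - p_sig) * f_b
```

Each leaf of a divergence decision tree can apply such a density-based decision to the events routed to it.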

  4. Tunneling splitting in double-proton transfer: direct diagonalization results for porphycene.

    Science.gov (United States)

    Smedarchina, Zorka; Siebrand, Willem; Fernández-Ramos, Antonio

    2014-11-07

    Zero-point and excited level splittings due to double-proton tunneling are calculated for porphycene and the results are compared with experiment. The calculation makes use of a multidimensional imaginary-mode Hamiltonian, diagonalized directly by an effective reduction of its dimensionality. Porphycene has a complex potential energy surface with nine stationary configurations that allow a variety of tunneling paths, many of which include classically accessible regions. A symmetry-based approach is used to show that the zero-point level, although located above the cis minimum, corresponds to concerted tunneling along a direct trans - trans path; a corresponding cis - cis path is predicted at higher energy. This supports the conclusion of a previous paper [Z. Smedarchina, W. Siebrand, and A. Fernández-Ramos, J. Chem. Phys. 127, 174513 (2007)] based on the instanton approach to a model Hamiltonian of correlated double-proton transfer. A multidimensional tunneling Hamiltonian is then generated, based on a double-minimum potential along the coordinate of concerted proton motion, which is newly evaluated at the RI-CC2/cc-pVTZ level of theory. To make it suitable for diagonalization, its dimensionality is reduced by treating fast weakly coupled modes in the adiabatic approximation. This results in a coordinate-dependent mass of tunneling, which is included in a unique Hermitian form into the kinetic energy operator. The reduced Hamiltonian contains three symmetric and one antisymmetric mode coupled to the tunneling mode and is diagonalized by a modified Jacobi-Davidson algorithm implemented in the Jadamilu software for sparse matrices. The results are in satisfactory agreement with the observed splitting of the zero-point level and several vibrational fundamentals after a partial reassignment, imposed by recently derived selection rules. They also agree well with instanton calculations based on the same Hamiltonian.

  5. Kernel Machine SNP-set Testing under Multiple Candidate Kernels

    Science.gov (United States)

    Wu, Michael C.; Maity, Arnab; Lee, Seunggeun; Simmons, Elizabeth M.; Harmon, Quaker E.; Lin, Xinyi; Engel, Stephanie M.; Molldrem, Jeffrey J.; Armistead, Paul M.

    2013-01-01

    Joint testing for the cumulative effect of multiple single nucleotide polymorphisms grouped on the basis of prior biological knowledge has become a popular and powerful strategy for the analysis of large scale genetic association studies. The kernel machine (KM) testing framework is a useful approach that has been proposed for testing associations between multiple genetic variants and many different types of complex traits by comparing pairwise similarity in phenotype between subjects to pairwise similarity in genotype, with similarity in genotype defined via a kernel function. An advantage of the KM framework is its flexibility: choosing different kernel functions allows for different assumptions concerning the underlying model and can allow for improved power. In practice, it is difficult to know which kernel to use a priori since this depends on the unknown underlying trait architecture and selecting the kernel which gives the lowest p-value can lead to inflated type I error. Therefore, we propose practical strategies for KM testing when multiple candidate kernels are present based on constructing composite kernels and based on efficient perturbation procedures. We demonstrate through simulations and real data applications that the procedures protect the type I error rate and can lead to substantially improved power over poor choices of kernels and only modest differences in power versus using the best candidate kernel. PMID:23471868
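The composite-kernel strategy can be sketched numerically. The candidate kernels (linear and identity-by-state), the trace normalization, and the equal weights below are common choices used here as assumptions, not the paper's exact procedure:

```python
import numpy as np

def linear_kernel(G):
    # Linear genotype kernel for an n-by-m genotype matrix (0/1/2 counts).
    return G @ G.T

def ibs_kernel(G):
    # Identity-by-state kernel: 1 minus the scaled L1 genotype distance.
    d = np.abs(G[:, None, :] - G[None, :, :]).sum(axis=-1)
    return 1.0 - d / (2.0 * G.shape[1])

def composite_kernel(G, w=0.5):
    # Convex combination of trace-normalized candidate kernels
    # (the normalization puts them on a comparable scale).
    Ks = [linear_kernel(G), ibs_kernel(G)]
    Ks = [K / np.trace(K) for K in Ks]
    return w * Ks[0] + (1.0 - w) * Ks[1]

def km_score_stat(y, G, w=0.5):
    # Kernel-machine variance-component score statistic on centered
    # phenotypes: Q = (y - ybar)' K (y - ybar).
    r = y - y.mean()
    return float(r @ composite_kernel(G, w) @ r)
```

The p-value machinery (mixture-of-chi-square null distribution, perturbation procedures) is omitted; the sketch only shows how candidate kernels are combined before computing the score statistic.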

  6. An analysis of 1-D smoothed particle hydrodynamics kernels

    International Nuclear Information System (INIS)

    Fulk, D.A.; Quinn, D.W.

    1996-01-01

    In this paper, the smoothed particle hydrodynamics (SPH) kernel is analyzed, resulting in measures of merit for one-dimensional SPH. Various methods of obtaining an objective measure of the quality and accuracy of the SPH kernel are addressed. Since the kernel is the key element in the SPH methodology, this should be of primary concern to any user of SPH. The results of this work are two measures of merit, one for smooth data and one near shocks. The measure of merit for smooth data is shown to be quite accurate and a useful delineator of better and poorer kernels. The measure of merit for non-smooth data is not quite as accurate, but results indicate the kernel is much less important for these types of problems. In addition to the theory, 20 kernels are analyzed using the measure of merit demonstrating the general usefulness of the measure of merit and the individual kernels. In general, it was decided that bell-shaped kernels perform better than other shapes. 12 refs., 16 figs., 7 tabs
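As a concrete instance of the bell-shaped kernels favored in this analysis, here is the standard 1-D cubic spline SPH kernel together with a check of its unit normalization (the kernel is standard; the code itself is an illustrative sketch):

```python
import numpy as np

def cubic_spline_w(x, h):
    # Standard 1-D cubic spline (M4) SPH kernel with support radius 2h.
    # sigma = 2/(3h) normalizes the kernel to unit integral in one dimension.
    q = np.abs(x) / h
    sigma = 2.0 / (3.0 * h)
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
         np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w
```

The kernel is bell-shaped (maximal at x = 0, monotonically decreasing, compactly supported), which is the property the paper's measures of merit reward on smooth data.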

  7. Strictly diagonal holomorphic functions on Banach spaces

    Directory of Open Access Journals (Sweden)

    O. I. Fedak

    2016-01-01

    Full Text Available In this paper we investigate the boundedness of holomorphic functionals on a Banach space with a normalized basis $\\{e_n\\}$ which have a very special form $f(x)=f(0)+\\sum_{n=1}^\\infty c_nx_n^n$ and which we call strictly diagonal. We consider under which conditions strictly diagonal functions are entire and uniformly continuous on every ball of a fixed radius.

  8. Non-supersymmetric matrix strings from generalized Yang-Mills theory on arbitrary Riemann surfaces

    International Nuclear Information System (INIS)

    Billo, M.; D'Adda, A.; Provero, P.

    2000-01-01

    We quantize pure 2d Yang-Mills theory on an arbitrary Riemann surface in the gauge where the field strength is diagonal. Twisted sectors originate, as in Matrix string theory, from permutations of the eigenvalues around homotopically non-trivial loops. These sectors, that must be discarded in the usual quantization due to divergences occurring when two eigenvalues coincide, can be consistently kept if one modifies the action by introducing a coupling of the field strength to the space-time curvature. This leads to a generalized Yang-Mills theory whose action reduces to the usual one in the limit of zero curvature. After integrating over the non-diagonal components of the gauge fields, the theory becomes a free string theory (sum over unbranched coverings) with a U(1) gauge theory on the world-sheet. This is shown to be equivalent to a lattice theory with a gauge group which is the semi-direct product of S_N and U(1)^N. By using well known results on the statistics of coverings, the partition function on arbitrary Riemann surfaces and the kernel functions on surfaces with boundaries are calculated. Extensions to include branch points and non-abelian groups on the world-sheet are briefly commented upon

  9. SU-E-T-329: Dosimetric Impact of Implementing Metal Artifact Reduction Methods and Metal Energy Deposition Kernels for Photon Dose Calculations

    International Nuclear Information System (INIS)

    Huang, J; Followill, D; Howell, R; Liu, X; Mirkovic, D; Stingo, F; Kry, S

    2015-01-01

    Purpose: To investigate two strategies for reducing dose calculation errors near metal implants: use of CT metal artifact reduction methods and implementation of metal-based energy deposition kernels in the convolution/superposition (C/S) method. Methods: Radiochromic film was used to measure the dose upstream and downstream of titanium and Cerrobend implants. To assess the dosimetric impact of metal artifact reduction methods, dose calculations were performed using baseline, uncorrected images and three metal artifact reduction methods: Philips O-MAR, GE's monochromatic gemstone spectral imaging (GSI) using dual-energy CT, and GSI imaging with metal artifact reduction software applied (MARs). To assess the impact of metal kernels, titanium and silver kernels were implemented into a commercial collapsed cone C/S algorithm. Results: The CT artifact reduction methods were more successful for titanium than Cerrobend. Interestingly, for beams traversing the metal implant, we found that errors in the dimensions of the metal in the CT images were more important for dose calculation accuracy than reduction of imaging artifacts. The MARs algorithm caused a distortion in the shape of the titanium implant that substantially worsened the calculation accuracy. In comparison to water kernel dose calculations, metal kernels resulted in better modeling of the increased backscatter dose at the upstream interface but decreased accuracy directly downstream of the metal. We also found that the success of metal kernels was dependent on dose grid size, with smaller calculation voxels giving better accuracy. Conclusion: Our study yielded mixed results, with neither the metal artifact reduction methods nor the metal kernels being globally effective at improving dose calculation accuracy. However, some successes were observed. The MARs algorithm decreased errors downstream of Cerrobend by a factor of two, and metal kernels resulted in more accurate backscatter dose upstream of metals. Thus

  10. MVDR Algorithm Based on Estimated Diagonal Loading for Beamforming

    Directory of Open Access Journals (Sweden)

    Yuteng Xiao

    2017-01-01

    Full Text Available Beamforming algorithms are widely used in many signal processing fields. At present, the typical beamforming algorithm is MVDR (Minimum Variance Distortionless Response). However, the performance of the MVDR algorithm relies on an accurate covariance matrix, and it declines dramatically when the covariance matrix is inaccurate. To solve this problem, after studying the beamforming array signal model and the MVDR algorithm, we improve the MVDR algorithm with estimated diagonal loading for beamforming. An MVDR optimization model based on diagonal loading compensation is established, and the interval of the diagonal loading compensation value is deduced on the basis of matrix theory. The optimal diagonal loading value within the interval is then determined experimentally. The experimental results show that, compared with existing algorithms, the proposed algorithm is practical and effective.
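The diagonal-loading step can be sketched as follows. The array geometry, snapshot count and loading value below are illustrative assumptions; the paper's contribution is how to estimate the loading value, which is not reproduced here:

```python
import numpy as np

def steering_vector(n, theta, d=0.5):
    # Steering vector of an n-element uniform linear array
    # (element spacing d in wavelengths, look angle theta in radians).
    k = np.arange(n)
    return np.exp(-2j * np.pi * d * k * np.sin(theta))

def mvdr_weights(R, a, loading=0.0):
    # Diagonally loaded MVDR weights:
    #   w = (R + loading*I)^(-1) a / (a^H (R + loading*I)^(-1) a)
    Rl = R + loading * np.eye(len(a))
    Ri_a = np.linalg.solve(Rl, a)
    return Ri_a / (a.conj() @ Ri_a)
```

The loading term regularizes a poorly estimated (even rank-deficient) sample covariance matrix while preserving the distortionless constraint w^H a = 1.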

  11. Working memory and individual differences in the encoding of vertical, horizontal and diagonal symmetry.

    Science.gov (United States)

    Rossi-Arnaud, Clelia; Pieroni, Laura; Spataro, Pietro; Baddeley, Alan

    2012-09-01

    Previous studies, using a modified version of the sequential Corsi block task to examine the impact of symmetry on visuospatial memory, showed an advantage of vertical symmetry over non-symmetrical sequences, but no effect of horizontal or diagonal symmetry. The present four experiments investigated the mechanisms underlying the encoding of vertical, horizontal and diagonal configurations using simultaneous presentation and a dual-task paradigm. Results indicated that the recall of vertically symmetric arrays was always better than that of all other patterns and was not influenced by any of the concurrent tasks. Performance with horizontally or diagonally symmetrical patterns differed, with high-performing participants showing little effect of concurrent tasks, while low performers were disrupted by concurrent visuospatial and executive tasks. Verbal interference had no effect on either group. Implications for processes involved in the encoding of symmetry are discussed, together with the crucial importance of individual differences. Copyright © 2012 Elsevier B.V. All rights reserved.

  12. Jet energy loss in quark-gluon plasma. Kinetic theory with a Bhatnagar-Gross-Krook collisional kernel

    Energy Technology Data Exchange (ETDEWEB)

    Han, Cheng; Hou, De-fu; Li, Jia-rong [Central China Normal University, Key Laboratory of Quark and Lepton Physics (MOE) and Institute of Particle Physics, Wuhan, Hubei (China); Jiang, Bing-feng [Hubei University for Nationalities, Center for Theoretical Physics and School of Sciences, Enshi, Hubei (China)

    2017-10-15

    The dielectric functions ε_L, ε_T of the quark-gluon plasma (QGP) are derived within the framework of kinetic theory with a BGK-type collisional kernel. The collision effect, manifested by the collision rate, is encoded in the dielectric functions. Based on the derived dielectric functions we study the collisional energy loss suffered by a fast parton traveling through the QGP. The numerical results show that the collision rate increases the energy loss. (orig.)
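The BGK (Bhatnagar-Gross-Krook) kernel referred to here replaces the full collision integral by relaxation toward a local equilibrium distribution at a collision rate ν; schematically (a standard textbook form, not the paper's specific QGP transport equation):

```latex
\left( \partial_t + \mathbf{v}\cdot\nabla_{\mathbf{x}} \right) f(\mathbf{x},\mathbf{p},t)
  = -\,\nu \left[ f(\mathbf{x},\mathbf{p},t) - f_{\mathrm{eq}}(\mathbf{p}) \right]
```

In the limit ν → 0 one recovers the collisionless (Vlasov) kinetic theory that underlies the standard hard-thermal-loop dielectric functions, so ν is the single parameter through which the collision effect enters ε_L and ε_T.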

  13. Off-diagonal ekpyrotic scenarios and equivalence of modified, massive and/or Einstein gravity

    Directory of Open Access Journals (Sweden)

    Sergiu I. Vacaru

    2016-01-01

    Full Text Available Using our anholonomic frame deformation method, we show how generic off-diagonal cosmological solutions depending, in general, on all spacetime coordinates and undergoing a phase of ultra-slow contraction can be constructed in massive gravity. New classes of locally anisotropic and (in)homogeneous cosmological metrics with open and closed spatial geometries are found and studied. The late-time acceleration is present due to effective cosmological terms induced by nonlinear off-diagonal interactions and graviton mass. The off-diagonal cosmological metrics and related Stückelberg fields are constructed in explicit form up to nonholonomic frame transforms of the Friedmann–Lemaître–Robertson–Walker (FLRW) coordinates. We show that the solutions include matter, graviton mass and other effective sources modeling nonlinear gravitational and matter field interactions in modified and/or massive gravity, with polarization of physical constants and deformations of metrics, which may explain certain dark energy and dark matter effects. The conditions are stated and analyzed under which such configurations mimic interesting solutions in general relativity and its modifications and recast the general Painlevé–Gullstrand and FLRW metrics. Finally, we elaborate on a reconstruction procedure for a subclass of off-diagonal cosmological solutions which describe cyclic and ekpyrotic universes, with an emphasis on open issues and observable signatures.

  14. Emergency Entry with One Control Torque: Non-Axisymmetric Diagonal Inertia Matrix

    Science.gov (United States)

    Llama, Eduardo Garcia

    2011-01-01

    In another work, a method was presented, primarily conceived as an emergency backup system, that addressed the problem of a space capsule that needed to execute a safe atmospheric entry from an arbitrary initial attitude and angular rate in the absence of nominal control capability. The proposed concept permits the arrest of a tumbling motion, orientation to the heat-shield-forward position and the attainment of a ballistic roll rate of a rigid spacecraft with the use of control in one axis only. To show the feasibility of such a concept, the technique of single input single output (SISO) feedback linearization using the Lie derivative method was employed and the problem was solved for different numbers of jets and for different configurations of the inertia matrix: the axisymmetric inertia matrix (I_xx > I_yy = I_zz), a partially complete inertia matrix with I_xx > I_yy > I_zz, I_xz ≠ 0, and a realistic complete inertia matrix with I_xx > I_yy > I_zz, I_ij ≠ 0. The closed-loop stability of the proposed non-linear control on the total angle of attack, Θ, was analyzed through the zero dynamics of the internal dynamics for the case where the inertia matrix is axisymmetric (I_xx > I_yy = I_zz). This note focuses on the problem of the diagonal non-axisymmetric inertia matrix (I_xx > I_yy > I_zz), which is halfway between the axisymmetric and the partially complete inertia matrices. In this note, the control law for this type of inertia matrix will be determined and its closed-loop stability will be analyzed using the same methods that were used in the other work. In particular, it will be proven that the control system is stable in closed loop when the actuators only provide a roll torque.

  15. Diagonal chromatography to study plant protein modifications.

    Science.gov (United States)

    Walton, Alan; Tsiatsiani, Liana; Jacques, Silke; Stes, Elisabeth; Messens, Joris; Van Breusegem, Frank; Goormachtig, Sofie; Gevaert, Kris

    2016-08-01

    An interesting asset of diagonal chromatography, which we have introduced for contemporary proteome research, is its high versatility concerning proteomic applications. Indeed, the peptide modification or sorting step that is required between consecutive peptide separations can easily be altered and thereby allows for the enrichment of specific, though different types of peptides. Here, we focus on the application of diagonal chromatography for the study of modifications of plant proteins. In particular, we show how diagonal chromatography allows for studying proteins processed by proteases, protein ubiquitination, and the oxidation of protein-bound methionines. We discuss the actual sorting steps needed for each of these applications and the obtained results. This article is part of a Special Issue entitled: Plant Proteomics--a bridge between fundamental processes and crop production, edited by Dr. Hans-Peter Mock. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Classical limit of diagonal form factors and HHL correlators

    Energy Technology Data Exchange (ETDEWEB)

    Bajnok, Zoltan [MTA Lendület Holographic QFT Group, Wigner Research Centre,H-1525 Budapest 114, P.O.B. 49 (Hungary); Janik, Romuald A. [Institute of Physics, Jagiellonian University,ul. Łojasiewicza 11, 30-348 Kraków (Poland)

    2017-01-16

We propose an expression for the classical limit of diagonal form factors in which we integrate the corresponding observable over the moduli space of classical solutions. In infinite volume the integral has to be regularized by proper subtractions, and we present the one which corresponds to the classical limit of the connected diagonal form factors. In finite volume the integral is finite and can be expressed in terms of the classical infinite volume diagonal form factors and subvolumes of the moduli space. We carefully analyze the periodicity properties of the finite volume moduli space and find a classical analogue of the Bethe-Yang equations. By applying the results to the heavy-heavy-light three point functions we can express their strong coupling limit in terms of the classical limit of the sine-Gordon diagonal form factors.

  17. Kernel-based whole-genome prediction of complex traits: a review.

    Science.gov (United States)

    Morota, Gota; Gianola, Daniel

    2014-01-01

    Prediction of genetic values has been a focus of applied quantitative genetics since the beginning of the 20th century, with renewed interest following the advent of the era of whole genome-enabled prediction. Opportunities offered by the emergence of high-dimensional genomic data fueled by post-Sanger sequencing technologies, especially molecular markers, have driven researchers to extend Ronald Fisher and Sewall Wright's models to confront new challenges. In particular, kernel methods are gaining consideration as a regression method of choice for genome-enabled prediction. Complex traits are presumably influenced by many genomic regions working in concert with others (clearly so when considering pathways), thus generating interactions. Motivated by this view, a growing number of statistical approaches based on kernels attempt to capture non-additive effects, either parametrically or non-parametrically. This review centers on whole-genome regression using kernel methods applied to a wide range of quantitative traits of agricultural importance in animals and plants. We discuss various kernel-based approaches tailored to capturing total genetic variation, with the aim of arriving at an enhanced predictive performance in the light of available genome annotation information. Connections between prediction machines born in animal breeding, statistics, and machine learning are revisited, and their empirical prediction performance is discussed. Overall, while some encouraging results have been obtained with non-parametric kernels, recovering non-additive genetic variation in a validation dataset remains a challenge in quantitative genetics.
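
As a concrete illustration of the kernel regression machinery these reviews discuss, the following is a minimal numpy sketch of kernel ridge regression on simulated marker data. The genotype matrix, the Gaussian bandwidth and the ridge parameter are all illustrative assumptions, not values taken from the review:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data (invented): 100 individuals x 500 markers coded 0/1/2,
# with a purely additive genetic signal plus noise.
X = rng.integers(0, 3, size=(100, 500)).astype(float)
beta = rng.normal(0.0, 0.1, size=500)
y = X @ beta + rng.normal(0.0, 0.5, size=100)

def gaussian_kernel(A, B, bandwidth):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq / bandwidth)

# Kernel ridge regression: solve (K + lam*I) alpha = y, predict with K @ alpha.
K = gaussian_kernel(X, X, bandwidth=X.shape[1])
lam = 1.0
alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)
y_hat = K @ alpha

r = np.corrcoef(y, y_hat)[0, 1]
print(f"in-sample fit correlation: {r:.3f}")
```

Swapping `gaussian_kernel` for another similarity measure (pedigree- or annotation-informed kernels, for instance) changes only one line, which is the flexibility the review emphasizes.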

  18. Kernel-based whole-genome prediction of complex traits: a review

    Directory of Open Access Journals (Sweden)

    Gota eMorota

    2014-10-01

Full Text Available Prediction of genetic values has been a focus of applied quantitative genetics since the beginning of the 20th century, with renewed interest following the advent of the era of whole genome-enabled prediction. Opportunities offered by the emergence of high-dimensional genomic data fueled by post-Sanger sequencing technologies, especially molecular markers, have driven researchers to extend Ronald Fisher and Sewall Wright's models to confront new challenges. In particular, kernel methods are gaining consideration as a regression method of choice for genome-enabled prediction. Complex traits are presumably influenced by many genomic regions working in concert with others (clearly so when considering pathways), thus generating interactions. Motivated by this view, a growing number of statistical approaches based on kernels attempt to capture non-additive effects, either parametrically or non-parametrically. This review centers on whole-genome regression using kernel methods applied to a wide range of quantitative traits of agricultural importance in animals and plants. We discuss various kernel-based approaches tailored to capturing total genetic variation, with the aim of arriving at an enhanced predictive performance in the light of available genome annotation information. Connections between prediction machines born in animal breeding, statistics, and machine learning are revisited, and their empirical prediction performance is discussed. Overall, while some encouraging results have been obtained with non-parametric kernels, recovering non-additive genetic variation in a validation dataset remains a challenge in quantitative genetics.

  19. Effect of mixing scanner types and reconstruction kernels on the characterization of lung parenchymal pathologies: emphysema, interstitial pulmonary fibrosis and normal non-smokers

    Science.gov (United States)

    Xu, Ye; van Beek, Edwin J.; McLennan, Geoffrey; Guo, Junfeng; Sonka, Milan; Hoffman, Eric

    2006-03-01

In this study we utilize our texture characterization software (3-D AMFM) to characterize interstitial lung diseases (including emphysema) based on MDCT-generated volumetric data using 3-dimensional texture features. We sought to test whether the scanner and reconstruction filter (kernel) type affect the classification of lung diseases using the 3-D AMFM. We collected MDCT images in three subject groups: emphysema (n=9), interstitial pulmonary fibrosis (IPF) (n=10), and normal non-smokers (n=9). In each group, images were scanned either on a Siemens Sensation 16 or 64-slice scanner (B50f or B30 reconstruction kernel) or a Philips 4-slice scanner (B reconstruction kernel). A total of 1516 volumes of interest (VOIs; 21x21 pixels in plane) were marked by two chest imaging experts using the Iowa Pulmonary Analysis Software Suite (PASS). We calculated 24 volumetric features. Bayesian methods were used for classification. Images from different scanners/kernels were combined in all possible combinations to test how robust the tissue classification was relative to the differences in image characteristics. We used 10-fold cross validation for testing the result. Sensitivity, specificity and accuracy were calculated. One-way analysis of variance (ANOVA) was used to compare the classification results between the various combinations of scanner and reconstruction kernel types. This study yielded a sensitivity of 94%, 91%, 97%, and 93% for emphysema, ground-glass, honeycombing, and normal non-smoker patterns, respectively, using a mixture of all three subject groups. The specificity for these characterizations was 97%, 99%, 99%, and 98%, respectively. The F test of the ANOVA showed no significant difference at the 0.05 level between the different combinations of data with respect to scanner and convolution kernel type. Since different MDCT and reconstruction kernel types did not show significant differences with regard to the classification result, this study suggests that the 3-D AMFM can be used to characterize lung parenchymal pathologies across mixed scanner and reconstruction kernel types.

  20. Resummed memory kernels in generalized system-bath master equations

    International Nuclear Information System (INIS)

    Mavros, Michael G.; Van Voorhis, Troy

    2014-01-01

Generalized master equations provide a concise formalism for studying reduced population dynamics. Usually, these master equations require a perturbative expansion of the memory kernels governing the dynamics; in order to prevent divergences, these expansions must be resummed. Resummation techniques of perturbation series are ubiquitous in physics, but they have not been readily studied for the time-dependent memory kernels used in generalized master equations. In this paper, we present a comparison of different resummation techniques for such memory kernels up to fourth order. We study specifically the spin-boson Hamiltonian as a model system-bath Hamiltonian, treating the diabatic coupling between the two states as a perturbation. A novel derivation of the fourth-order memory kernel for the spin-boson problem is presented; then, the second- and fourth-order kernels are evaluated numerically for a variety of spin-boson parameter regimes. We find that resumming the kernels through fourth order using a Padé approximant results in divergent populations in the strong electronic coupling regime due to a singularity introduced by the nature of the resummation, and thus recommend a non-divergent exponential resummation (the “Landau-Zener resummation” of previous work). The inclusion of fourth-order effects in a Landau-Zener-resummed kernel is shown to improve both the dephasing rate and the obedience of detailed balance over simpler prescriptions like the non-interacting blip approximation, showing a relatively quick convergence on the exact answer. The results suggest that including higher-order contributions to the memory kernel of a generalized master equation and performing an appropriate resummation can provide a numerically exact solution to system-bath dynamics for a general spectral density, opening the way to a new class of methods for treating system-bath dynamics.
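
The resummation idea can be illustrated on a toy series rather than the spin-boson memory kernel itself: the truncated Taylor series of f(x) = 1/(1 + x) diverges badly outside its radius of convergence, while a low-order Padé approximant built from the same coefficients recovers the function exactly. The function and the orders below are illustrative choices, not the paper's kernel:

```python
import numpy as np
from scipy.interpolate import pade

coeffs = [1.0, -1.0, 1.0, -1.0]            # Taylor coefficients of 1/(1 + x)
p, q = pade(coeffs, 1)                     # [2/1] Pade approximant p(x)/q(x)

x = 3.0
partial_sum = np.polyval(coeffs[::-1], x)  # 1 - 3 + 9 - 27 = -20: divergent
resummed = p(x) / q(x)                     # 0.25, the exact value of 1/(1 + x)
print(partial_sum, resummed)
```

The same pattern, replacing the truncated perturbation series by a rational (or exponential) ansatz with matching low-order coefficients, is what the kernel resummations in the abstract perform.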

  1. Deep Restricted Kernel Machines Using Conjugate Feature Duality.

    Science.gov (United States)

    Suykens, Johan A K

    2017-08-01

The aim of this letter is to propose a theory of deep restricted kernel machines, offering new foundations for deep learning with kernel machines. From the viewpoint of deep learning, it is partially related to restricted Boltzmann machines, which are characterized by visible and hidden units in a bipartite graph without hidden-to-hidden connections, and to deep learning extensions such as deep belief networks and deep Boltzmann machines. From the viewpoint of kernel machines, it includes least squares support vector machines for classification and regression, kernel principal component analysis (PCA), matrix singular value decomposition, and Parzen-type models. A key element is to first characterize these kernel machines in terms of so-called conjugate feature duality, yielding a representation with visible and hidden units. It is shown how this is related to the energy form in restricted Boltzmann machines, with continuous variables in a nonprobabilistic setting. In this new framework of so-called restricted kernel machine (RKM) representations, the dual variables correspond to hidden features. Deep RKMs are obtained by coupling the RKMs. The method is illustrated for a deep RKM consisting of three levels, with a least squares support vector machine regression level and two kernel PCA levels. In its primal form, deep feedforward neural networks can also be trained within this framework.

  2. Off-diagonal deformations of Kerr metrics and black ellipsoids in heterotic supergravity

    International Nuclear Information System (INIS)

    Vacaru, Sergiu I.; Irwin, Klee

    2017-01-01

Geometric methods for constructing exact solutions of equations of motion with first-order α′ corrections to the heterotic supergravity action, implying a nontrivial Yang-Mills sector and six-dimensional, 6-d, almost-Kaehler internal spaces, are studied. In 10-d spacetimes, general parametrizations for generic off-diagonal metrics, nonlinear and linear connections, and matter sources, for which the equations of motion decouple in very general forms, are considered. This allows us to construct a variety of exact solutions when the coefficients of fundamental geometric/physical objects depend on all higher-dimensional spacetime coordinates via corresponding classes of generating and integration functions, generalized effective sources and integration constants. Such generalized solutions are determined by generic off-diagonal metrics and nonlinear and/or linear connections; in particular, as configurations which are warped/compactified to lower dimensions and for Levi-Civita connections. The corresponding metrics can have (non-)Killing and/or Lie algebra symmetries and/or describe (1+2)-d and/or (1+3)-d domain wall configurations, with possible warping on nearly almost-Kaehler manifolds, with gravitational and gauge instantons for nonlinear vacuum configurations and effective polarizations of cosmological and interaction constants encoding string gravity effects. A series of examples of exact solutions describing generic off-diagonal supergravity modifications to black hole/ellipsoid and solitonic configurations is provided and analyzed. We prove that it is possible to reproduce the Kerr and other types of black hole solutions in general relativity (with certain types of string corrections) in the 4-d case, and to generalize the solutions to non-vacuum configurations in (super-)gravity/string theories. (orig.)

  3. Off-diagonal deformations of Kerr metrics and black ellipsoids in heterotic supergravity

    Energy Technology Data Exchange (ETDEWEB)

    Vacaru, Sergiu I. [Quantum Gravity Research, Topanga, CA (United States); University ' ' Al. I. Cuza' ' , Project IDEI, Iasi (Romania); Irwin, Klee [Quantum Gravity Research, Topanga, CA (United States)

    2017-01-15

Geometric methods for constructing exact solutions of equations of motion with first-order α′ corrections to the heterotic supergravity action, implying a nontrivial Yang-Mills sector and six-dimensional, 6-d, almost-Kaehler internal spaces, are studied. In 10-d spacetimes, general parametrizations for generic off-diagonal metrics, nonlinear and linear connections, and matter sources, for which the equations of motion decouple in very general forms, are considered. This allows us to construct a variety of exact solutions when the coefficients of fundamental geometric/physical objects depend on all higher-dimensional spacetime coordinates via corresponding classes of generating and integration functions, generalized effective sources and integration constants. Such generalized solutions are determined by generic off-diagonal metrics and nonlinear and/or linear connections; in particular, as configurations which are warped/compactified to lower dimensions and for Levi-Civita connections. The corresponding metrics can have (non-)Killing and/or Lie algebra symmetries and/or describe (1+2)-d and/or (1+3)-d domain wall configurations, with possible warping on nearly almost-Kaehler manifolds, with gravitational and gauge instantons for nonlinear vacuum configurations and effective polarizations of cosmological and interaction constants encoding string gravity effects. A series of examples of exact solutions describing generic off-diagonal supergravity modifications to black hole/ellipsoid and solitonic configurations is provided and analyzed. We prove that it is possible to reproduce the Kerr and other types of black hole solutions in general relativity (with certain types of string corrections) in the 4-d case, and to generalize the solutions to non-vacuum configurations in (super-)gravity/string theories. (orig.)

  4. Data-variant kernel analysis

    CERN Document Server

    Motai, Yuichi

    2015-01-01

    Describes and discusses the variants of kernel analysis methods for data types that have been intensely studied in recent years This book covers kernel analysis topics ranging from the fundamental theory of kernel functions to its applications. The book surveys the current status, popular trends, and developments in kernel analysis studies. The author discusses multiple kernel learning algorithms and how to choose the appropriate kernels during the learning phase. Data-Variant Kernel Analysis is a new pattern analysis framework for different types of data configurations. The chapters include

  5. Nutrition quality of extraction mannan residue from palm kernel cake on broiler chicken

    Science.gov (United States)

    Tafsin, M.; Hanafi, N. D.; Kejora, E.; Yusraini, E.

    2018-02-01

This study aims to determine the nutritional quality of the residue of palm kernel cake after mannan extraction for broiler chickens, by evaluating physical quality (specific gravity, bulk density and compacted bulk density), chemical quality (proximate analysis and Van Soest test) and a biological test (metabolizable energy). Treatments were composed of T0: palm kernel cake extracted with aquadest (control), T1: palm kernel cake extracted with acetic acid (CH3COOH) 1%, T2: palm kernel cake extracted with aquadest + mannanase enzyme 100 u/l and T3: palm kernel cake extracted with acetic acid (CH3COOH) 1% + mannanase enzyme 100 u/l. The results showed that mannan extraction had a significant effect (P<0.05) in improving the physical quality, and numerically increased the crude protein value and decreased the NDF (Neutral Detergent Fiber) value. Treatments had a highly significant influence (P<0.01) on the metabolizable energy value of the palm kernel cake residue in broiler chickens. It can be concluded that extraction with aquadest + mannanase enzyme 100 u/l yields the best nutrient quality of palm kernel cake residue for broiler chicken.

  6. Separability of three qubit Greenberger-Horne-Zeilinger diagonal states

    Science.gov (United States)

    Han, Kyung Hoon; Kye, Seung-Hyeok

    2017-04-01

We characterize the separability of three qubit GHZ diagonal states in terms of their entries. This enables us to check the separability of GHZ diagonal states without decomposition into a sum of pure product states. In the course of the discussion, we show that the necessary criterion of Gühne (2011, 'Entanglement criteria and full separability of multi-qubit quantum states', Phys. Lett. A 375 406-10) for (full) separability of three qubit GHZ diagonal states is sufficient, with a simpler formula. The main tool is to use entanglement witnesses which are tri-partite Choi matrices of positive bi-linear maps.

  7. Co-inoculation of aflatoxigenic and non-aflatoxigenic strains of Aspergillus flavus to study fungal invasion, colonization, and competition in maize kernels

    Directory of Open Access Journals (Sweden)

    Zuzana eHruska

    2014-03-01

Full Text Available A currently utilized pre-harvest biocontrol method involves field inoculations with non-aflatoxigenic Aspergillus flavus strains, a tactic shown to strategically suppress native aflatoxin-producing strains and effectively decrease aflatoxin contamination in corn. The present in situ study focuses on tracking the invasion and colonization of an aflatoxigenic A. flavus strain (AF70), labeled with green fluorescent protein (GFP), in the presence of a non-aflatoxigenic A. flavus biocontrol strain (AF36), to better understand the competitive interaction between these two strains in seed tissue of corn (Zea mays). Corn kernels that had been co-inoculated with GFP-labeled AF70 and wild-type AF36 were cross-sectioned and observed under UV and blue light to determine the outcome of competition between these strains. After imaging, all kernels were analyzed for aflatoxin levels. There appeared to be a population difference between the co-inoculated AF70-GFP+AF36 and the individual AF70-GFP tests, both visually and with pixel count analysis. The GFP allowed us to observe that AF70-GFP inside the kernels was suppressed up to 82% when co-inoculated with AF36, indicating that AF36 inhibited the progression of AF70-GFP. This was in agreement with images taken of whole kernels, where AF36 exhibited more robust external growth compared to AF70-GFP. The suppressed growth of AF70-GFP was reflected in a corresponding (up to 73%) suppression in aflatoxin levels. Our results indicate that the decrease in aflatoxin production correlated with population depression of the aflatoxigenic fungus by the biocontrol strain, supporting the theory of competitive exclusion through robust propagation and fast colonization by the non-aflatoxigenic fungus.

  8. Flour quality and kernel hardness connection in winter wheat

    Directory of Open Access Journals (Sweden)

    Szabó B. P.

    2016-12-01

Full Text Available Kernel hardness is controlled by friabilin protein and it depends on the relation between the protein matrix and starch granules. Friabilin is present in high concentration in soft grain varieties and in low concentration in hard grain varieties. The high-gluten, hard wheat flour generally contains about 12.0–13.0% crude protein under Mid-European conditions. The relationship between wheat protein content and kernel texture is usually positive, and kernel texture influences the power consumption during milling. Hard-textured wheat grains require more grinding energy than soft-textured grains.

  9. Structured Kernel Dictionary Learning with Correlation Constraint for Object Recognition.

    Science.gov (United States)

    Wang, Zhengjue; Wang, Yinghua; Liu, Hongwei; Zhang, Hao

    2017-06-21

In this paper, we propose a new discriminative non-linear dictionary learning approach, called correlation constrained structured kernel KSVD, for object recognition. The objective function for dictionary learning contains a reconstructive term and a discriminative term. In the reconstructive term, signals are implicitly non-linearly mapped into a space where a structured kernel dictionary, each sub-dictionary of which lies in the span of the mapped signals from the corresponding class, is established. In the discriminative term, by analyzing the classification mechanism, the correlation constraint is proposed in kernel form, constraining the correlations between different discriminative codes and restricting the coefficient vectors to be transformed into a feature space where the features are highly correlated within a class and nearly independent between classes. The objective function is optimized by the proposed structured kernel KSVD. During the classification stage, the specific form of the discriminative feature need not be known; its inner product, with the kernel matrix embedded, is available and is suitable for a linear SVM classifier. Experimental results demonstrate that the proposed approach outperforms many state-of-the-art dictionary learning approaches for face, scene and synthetic aperture radar (SAR) vehicle target recognition.

  10. Kernel Methods for Mining Instance Data in Ontologies

    Science.gov (United States)

    Bloehdorn, Stephan; Sure, York

The amount of ontologies and meta data available on the Web is constantly growing. The successful application of machine learning techniques for learning ontologies from textual data, i.e. mining for the Semantic Web, contributes to this trend. However, no principled approaches exist so far for mining from the Semantic Web. We investigate how machine learning algorithms can be made amenable for directly taking advantage of the rich knowledge expressed in ontologies and associated instance data. Kernel methods have been successfully employed in various learning tasks and provide a clean framework for interfacing between non-vectorial data and machine learning algorithms. In this spirit, we express the problem of mining instances in ontologies as the problem of defining valid corresponding kernels. We present a principled framework for designing such kernels by means of decomposing the kernel computation into specialized kernels for selected characteristics of an ontology, which can be flexibly assembled and tuned. Initial experiments on real-world Semantic Web data yield promising results and show the usefulness of our approach.
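
The closure property that makes such kernel composition valid can be sketched directly: positive-weighted sums (and products) of kernels are again kernels, so specialized sub-kernels for different ontology characteristics can be assembled into one Gram matrix. The instances, sub-kernels and weights below are invented for illustration:

```python
import numpy as np

# Toy instances: a class label plus a set of property names (made-up data).
instances = [
    ("Person",  {"name", "age"}),
    ("Person",  {"name", "email"}),
    ("Project", {"name", "budget"}),
]

def k_label(a, b):
    return 1.0 if a[0] == b[0] else 0.0      # matching-class kernel

def k_props(a, b):
    return float(len(a[1] & b[1]))           # set-intersection kernel

def k_combined(a, b, w1=1.0, w2=0.5):
    # A positive-weighted sum of valid kernels is itself a valid kernel.
    return w1 * k_label(a, b) + w2 * k_props(a, b)

K = np.array([[k_combined(a, b) for b in instances] for a in instances])
# The assembled Gram matrix stays positive semidefinite:
print(np.linalg.eigvalsh(K).round(3))
```

Tuning the weights (or multiplying sub-kernels instead of summing them) is the "flexibly assembled and tuned" step the abstract refers to.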

  11. Computational Lower Bounds Using Diagonalization

    Indian Academy of Sciences (India)

Computational Lower Bounds Using Diagonalization – Languages, Turing Machines and Complexity Classes. M V Panduranga Rao. General Article, Resonance – Journal of Science Education, Volume 14, Issue 7, July 2009, pp. 682-690.

  12. Finite-Time Attractivity for Diagonally Dominant Systems with Off-Diagonal Delays

    Directory of Open Access Journals (Sweden)

    T. S. Doan

    2012-01-01

    Full Text Available We introduce a notion of attractivity for delay equations which are defined on bounded time intervals. Our main result shows that linear delay equations are finite-time attractive, provided that the delay is only in the coupling terms between different components, and the system is diagonally dominant. We apply this result to a nonlinear Lotka-Volterra system and show that the delay is harmless and does not destroy finite-time attractivity.

  13. Approximate kernel competitive learning.

    Science.gov (United States)

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be calculated and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL) method, which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling works for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-paralleled approximate kernel competitive learning (PAKCL) method based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates approximate kernel competitive learning for large scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. The proposed methods also achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches. Copyright © 2014 Elsevier Ltd. All rights reserved.
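
The sampling idea behind such approximations can be sketched with a standard Nyström low-rank construction, which is a related (not identical) technique to the paper's AKCL: a subset of landmark points stands in for the full kernel matrix. The data and all parameters below are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 3))            # illustrative data

def rbf(A, B, gamma=0.1):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

# Nystrom-style surrogate K ~= C W^+ C^T from m sampled landmarks; in a real
# pipeline the full n x n kernel matrix is never formed or stored.
m = 50
idx = rng.choice(len(X), size=m, replace=False)
C = rbf(X, X[idx])                        # n x m cross-kernel
W = C[idx]                                # m x m landmark kernel
K_approx = C @ np.linalg.pinv(W) @ C.T

K_full = rbf(X, X)                        # formed here only to measure the error
rel_err = np.linalg.norm(K_full - K_approx) / np.linalg.norm(K_full)
print(f"relative approximation error: {rel_err:.3f}")
```

Any kernel algorithm that only needs kernel evaluations, competitive learning included, can then operate on the factor `C` instead of the full matrix.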

  14. Implementation of pencil kernel and depth penetration algorithms for treatment planning of proton beams

    International Nuclear Information System (INIS)

    Russell, K.R.; Saxner, M.; Ahnesjoe, A.; Montelius, A.; Grusell, E.; Dahlgren, C.V.

    2000-01-01

The implementation of two algorithms for calculating dose distributions for radiation therapy treatment planning of intermediate energy proton beams is described. A pencil kernel algorithm and a depth penetration algorithm have been incorporated into a commercial three-dimensional treatment planning system (Helax-TMS, Helax AB, Sweden) to allow conformal planning techniques using irregularly shaped fields, proton range modulation, range modification and dose calculation for non-coplanar beams. The pencil kernel algorithm is developed from the Fermi-Eyges formalism and Molière multiple-scattering theory with range straggling corrections applied. The depth penetration algorithm is based on the energy loss in the continuous slowing down approximation, with simple correction factors applied to the beam penumbra region, and has been implemented for fast, interactive treatment planning. Modelling of the effects of air gaps and of range-modifying device thickness and position is implicit to both algorithms. Measured and calculated dose values are compared for a therapeutic proton beam in both homogeneous and heterogeneous phantoms of varying complexity. Both algorithms model the beam penumbra as a function of depth in a homogeneous phantom with acceptable accuracy. Results show that the pencil kernel algorithm is required for modelling the dose perturbation effects of scattering in heterogeneous media. (author)
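
A toy one-dimensional sketch of the pencil-kernel idea: at each depth the entrance fluence is convolved with a Gaussian whose width grows with depth, standing in for the Fermi-Eyges multiple-scattering spread. The field size, grid spacing and sigma(z) law below are invented, not the paper's clinical parameters:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Entrance fluence of a 4 cm square field on a 0.1 cm grid (invented values).
dx = 0.1
fluence = np.zeros(201)
fluence[80:121] = 1.0

def sigma_cm(z_cm):
    # Assumed linear growth of the lateral Gaussian spread with depth.
    return 0.5 + 0.3 * z_cm

depths = (0.0, 2.0, 4.0)
profiles = [gaussian_filter1d(fluence, sigma_cm(z) / dx) for z in depths]

def penumbra_cm(p):
    """20%-80% rising-edge width of a lateral profile."""
    i20 = np.argmax(p > 0.2 * p.max())
    i80 = np.argmax(p > 0.8 * p.max())
    return (i80 - i20) * dx

print([round(penumbra_cm(p), 2) for p in profiles])  # widens with depth
```

This reproduces qualitatively the depth-dependent penumbra broadening the abstract says both algorithms must model.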

  15. Quantum Monte Carlo diagonalization method as a variational calculation

    International Nuclear Information System (INIS)

    Mizusaki, Takahiro; Otsuka, Takaharu; Honma, Michio.

    1997-01-01

A stochastic method for performing large-scale shell model calculations is presented, which utilizes the auxiliary field Monte Carlo technique and the diagonalization method. This method overcomes the limitations of conventional shell model diagonalization and can greatly widen the feasibility of shell model calculations with realistic interactions for spectroscopic studies of nuclear structure. (author)

  16. Classification With Truncated Distance Kernel.

    Science.gov (United States)

    Huang, Xiaolin; Suykens, Johan A K; Wang, Shuning; Hornegger, Joachim; Maier, Andreas

    2018-05-01

This brief proposes a truncated distance (TL1) kernel, which results in a classifier that is nonlinear in the global region but linear in each subregion. With this kernel, the subregion structure can be trained using all the training data, and local linear classifiers can be established simultaneously. The TL1 kernel adapts well to nonlinearity and is suitable for problems which require different nonlinearities in different areas. Though the TL1 kernel is not positive semidefinite, some classical kernel learning methods are still applicable, which means that the TL1 kernel can be directly used in standard toolboxes by replacing the kernel evaluation. In numerical experiments, the TL1 kernel with a pregiven parameter achieves similar or better performance than the radial basis function kernel with its parameter tuned by cross validation, implying that the TL1 kernel is a promising nonlinear kernel for classification tasks.
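
A minimal sketch of our reading of the truncated distance kernel, k(x, z) = max(rho - ||x - z||_1, 0); the exact expression and the choice of rho are assumptions for illustration, not quoted from the brief:

```python
import numpy as np

def tl1_kernel(A, B, rho):
    """Truncated L1 kernel (assumed form): linear in the L1 distance inside a
    ball of radius rho, exactly zero outside it; symmetric but not PSD."""
    d1 = np.abs(A[:, None, :] - B[None, :, :]).sum(axis=-1)
    return np.maximum(rho - d1, 0.0)

X = np.array([[0.0, 0.0],
              [0.2, 0.1],
              [5.0, 5.0]])
K = tl1_kernel(X, X, rho=1.0)
print(K)  # nearby points interact linearly; distant points get similarity 0
```

A precomputed matrix like `K` can be handed to a standard toolbox (e.g. an SVM accepting precomputed kernels), which is the drop-in usage the brief describes.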

  17. Diagonalization and Jordan Normal Form--Motivation through "Maple"[R

    Science.gov (United States)

    Glaister, P.

    2009-01-01

    Following an introduction to the diagonalization of matrices, one of the more difficult topics for students to grasp in linear algebra is the concept of Jordan normal form. In this note, we show how the important notions of diagonalization and Jordan normal form can be introduced and developed through the use of the computer algebra package…
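
The contrast the note builds on can be reproduced in a few lines (numpy here rather than Maple): a matrix with distinct eigenvalues diagonalizes as A = P D P^-1, while a defective matrix has too few independent eigenvectors, which is exactly where the Jordan normal form enters. The matrices are illustrative:

```python
import numpy as np

# Diagonalizable: distinct eigenvalues (5 and 2) give an invertible
# eigenvector matrix P, and P D P^-1 reconstructs A.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])
evals, P = np.linalg.eig(A)
A_rebuilt = P @ np.diag(evals) @ np.linalg.inv(P)

# Defective: eigenvalue 2 is repeated but admits only one independent
# eigenvector, so J cannot be diagonalized; its Jordan form is J itself.
J = np.array([[2.0, 1.0],
              [0.0, 2.0]])
_, V = np.linalg.eig(J)
rank_V = np.linalg.matrix_rank(V)   # rank 1: eigenvectors do not span the plane

print(np.allclose(A, A_rebuilt), rank_V)
```

The failure of `V` to be invertible is the computational symptom that motivates passing from diagonalization to Jordan normal form.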

  18. Iterative algorithm for joint zero diagonalization with application in blind source separation.

    Science.gov (United States)

    Zhang, Wei-Tao; Lou, Shun-Tian

    2011-07-01

    A new iterative algorithm for the nonunitary joint zero diagonalization of a set of matrices is proposed for blind source separation applications. On one hand, since the zero diagonalizer of the proposed algorithm is constructed iteratively by successive multiplications of an invertible matrix, the singular solutions that occur in the existing nonunitary iterative algorithms are naturally avoided. On the other hand, compared to the algebraic method for joint zero diagonalization, the proposed algorithm requires fewer matrices to be zero diagonalized to yield even better performance. The extension of the algorithm to the complex and nonsquare mixing cases is also addressed. Numerical simulations on both synthetic data and blind source separation using time-frequency distributions illustrate the performance of the algorithm and provide a comparison to the leading joint zero diagonalization schemes.

  19. Exact diagonalization of the D-dimensional spatially confined quantum harmonic oscillator

    Directory of Open Access Journals (Sweden)

    Kunle Adegoke

    2016-01-01

    Full Text Available In the existing literature various numerical techniques have been developed to quantize the confined harmonic oscillator in higher dimensions. In obtaining the energy eigenvalues, such methods often involve indirect approaches such as searching for the roots of hypergeometric functions or numerically solving a differential equation. In this paper, however, we derive an explicit matrix representation for the Hamiltonian of a confined quantum harmonic oscillator in higher dimensions, thus facilitating direct diagonalization.
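
    The paper's explicit matrix representation is not reproduced in this abstract, but the idea of quantizing a confined oscillator by direct diagonalization can be illustrated with a finite-difference Hamiltonian (a sketch under assumed units hbar = m = omega = 1; the box width L and grid size N are arbitrary choices):

```python
import numpy as np

# 1-D harmonic oscillator confined to [-L/2, L/2] by hard walls:
# discretize H = -1/2 d^2/dx^2 + 1/2 x^2 on a grid and diagonalize directly.
L, N = 10.0, 1000
x = np.linspace(-L / 2, L / 2, N + 2)[1:-1]           # interior grid points
h = x[1] - x[0]
H = (np.diag(1.0 / h**2 + 0.5 * x**2)                  # kinetic diagonal + potential
     + np.diag(-0.5 / h**2 * np.ones(N - 1), 1)        # nearest-neighbour coupling
     + np.diag(-0.5 / h**2 * np.ones(N - 1), -1))
E = np.linalg.eigvalsh(H)[:3]                          # lowest confined levels
```

    For a box this wide the walls barely perturb the low-lying spectrum, so the eigenvalues come out close to the unconfined values 0.5, 1.5, 2.5; shrinking L raises them, which is the confinement effect the paper studies.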

  20. Exact Heat Kernel on a Hypersphere and Its Applications in Kernel SVM

    Directory of Open Access Journals (Sweden)

    Chenchao Zhao

    2018-01-01

    Full Text Available Many contemporary statistical learning methods assume a Euclidean feature space. This paper presents a method for defining similarity based on hyperspherical geometry and shows that it often improves the performance of support vector machine compared to other competing similarity measures. Specifically, the idea of using heat diffusion on a hypersphere to measure similarity has been previously proposed and tested by Lafferty and Lebanon [1], demonstrating promising results based on a heuristic heat kernel obtained from the zeroth order parametrix expansion; however, how well this heuristic kernel agrees with the exact hyperspherical heat kernel remains unknown. This paper presents a higher order parametrix expansion of the heat kernel on a unit hypersphere and discusses several problems associated with this expansion method. We then compare the heuristic kernel with an exact form of the heat kernel expressed in terms of a uniformly and absolutely convergent series in high-dimensional angular momentum eigenmodes. Being a natural measure of similarity between sample points dwelling on a hypersphere, the exact kernel often shows superior performance in kernel SVM classifications applied to text mining, tumor somatic mutation imputation, and stock market analysis.

  1. Explicit signal to noise ratio in reproducing kernel Hilbert spaces

    DEFF Research Database (Denmark)

    Gomez-Chova, Luis; Nielsen, Allan Aasbjerg; Camps-Valls, Gustavo

    2011-01-01

    This paper introduces a nonlinear feature extraction method based on kernels for remote sensing data analysis. The proposed approach is based on the minimum noise fraction (MNF) transform, which maximizes the signal variance while also minimizing the estimated noise variance. We here propose...... an alternative kernel MNF (KMNF) in which the noise is explicitly estimated in the reproducing kernel Hilbert space. This enables KMNF dealing with non-linear relations between the noise and the signal features jointly. Results show that the proposed KMNF provides the most noise-free features when confronted...

  2. On the energy-momentum tensor in non-linear σ-models with torsion

    International Nuclear Information System (INIS)

    Dorn, H.; Otto, H.J.

    1987-10-01

    We study the renormalization properties of the energy-momentum tensor in a σ-model with torsion. Our normal product version contains besides the classical expression and the trace anomaly an off diagonal term proportional to the squared torsion. Specialized to a group manifold this term is crucial to reproduce the correct perturbative expansion of the energy-momentum tensor in Sugawara form. (orig.)

  3. Diagonally arranged louvers in integrated facade systems - effects on the interior lighting environment

    Directory of Open Access Journals (Sweden)

    Yutaka Misawa

    2015-06-01

    Full Text Available Building facades play an important role in creating the urban landscape and can be used effectively to reduce energy usage and environmental impacts, while also incorporating structural seismic-resistant elements in the building perimeter zone. To address these opportunities, the authors propose an integrated facade concept which satisfies architectural facade and environmental design requirements. In Europe, remarkable facade engineering developments have taken place over the last two decades, resulting in elegant facades and a reduction in environmental impact; however, modifications are needed in Japan to take account of the different seismic and environmental situations. To satisfy these requirements, this paper proposes the use of a diagonally disposed louver system. Diagonally arranged louvers have the potential to provide both seismic resistance and environmental adaptation. In many cases, louvers have been designed but not installed due to concerns relating to restricted external sight lines and low levels of natural lighting in the building interior. To overcome these problems, full-scale diagonally arranged louver mock-ups were created to evaluate illumination levels, the quality of the internal daylight environment and external appearance. Interior illumination levels resulting from a series of mock-up experiments were evaluated and correlated with results from a daylight analysis tool.

  4. Improved modeling of clinical data with kernel methods.

    Science.gov (United States)

    Daemen, Anneleen; Timmerman, Dirk; Van den Bosch, Thierry; Bottomley, Cecilia; Kirk, Emma; Van Holsbeke, Caroline; Valentin, Lil; Bourne, Tom; De Moor, Bart

    2012-02-01

    Despite the rise of high-throughput technologies, clinical data such as age, gender and medical history guide clinical management for most diseases and examinations. To improve clinical management, available patient information should be fully exploited. This requires appropriate modeling of relevant parameters. When kernel methods are used, traditional kernel functions such as the linear kernel are often applied to the set of clinical parameters. These kernel functions, however, have their disadvantages due to the specific characteristics of clinical data, which are a mix of variable types, each with its own range. We propose a new kernel function specifically adapted to the characteristics of clinical data. The clinical kernel function provides a better representation of patients' similarity by equalizing the influence of all variables and taking into account the range r of the variables. Moreover, it is robust with respect to changes in r. Incorporated in a least squares support vector machine, the new kernel function results in significantly improved diagnosis, prognosis and prediction of therapy response. This is illustrated on four clinical data sets within gynecology, with an average increase in test area under the ROC curve (AUC) of 0.023, 0.021, 0.122 and 0.019, respectively. Moreover, when combining clinical parameters and expression data in three case studies on breast cancer, results improved overall with use of the new kernel function and when considering both data types in a weighted fashion, with a larger weight assigned to the clinical parameters. The increase in AUC with respect to a standard kernel function and/or unweighted data combination was at most 0.127, 0.042 and 0.118 for the three case studies. For clinical data consisting of variables of different types, the proposed kernel function--which takes into account the type and range of each variable--has been shown to be a better alternative for linear and non-linear classification problems.
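
    A sketch of a clinical-style kernel in the spirit described above (per-variable similarities averaged so every variable has equal influence; the toy variables and the fixed population ranges below are assumptions, not the authors' data):

```python
import numpy as np

def clinical_kernel(X, Y, ranges, categorical):
    """Per-variable similarity, averaged over variables:
    continuous/ordinal: (r - |x - y|) / r, with r the variable's range;
    categorical: 1 for an exact match, 0 otherwise."""
    p = X.shape[1]
    K = np.zeros((X.shape[0], Y.shape[0]))
    for j in range(p):
        d = np.abs(X[:, j][:, None] - Y[:, j][None, :])
        K += (d == 0).astype(float) if categorical[j] else (ranges[j] - d) / ranges[j]
    return K / p

# toy patients: age (continuous, assumed range 50) and a binary history flag
X = np.array([[35.0, 0.0], [60.0, 1.0], [45.0, 0.0]])
K = clinical_kernel(X, X, ranges=[50.0, 1.0], categorical=[False, True])
```

    Every entry lies in [0, 1] and k(x, x) = 1, so no single wide-ranged variable dominates the similarity, which is the property the abstract emphasizes.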

  5. An Experimental Study of the Growth of Laser Spark and Electric Spark Ignited Flame Kernels.

    Science.gov (United States)

    Ho, Chi Ming

    1995-01-01

    Better ignition sources are constantly in demand for enhancing spark ignition in practical applications such as automotive and liquid rocket engines. In response to this practical challenge, the present experimental study was conducted with the major objective of obtaining a better understanding of how spark formation, and hence spark characteristics, affect flame kernel growth. Two laser sparks and one electric spark were studied in air, propane-air, propane-air-nitrogen, methane-air, and methane-oxygen mixtures that were initially at ambient pressure and temperature. The growth of the kernels was monitored by imaging the kernels with shadowgraph systems, and by imaging the planar laser-induced fluorescence of the hydroxyl radicals inside the kernels. Characteristic dimensions and kernel structures were obtained from these images. Since different energy transfer mechanisms are involved in the formation of a laser spark as compared to an electric spark, a laser spark is insensitive to changes in mixture ratio and mixture type, while an electric spark is sensitive to changes in both. The detailed structures of the kernels in air and propane-air mixtures primarily depend on the spark characteristics. But the combustion heat released rapidly in methane-oxygen mixtures significantly modifies the kernel structure. Uneven spark energy distribution causes remarkably asymmetric kernel structure. The breakdown energy of a spark creates a blast wave that shows good agreement with the numerical point blast solution, and a succeeding complex spark-induced flow that agrees reasonably well with a simple puff model. The transient growth rates of the propane-air, propane-air-nitrogen, and methane-air flame kernels can be interpreted in terms of spark effects, flame stretch, and preferential diffusion. For a given mixture, a spark with higher breakdown energy produces a greater and longer-lasting enhancing effect on the kernel growth rate. By comparing the growth

  6. Subsampling Realised Kernels

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Hansen, Peter Reinhard; Lunde, Asger

    2011-01-01

    In a recent paper we have introduced the class of realised kernel estimators of the increments of quadratic variation in the presence of noise. We showed that this estimator is consistent and derived its limit distribution under various assumptions on the kernel weights. In this paper we extend our...... that subsampling is impotent, in the sense that subsampling has no effect on the asymptotic distribution. Perhaps surprisingly, for the efficient smooth kernels, such as the Parzen kernel, we show that subsampling is harmful as it increases the asymptotic variance. We also study the performance of subsampled...

  7. On the calculation of the eigenvalues of the Faddeev equation kernel on the nonphysical sheet of energy

    International Nuclear Information System (INIS)

    Moeller, K.

    1978-01-01

    A system of three particles interacting via a rank-1 separable potential is considered. For the Faddeev equation kernel of this system, a method is proposed for calculating the eigenvalues on the nonphysical sheet of the three-particle cms-energy. From consideration of the analytical structure of the eigenvalues in the energy plane it follows that the analytical continuations of the eigenvalues from the physical to the nonphysical region are different above and below the three-particle threshold. In this paper the continuation below the threshold is discussed. (author)

  8. Diagonalization of propagators in thermo field dynamics for relativistic quantum fields

    International Nuclear Information System (INIS)

    Henning, P.A.; Umezawa, H.

    1992-09-01

    Two-point functions for interacting quantum fields in statistical systems can be diagonalized by matrix transformations. It is shown that, within the framework of time-dependent Thermo Field Dynamics, this diagonalization can be understood as a thermal Bogoliubov transformation to non-interacting statistical quasi-particles. The condition for their unperturbed propagation relates these states to the thermodynamic properties of the system: it requires global equilibrium for stationary situations, or specifies the time evolution according to a kinetic equation. (orig.)

  9. Kernel abortion in maize. II. Distribution of 14C among kernel carboydrates

    International Nuclear Information System (INIS)

    Hanft, J.M.; Jones, R.J.

    1986-01-01

    This study was designed to compare the uptake and distribution of 14C among fructose, glucose, sucrose, and starch in the cob, pedicel, and endosperm tissues of maize (Zea mays L.) kernels induced to abort by high temperature with those that develop normally. Kernels cultured in vitro at 30 and 35°C were transferred to [14C]sucrose media 10 days after pollination. Kernels cultured at 35°C aborted prior to the onset of linear dry matter accumulation. Significant uptake into the cob, pedicel, and endosperm of radioactivity associated with the soluble and starch fractions of the tissues was detected after 24 hours in culture on labeled media. After 8 days in culture on [14C]sucrose media, 48 and 40% of the radioactivity associated with the cob carbohydrates was found in the reducing sugars at 30 and 35°C, respectively. Of the total carbohydrates, a higher percentage of label was associated with sucrose and a lower percentage with fructose and glucose in pedicel tissue of kernels cultured at 35°C compared to kernels cultured at 30°C. These results indicate that sucrose was not cleaved to fructose and glucose as rapidly during the unloading process in the pedicel of kernels induced to abort by high temperature. Kernels cultured at 35°C had a much lower proportion of label associated with endosperm starch (29%) than did kernels cultured at 30°C (89%). Kernels cultured at 35°C had a correspondingly higher proportion of 14C in endosperm fructose, glucose, and sucrose

  10. Hopping transport and electrical conductivity in one-dimensional systems with off-diagonal disorder

    International Nuclear Information System (INIS)

    Ma Songshan; Xu Hui; Li Yanfeng; Song Zhaoquan

    2007-01-01

    In this paper, we present a model to describe hopping transport and electrical conductivity of one-dimensional systems with off-diagonal disorder, in which electrons are transported via hopping between localized states. We find that off-diagonal disorder leads to delocalization and drastically enhances the electrical conductivity of systems. The model also quantitatively explains the temperature and electrical field dependence of the conductivity in one-dimensional systems with off-diagonal disorder. In addition, we also show the dependence of the conductivity on the strength of off-diagonal disorder
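
    A hedged numerical sketch of the kind of system the abstract discusses: a 1-D tight-binding chain with purely off-diagonal (hopping) disorder. The chain length and hopping distribution are assumptions for illustration, not the authors' model parameters:

```python
import numpy as np

# 1-D chain with zero on-site energies and random nearest-neighbour hoppings t_i:
# all disorder sits in the off-diagonal elements of H.
rng = np.random.default_rng(1)
N = 400
t = rng.uniform(0.5, 1.5, size=N - 1)        # assumed hopping distribution
H = np.diag(-t, 1) + np.diag(-t, -1)
E, psi = np.linalg.eigh(H)

# inverse participation ratio per eigenstate: ~1/N for extended, ~1 for localized
ipr = (psi**4).sum(axis=0)
```

    Because the lattice is bipartite and the disorder purely off-diagonal, the spectrum is exactly symmetric about E = 0; the IPR gives a simple diagnostic for the degree of localization of each eigenstate.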

  11. Theory and applications of generalized operator transforms for diagonalization of spin hamiltonians

    International Nuclear Information System (INIS)

    Schweiger, A.; Graf, F.; Rist, G.; Guenthard, Hs.H.

    1976-01-01

    A generalized transform formalism for vector operators is devised for diagonalization of a rather wide class of spin hamiltonians. The operator technique leads to equations for transformation matrices, for which analytical solutions are given. These allow analytical formulation of the transformed electron Zeeman term, the sum of the magnetic hyperfine and nuclear Zeeman term, the electric quadrupole term and the electronic and nuclear Zeeman coupling terms. The angular dependence of energy eigenvalues, frequencies and line strengths of ESR and ENDOR transitions to first order will be expressed as compact bilinear and quadratic forms of the columns of the matrix relating the molecular coordinate system to the laboratory system. Thereby the explicit calculation of rotation matrices may be completely avoided, though the latter formally express the operator transforms. The generalized operator transform is also carried out for the off-diagonal blocks originating from hyperfine interaction terms. This allows the second order energy terms to be expressed explicitly as compact hermitean forms of a simple structure, in particular the explicit structure of mixing terms between hyperfine interactions of different (sets of) nuclei is obtained. The relationship to the conventional Bleaney transform is discussed and the analogy to the generalized operator transform is worked out. (Auth.)

  12. Optimized Kernel Entropy Components.

    Science.gov (United States)

    Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau

    2017-06-01

    This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to a sorting of the importance of kernel eigenvectors by entropy instead of variance, as in kernel principal components analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by compacting the information into very few features (often just one or two). The proposed method produces features with higher expressive power. In particular, it is based on the independent component analysis framework, and introduces an extra rotation to the eigendecomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both methods is the selection of the kernel parameter, since it critically affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. The results of both methods are illustrated in different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, that the most successful rule for estimating the kernel parameter is based on maximum likelihood, and that OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
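
    The entropy-based ranking at the heart of KECA can be sketched as follows (the RBF bandwidth and toy data are arbitrary; OKECA's extra gradient-optimized rotation is omitted):

```python
import numpy as np

def keca_ranking(X, sigma):
    """Rank kernel eigenpairs by their contribution to the Renyi entropy
    estimate, lambda_i * (1^T u_i)^2, instead of by variance (kernel PCA)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma**2))            # Gaussian Gram matrix
    lam, U = np.linalg.eigh(K)
    contrib = lam * U.sum(axis=0) ** 2          # entropy contribution per eigenpair
    return K, contrib, np.argsort(contrib)[::-1]

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 3))
K, contrib, order = keca_ranking(X, sigma=1.0)
```

    The contributions sum to 1ᵀK1, the quantity behind the Renyi entropy estimate, so selecting the top-ranked components by `order` keeps most of the data entropy in few features.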

  13. Diagonal Limit for Conformal Blocks in d Dimensions

    CERN Document Server

    Hogervorst, Matthijs; Rychkov, Slava

    2013-01-01

    Conformal blocks in any number of dimensions depend on two variables z, zbar. Here we study their restrictions to the special "diagonal" kinematics z = zbar, previously found useful as a starting point for the conformal bootstrap analysis. We show that conformal blocks on the diagonal satisfy ordinary differential equations, third-order for spin zero and fourth-order for the general case. These ODEs determine the blocks uniquely and lead to an efficient numerical evaluation algorithm. For equal external operator dimensions, we find closed-form solutions in terms of finite sums of 3F2 functions.

  14. Renormalon-chain contributions to the non-singlet evolution kernels in [φ³]₆ and QCD

    International Nuclear Information System (INIS)

    Mikhajlov, S.V.

    1997-01-01

    The contributions to the non-singlet evolution kernels P(z) for the DGLAP equation and V(x,y) for the Brodsky-Lepage evolution equation are calculated for certain classes of diagrams which include renormalon chains. Closed expressions are obtained for the sums of contributions associated with these diagram classes. Calculations are performed in the [φ³]₆ model and in QCD in the MS-bar scheme. The contribution from one of the classes of diagrams dominates for a number of flavors N_f >> 1. For the latter case, a simple solution to the Brodsky-Lepage evolution equation is obtained

  15. Coupling individual kernel-filling processes with source-sink interactions into GREENLAB-Maize.

    Science.gov (United States)

    Ma, Yuntao; Chen, Youjia; Zhu, Jinyu; Meng, Lei; Guo, Yan; Li, Baoguo; Hoogenboom, Gerrit

    2018-02-13

    Failure to account for the variation of kernel growth in a cereal crop simulation model may cause serious deviations in the estimates of crop yield. The goal of this research was to revise the GREENLAB-Maize model to incorporate source- and sink-limited allocation approaches to simulate the dry matter accumulation of individual kernels of an ear (GREENLAB-Maize-Kernel). The model used potential individual kernel growth rates to characterize the individual potential sink demand. The remobilization of non-structural carbohydrates from reserve organs to kernels was also incorporated. Two years of field experiments were conducted to determine the model parameter values and to evaluate the model using two maize hybrids with different plant densities and pollination treatments. Detailed observations were made on the dimensions and dry weights of individual kernels and other above-ground plant organs throughout the seasons. Three basic traits characterizing an individual kernel were compared on simulated and measured individual kernels: (1) final kernel size; (2) kernel growth rate; and (3) duration of kernel filling. Simulations of individual kernel growth closely corresponded to experimental data. The model was able to reproduce the observed dry weight of plant organs well. Then, the source-sink dynamics and the remobilization of carbohydrates for kernel growth were quantified to show that remobilization processes accompanied source-sink dynamics during the kernel-filling process. We conclude that the model may be used to explore options for optimizing plant kernel yield by matching maize management to the environment, taking into account responses at the level of individual kernels. © The Author(s) 2018. Published by Oxford University Press on behalf of the Annals of Botany Company. All rights reserved.

  16. A novel adaptive kernel method with kernel centers determined by a support vector regression approach

    NARCIS (Netherlands)

    Sun, L.G.; De Visser, C.C.; Chu, Q.P.; Mulder, J.A.

    2012-01-01

    The optimality of the kernel number and kernel centers plays a significant role in determining the approximation power of nearly all kernel methods. However, the process of choosing optimal kernels is always formulated as a global optimization task, which is hard to accomplish. Recently, an

  17. Diagonalization of the mass matrices

    International Nuclear Information System (INIS)

    Rhee, S.S.

    1984-01-01

    It is possible to make 20 types of 3x3 mass matrices which are hermitian. We have obtained unitary matrices which diagonalize each mass matrix. Since the three elements of a mass matrix can be expressed in terms of the three eigenvalues m_i, we can also express the unitary matrix in terms of m_i. (Author)

  18. Growth, 14C-sucrose uptake, and metabolites of starch synthesis in apical and basal kernels of corn (Zea mays L.)

    International Nuclear Information System (INIS)

    Greenberg, J.M.

    1985-01-01

    Developing field-grown kernels of corn (Zea mays L. cv. Cornell 175) from the base and apex of the ear were sampled from seven to 70 days after pollination (DAP) and compared with respect to dry weight, ability to take up 14C-sucrose from solution in vitro, and content of sucrose, glucose, starch, glucose-1-P (G1P), glucose-6-P (G6P), fructose-6-P (F6P), ADP-glucose (ADPG), and UDP-glucose (UDPG). ADPG and UDPG were analyzed by HPLC. All other metabolites were analyzed enzymatically. Simultaneous hand-pollination of all ovaries in an ear did not reduce the difference between apical and basal kernels in dry weight, indicating that the later fertilization of apical kernels was not responsible for their lower mature dry weight. Detached kernels took up 14C-sucrose (0.3-400 mM) and glucose (5-100 mM) at rates linearly proportional to the sugar concentration. Glucose, fructose, and sorbitol did not inhibit uptake of 14C-sucrose. Uptake was not stimulated by 5 mM CaCl2 or the addition of buffers (pH 4.5-6.7) to the medium. Sulfhydryl reagents (PCMBS, NEM) and metabolic inhibitors (TNBS, DNP, NaF) did not reduce uptake. These observations suggest that sucrose is taken up by a non-saturable, non-energy-requiring mechanism. Sucrose uptake increased throughout development, especially at the stage when basal kernels began to accumulate more dry weight than apical kernels (10-20 DAP in freely pollinated ears; 25 DAP in synchronously pollinated ears). Hydrolysis of incorporated sucrose increased from 87% at 14 DAP to 99% by 57 DAP

  19. Protein Subcellular Localization with Gaussian Kernel Discriminant Analysis and Its Kernel Parameter Selection.

    Science.gov (United States)

    Wang, Shunfang; Nie, Bing; Yue, Kun; Fei, Yu; Li, Wenjia; Xu, Dongshu

    2017-12-15

    Kernel discriminant analysis (KDA) is a dimension reduction and classification algorithm based on the nonlinear kernel trick, which can be used to treat high-dimensional and complex biological data before classification processes such as protein subcellular localization. Kernel parameters have a great impact on the performance of the KDA model. Specifically, for KDA with the popular Gaussian kernel, selecting the scale parameter is still a challenging problem. Thus, this paper introduces the KDA method and proposes a new method for Gaussian kernel parameter selection, based on the fact that the differences between the reconstruction errors of edge normal samples and those of interior normal samples should be maximized for suitable kernel parameters. Experiments with various standard data sets for protein subcellular localization show that the overall accuracy of protein classification prediction with KDA is much higher than that without KDA. Meanwhile, the kernel parameter of KDA has a great impact on efficiency, and the proposed method can produce an optimum parameter, which makes the new algorithm not only perform as effectively as the traditional ones, but also reduce the computational time and thus improve efficiency.

  20. Predicting complex traits using a diffusion kernel on genetic markers with an application to dairy cattle and wheat data

    Science.gov (United States)

    2013-01-01

    Background Arguably, genotypes and phenotypes may be linked in functional forms that are not well addressed by the linear additive models that are standard in quantitative genetics. Therefore, developing statistical learning models for predicting phenotypic values from all available molecular information that are capable of capturing complex genetic network architectures is of great importance. Bayesian kernel ridge regression is a non-parametric prediction model proposed for this purpose. Its essence is to create a spatial distance-based relationship matrix called a kernel. Although the set of all single nucleotide polymorphism genotype configurations on which a model is built is finite, past research has mainly used a Gaussian kernel. Results We sought to investigate the performance of a diffusion kernel, which was specifically developed to model discrete marker inputs, using Holstein cattle and wheat data. This kernel can be viewed as a discretization of the Gaussian kernel. The predictive ability of the diffusion kernel was similar to that of non-spatial distance-based additive genomic relationship kernels in the Holstein data, but outperformed the latter in the wheat data. However, the difference in performance between the diffusion and Gaussian kernels was negligible. Conclusions It is concluded that the ability of a diffusion kernel to capture the total genetic variance is not better than that of a Gaussian kernel, at least for these data. Although the diffusion kernel as a choice of basis function may have potential for use in whole-genome prediction, our results imply that embedding genetic markers into a non-Euclidean metric space has very small impact on prediction. Our results suggest that use of the black box Gaussian kernel is justified, given its connection to the diffusion kernel and its similar predictive performance. PMID:23763755
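
    For binary marker codes, the diffusion kernel factorizes over loci into two-state heat kernels, and the normalized version has the standard closed form tanh(β) raised to the Hamming distance. A hedged sketch (the marker matrix and β are illustrative; this may differ in parameterization from the kernel used in the paper):

```python
import numpy as np

def diffusion_kernel_binary(X, Y, beta):
    """Diffusion (heat) kernel on binary marker codes, one two-state graph per
    locus, normalized so that k(x, x) = 1:
    k(x, y) = tanh(beta) ** hamming(x, y)."""
    hamming = np.abs(X[:, None, :] - Y[None, :, :]).sum(axis=2)
    return np.tanh(beta) ** hamming

rng = np.random.default_rng(3)
X = rng.integers(0, 2, size=(40, 25)).astype(float)   # 40 individuals, 25 markers
K = diffusion_kernel_binary(X, X, beta=1.0)
```

    The resulting Gram matrix is positive semidefinite (a product of per-locus heat kernels divided by a constant), so it can be dropped into kernel ridge regression in place of a Gaussian kernel.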

  1. Off-diagonal helicity density matrix elements for vector mesons produced at LEP

    International Nuclear Information System (INIS)

    Anselmino, M.; Bertini, M.; Quintairos, P.

    1997-05-01

    Final-state q-qbar interactions may give origin to nonzero values of the off-diagonal element ρ_{1,-1} of the helicity density matrix of vector mesons produced in e+e- annihilations, as confirmed by recent OPAL data on φ and D*'s. Predictions are given for ρ_{1,-1} of several mesons produced at large z and small p_T, collinear with the parent jet; the values obtained for φ and D* are in agreement with data. (author)

  2. Mixed kernel function support vector regression for global sensitivity analysis

    Science.gov (United States)

    Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng

    2017-11-01

    Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Among the many sensitivity analyses in the literature, the Sobol indices have attracted much attention, since they can provide accurate information for most models. In this paper, a mixed kernel function (MKF) based support vector regression (SVR) model is employed to evaluate the Sobol indices at low computational cost. By the proposed derivation, the estimation of the Sobol indices can be obtained by post-processing the coefficients of the SVR meta-model. The MKF is constituted by the orthogonal polynomial kernel function and the Gaussian radial basis kernel function; thus the MKF possesses both the global characteristic advantage of the polynomial kernel function and the local characteristic advantage of the Gaussian radial basis kernel function. The proposed approach is suitable for high-dimensional and non-linear problems. Performance of the proposed approach is validated with various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.
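
    The mixing idea can be sketched as a convex combination of a global polynomial kernel and a local Gaussian kernel (the weight lam, degree, and gamma below are assumptions; the paper uses an orthogonal polynomial kernel, for which an ordinary inhomogeneous polynomial kernel is substituted here):

```python
import numpy as np

def mixed_kernel(X, Y, lam=0.5, degree=3, gamma=1.0):
    """Mixed kernel function: lam * polynomial + (1 - lam) * Gaussian RBF.
    Remains positive semidefinite for lam in [0, 1]."""
    poly = (X @ Y.T + 1.0) ** degree                       # global behaviour
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return lam * poly + (1.0 - lam) * np.exp(-gamma * d2)  # + local behaviour

rng = np.random.default_rng(4)
X = rng.normal(size=(25, 4))
K = mixed_kernel(X, X)
```

    A callable with this (X, Y) -> Gram-matrix signature can be passed directly as the `kernel` argument of scikit-learn's SVR, should one want to reproduce the meta-modeling step.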

  3. Biomechanical pole and leg characteristics during uphill diagonal roller skiing.

    Science.gov (United States)

    Lindinger, Stefan Josef; Göpfert, Caroline; Stöggl, Thomas; Müller, Erich; Holmberg, Hans-Christer

    2009-11-01

    Diagonal skiing, a major classical technique, has hardly been investigated over the last two decades, although technique and racing velocities have developed substantially. The aims of the present study were to 1) analyse pole and leg kinetics and kinematics during submaximal uphill diagonal roller skiing and 2) identify biomechanical factors related to performance. Twelve elite skiers performed a time-to-exhaustion (performance) test on a treadmill. Joint kinematics and pole/plantar forces were recorded separately during diagonal roller skiing (9 degrees; 11 km/h). Performance was correlated with cycle length (r = 0.77). Push-off demonstrated performance correlations for impulse of leg force (r = 0.84), relative duration (r = -0.76), and knee flexion (r = 0.73) and extension ROM (r = 0.74). Relative time to peak pole force was associated with performance (r = 0.73). In summary, diagonal roller skiing performance was linked to 1) longer cycle length, 2) greater impulse of force during a shorter push-off with larger flexion/extension ROMs in the leg joints, 3) longer leg swing, and 4) later peak pole force, demonstrating the major key characteristics to be emphasised in training.

  4. 7 CFR 981.7 - Edible kernel.

    Science.gov (United States)

    2010-01-01

    7 CFR 981.7, Regulating Handling, Definitions: § 981.7 Edible kernel. Edible kernel means a kernel, piece, or particle of almond kernel that is not inedible. [41 FR 26852, June 30, 1976]

  5. Kernel versions of some orthogonal transformations

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    Kernel versions of orthogonal transformations such as principal components are based on a dual formulation also termed Q-mode analysis in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products of the original data are replaced...... by inner products between nonlinear mappings into higher dimensional feature space. Via kernel substitution also known as the kernel trick these inner products between the mappings are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of this kernel...... function. This means that we need not know the nonlinear mappings explicitly. Kernel principal component analysis (PCA) and kernel minimum noise fraction (MNF) analyses handle nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function...

  6. Model Selection in Kernel Ridge Regression

    DEFF Research Database (Denmark)

    Exterkate, Peter

    Kernel ridge regression is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts. This paper investigates the influence of the choice of kernel and the setting of tuning parameters on forecast accuracy. We review several popular kernels......, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. We interpret the latter two kernels in terms of their smoothing properties, and we relate the tuning parameters associated to all these kernels to smoothness measures of the prediction function and to the signal-to-noise ratio. Based...... on these interpretations, we provide guidelines for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study confirms the practical usefulness of these rules of thumb. Finally, the flexible and smooth functional forms provided by the Gaussian and Sinc kernels makes them widely...
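
The tuning-parameter guidance above can be illustrated with a toy version of the procedure: kernel ridge regression with a Gaussian kernel, where the bandwidth and ridge penalty are selected from a small grid by cross-validation. This is an illustrative sketch, not the paper's code; the grid values and data are arbitrary assumptions:

```python
import numpy as np

def krr_fit_predict(Xtr, ytr, Xte, kernel, lam):
    """Kernel ridge regression: alpha = (K + lam*I)^{-1} y; predict with k(x, .) @ alpha."""
    K = kernel(Xtr, Xtr)
    alpha = np.linalg.solve(K + lam * np.eye(len(Xtr)), ytr)
    return kernel(Xte, Xtr) @ alpha

def gaussian(A, B, sigma=1.0):
    """Gaussian kernel matrix between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

# Select (sigma, lam) from a small grid by 5-fold cross-validation
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(80, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=80)
folds = np.arange(80) % 5
best = None
for sigma in [0.5, 1.0, 2.0]:
    for lam in [1e-3, 1e-2, 1e-1]:
        mse = 0.0
        for f in range(5):
            tr, te = folds != f, folds == f
            pred = krr_fit_predict(X[tr], y[tr], X[te],
                                   lambda A, B: gaussian(A, B, sigma), lam)
            mse += ((pred - y[te]) ** 2).mean()
        if best is None or mse < best[0]:
            best = (mse, sigma, lam)
```

The winning pair `best[1:]` plays the role of the cross-validated smoothness/signal-to-noise setting discussed in the abstract.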

  7. Determination of the Iodine Value in Hydrogenated Palm Kernel Oil (HPKO) and Refined Bleached Deodorized Palm Kernel Oil (RBDPKO)

    OpenAIRE

    Sitompul, Monica Angelina

    2015-01-01

    The iodine value was determined by titration for several samples of Hydrogenated Palm Kernel Oil (HPKO) and Refined Bleached Deodorized Palm Kernel Oil (RBDPKO). The analysis gave iodine values of 0.16 g I2/100 g for Hydrogenated Palm Kernel Oil (A), 0.20 g I2/100 g for Hydrogenated Palm Kernel Oil (B) and 0.24 g I2/100 g for Hydrogenated Palm Kernel Oil (C), and 17.51 g I2/100 g for Refined Bleached Deodorized Palm Kernel Oil (A), Refined Bleached Deodorized Palm Kernel ...

  8. Off-diagonal generalization of the mixed-state geometric phase

    International Nuclear Information System (INIS)

    Filipp, Stefan; Sjoeqvist, Erik

    2003-01-01

    The concept of off-diagonal geometric phases for mixed quantal states in unitary evolution is developed. We show that these phases arise from three basic ideas: (1) fulfillment of quantum parallel transport of a complete basis, (2) a concept of mixed-state orthogonality adapted to unitary evolution, and (3) a normalization condition. We provide a method for computing the off-diagonal mixed-state phases to any order for unitarities that divide the parallel transported basis of Hilbert space into two parts: one part where each basis vector undergoes cyclic evolution and one part where all basis vectors are permuted among each other. We also demonstrate a purification based experimental procedure for the two lowest-order mixed-state phases and consider a physical scenario for a full characterization of the qubit mixed-state geometric phases in terms of polarization-entangled photon pairs. An alternative second order off-diagonal mixed-state geometric phase, which can be tested in single-particle experiments, is proposed

  9. Overview of real-time kernels at the Superconducting Super Collider Laboratory

    International Nuclear Information System (INIS)

    Low, K.; Acharya, S.; Allen, M.; Faught, E.; Haenni, D.; Kalbfleisch, C.

    1991-01-01

    The Superconducting Super Collider Laboratory (SSCL) will have many subsystems that will require real-time microprocessor control. Examples of such sub-systems requiring real-time controls are power supply ramp generators and quench protection monitors for the superconducting magnets. The authors plan on using a commercial multitasking real-time kernel in these systems. These kernels must perform in a consistent, reliable and efficient manner. Actual performance measurements have been conducted on four different kernels, all running on the same hardware platform. The measurements fall into two categories. Throughput measurements covering the 'non-real-time' aspects of the kernel include process creation/termination times, interprocess communication facilities involving messages, semaphores and shared memory and memory allocation/deallocation. Measurements concentrating on real-time response are context switch times, interrupt latencies and interrupt task response

  10. Overview of real-time kernels at the Superconducting Super Collider Laboratory

    International Nuclear Information System (INIS)

    Low, K.; Acharya, S.; Allen, M.; Faught, E.; Haenni, D.; Kalbfleisch, C.

    1991-05-01

    The Superconducting Super Collider Laboratory (SSCL) will have many subsystems that will require real-time microprocessor control. Examples of such sub-systems requiring real-time controls are power supply ramp generators and quench protection monitors for the superconducting magnets. We plan on using a commercial multitasking real-time kernel in these systems. These kernels must perform in a consistent, reliable and efficient manner. Actual performance measurements have been conducted on four different kernels, all running on the same hardware platform. The measurements fall into two categories. Throughput measurements covering the ''non-real-time'' aspects of the kernel include process creation/termination times, interprocess communication facilities involving messages, semaphores and shared memory and memory allocation/deallocation. Measurements concentrating on real-time response are context switch times, interrupt latencies and interrupt task response. 6 refs., 2 tabs

  11. 7 CFR 981.8 - Inedible kernel.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.8 Section 981.8 Agriculture... Regulating Handling Definitions § 981.8 Inedible kernel. Inedible kernel means a kernel, piece, or particle of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or...

  12. Numerical Aspects of Atomic Physics: Helium Basis Sets and Matrix Diagonalization

    Science.gov (United States)

    Jentschura, Ulrich; Noble, Jonathan

    2014-03-01

    We present a matrix diagonalization algorithm for complex symmetric matrices, which can be used in order to determine the resonance energies of auto-ionizing states of comparatively simple quantum many-body systems such as helium. The algorithm is based on multi-precision arithmetic and proceeds via a tridiagonalization of the complex symmetric (not necessarily Hermitian) input matrix using generalized Householder transformations. Example calculations involving so-called PT-symmetric quantum systems lead to reference values which pertain to the imaginary cubic perturbation (the imaginary cubic anharmonic oscillator). We then proceed to novel basis sets for the helium atom and present results for Bethe logarithms in hydrogen and helium, obtained using the enhanced numerical techniques. Some intricacies of ``canned'' algorithms such as those used in LAPACK will be discussed. Our algorithm, for complex symmetric matrices such as those describing cubic resonances after complex scaling, is faster than LAPACK's built-in routines for specific classes of input matrices. It also offers flexibility in terms of the calculation of the so-called implicit shift, which is used in order to ``pivot'' the system toward the convergence to diagonal form. We conclude with a wider overview.

  13. 7 CFR 981.408 - Inedible kernel.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.408 Section 981.408 Agriculture... Administrative Rules and Regulations § 981.408 Inedible kernel. Pursuant to § 981.8, the definition of inedible kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as...

  14. Improving prediction of heterodimeric protein complexes using combination with pairwise kernel.

    Science.gov (United States)

    Ruan, Peiying; Hayashida, Morihiro; Akutsu, Tatsuya; Vert, Jean-Philippe

    2018-02-19

    Since many proteins become functional only after they interact with their partner proteins and form protein complexes, it is essential to identify the sets of proteins that form complexes. Therefore, several computational methods have been proposed to predict complexes from the topology and structure of experimental protein-protein interaction (PPI) networks. These methods work well to predict complexes involving at least three proteins, but generally fail at identifying complexes involving only two different proteins, called heterodimeric complexes or heterodimers. There is however an urgent need for efficient methods to predict heterodimers, since the majority of known protein complexes are precisely heterodimers. In this paper, we use three promising kernel functions: the Min kernel and two pairwise kernels, the Metric Learning Pairwise Kernel (MLPK) and the Tensor Product Pairwise Kernel (TPPK). We also consider normalized forms of the Min kernel. Then, we combine the Min kernel or its normalized form with one of the pairwise kernels. We applied kernels based on PPI, domain, phylogenetic profile, and subcellular localization properties to predicting heterodimers. We then evaluate our method by employing C-Support Vector Classification (C-SVC), carrying out 10-fold cross-validation, and calculating the average F-measures. The results suggest that the combination of the normalized Min kernel and MLPK leads to the best F-measure and improves on the performance of our previous work, which had been the best existing method so far. We propose new methods to predict heterodimers, using a machine learning-based approach. We train a support vector machine (SVM) to discriminate interacting vs non-interacting protein pairs, based on information extracted from PPI, domain, phylogenetic profiles and subcellular localization. We evaluate in detail new kernel functions to encode these data, and report prediction performance that outperforms the state-of-the-art.
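
The kernels named in the abstract have simple closed forms. A hedged sketch (the paper's actual feature encodings are not reproduced here): the Min kernel sums elementwise minima of nonnegative feature vectors, TPPK symmetrises a product of protein-level kernel values over the two ways of matching pairs, and MLPK scores a pair of pairs through a squared difference of kernel values:

```python
import numpy as np

def min_kernel(x, y):
    """Min (histogram-intersection) kernel on nonnegative feature vectors."""
    return float(np.minimum(x, y).sum())

def tppk(a, b, c, d, k):
    """Tensor Product Pairwise Kernel: K((a,b),(c,d)) = k(a,c)k(b,d) + k(a,d)k(b,c)."""
    return k(a, c) * k(b, d) + k(a, d) * k(b, c)

def mlpk(a, b, c, d, k):
    """Metric Learning Pairwise Kernel: K((a,b),(c,d)) = (k(a,c) - k(a,d) - k(b,c) + k(b,d))^2."""
    return (k(a, c) - k(a, d) - k(b, c) + k(b, d)) ** 2

# Both pairwise kernels are invariant to the order of proteins within a pair,
# which is what makes them suitable for unordered protein pairs.
rng = np.random.default_rng(0)
a, b, c, d = rng.random((4, 5))
```

A pairwise Gram matrix built from either function can be fed directly to an SVM such as C-SVC with a precomputed kernel.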

  15. Kinematic approach to off-diagonal geometric phases of nondegenerate and degenerate mixed states

    International Nuclear Information System (INIS)

    Tong, D.M.; Oh, C.H.; Sjoeqvist, Erik; Filipp, Stefan; Kwek, L.C.

    2005-01-01

    Off-diagonal geometric phases have been developed in order to provide information of the geometry of paths that connect noninterfering quantal states. We propose a kinematic approach to off-diagonal geometric phases for pure and mixed states. We further extend the mixed-state concept proposed in [Phys. Rev. Lett. 90, 050403 (2003)] to degenerate density operators. The first- and second-order off-diagonal geometric phases are analyzed for unitarily evolving pairs of pseudopure states

  16. Model selection in kernel ridge regression

    DEFF Research Database (Denmark)

    Exterkate, Peter

    2013-01-01

    Kernel ridge regression is a technique to perform ridge regression with a potentially infinite number of nonlinear transformations of the independent variables as regressors. This method is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts....... The influence of the choice of kernel and the setting of tuning parameters on forecast accuracy is investigated. Several popular kernels are reviewed, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. The latter two kernels are interpreted in terms of their smoothing properties......, and the tuning parameters associated to all these kernels are related to smoothness measures of the prediction function and to the signal-to-noise ratio. Based on these interpretations, guidelines are provided for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study...

  17. LZW-Kernel: fast kernel utilizing variable length code blocks from LZW compressors for protein sequence classification.

    Science.gov (United States)

    Filatov, Gleb; Bauwens, Bruno; Kertész-Farkas, Attila

    2018-05-07

    Bioinformatics studies often rely on similarity measures between sequence pairs, which often pose a bottleneck in large-scale sequence analysis. Here, we present a new convolutional kernel function for protein sequences called the LZW-Kernel. It is based on code words identified with the Lempel-Ziv-Welch (LZW) universal text compressor. The LZW-Kernel is an alignment-free method; it is symmetric, positive, always yields 1.0 for self-similarity, and can be used directly with Support Vector Machines (SVMs) in classification problems, contrary to normalized compression distance (NCD), which often violates the distance metric properties in practice and requires further techniques to be used with SVMs. The LZW-Kernel is a one-pass algorithm, which makes it particularly suitable for big data applications. Our experimental studies on remote protein homology detection and protein classification tasks reveal that the LZW-Kernel closely approaches the performance of the Local Alignment Kernel (LAK) and the SVM-pairwise method combined with Smith-Waterman (SW) scoring at a fraction of the time. Moreover, the LZW-Kernel outperforms the SVM-pairwise method when combined with BLAST scores, which indicates that the LZW code words might be a better basis for similarity measures than local alignment approximations found with BLAST. In addition, the LZW-Kernel outperforms n-gram based mismatch kernels, hidden Markov model based SAM and Fisher kernel, and protein family based PSI-BLAST, among others. Further advantages include the LZW-Kernel's reliance on a simple idea, its ease of implementation, and its high speed, three times faster than BLAST and several orders of magnitude faster than SW or LAK in our tests. LZW-Kernel is implemented as a standalone C code and is a free open-source program distributed under GPLv3 license and can be downloaded from https://github.com/kfattila/LZW-Kernel. akerteszfarkas@hse.ru. Supplementary data are available at Bioinformatics Online.
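
The published LZW-Kernel is defined through the code blocks produced by the LZW compressor; the sketch below only illustrates the underlying idea. It uses the standard LZW dictionary-building pass to extract code words and a Jaccard overlap as a toy similarity. This is an assumption-laden stand-in, not the authors' kernel, but it shares the properties highlighted in the abstract: one pass, symmetric, and self-similarity exactly 1.0:

```python
def lzw_codewords(s):
    """One LZW pass over s; returns the set of code words (dictionary phrases)."""
    dictionary = set(s)  # seed with the single characters of s
    w = ""
    for c in s:
        wc = w + c
        if wc in dictionary:
            w = wc                # keep extending the current phrase
        else:
            dictionary.add(wc)    # a new code word is born; restart from c
            w = c
    return dictionary

def lzw_sim(a, b):
    """Toy similarity: Jaccard overlap of code-word sets (symmetric, self-similarity 1.0)."""
    A, B = lzw_codewords(a), lzw_codewords(b)
    return len(A & B) / len(A | B)
```

Because the dictionary grows only with repeated context, related sequences share long code words while unrelated ones overlap mostly on single characters.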

  18. Viscosity kernel of molecular fluids

    DEFF Research Database (Denmark)

    Puscasu, Ruslan; Todd, Billy; Daivis, Peter

    2010-01-01

    , temperature, and chain length dependencies of the reciprocal and real-space viscosity kernels are presented. We find that the density has a major effect on the shape of the kernel. The temperature range and chain lengths considered here have by contrast less impact on the overall normalized shape. Functional...... forms that fit the wave-vector-dependent kernel data over a large density and wave-vector range have also been tested. Finally, a structural normalization of the kernels in physical space is considered. Overall, the real-space viscosity kernel has a width of roughly 3–6 atomic diameters, which means...

  19. Non-standard interactions with high-energy atmospheric neutrinos at IceCube

    Energy Technology Data Exchange (ETDEWEB)

    Salvado, Jordi; Mena, Olga; Palomares-Ruiz, Sergio; Rius, Nuria [Instituto de Física Corpuscular (IFIC), CSIC-Universitat de València,Apartado de Correos 22085, E-46071 Valencia (Spain)

    2017-01-31

    Non-standard interactions in the propagation of neutrinos in matter can lead to significant deviations from expectations within the standard neutrino oscillation framework, and atmospheric neutrino detectors have been considered to set constraints. However, most previous works have focused on relatively low-energy atmospheric neutrino data. Here, we consider the one-year high-energy through-going muon data in IceCube, which has already been used to search for light sterile neutrinos, to constrain new interactions in the μτ-sector. In our analysis we include several systematic uncertainties on both the atmospheric neutrino flux and the detector properties, which are accounted for via nuisance parameters. After considering different primary cosmic-ray spectra and hadronic interaction models, we improve over previous analyses by using the latest data and showing that systematics currently have very little effect on the bound on the off-diagonal ε_μτ, with the 90% credible interval given by −6.0×10⁻³ < ε_μτ < 5.4×10⁻³, comparable to previous results. In addition, we also estimate the expected sensitivity after 10 years of collected data in IceCube and study the precision at which non-standard parameters could be determined for the case of ε_μτ near its current bound.
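
For reference, the effective matter Hamiltonian usually assumed in such analyses, with the non-standard interaction restricted to the μτ-sector as in the abstract, takes the following standard form (a textbook parametrization, not taken verbatim from the paper):

```latex
H_{\mathrm{mat}} \;=\; \frac{1}{2E}\,U\,\mathrm{diag}\!\left(0,\ \Delta m^2_{21},\ \Delta m^2_{31}\right)U^{\dagger}
\;+\; \sqrt{2}\,G_F\,n_e
\begin{pmatrix}
  1 & 0 & 0 \\
  0 & 0 & \varepsilon_{\mu\tau} \\
  0 & \varepsilon_{\mu\tau}^{*} & 0
\end{pmatrix},
```

where U is the PMNS mixing matrix, E the neutrino energy, and n_e the electron number density along the path; setting ε_μτ = 0 recovers standard oscillations in matter.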

  20. On the states with positive energy which result from the hamiltonian diagonalization on the oscillator basis

    International Nuclear Information System (INIS)

    Filippov, G.F.; Chopovsky, L.L.; Vasilevsky, V.S.

    1982-01-01

    The states of the continuous spectrum in a system of two interacting clusters are studied. It is shown that the Hamiltonian diagonalization on the oscillator basis isolates those states in a continuous spectrum whose amplitudes have a node at a certain number of oscillator quanta. As an example the interaction of the ⁴He and ³H nuclei is considered. These nuclei form a coupled system, ⁷Li

  1. Kernel learning algorithms for face recognition

    CERN Document Server

    Li, Jun-Bao; Pan, Jeng-Shyang

    2013-01-01

    Kernel Learning Algorithms for Face Recognition covers the framework of kernel based face recognition. This book discusses advanced kernel learning algorithms and their application to face recognition. This book also focuses on the theoretical derivation, the system framework and experiments involving kernel based face recognition. Included within are algorithms of kernel based face recognition, and also the feasibility of the kernel based face recognition method. This book provides researchers in the pattern recognition and machine learning area with advanced face recognition methods and its new

  2. A new three-dimensional equivalent circuit of diagonal type MHD generator

    International Nuclear Information System (INIS)

    Yoshida, Masaharu; Komaya, Kiyotoshi; Umoto, Juro

    1979-01-01

    For a large-scale diagonal type generator with oil-combustion gas plasma, a new three-dimensional equivalent circuit is proposed, which takes into account the leakage resistance of the duct insulator surface, the boundary layer, the ion slip, the effect of the finite electrode segmentation, etc. Next, from the relation between the Hall voltage per electrode-pitch region and the load current obtained with the equivalent circuit, a suitable size and number of the space elements per region are determined. Further, by comparing in detail the electrical performances of two types of diagonal generators, with diagonally conducting and with insulating sidewalls, the three-dimensional effects of the sidewalls are discussed. (author)

  3. Gaussian processes with optimal kernel construction for neuro-degenerative clinical onset prediction

    Science.gov (United States)

    Canas, Liane S.; Yvernault, Benjamin; Cash, David M.; Molteni, Erika; Veale, Tom; Benzinger, Tammie; Ourselin, Sébastien; Mead, Simon; Modat, Marc

    2018-02-01

    Gaussian Processes (GP) are a powerful tool to capture the complex time-variations of a dataset. In the context of medical imaging analysis, they allow robust modelling even in the case of highly uncertain or incomplete datasets. Predictions from a GP depend on the covariance kernel function selected to explain the data variance. To overcome this limitation, we propose a framework to identify the optimal covariance kernel function to model the data. The optimal kernel is defined as a composition of base kernel functions used to identify correlation patterns between data points. Our approach includes a modified version of the Compositional Kernel Learning (CKL) algorithm, in which we score the kernel families using a new energy function that depends on both the Bayesian Information Criterion (BIC) and the explained variance score. We applied the proposed framework to model the progression of neurodegenerative diseases over time, in particular the progression of autosomal dominantly-inherited Alzheimer's disease, and use it to predict the time to clinical onset of subjects carrying the genetic mutation.

  4. Direct Kernel Perceptron (DKP): ultra-fast kernel ELM-based classification with non-iterative closed-form weight calculation.

    Science.gov (United States)

    Fernández-Delgado, Manuel; Cernadas, Eva; Barro, Senén; Ribeiro, Jorge; Neves, José

    2014-02-01

    The Direct Kernel Perceptron (DKP) (Fernández-Delgado et al., 2010) is a very simple and fast kernel-based classifier, related to the Support Vector Machine (SVM) and to the Extreme Learning Machine (ELM) (Huang, Wang, & Lan, 2011), whose α-coefficients are calculated directly, without any iterative training, using an analytical closed-form expression which involves only the training patterns. The DKP, which is inspired by the Direct Parallel Perceptron, (Auer et al., 2008), uses a Gaussian kernel and a linear classifier (perceptron). The weight vector of this classifier in the feature space minimizes an error measure which combines the training error and the hyperplane margin, without any tunable regularization parameter. This weight vector can be translated, using a variable change, to the α-coefficients, and both are determined without iterative calculations. We calculate solutions using several error functions, achieving the best trade-off between accuracy and efficiency with the linear function. These solutions for the α coefficients can be considered alternatives to the ELM with a new physical meaning in terms of error and margin: in fact, the linear and quadratic DKP are special cases of the two-class ELM when the regularization parameter C takes the values C=0 and C=∞. The linear DKP is extremely efficient and much faster (over a vast collection of 42 benchmark and real-life data sets) than 12 very popular and accurate classifiers including SVM, Multi-Layer Perceptron, Adaboost, Random Forest and Bagging of RPART decision trees, Linear Discriminant Analysis, K-Nearest Neighbors, ELM, Probabilistic Neural Networks, Radial Basis Function neural networks and Generalized ART. Besides, despite its simplicity and extreme efficiency, DKP achieves higher accuracies than 7 out of 12 classifiers, exhibiting small differences with respect to the best ones (SVM, ELM, Adaboost and Random Forest), which are much slower. Thus, the DKP provides an easy and fast way
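
The abstract notes that the linear and quadratic DKP coincide with the two-class kernel ELM at C=0 and C=∞. The exact DKP closed-form expression is not given in the abstract, so the sketch below shows the generic non-iterative recipe of that family instead: solve once, with no iterative training, for the α-coefficients of a regularized Gaussian-kernel classifier. The data, parameter values, and function names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def gaussian_gram(A, B, gamma=1.0):
    """Gaussian kernel matrix between row-vector sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_elm_fit(X, y, C=1.0, gamma=1.0):
    """Closed-form alpha-coefficients, no iterative training: alpha = (K + I/C)^{-1} y."""
    K = gaussian_gram(X, X, gamma)
    return np.linalg.solve(K + np.eye(len(X)) / C, y)

def kernel_elm_predict(Xte, Xtr, alpha, gamma=1.0):
    """Two-class decision: sign of the kernel expansion at the test points."""
    return np.sign(gaussian_gram(Xte, Xtr, gamma) @ alpha)

# Toy two-class problem: two Gaussian blobs with labels -1/+1
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(-1, 0.5, (40, 2)), rng.normal(1, 0.5, (40, 2))])
y = np.r_[-np.ones(40), np.ones(40)]
alpha = kernel_elm_fit(X, y)
acc = (kernel_elm_predict(X, X, alpha) == y).mean()
```

The single linear solve is the whole "training" step, which is what makes this family of classifiers so fast relative to iteratively trained SVMs or MLPs.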

  5. Diagonalization of quark mass matrices and the Cabibbo-Kobayashi-Maskawa matrix

    International Nuclear Information System (INIS)

    Rasin, A.

    1997-08-01

    I discuss some general aspects of diagonalizing the quark mass matrices and list all possible parametrizations of the Cabibbo-Kobayashi-Maskawa (CKM) matrix in terms of three rotation angles and a phase. I systematically study the relation between the rotations needed to diagonalize the Yukawa matrices and the various parametrizations of the CKM matrix. (author). 17 refs, 1 tab
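
The diagonalization step discussed in the abstract can be demonstrated numerically: the left rotations that diagonalize the up- and down-type mass matrices (obtained here via SVD) combine into a unitary CKM matrix V = U_u† U_d. A sketch with random toy mass matrices, which are placeholders rather than physical Yukawa textures:

```python
import numpy as np

def left_rotation(M):
    """Left unitary from the SVD M = U diag(m) V^dagger; U diagonalizes M M^dagger."""
    U, s, Vh = np.linalg.svd(M)
    return U

rng = np.random.default_rng(3)
Mu = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))  # toy up-type mass matrix
Md = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))  # toy down-type mass matrix

# The CKM matrix is the mismatch between the left rotations of the two sectors
V = left_rotation(Mu).conj().T @ left_rotation(Md)
```

Unitarity of V is automatic, since it is a product of unitaries; the freedom to rephase quark fields is what reduces its physical content to three angles and one phase.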

  6. Wilson Dslash Kernel From Lattice QCD Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Joo, Balint [Jefferson Lab, Newport News, VA; Smelyanskiy, Mikhail [Parallel Computing Lab, Intel Corporation, California, USA; Kalamkar, Dhiraj D. [Parallel Computing Lab, Intel Corporation, India; Vaidyanathan, Karthikeyan [Parallel Computing Lab, Intel Corporation, India

    2015-07-01

    Lattice Quantum Chromodynamics (LQCD) is a numerical technique used for calculations in Theoretical Nuclear and High Energy Physics. LQCD is traditionally one of the first applications ported to many new high performance computing architectures and indeed LQCD practitioners have been known to design and build custom LQCD computers. Lattice QCD kernels are frequently used as benchmarks (e.g. 168.wupwise in the SPEC suite) and are generally well understood, and as such are ideal to illustrate several optimization techniques. In this chapter we will detail our work in optimizing the Wilson-Dslash kernels for Intel Xeon Phi, however, as we will show the technique gives excellent performance on regular Xeon Architecture as well.

  7. Neuronal model with distributed delay: analysis and simulation study for gamma distribution memory kernel.

    Science.gov (United States)

    Karmeshu; Gupta, Varun; Kadambari, K V

    2011-06-01

    A single neuronal model incorporating distributed delay (memory) is proposed. The stochastic model has been formulated as a Stochastic Integro-Differential Equation (SIDE), which results in the underlying process being non-Markovian. A detailed analysis of the model when the distributed delay kernel has exponential form (weak delay) has been carried out. The selection of the exponential kernel has enabled the transformation of the non-Markovian model into a Markovian model in an extended state space. For the study of the First Passage Time (FPT) with an exponential delay kernel, the model has been transformed into a system of coupled Stochastic Differential Equations (SDEs) in a two-dimensional state space. Simulation studies of the SDEs provide insight into the effect of the weak delay kernel on the Inter-Spike Interval (ISI) distribution. A measure based on Jensen-Shannon divergence is proposed which can be used to choose between two competing models, viz. the distributed delay model vis-à-vis the LIF model. An interesting feature of the model is that the behavior of the coefficient of variation CV(t) of the ISI distribution with respect to the memory kernel time constant parameter η reveals that the neuron can switch from a bursting state to a non-bursting state as the noise intensity parameter changes. The membrane potential exhibits a decaying auto-correlation structure with or without damped oscillatory behavior depending on the choice of parameters. This behavior is in agreement with the empirically observed pattern of spike counts in a fixed time window. The power spectral density derived from the auto-correlation function is found to exhibit single and double peaks. The model is also examined for the case of strong delay, with the memory kernel having the form of a Gamma distribution. In contrast to the fast decay of damped oscillations of the ISI distribution for the model with a weak delay kernel, the decay of damped oscillations is slower for the model with a strong delay kernel.
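
The key trick in the abstract, replacing an exponential memory kernel by an auxiliary state variable so the dynamics become Markovian, can be sketched with a generic linear example (not the paper's exact neuronal equations): defining u(t) = ∫ η e^{−η(t−s)} v(s) ds gives du/dt = η(v − u), and the SIDE becomes a pair of coupled SDEs that can be simulated by Euler–Maruyama. All parameter values below are illustrative assumptions:

```python
import numpy as np

def simulate(eta=2.0, I=1.0, sigma=0.3, dt=1e-3, T=5.0, seed=0):
    """Euler-Maruyama for the 2-D Markovian embedding of an exponential memory kernel.
    v: membrane-potential-like variable; u: auxiliary variable carrying the delayed feedback."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    v = np.zeros(n)
    u = np.zeros(n)
    for i in range(n - 1):
        dW = rng.normal(0.0, np.sqrt(dt))        # Wiener increment
        v[i + 1] = v[i] + (I - u[i]) * dt + sigma * dW   # drive minus delayed feedback
        u[i + 1] = u[i] + eta * (v[i] - u[i]) * dt       # exponential memory: du = eta(v - u)dt
    return v, u

v, u = simulate()
```

With a threshold-and-reset rule added to v, first-passage times of this 2-D system would give the ISI samples analysed in the abstract; η then controls how quickly the memory of past values of v fades.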

  8. Virial expansion for almost diagonal random matrices

    International Nuclear Information System (INIS)

    Yevtushenko, Oleg; Kravtsov, Vladimir E

    2003-01-01

    Energy level statistics of Hermitian random matrices Ĥ with Gaussian independent random entries H_ij (i ≥ j) is studied for a generic ensemble of almost diagonal random matrices with ⟨|H_ii|²⟩ ~ 1 and ⟨|H_i≠j|²⟩ = b F(|i − j|), b ≪ 1. We perform a regular expansion of the spectral form-factor K(τ) = 1 + b K_1(τ) + b² K_2(τ) + … in powers of b ≪ 1, with the coefficients K_m(τ) taking into account the interaction of (m + 1) energy levels. To calculate K_m(τ), we develop a diagrammatic technique which is based on the Trotter formula and on the combinatorial problem of colouring graph edges with (m + 1) colours. Expressions for K_1(τ) and K_2(τ) in terms of infinite series are found for a generic function F(|i − j|) in the Gaussian orthogonal ensemble (GOE), the Gaussian unitary ensemble (GUE) and in the crossover between them (the almost unitary Gaussian ensemble). The Rosenzweig-Porter and power-law banded matrix ensembles are considered as examples
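
The ensemble and the quantity being expanded can both be generated numerically. The sketch below samples almost diagonal real symmetric (GOE-like) matrices with unit-variance diagonal and off-diagonal variance b·F(|i−j|), taking F(r) = 1/r² as an illustrative choice reminiscent of the power-law banded case, and estimates the spectral form factor by averaging over realizations:

```python
import numpy as np

def almost_diagonal_goe(N, b, rng):
    """Real symmetric matrix: <H_ii^2> = 1, <H_ij^2> = b*F(|i-j|) with F(r) = 1/r^2."""
    H = np.diag(rng.normal(size=N))
    for i in range(N):
        for j in range(i + 1, N):
            H[i, j] = H[j, i] = rng.normal() * np.sqrt(b) / abs(i - j)
    return H

def form_factor(taus, N=64, b=0.01, samples=50, seed=0):
    """Ensemble-averaged spectral form factor K(tau) = <|sum_n exp(i E_n tau)|^2> / N."""
    rng = np.random.default_rng(seed)
    K = np.zeros(len(taus))
    for _ in range(samples):
        E = np.linalg.eigvalsh(almost_diagonal_goe(N, b, rng))
        phases = np.exp(1j * np.outer(taus, E))
        K += np.abs(phases.sum(axis=1)) ** 2 / N
    return K / samples

taus = np.linspace(40.0, 60.0, 5)
K = form_factor(taus, N=32, b=0.01, samples=20)
```

At b ≪ 1 the levels are nearly independent, so at large τ the estimated K(τ) fluctuates around the Poisson plateau value of 1, consistent with the b → 0 limit of the expansion.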

  9. Kernel methods for deep learning

    OpenAIRE

    Cho, Youngmin

    2012-01-01

    We introduce a new family of positive-definite kernels that mimic the computation in large neural networks. We derive the different members of this family by considering neural networks with different activation functions. Using these kernels as building blocks, we also show how to construct other positive-definite kernels by operations such as composition, multiplication, and averaging. We explore the use of these kernels in standard models of supervised learning, such as support vector mach...
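
A concrete member of this family is the arc-cosine kernel, which mimics an infinitely wide layer of threshold (order 0) or rectified-linear (order 1) units; deeper networks correspond to composing the kernel with itself. A sketch of the single-layer closed form (the choice of order-0/order-1 cases here is an assumption about which family members to illustrate):

```python
import numpy as np

def arccos_kernel(x, y, order=1):
    """Arc-cosine kernel: the expected product of hidden-unit activations for
    Gaussian random weights, in closed form (order 0: step units; order 1: ReLU units)."""
    nx, ny = np.linalg.norm(x), np.linalg.norm(y)
    cos_t = np.clip(x @ y / (nx * ny), -1.0, 1.0)
    theta = np.arccos(cos_t)          # angle between the inputs
    if order == 0:
        return 1.0 - theta / np.pi    # depends only on the angle
    # order == 1 (ReLU activations): angular part times the product of norms
    return (nx * ny / np.pi) * (np.sin(theta) + (np.pi - theta) * np.cos(theta))

x = np.array([3.0, 4.0])
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
```

For order 1, parallel inputs give k(x, x) = ||x||², and orthogonal unit vectors give 1/π, so the kernel interpolates between "same direction" and "uncorrelated" exactly as a wide ReLU layer would.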

  10. Multiple Kernel Learning for adaptive graph regularized nonnegative matrix factorization

    KAUST Repository

    Wang, Jim Jing-Yan; AbdulJabbar, Mustafa Abdulmajeed

    2012-01-01

    Nonnegative Matrix Factorization (NMF) has been continuously evolving in several areas such as pattern recognition and information retrieval. It factorizes a matrix into a product of two low-rank non-negative matrices that define a parts-based, linear representation of non-negative data. Recently, Graph regularized NMF (GrNMF) was proposed to find a compact representation, which uncovers the hidden semantics and simultaneously respects the intrinsic geometric structure. In GrNMF, an affinity graph is constructed from the original data space to encode the geometrical information. In this paper, we propose a novel idea which engages a Multiple Kernel Learning approach to refine the graph structure that reflects the factorization of the matrix and the new data space. The GrNMF is improved by utilizing the graph refined by the kernel learning, and then a novel kernel learning method is introduced under the GrNMF framework. Our approach shows encouraging results in comparison to state-of-the-art clustering algorithms such as NMF, GrNMF, SVD, etc.

  11. 7 CFR 981.9 - Kernel weight.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Kernel weight. 981.9 Section 981.9 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Regulating Handling Definitions § 981.9 Kernel weight. Kernel weight means the weight of kernels, including...

  12. Loading factor and inclination parameter of diagonal type MHD generators

    International Nuclear Information System (INIS)

    Ishikawa, Motoo

    1979-01-01

    For diagonal type MHD generators, the relation between the loading factor and the inclination parameter required to attain the maximum power density at a given electrical efficiency is studied, on the assumption of infinitely segmented electrodes. The average current density on the electrodes is calculated as a function of the Hall parameter, loading factor, and inclination parameter. The diagonal type generator is compared with the Faraday type generator with regard to the average current density. Decreasing the loading factor from inlet to outlet is appropriate for small generators, while increasing it is appropriate for large generators. The inclination parameter should decrease in both cases, being smaller for small generators than for large ones. The average current density on the electrodes of diagonal type generators varies less with the loading factor than in the Faraday type. In large generators its value can become smaller than that of the Faraday type. (author)

  13. Multivariate realised kernels

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Hansen, Peter Reinhard; Lunde, Asger

    2011-01-01

    We propose a multivariate realised kernel to estimate the ex-post covariation of log-prices. We show this new consistent estimator is guaranteed to be positive semi-definite and is robust to measurement error of certain types and can also handle non-synchronous trading. It is the first estimator...... which has these three properties which are all essential for empirical work in this area. We derive the large sample asymptotics of this estimator and assess its accuracy using a Monte Carlo study. We implement the estimator on some US equity data, comparing our results to previous work which has used...
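
The estimator has the generic form of a kernel-weighted sum of realised autocovariances. The sketch below implements that form with a Parzen weight function, K(X) = Γ₀ + Σ_h k(h/(H+1))(Γ_h + Γ_hᵀ), on simulated returns; the paper's bandwidth selection and end-point treatment (jittering) are omitted, and the Parzen choice and H value are illustrative assumptions:

```python
import numpy as np

def parzen(u):
    """Parzen weight function: smooth, compactly supported, and of the type
    that makes the kernel-weighted autocovariance sum positive semi-definite."""
    u = abs(u)
    if u <= 0.5:
        return 1.0 - 6.0 * u ** 2 + 6.0 * u ** 3
    if u <= 1.0:
        return 2.0 * (1.0 - u) ** 3
    return 0.0

def realised_kernel(x, H):
    """K(X) = Gamma_0 + sum_{h=1}^{H} parzen(h/(H+1)) (Gamma_h + Gamma_h^T),
    where Gamma_h is the h-th realised autocovariance of the return vectors x."""
    n, d = x.shape
    gamma = lambda h: x[h:].T @ x[:n - h]   # sum_j x_{j+h} x_j^T
    K = gamma(0)
    for h in range(1, H + 1):
        g = gamma(h)
        K = K + parzen(h / (H + 1)) * (g + g.T)
    return K

rng = np.random.default_rng(4)
returns = 0.01 * rng.normal(size=(500, 2))  # simulated high-frequency return vectors
K = realised_kernel(returns, H=5)
```

The off-lag terms absorb market-microstructure noise and non-synchronicity effects, while the Parzen weights keep the resulting covariance matrix symmetric and positive semi-definite, the property emphasised in the abstract.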

  14. The influence of maize kernel moisture on the sterilizing effect of gamma rays

    International Nuclear Information System (INIS)

    Khanymova, T.; Poloni, E.

    1980-01-01

    The influence of four levels of maize kernel moisture (16, 20, 25 and 30%) on the sterilizing effect of gamma rays was studied, and the after-effect of radiation on the microorganisms during short-term storage was followed up. Maize kernels of the hybrid Knezha-36 produced in 1975 were used. Gamma-ray treatment of the kernels was carried out with a GUBEh-4000 irradiator at doses of 0.2 and 0.3 Mrad, after which they were stored for a month at 12 deg and 25 deg C under controlled moisture conditions. Surface and subepidermal infection of the kernels was determined immediately after irradiation and at the end of the experiment. Non-irradiated kernels were used as controls. The results indicated that the initial kernel moisture has a considerable influence on the sterilizing effect of gamma rays at the doses used in the experiment and considerably affects the post-irradiation recovery of organisms. The speed of recovery was highest in the treatment with 30% moisture and lowest in the treatment with 16% kernel moisture. Irradiation of the kernels causes pronounced changes in the surface and subepidermal infection. This was due to the unequal radio-resistance of the microbial components and to the modifying effect of the moisture-holding capacity. The useful effect of maize kernel irradiation was more prolonged at 12 deg C than at 25 deg C

  15. An exploration of the influence of diagonal dissociation and moderate changes in speed on locomotor parameters in trotting horses

    Directory of Open Access Journals (Sweden)

    Sarah Jane Hobbs

    2016-06-01

    Full Text Available Background. Although the trot is described as a diagonal gait, contacts of the diagonal pairs of hooves are not usually perfectly synchronized. Although subtle, the timing dissociation between contacts of each diagonal pair could have consequences on gait dynamics and provide insight into the functional strategies employed. This study explores the mechanical effects of different diagonal dissociation patterns when speed was matched between individuals and how these effects link to moderate, natural changes in trotting speed. We anticipate that hind-first diagonal dissociation at contact increases with speed, that diagonal dissociation at contact can reduce collision-based energy losses, and that predominant dissociation patterns will be evident within individuals. Methods. The study was performed in two parts: in the first, 17 horses performed speed-matched trotting trials and in the second, five horses each performed 10 trotting trials that represented a range of individually preferred speeds. Standard motion capture provided kinematic data that were synchronized with ground reaction force (GRF) data from a series of force plates. The data were analyzed further to determine temporal, speed, GRF, postural, mass distribution, moment, and collision dynamics parameters. Results. Fore-first, synchronous, and hind-first dissociations were found in horses trotting at 3.3 m/s ± 10%. In these speed-matched trials, mean centre of pressure (COP) cranio-caudal location differed significantly between the three dissociation categories. The COP moved systematically and significantly (P = .001) from being more caudally located in hind-first dissociation (mean location = 0.41 ± 0.04) through synchronous (0.36 ± 0.02) to a more cranial location in fore-first dissociation (0.32 ± 0.02). Dissociation patterns were found to influence function, posture, and balance parameters. Over a moderate speed range, peak vertical forelimb GRF had a strong relationship with dissociation

  16. Veto-Consensus Multiple Kernel Learning

    NARCIS (Netherlands)

    Zhou, Y.; Hu, N.; Spanos, C.J.

    2016-01-01

    We propose Veto-Consensus Multiple Kernel Learning (VCMKL), a novel way of combining multiple kernels such that one class of samples is described by the logical intersection (consensus) of base kernelized decision rules, whereas the other classes are described by the union (veto) of their complements.
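The combination rule itself can be sketched in a few lines. In this toy, the decision scores that would come from base kernel machines are simply given as arrays, and the kernel-weight learning described in the paper is not reproduced:

```python
import numpy as np

def veto_consensus_predict(scores):
    """scores: (M, n) array of decision values from M base kernelized
    rules (positive = target class). The target class is the logical
    intersection (consensus) of all base rules; any negative score
    vetoes the sample into the other class."""
    return (np.asarray(scores) > 0).all(axis=0).astype(int)

# three hypothetical base rules, four samples
scores = np.array([[ 1.0, -0.5, 2.0,  0.3],
                   [ 0.7,  1.0, 1.5, -0.1],
                   [ 0.2,  0.9, 0.4,  0.8]])
labels = veto_consensus_predict(scores)
```

Samples 0 and 2 obtain consensus from all three rules; samples 1 and 3 are vetoed by at least one complement.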

  17. 7 CFR 51.2295 - Half kernel.

    Science.gov (United States)

    2010-01-01

    7 CFR 51.2295 (United States Standards for Shelled English Walnuts (Juglans Regia), Definitions): Half kernel means the separated half of a kernel with not more than one-eighth broken off.

  18. Adiabatic-connection fluctuation-dissipation DFT for the structural properties of solids - The renormalized ALDA and electron gas kernels

    DEFF Research Database (Denmark)

    Patrick, Christopher E.; Thygesen, Kristian Sommer

    2015-01-01

    the atomization energy of the H2 molecule. We compare the results calculated with different kernels to those obtained from the random-phase approximation (RPA) and to experimental measurements. We demonstrate that the model kernels correct the RPA's tendency to overestimate the magnitude of the correlation energy...

  19. Flame kernel generation and propagation in turbulent partially premixed hydrocarbon jet

    KAUST Repository

    Mansour, Mohy S.

    2014-04-23

    Flame development, propagation, stability, combustion efficiency, pollutant formation, and overall system efficiency are affected by the early stage of flame generation, defined as the flame kernel. Studying the effects of turbulence and chemistry on flame kernel propagation is the main aim of this work for natural gas (NG) and liquefied petroleum gas (LPG). In addition, the minimum ignition laser energy (MILE) has been investigated for both fuels, and the flame stability maps for both fuels are also investigated and analyzed. The flame kernels are generated using an Nd:YAG pulsed laser and propagate in a partially premixed turbulent jet. The flow field is measured using a 2-D PIV technique. Five cases have been selected for each fuel, covering different values of Reynolds number within a range of 6100-14400, at a mean equivalence ratio of 2 and a certain level of partial premixing. The MILE increases with increasing equivalence ratio. Near stoichiometry the energy density is independent of the jet velocity, while in rich conditions it increases with increasing jet velocity. The stability curves show four distinct regions: lifted, attached, blowout, and a fourth region in which the flame is attached if ignition occurs near the nozzle or lifted if ignition occurs downstream. LPG flames are more stable than NG flames, which is consistent with the higher laminar flame speed of LPG. The flame kernel propagation speed is affected by both turbulence and chemistry; at low turbulence levels chemistry effects are more pronounced, while at high turbulence levels turbulence becomes dominant. LPG flame kernels propagate faster than those of NG flames, but are also extinguished faster. The propagation speed is likely to be consistent with the local mean equivalence ratio and its corresponding laminar flame speed. Copyright © Taylor & Francis Group, LLC.

  20. An Approximate Approach to Automatic Kernel Selection.

    Science.gov (United States)

    Ding, Lizhong; Liao, Shizhong

    2016-02-02

    Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.
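The computational virtue of circulant structure can be illustrated directly: a circulant approximation of a kernel matrix on a regular grid is diagonalized by the FFT, so its full spectrum costs O(n log n) instead of the O(n³) of a dense eigendecomposition. This toy is one-level and hypothetical (the paper uses multilevel circulant matrices, and the grid, kernel and bandwidth here are illustrative):

```python
import numpy as np

n, sigma = 256, 4.0

# circular (wrap-around) distances on a regular 1-D grid
d = np.minimum(np.arange(n), n - np.arange(n))
first_row = np.exp(-d.astype(float) ** 2 / (2 * sigma**2))

# eigenvalues of a symmetric circulant matrix = DFT of its first row
eig_fft = np.fft.fft(first_row).real
```

The FFT spectrum can be checked against a dense eigendecomposition of the explicitly built circulant matrix; the two agree to machine precision while the FFT route is quasi-linear in n.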

  1. Iterative software kernels

    Energy Technology Data Exchange (ETDEWEB)

    Duff, I.

    1994-12-31

    This workshop focuses on kernels for iterative software packages. Specifically, the three speakers discuss various aspects of sparse BLAS kernels. Their topics are: `Current status of user lever sparse BLAS`; Current status of the sparse BLAS toolkit`; and `Adding matrix-matrix and matrix-matrix-matrix multiply to the sparse BLAS toolkit`.

  2. Viscozyme L pretreatment on palm kernels improved the aroma of palm kernel oil after kernel roasting.

    Science.gov (United States)

    Zhang, Wencan; Leong, Siew Mun; Zhao, Feifei; Zhao, Fangju; Yang, Tiankui; Liu, Shaoquan

    2018-05-01

    With an interest to enhance the aroma of palm kernel oil (PKO), Viscozyme L, an enzyme complex containing a wide range of carbohydrases, was applied to alter the carbohydrates in palm kernels (PK) to modulate the formation of volatiles upon kernel roasting. After Viscozyme treatment, the content of simple sugars and free amino acids in PK increased by 4.4-fold and 4.5-fold, respectively. After kernel roasting and oil extraction, significantly more 2,5-dimethylfuran, 2-[(methylthio)methyl]-furan, 1-(2-furanyl)-ethanone, 1-(2-furyl)-2-propanone, 5-methyl-2-furancarboxaldehyde and 2-acetyl-5-methylfuran but less 2-furanmethanol and 2-furanmethanol acetate were found in treated PKO; the correlation between their formation and simple sugar profile was estimated by using partial least square regression (PLS1). Obvious differences in pyrroles and Strecker aldehydes were also found between the control and treated PKOs. Principal component analysis (PCA) clearly discriminated the treated PKOs from that of control PKOs on the basis of all volatile compounds. Such changes in volatiles translated into distinct sensory attributes, whereby treated PKO was more caramelic and burnt after aqueous extraction and more nutty, roasty, caramelic and smoky after solvent extraction. Copyright © 2018 Elsevier Ltd. All rights reserved.

  3. A kernel version of spatial factor analysis

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2009-01-01

    Schölkopf et al. introduce kernel PCA. Shawe-Taylor and Cristianini is an excellent reference for kernel methods in general. Bishop and Press et al. describe kernel methods among many other subjects. Nielsen and Canty use kernel PCA to detect change in univariate airborne digital camera images. The kernel version of PCA handles nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. In this paper we shall apply kernel versions of PCA, maximum autocorrelation factor (MAF) analysis...
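The implicit transformation plus linear analysis can be sketched in a few lines of numpy; a Gaussian kernel and the parameter names are assumed here purely for illustration:

```python
import numpy as np

def kernel_pca(X, sigma, k):
    """Project the training samples onto the k leading kernel
    principal components (Gaussian kernel, bandwidth sigma)."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2 * sigma**2))
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n        # centring in feature space
    w, V = np.linalg.eigh(J @ K @ J)
    w, V = w[::-1][:k], V[:, ::-1][:, :k]      # leading eigenpairs
    return V * np.sqrt(np.maximum(w, 0))       # component scores

rng = np.random.default_rng(1)
X = rng.standard_normal((60, 3))
Z = kernel_pca(X, sigma=2.0, k=2)
```

The linear step is an ordinary eigendecomposition of the double-centred Gram matrix; only the Gram matrix, never the (possibly infinite-dimensional) feature map, is formed explicitly.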

  4. Spectral Sharpening of Color Sensors: Diagonal Color Constancy and Beyond

    OpenAIRE

    Vazquez-Corral, Javier; Bertalmío, Marcelo

    2014-01-01

    It has now been 20 years since the seminal work by Finlayson et al. on the use of spectral sharpening of sensors to achieve diagonal color constancy. Spectral sharpening is still used today by numerous researchers for different goals unrelated to the original goal of diagonal color constancy, e.g., multispectral processing, shadow removal, location of unique hues. This paper reviews the idea of spectral sharpening through the lens of what is known today in color constancy, describes the d...

  5. 7 CFR 51.1441 - Half-kernel.

    Science.gov (United States)

    2010-01-01

    7 CFR 51.1441 (United States Standards for Grades of Shelled Pecans, Definitions): Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume missing...

  6. Quantum vacuum energy in graphs and billiards

    International Nuclear Information System (INIS)

    Kaplan, L.

    2010-01-01

    The vacuum (Casimir) energy in quantum field theory is a problem relevant both to new nanotechnology devices and to dark energy in cosmology. The crucial question is the dependence of the energy on the system geometry. Despite much progress since the first prediction of the Casimir effect in 1948 and its subsequent experimental verification in simple geometries, even the sign of the force in nontrivial situations is still a matter of controversy. Mathematically, vacuum energy fits squarely into the spectral theory of second-order self-adjoint elliptic linear differential operators. Specifically, one promising approach is based on the small-t asymptotics of the cylinder kernel e^{-t√H}, where H is the self-adjoint operator under study. In contrast with the well-studied heat kernel e^{-tH}, the cylinder kernel depends in a non-local way on the geometry of the problem. We discuss some results by the Louisiana-Oklahoma-Texas collaboration on vacuum energy in model systems, including quantum graphs and two-dimensional cavities. The results may shed light on general questions, including the relationship between vacuum energy and periodic or closed classical orbits, and the contribution to vacuum energy of boundaries, edges, and corners.
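For an operator H with eigenfrequencies ω_n (H φ_n = ω_n² φ_n), the relation between the two traces and the vacuum energy can be summarized as follows; this is a standard sketch of the cylinder-kernel approach, not a reproduction of the collaboration's specific results:

```latex
\Theta(t) = \operatorname{Tr}\, e^{-tH} = \sum_n e^{-t\omega_n^2}, \qquad
T(t) = \operatorname{Tr}\, e^{-t\sqrt{H}} = \sum_n e^{-t\omega_n},
\qquad
E = \tfrac{1}{2}\sum_n \omega_n
  = -\tfrac{1}{2}\lim_{t\to 0^+}\frac{\partial T}{\partial t}
\quad \text{(after removing divergent small-$t$ terms).}
```

It is the small-t expansion of T(t), rather than of the heat-kernel trace Θ(t), that retains the non-local geometric information relevant to the sign and magnitude of the vacuum energy.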

  7. Off-Diagonal Deformations of Kerr Black Holes in Einstein and Modified Massive Gravity and Higher Dimensions

    CERN Document Server

    Gheorghiu, Tamara; Vacaru, Sergiu I

    2014-01-01

    We find general parameterizations for generic off-diagonal spacetime metrics and matter sources in general relativity (GR) and modified gravity theories when the field equations decouple with respect to certain types of nonholonomic frames of reference. This allows us to construct various classes of exact solutions when the coefficients of fundamental geometric/physical objects depend on all spacetime coordinates via corresponding classes of generating and integration functions and/or constants. Such (modified) spacetimes can possess Killing and non-Killing symmetries, and describe nonlinear vacuum configurations and effective polarizations of cosmological and interaction constants. Our method can be extended to higher dimensions, which simplifies some proofs for embedded and nonholonomically constrained four-dimensional configurations. We reproduce the Kerr solution and show how to deform it nonholonomically into new classes of generic off-diagonal solutions depending on 3-8 spacetime coordinates. There are anal...

  8. Range-separated time-dependent density-functional theory with a frequency-dependent second-order Bethe-Salpeter correlation kernel

    Energy Technology Data Exchange (ETDEWEB)

    Rebolini, Elisa, E-mail: elisa.rebolini@kjemi.uio.no; Toulouse, Julien, E-mail: julien.toulouse@upmc.fr [Laboratoire de Chimie Théorique, Sorbonne Universités, UPMC Univ Paris 06, CNRS, 4 place Jussieu, F-75005 Paris (France)

    2016-03-07

    We present a range-separated linear-response time-dependent density-functional theory (TDDFT) which combines a density-functional approximation for the short-range response kernel and a frequency-dependent second-order Bethe-Salpeter approximation for the long-range response kernel. This approach goes beyond the adiabatic approximation usually used in linear-response TDDFT and aims at improving the accuracy of calculations of electronic excitation energies of molecular systems. A detailed derivation of the frequency-dependent second-order Bethe-Salpeter correlation kernel is given using many-body Green-function theory. Preliminary tests of this range-separated TDDFT method are presented for the calculation of excitation energies of the He and Be atoms and small molecules (H2, N2, CO2, H2CO, and C2H4). The results suggest that the addition of the long-range second-order Bethe-Salpeter correlation kernel overall slightly improves the excitation energies.

  9. The Kernel Estimation in Biosystems Engineering

    Directory of Open Access Journals (Sweden)

    Esperanza Ayuga Téllez

    2008-04-01

    Full Text Available In many fields of biosystems engineering, it is common to find works in which statistical information is analysed in ways that violate the basic hypotheses necessary for conventional forecasting methods. For those situations, it is necessary to find alternative methods that allow statistical analysis despite those infringements. Non-parametric function estimation includes methods that fit a target function locally, using data from a small neighbourhood of the point. Weak assumptions, such as continuity and differentiability of the target function, are used rather than an "a priori" assumption about the global shape of the target function (e.g., linear or quadratic). In this paper a few basic decision rules are enunciated for the application of the non-parametric estimation method. These statistical rules constitute the first step towards building a user-method interface for the consistent application of kernel estimation by non-expert users. To reach this aim, univariate and multivariate density and function estimation methods were analysed, as well as regression estimators. In some cases the models to be applied in different situations, based on simulations, were defined. Different biosystems engineering applications of kernel estimation are also analysed in this review.
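As a concrete instance of such local non-parametric estimation, a univariate Gaussian kernel density estimator takes only a few lines; the bandwidth h and all names are illustrative choices:

```python
import numpy as np

def kde(x_eval, data, h):
    """Gaussian kernel density estimate:
    f_hat(x) = (1/(n*h)) * sum_i K((x - x_i)/h)."""
    u = (x_eval[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

rng = np.random.default_rng(0)
data = rng.standard_normal(300)
grid = np.linspace(-6, 6, 1201)
f = kde(grid, data, h=0.4)
```

Each evaluation point is influenced only appreciably by data within a few bandwidths of it, which is exactly the "local fit from a small neighbourhood" idea of the abstract.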

  10. Multiple Kernel Sparse Representation based Orthogonal Discriminative Projection and Its Cost-Sensitive Extension.

    Science.gov (United States)

    Zhang, Guoqing; Sun, Huaijiang; Xia, Guiyu; Sun, Quansen

    2016-07-07

    Sparse representation based classification (SRC) has been developed and shown great potential for real-world applications. Based on SRC, Yang et al. [10] devised an SRC-steered discriminative projection (SRC-DP) method. However, as a linear algorithm, SRC-DP cannot handle data with a highly nonlinear distribution. The kernel sparse representation-based classifier (KSRC) is a non-linear extension of SRC and can remedy this drawback. KSRC requires the use of a predetermined kernel function, and selection of the kernel function and its parameters is difficult. Recently, multiple kernel learning for SRC (MKL-SRC) [22] has been proposed to learn a kernel from a set of base kernels. However, MKL-SRC considers only the within-class reconstruction residual while ignoring the between-class relationship when learning the kernel weights. In this paper, we propose a novel multiple kernel sparse representation-based classifier (MKSRC), and we then use it as a criterion to design a multiple kernel sparse representation based orthogonal discriminative projection method (MK-SR-ODP). The proposed algorithm aims at learning a projection matrix and a corresponding kernel from the given base kernels such that in the low-dimensional subspace the between-class reconstruction residual is maximized and the within-class reconstruction residual is minimized. Furthermore, to achieve a minimum overall loss when performing recognition in the learned low-dimensional subspace, we introduce cost information into the dimensionality reduction method. Solutions for the proposed method can be efficiently found with the trace ratio optimization method [33]. Extensive experimental results demonstrate the superiority of the proposed algorithm when compared with the state-of-the-art methods.

  11. Local Observed-Score Kernel Equating

    Science.gov (United States)

    Wiberg, Marie; van der Linden, Wim J.; von Davier, Alina A.

    2014-01-01

    Three local observed-score kernel equating methods that integrate methods from the local equating and kernel equating frameworks are proposed. The new methods were compared with their earlier counterparts with respect to such measures as bias--as defined by Lord's criterion of equity--and percent relative error. The local kernel item response…

  12. A diagonal address generator for a Josephson memory circuit

    International Nuclear Information System (INIS)

    Suzuki, H.; Hasuo, S.

    1987-01-01

    The authors propose that a diagonal (D) address generator, which is useful for a single flux quantum (SFQ) memory cell in the triple coincidence scheme, can be implemented with a full adder circuit. For the purpose of evaluating the D address generator for a 16-kbit memory circuit, a 6-bit full adder circuit using a current-steering flip-flop circuit has been designed and fabricated with the lead-alloy process. Operating times for the address latch, carry generator, and sum generator were 150 ps, 250 ps/stage, and 1.4 ns, respectively. From these results, they estimate that the time necessary for the diagonal signal generation is 2.8 ns.

  13. Clustering via Kernel Decomposition

    DEFF Research Database (Denmark)

    Have, Anna Szynkowiak; Girolami, Mark A.; Larsen, Jan

    2006-01-01

    Methods for spectral clustering have been proposed recently which rely on the eigenvalue decomposition of an affinity matrix. In this work it is proposed that the affinity matrix be created based on the elements of a non-parametric density estimator. This matrix is then decomposed to obtain posterior probabilities of class membership using an appropriate form of nonnegative matrix factorization. The troublesome selection of hyperparameters such as kernel width and number of clusters can be carried out using standard cross-validation methods, as is demonstrated on a number of diverse data sets.

  14. Large-scale production of UO2 kernels by sol–gel process at INET

    International Nuclear Information System (INIS)

    Hao, Shaochang; Ma, Jingtao; Zhao, Xingyu; Wang, Yang; Zhou, Xiangwen; Deng, Changsheng

    2014-01-01

    In order to supply elements (300,000 elements per year) for the Chinese pebble bed modular high temperature gas cooled reactor (HTR-PM), it is necessary to scale up the production of UO2 kernels to 3–6 kgU per batch. The sol–gel process for the preparation of UO2 kernels has been improved and optimized at the Institute of Nuclear and New Energy Technology (INET), Tsinghua University, PR China, and a whole set of facilities was designed and constructed based on the process. This report briefly describes the main steps of the process, the key equipment and the production capacity of every step. Six batches of kernels for scale-up verification and four batches for fuel elements for in-pile irradiation tests have been successfully produced. The quality of the produced kernels meets the design requirements. The production capacity of the process reaches 3–6 kgU per batch.

  15. Enumeration of diagonally colored Young diagrams

    OpenAIRE

    Gyenge, Ádám

    2015-01-01

    In this note we give a new proof of a closed formula for the multivariable generating series of diagonally colored Young diagrams. This series also describes the Euler characteristics of certain Nakajima quiver varieties. Our proof is a direct combinatorial argument, based on Andrews' work on generalized Frobenius partitions. We also obtain representations of these series in some particular cases as infinite products.

  16. Kernel parameter dependence in spatial factor analysis

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2010-01-01

    kernel PCA. Shawe-Taylor and Cristianini [4] is an excellent reference for kernel methods in general. Bishop [5] and Press et al. [6] describe kernel methods among many other subjects. The kernel version of PCA handles nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. In this paper we shall apply a kernel version of maximum autocorrelation factor (MAF) [7, 8] analysis to irregularly sampled stream sediment geochemistry data from South Greenland and illustrate the dependence of the results on the kernel width. The 2,097 samples, each covering on average 5 km2, are analyzed chemically for the content of 41 elements.

  17. Intelligent Control of a Sensor-Actuator System via Kernelized Least-Squares Policy Iteration

    Directory of Open Access Journals (Sweden)

    Bo Liu

    2012-02-01

    Full Text Available In this paper a new framework, called Compressive Kernelized Reinforcement Learning (CKRL), for computing near-optimal policies in sequential decision making under uncertainty is proposed, incorporating non-adaptive, data-independent Random Projections and nonparametric Kernelized Least-Squares Policy Iteration (KLSPI). Random Projections are a fast, non-adaptive dimensionality reduction framework in which high-dimensional data is projected onto a random lower-dimensional subspace via spherically random rotation and coordinate sampling. KLSPI introduces the kernel trick into the LSPI framework for Reinforcement Learning, often achieving faster convergence and providing automatic feature selection via various kernel sparsification approaches. In this approach, policies are computed in a low-dimensional subspace generated by projecting the high-dimensional features onto a set of random basis vectors. We first show how Random Projections constitute an efficient sparsification technique and how our method often converges faster than regular LSPI, while at lower computational costs. The theoretical foundation underlying this approach is a fast approximation of the Singular Value Decomposition (SVD). Finally, simulation results are exhibited on benchmark MDP domains, which confirm gains both in computation time and in performance in large feature spaces.
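The random-projection step on its own can be sketched directly. The dimensions and seed below are arbitrary, and the spherically-random-rotation variant described in the paper is simplified to a plain Gaussian projection:

```python
import numpy as np

rng = np.random.default_rng(0)
n, D, d = 100, 2000, 200                        # samples, original dim, projected dim
X = rng.standard_normal((n, D))
R = rng.standard_normal((D, d)) / np.sqrt(d)    # Gaussian random projection matrix
Z = X @ R                                       # features in the random subspace

# Johnson-Lindenstrauss: pairwise distances are approximately preserved
orig = np.linalg.norm(X[0] - X[1])
proj = np.linalg.norm(Z[0] - Z[1])
```

Because distances (and hence Gram matrices) are approximately preserved, kernel-based policy iteration can operate on the d-dimensional features at a fraction of the cost of the original D-dimensional ones.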

  18. Kernel-based tests for joint independence

    DEFF Research Database (Denmark)

    Pfister, Niklas; Bühlmann, Peter; Schölkopf, Bernhard

    2018-01-01

    We investigate the problem of testing whether $d$ random variables, which may or may not be continuous, are jointly (or mutually) independent. Our method builds on ideas of the two-variable Hilbert-Schmidt independence criterion (HSIC) but allows for an arbitrary number of variables. We embed the $d$-dimensional joint distribution and the product of the marginals into a reproducing kernel Hilbert space and define the $d$-variable Hilbert-Schmidt independence criterion (dHSIC) as the squared distance between the embeddings. In the population case, the value of dHSIC is zero if and only if the $d$ variables are jointly independent, as long as the kernel is characteristic. Based on an empirical estimate of dHSIC, we define three different non-parametric hypothesis tests: a permutation test, a bootstrap test and a test based on a Gamma approximation. We prove that the permutation test...
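For the two-variable special case, the empirical HSIC statistic and its permutation test can be sketched as follows; Gaussian kernels with a fixed bandwidth are an illustrative choice, and the d-variable dHSIC generalizes this construction:

```python
import numpy as np

def gram(x, sigma=1.0):
    """Gaussian Gram matrix of a 1-D sample."""
    sq = (x[:, None] - x[None, :]) ** 2
    return np.exp(-sq / (2 * sigma**2))

def hsic(x, y):
    """Biased empirical HSIC: trace(K H L H) / n^2, H the centring matrix."""
    n = len(x)
    H = np.eye(n) - 1.0 / n
    return np.trace(gram(x) @ H @ gram(y) @ H) / n**2

def hsic_perm_test(x, y, B=100, seed=0):
    """Permutation p-value: fraction of permuted statistics >= observed."""
    rng = np.random.default_rng(seed)
    stat = hsic(x, y)
    null = [hsic(x, rng.permutation(y)) for _ in range(B)]
    return float(np.mean([s >= stat for s in null]))

rng = np.random.default_rng(1)
x = rng.standard_normal(80)
y_dep = x + 0.2 * rng.standard_normal(80)   # strongly dependent with x
p_dep = hsic_perm_test(x, y_dep)
```

Permuting one sample destroys the dependence while preserving both marginals, which is what makes the permutation distribution a valid null.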

  19. A diagonal approach for the catalytic transformation of carbon dioxide

    International Nuclear Information System (INIS)

    Gomes, Christophe

    2013-01-01

    Emissions of carbon dioxide are growing with the massive utilization of hydrocarbons for the production of energy and chemicals, resulting in threatening global warming. The development of a more sustainable economy makes it urgent to reduce the footprint of our current way of life. In this perspective, the organic chemistry industry will face important challenges in the next decades to replace hydrocarbons as a feedstock and use carbon-free energy sources. To tackle this challenge, new catalytic processes have been designed to convert CO2 to high-energy and value-added chemicals (formamides, N-heterocycles and methanol), using a novel diagonal approach. The energy efficiency of the new transformations is ensured by the utilization of mild reductants such as hydrosilanes and hydroboranes. Importantly, the reactions are promoted by organic catalysts, which circumvent the problems of cost, abundance and toxicity usually encountered with metal complexes. Based on theoretical and experimental studies, the understanding of the mechanisms involved in these reactions allowed the rational optimization of the catalysts as well as of the reaction conditions, in order to match the requirements of sustainable chemistry. (author) [fr

  20. Adiabatic-connection fluctuation-dissipation DFT for the structural properties of solids—The renormalized ALDA and electron gas kernels

    Energy Technology Data Exchange (ETDEWEB)

    Patrick, Christopher E., E-mail: chripa@fysik.dtu.dk; Thygesen, Kristian S., E-mail: thygesen@fysik.dtu.dk [Center for Atomic-Scale Materials Design (CAMD), Department of Physics, Technical University of Denmark, DK—2800 Kongens Lyngby (Denmark)

    2015-09-14

    We present calculations of the correlation energies of crystalline solids and isolated systems within the adiabatic-connection fluctuation-dissipation formulation of density-functional theory. We perform a quantitative comparison of a set of model exchange-correlation kernels originally derived for the homogeneous electron gas (HEG), including the recently introduced renormalized adiabatic local-density approximation (rALDA) and also kernels which (a) satisfy known exact limits of the HEG, (b) carry a frequency dependence, or (c) display a 1/k^2 divergence for small wavevectors. After generalizing the kernels to inhomogeneous systems through a reciprocal-space averaging procedure, we calculate the lattice constants and bulk moduli of a test set of 10 solids consisting of tetrahedrally bonded semiconductors (C, Si, SiC), ionic compounds (MgO, LiCl, LiF), and metals (Al, Na, Cu, Pd). We also consider the atomization energy of the H2 molecule. We compare the results calculated with different kernels to those obtained from the random-phase approximation (RPA) and to experimental measurements. We demonstrate that the model kernels correct the RPA's tendency to overestimate the magnitude of the correlation energy whilst maintaining a high-accuracy description of structural properties.

  1. Probabilistic wind power forecasting based on logarithmic transformation and boundary kernel

    International Nuclear Information System (INIS)

    Zhang, Yao; Wang, Jianxue; Luo, Xu

    2015-01-01

    Highlights: • Quantitative information on the uncertainty of wind power generation. • Kernel density estimator provides non-Gaussian predictive distributions. • Logarithmic transformation reduces the skewness of wind power density. • Boundary kernel method eliminates the density leakage near the boundary. - Abstract: Probabilistic wind power forecasting not only produces the expectation of wind power output, but also gives quantitative information on the associated uncertainty, which is essential for making better decisions about power system and market operations with the increasing penetration of wind power generation. This paper presents a novel kernel density estimator for probabilistic wind power forecasting, addressing two characteristics of wind power which have adverse impacts on forecast accuracy, namely, the heavily skewed and double-bounded nature of wind power density. Logarithmic transformation is used to reduce the skewness of wind power density, which improves the effectiveness of the kernel density estimator in the transformed scale. The transformation partially relieves the boundary effect problem of the kernel density estimator caused by the double-bounded nature of wind power density. However, the case study shows that there are still serious problems of density leakage after the transformation. In order to solve this problem in the transformed scale, a boundary kernel method is employed to eliminate the density leakage at the bounds of the wind power distribution. The improvement of the proposed method over the standard kernel density estimator is demonstrated by short-term probabilistic forecasting results based on data from an actual wind farm. Then, a detailed comparison is carried out between the proposed method and some existing probabilistic forecasting methods.
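The transformation step can be sketched as follows: the density is estimated in log scale with an ordinary Gaussian kernel and mapped back with the Jacobian. The boundary-kernel correction of the paper is not reproduced here, and the offset c, bandwidth and toy data are illustrative assumptions:

```python
import numpy as np

def log_transform_kde(x_eval, data, h, c=1e-3):
    """Estimate a skewed, nonnegative density by ordinary Gaussian KDE
    in z = log(x + c) scale, then map back:
    f_X(x) = f_Z(log(x + c)) / (x + c)."""
    z, z_eval = np.log(data + c), np.log(x_eval + c)
    u = (z_eval[:, None] - z[None, :]) / h
    f_z = np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))
    return f_z / (x_eval + c)      # Jacobian of the back-transformation

rng = np.random.default_rng(0)
power = rng.beta(2, 5, 500)        # toy right-skewed "wind power" sample on [0, 1]
grid = np.linspace(1e-6, 1.2, 2400)
f = log_transform_kde(grid, power, h=0.3)
```

Smoothing in the log scale adapts the effective bandwidth to the skewness: it is narrow near the lower bound and wide in the tail, which is the motivation given in the abstract.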

  2. Localization length and fractal dimension of band centre states for 1-d off-diagonal disordered systems

    International Nuclear Information System (INIS)

    Roman, E.; Wiecko, C.

    1985-08-01

    We study and characterize the eigenstates near the centre of the band of a 1-d tight-binding model with off-diagonal disorder W_T. We find a new exponent for the localization length λ on an energy-dependent range of disorder W_T. We correlate this feature with a change of structure of the wave-function, displayed by the behaviour of its fractal dimensionality. (author)

  3. Multiple Kernel Learning with Data Augmentation

    Science.gov (United States)

    2016-11-22

    JMLR: Workshop and Conference Proceedings 63:49–64, 2016, ACML 2016. Multiple Kernel Learning with Data Augmentation. Khanh Nguyen (nkhanh@deakin.edu.au), ...University, Australia. Editors: Robert J. Durrant and Kee-Eung Kim. Abstract: The motivations of the multiple kernel learning (MKL) approach are to increase kernel expressiveness capacity and to avoid the expensive grid search over a wide spectrum of kernels. A large amount of work has been proposed to

  4. OS X and iOS Kernel Programming

    CERN Document Server

    Halvorsen, Ole Henry

    2011-01-01

    OS X and iOS Kernel Programming combines essential operating system and kernel architecture knowledge with a highly practical approach that will help you write effective kernel-level code. You'll learn fundamental concepts such as memory management and thread synchronization, as well as the I/O Kit framework. You'll also learn how to write your own kernel-level extensions, such as device drivers for USB and Thunderbolt devices, including networking, storage and audio drivers. OS X and iOS Kernel Programming provides an incisive and complete introduction to the XNU kernel, which runs iPhones, i...

  5. Model selection for Gaussian kernel PCA denoising

    DEFF Research Database (Denmark)

    Jørgensen, Kasper Winther; Hansen, Lars Kai

    2012-01-01

    We propose kernel Parallel Analysis (kPA) for automatic kernel scale and model order selection in Gaussian kernel PCA. Parallel Analysis [1] is based on a permutation test for covariance and has previously been applied for model order selection in linear PCA; here we augment the procedure to also tune the Gaussian kernel scale of radial-basis-function-based kernel PCA. We evaluate kPA for denoising of simulated data and the US Postal data set of handwritten digits. We find that kPA outperforms other heuristics for choosing the model order and kernel scale in terms of signal-to-noise ratio (SNR).
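    A rough sketch of the Parallel Analysis permutation idea applied to Gaussian kernel PCA follows. This is not the authors' code: the 95th-percentile threshold, the column-permutation null, and all names are illustrative conventions carried over from linear PA.

```python
import numpy as np

def centred_kernel_eigvals(X, scale):
    """Sorted eigenvalues of the centred Gaussian kernel matrix of X."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=-1)
    K = np.exp(-sq / (2.0 * scale ** 2))
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n        # centring projector
    return np.sort(np.linalg.eigvalsh(J @ K @ J))[::-1]

def kpa_model_order(X, scale, n_perm=20, seed=0):
    """Keep the leading components whose eigenvalues exceed the 95th
    percentile of eigenvalues obtained after permuting every feature
    column independently (destroying dependence, keeping marginals)."""
    rng = np.random.default_rng(seed)
    ev = centred_kernel_eigvals(X, scale)
    null = np.array([
        centred_kernel_eigvals(
            np.column_stack([rng.permutation(col) for col in X.T]), scale)
        for _ in range(n_perm)
    ])
    above = ev > np.percentile(null, 95, axis=0)
    return int(above.argmin()) if not above.all() else len(ev)
```

For two well-separated clusters, for example, the permuted data scatters the cluster structure, so only the single eigenvalue encoding the cluster split survives the threshold.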

  6. Omnibus risk assessment via accelerated failure time kernel machine modeling.

    Science.gov (United States)

    Sinnott, Jennifer A; Cai, Tianxi

    2013-12-01

    Integrating genomic information with traditional clinical risk factors to improve the prediction of disease outcomes could profoundly change the practice of medicine. However, the large number of potential markers and possible complexity of the relationship between markers and disease make it difficult to construct accurate risk prediction models. Standard approaches for identifying important markers often rely on marginal associations or linearity assumptions and may not capture non-linear or interactive effects. In recent years, much work has been done to group genes into pathways and networks. Integrating such biological knowledge into statistical learning could potentially improve model interpretability and reliability. One effective approach is to employ a kernel machine (KM) framework, which can capture nonlinear effects if nonlinear kernels are used (Scholkopf and Smola, 2002; Liu et al., 2007, 2008). For survival outcomes, KM regression modeling and testing procedures have been derived under a proportional hazards (PH) assumption (Li and Luan, 2003; Cai, Tonini, and Lin, 2011). In this article, we derive testing and prediction methods for KM regression under the accelerated failure time (AFT) model, a useful alternative to the PH model. We approximate the null distribution of our test statistic using resampling procedures. When multiple kernels are of potential interest, it may be unclear in advance which kernel to use for testing and estimation. We propose a robust Omnibus Test that combines information across kernels, and an approach for selecting the best kernel for estimation. The methods are illustrated with an application in breast cancer. © 2013, The International Biometric Society.

  7. Off-diagonal helicity density matrix elements for vector mesons produced in polarized e+e- processes

    International Nuclear Information System (INIS)

    Anselmino, M.; Murgia, F.; Quintairos, P.

    1999-04-01

    Final-state q q-bar interactions give rise to nonzero values of the off-diagonal element ρ_{1,-1} of the helicity density matrix of vector mesons produced in e+e- annihilations, as confirmed by recent OPAL data on the φ, D* and K*'s. New predictions are given for ρ_{1,-1} of several mesons produced at large x_E and small p_T - i.e. collinear with the parent jet - in the annihilation of polarized e+ and e-; the results depend strongly on the elementary dynamics and allow further non-trivial tests of the standard model. (author)

  8. Comparison of electron dose-point kernels in water generated by the Monte Carlo codes, PENELOPE, GEANT4, MCNPX, and ETRAN.

    Science.gov (United States)

    Uusijärvi, Helena; Chouin, Nicolas; Bernhardt, Peter; Ferrer, Ludovic; Bardiès, Manuel; Forssell-Aronsson, Eva

    2009-08-01

    Point kernels describe the energy deposited at a certain distance from an isotropic point source and are useful for nuclear medicine dosimetry. They can be used for absorbed-dose calculations for sources of various shapes and are also a useful tool when comparing different Monte Carlo (MC) codes. The aim of this study was to compare point kernels calculated using the mixed MC code PENELOPE (v. 2006) with point kernels calculated using the condensed-history MC codes ETRAN, GEANT4 (v. 8.2), and MCNPX (v. 2.5.0). Point kernels for electrons with initial energies of 10, 100, 500 keV, and 1 MeV were simulated with PENELOPE. Spherical shells were placed around an isotropic point source at distances from 0 to 1.2 times the continuous-slowing-down-approximation range (R(CSDA)). Detailed (event-by-event) simulations were performed for electrons with initial energies of less than 1 MeV. For 1-MeV electrons, multiple scattering was included for energy losses less than 10 keV, while energy losses greater than 10 keV were simulated in a detailed way. The point kernels generated were used to calculate cellular S-values for monoenergetic electron sources. The point kernels obtained with PENELOPE and ETRAN were also used to calculate cellular S-values for the high-energy beta-emitter 90Y, the medium-energy beta-emitter 177Lu, and the low-energy electron emitter 103mRh. These S-values were also compared with the Medical Internal Radiation Dose (MIRD) cellular S-values. The mean differences between the point kernels, calculated over all distances, were 1.4%, 2.5%, and 6.9% for ETRAN, GEANT4, and MCNPX, respectively, compared with PENELOPE, omitting the S-values for the case of activity distributed on the cell surface for 10-keV electrons. The largest difference between the cellular S-values for the radionuclides, between PENELOPE and ETRAN, was seen for 177Lu (1.2%). There were large differences between the MIRD cellular S-values and those obtained from

  9. Paramecium: An Extensible Object-Based Kernel

    NARCIS (Netherlands)

    van Doorn, L.; Homburg, P.; Tanenbaum, A.S.

    1995-01-01

    In this paper we describe the design of an extensible kernel, called Paramecium. This kernel uses an object-based software architecture which together with instance naming, late binding and explicit overrides enables easy reconfiguration. Determining which components reside in the kernel protection

  10. Theory of reproducing kernels and applications

    CERN Document Server

    Saitoh, Saburou

    2016-01-01

    This book provides a large extension of the general theory of reproducing kernels published by N. Aronszajn in 1950, with many concrete applications. In Chapter 1, many concrete reproducing kernels are first introduced with detailed information. Chapter 2 presents a general and global theory of reproducing kernels with basic applications in a self-contained way. Many fundamental operations among reproducing kernel Hilbert spaces are dealt with. Chapter 2 is the heart of this book. Chapter 3 is devoted to the Tikhonov regularization using the theory of reproducing kernels with applications to numerical and practical solutions of bounded linear operator equations. In Chapter 4, the numerical real inversion formulas of the Laplace transform are presented by applying the Tikhonov regularization, where the reproducing kernels play a key role in the results. Chapter 5 deals with ordinary differential equations; Chapter 6 includes many concrete results for various fundamental partial differential equations. In Chapt...

  11. Kernels for structured data

    CERN Document Server

    Gärtner, Thomas

    2009-01-01

    This book provides a unique treatment of an important area of machine learning and answers the question of how kernel methods can be applied to structured data. Kernel methods are a class of state-of-the-art learning algorithms that exhibit excellent learning results in several application domains. Originally, kernel methods were developed with data in mind that can easily be embedded in a Euclidean vector space. Much real-world data does not have this property but is inherently structured. An example of such data, often consulted in the book, is the (2D) graph structure of molecules formed by

  12. Isovector and flavor-diagonal charges of the nucleon

    Science.gov (United States)

    Gupta, Rajan; Bhattacharya, Tanmoy; Jang, Yong-Chull; Lin, Huey-Wen; Yoon, Boram

    2018-03-01

    We present an update on the status of the calculations of isovector and flavor-diagonal charges of the nucleon. The calculations of the isovector charges are being done using ten 2+1+1-flavor HISQ ensembles generated by the MILC collaboration, covering the range of lattice spacings a ≈ 0.12, 0.09, 0.06 fm and pion masses Mπ ≈ 310, 220, 130 MeV. Excited-state contamination is controlled by using four-state fits to the two-point correlators and three-state fits to the three-point correlators. The calculations of the disconnected diagrams needed to estimate the flavor-diagonal charges are being done on a subset of six ensembles using the stochastic method. Final results are obtained using a simultaneous fit in Mπ², the lattice spacing a, and the finite-volume parameter MπL, keeping only the leading-order corrections.

  13. Off-diagonal Bethe ansatz for exactly solvable models

    CERN Document Server

    Wang, Yupeng; Cao, Junpeng; Shi, Kangjie

    2015-01-01

    This book serves as an introduction to the off-diagonal Bethe Ansatz method, an analytic theory for the eigenvalue problem of quantum integrable models. It also presents some fundamental knowledge about quantum integrability and the algebraic Bethe Ansatz method. Based on the intrinsic properties of the R-matrix and K-matrices, the book introduces a systematic method to construct operator identities of the transfer matrix. These identities allow one to establish the inhomogeneous T-Q relation formalism, to obtain the Bethe Ansatz equations, and to retrieve the corresponding eigenstates. Several longstanding models can thus be solved via this method, since the lack of an obvious reference state is overcome. Both the exact results and the off-diagonal Bethe Ansatz method itself may have important applications in the fields of quantum field theory, low-dimensional condensed matter physics, statistical physics, and cold atom systems.

  14. 7 CFR 981.401 - Adjusted kernel weight.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Adjusted kernel weight. 981.401 Section 981.401... Administrative Rules and Regulations § 981.401 Adjusted kernel weight. (a) Definition. Adjusted kernel weight... kernels in excess of five percent; less shells, if applicable; less processing loss of one percent for...

  15. Theoretical study of the dependence of single impurity Anderson model on various parameters within distributional exact diagonalization method

    Science.gov (United States)

    Syaina, L. P.; Majidi, M. A.

    2018-04-01

    The single impurity Anderson model describes a system of non-interacting conduction electrons coupled to a localized orbital with strongly interacting electrons at a particular site. The model has been proven successful in explaining the phenomenon of metal-insulator transition through Anderson localization. Despite the well-understood behaviors of the model, little has been explored theoretically on how its properties gradually evolve as functions of the hybridization parameter, interaction energy, impurity concentration, and temperature. Here, we study those aspects of the single impurity Anderson model using the distributional exact diagonalization method. We solve the model Hamiltonian by randomly generating sampling distributions of the conduction electron energy levels with various numbers of occupying electrons. The resulting eigenvalues and eigenstates are then used to define the local single-particle Green function for each sampled energy distribution through the Lehmann representation. We then extract the corresponding self-energy of each distribution, average over all the distributions, and construct the local Green function of the system to calculate the density of states. We repeat this procedure for various values of the controllable parameters, and discuss our results in connection with the criteria for the occurrence of a metal-insulator transition in this system.
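    The core step of any exact-diagonalization treatment, building a small Anderson-model Hamiltonian in the many-body basis and diagonalizing it fully, can be illustrated with a two-site caricature: one impurity orbital plus one bath level. The distributional method then repeats such diagonalizations over sampled bath-energy distributions; this sketch keeps a single fixed bath level and uses invented parameter names.

```python
import numpy as np

def siam_ground_energy(eps_d, u, eps_b, v):
    """Ground-state energy of a two-site caricature of the single
    impurity Anderson model: impurity orbital (level eps_d, on-site
    repulsion u) hybridized (amplitude v) with one bath level eps_b.
    The 16x16 Hamiltonian is built by Jordan-Wigner mapping on four
    spin-orbitals and fully diagonalized.
    """
    a = np.array([[0.0, 1.0], [0.0, 0.0]])   # annihilator on one spin-orbital
    z = np.diag([1.0, -1.0])                 # Jordan-Wigner string factor
    eye = np.eye(2)

    def ann(site, n_sites=4):
        mats = [z] * site + [a] + [eye] * (n_sites - site - 1)
        out = mats[0]
        for m in mats[1:]:
            out = np.kron(out, m)
        return out

    # ordering: d_up, d_dn, b_up, b_dn
    d_up, d_dn, b_up, b_dn = (ann(i) for i in range(4))
    num = lambda c: c.T @ c                  # number operator c^dag c
    hop = lambda c1, c2: c1.T @ c2 + c2.T @ c1
    h = (eps_d * (num(d_up) + num(d_dn)) + u * num(d_up) @ num(d_dn)
         + eps_b * (num(b_up) + num(b_dn))
         + v * (hop(d_up, b_up) + hop(d_dn, b_dn)))
    return float(np.linalg.eigvalsh(h).min())
```

From the same eigenvalues and eigenvectors one would build the Lehmann representation of the local Green function, which is the next step of the method described in the abstract.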

  16. Diagonal Pade approximations for initial value problems

    International Nuclear Information System (INIS)

    Reusch, M.F.; Ratzan, L.; Pomphrey, N.; Park, W.

    1987-06-01

    Diagonal Pade approximations to the time evolution operator for initial value problems are applied in a novel way to the numerical solution of these problems by explicitly factoring the polynomials of the approximation. A remarkable gain over conventional methods in efficiency and accuracy of solution is obtained. 20 refs., 3 figs., 1 tab
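    The lowest member of the diagonal Padé family makes the idea concrete: the (1,1) approximant to the evolution operator exp(hA) is a single linear solve per step (the Crank-Nicolson/Cayley form), and higher diagonal approximants factor their numerator and denominator polynomials into products of such solves, which is the factoring the abstract exploits. A minimal sketch of the lowest order only:

```python
import numpy as np

def pade11_step(a, y, h):
    """Advance y' = A y by one step of the diagonal (1,1) Pade
    approximant to exp(hA):
        (I - (h/2) A) y_{n+1} = (I + (h/2) A) y_n.
    """
    n = len(y)
    eye = np.eye(n)
    return np.linalg.solve(eye - 0.5 * h * a, (eye + 0.5 * h * a) @ y)
```

For skew-symmetric A this step is a Cayley transform and hence exactly norm-preserving, one reason diagonal (rather than off-diagonal) Padé approximants are attractive for time evolution.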

  17. Testing Infrastructure for Operating System Kernel Development

    DEFF Research Database (Denmark)

    Walter, Maxwell; Karlsson, Sven

    2014-01-01

    Testing is an important part of system development, and to test effectively we require knowledge of the internal state of the system under test. Testing an operating system kernel is a challenge as it is the operating system that typically provides access to this internal state information. Multi......-core kernels pose an even greater challenge due to concurrency and their shared kernel state. In this paper, we present a testing framework that addresses these challenges by running the operating system in a virtual machine, and using virtual machine introspection to both communicate with the kernel...... and obtain information about the system. We have also developed an in-kernel testing API that we can use to develop a suite of unit tests in the kernel. We are using our framework for for the development of our own multi-core research kernel....

  18. Multi-subject Manifold Alignment of Functional Network Structures via Joint Diagonalization.

    Science.gov (United States)

    Nenning, Karl-Heinz; Kollndorfer, Kathrin; Schöpf, Veronika; Prayer, Daniela; Langs, Georg

    2015-01-01

    Functional magnetic resonance imaging group studies rely on the ability to establish correspondence across individuals. This enables location specific comparison of functional brain characteristics. Registration is often based on morphology and does not take variability of functional localization into account. This can lead to a loss of specificity, or confounds when studying diseases. In this paper we propose multi-subject functional registration by manifold alignment via coupled joint diagonalization. The functional network structure of each subject is encoded in a diffusion map, where functional relationships are decoupled from spatial position. Two-step manifold alignment estimates initial correspondences between functionally equivalent regions. Then, coupled joint diagonalization establishes common eigenbases across all individuals, and refines the functional correspondences. We evaluate our approach on fMRI data acquired during a language paradigm. Experiments demonstrate the benefits in matching accuracy achieved by coupled joint diagonalization compared to previously proposed functional alignment approaches, or alignment based on structural correspondences.
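    Joint diagonalization itself can be illustrated in a few lines: for a family of symmetric matrices that genuinely commute, the eigenvectors of a random linear combination already diagonalize every member. This is a simplified stand-in for the paper's coupled, iterative procedure, which must handle matrices that only approximately share an eigenbasis.

```python
import numpy as np

def joint_diagonalizer(mats, seed=0):
    """Common eigenbasis of a family of commuting symmetric matrices.

    A random linear combination generically has distinct eigenvalues,
    so its eigenvectors simultaneously diagonalize every member.
    """
    rng = np.random.default_rng(seed)
    combo = sum(rng.normal() * m for m in mats)
    _, vecs = np.linalg.eigh(combo)
    return vecs
```

Given the returned basis V, the products V.T @ A @ V are (near-)diagonal for every matrix A in the family; in the fMRI setting the matrices would be the subjects' diffusion operators and V the shared eigenbasis.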

  19. Method for calculating anisotropic neutron transport using scattering kernel without polynomial expansion

    International Nuclear Information System (INIS)

    Takahashi, Akito; Yamamoto, Junji; Ebisuya, Mituo; Sumita, Kenji

    1979-01-01

    A new method for calculating the anisotropic neutron transport is proposed for the angular spectral analysis of D-T fusion reactor neutronics. The method is based on the transport equation with a new type of anisotropic scattering kernel formulated by a single function I_i(μ′, μ) instead of a polynomial expansion, for instance in Legendre polynomials. In the calculation of angular flux spectra using scattering kernels with the Legendre polynomial expansion, we often observe oscillation with negative flux; in principle, this oscillation disappears with the new method. In this work, we discuss the anisotropic scattering kernels of the elastic scattering and of the inelastic scatterings which excite discrete energy levels; the other scatterings are included in isotropic scattering kernels. An approximation method, with use of the first collision source written in terms of the I_i(μ′, μ) function, is introduced to attenuate the "oscillations" when one is obliged to use scattering kernels with the Legendre polynomial expansion. Calculated results with this approximation show remarkable improvement in the analysis of the angular flux spectra in a slab system of lithium metal with a D-T neutron source. (author)

  20. Quantized kernel least mean square algorithm.

    Science.gov (United States)

    Chen, Badong; Zhao, Songlin; Zhu, Pingping; Príncipe, José C

    2012-01-01

    In this paper, we propose a quantization approach, as an alternative to sparsification, to curb the growth of the radial basis function structure in kernel adaptive filtering. The basic idea behind this method is to quantize and hence compress the input (or feature) space. Different from sparsification, the new approach uses the "redundant" data to update the coefficient of the closest center. In particular, a quantized kernel least mean square (QKLMS) algorithm is developed, which is based on a simple online vector quantization method. An analytical study of the mean square convergence has been carried out. The energy conservation relation for QKLMS is established, and on this basis we arrive at a sufficient condition for mean square convergence, and a lower and upper bound on the theoretical value of the steady-state excess mean square error. Static function estimation and short-term chaotic time-series prediction examples are presented to demonstrate the excellent performance.
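    The quantization step that distinguishes QKLMS from plain KLMS can be sketched as follows; parameter values and names are illustrative, not taken from the paper.

```python
import numpy as np

def qklms(xs, ys, step=0.2, sigma=0.5, quant_size=0.2):
    """Quantized kernel LMS sketch.  Instead of growing the radial-basis
    network with every sample, an input closer than quant_size to an
    existing centre merely updates that centre's coefficient: the
    online vector quantization idea behind QKLMS."""
    centres = [xs[0]]
    alphas = [step * ys[0]]        # first prediction is 0, error is ys[0]
    preds = [0.0]
    for x, d in zip(xs[1:], ys[1:]):
        diff = np.asarray(centres) - x
        dists = np.sqrt((diff ** 2).sum(axis=1))
        k = np.exp(-dists ** 2 / (2.0 * sigma ** 2))
        f = float(np.dot(alphas, k))           # current prediction
        preds.append(f)
        err = d - f
        nearest = int(dists.argmin())
        if dists[nearest] <= quant_size:
            alphas[nearest] += step * err      # quantize: reuse the centre
        else:
            centres.append(x)                  # grow the network
            alphas.append(step * err)
    return np.array(preds), centres
```

On a static function estimation task the network size saturates at roughly the number of quantization cells covering the input domain, while the prediction error keeps decreasing, which is the trade-off the abstract analyzes.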

  1. 7 CFR 51.1403 - Kernel color classification.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Kernel color classification. 51.1403 Section 51.1403... STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Kernel Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the color...

  2. Diagonalizing sensing matrix of broadband RSE

    International Nuclear Information System (INIS)

    Sato, Shuichi; Kokeyama, Keiko; Kawazoe, Fumiko; Somiya, Kentaro; Kawamura, Seiji

    2006-01-01

    For a broadband-operated RSE interferometer, a simple and smart length sensing and control scheme is newly proposed. The sensing matrix can be made diagonal owing to a simple allocation of two RF modulations and to a macroscopic displacement of the cavity mirrors, which causes a detuning of the RF modulation sidebands. In this article, the idea of the sensing scheme and an optimization of the relevant parameters are described.

  3. Analytic scattering kernels for neutron thermalization studies

    International Nuclear Information System (INIS)

    Sears, V.F.

    1990-01-01

    Current plans call for the inclusion of a liquid hydrogen or deuterium cold source in the NRU replacement vessel. This report is part of an ongoing study of neutron thermalization in such a cold source. Here, we develop a simple analytical model for the scattering kernel of monatomic and diatomic liquids. We also present the results of extensive numerical calculations based on this model for liquid hydrogen, liquid deuterium, and mixtures of the two. These calculations demonstrate the dependence of the scattering kernel on the incident and scattered-neutron energies, the behavior near rotational thresholds, the dependence on the centre-of-mass pair correlations, the dependence on the ortho concentration, and the dependence on the deuterium concentration in H 2 /D 2 mixtures. The total scattering cross sections are also calculated and compared with available experimental results

  4. Straight-chain halocarbon forming fluids for TRISO fuel kernel production – Tests with yttria-stabilized zirconia microspheres

    Energy Technology Data Exchange (ETDEWEB)

    Baker, M.P. [Nuclear Science and Engineering Program, Metallurgical and Materials Engineering Department, Colorado School of Mines, 1500 Illinois St., Golden, CO 80401 (United States); King, J.C., E-mail: kingjc@mines.edu [Nuclear Science and Engineering Program, Metallurgical and Materials Engineering Department, Colorado School of Mines, 1500 Illinois St., Golden, CO 80401 (United States); Gorman, B.P. [Metallurgical and Materials Engineering Department, Colorado Center for Advanced Ceramics, Colorado School of Mines, 1500 Illinois St., Golden, CO 80401 (United States); Braley, J.C. [Nuclear Science and Engineering Program, Chemistry and Geochemistry Department, Colorado School of Mines, 1500 Illinois St., Golden, CO 80401 (United States)

    2015-03-15

    Highlights: • YSZ TRISO kernels formed in three alternative, non-hazardous forming fluids. • Kernels characterized for size, shape, pore/grain size, density, and composition. • Bromotetradecane is suitable for further investigation with uranium-based precursor. - Abstract: Current methods of TRISO fuel kernel production in the United States use a sol–gel process with trichloroethylene (TCE) as the forming fluid. After contact with radioactive materials, the spent TCE becomes a mixed hazardous waste, and high costs are associated with its recycling or disposal. Reducing or eliminating this mixed waste stream would not only benefit the environment, but would also enhance the economics of kernel production. Previous research yielded three candidates for testing as alternatives to TCE: 1-bromotetradecane, 1-chlorooctadecane, and 1-iodododecane. This study considers the production of yttria-stabilized zirconia (YSZ) kernels in silicone oil and the three chosen alternative formation fluids, with subsequent characterization of the produced kernels and used forming fluid. Kernels formed in silicone oil and bromotetradecane were comparable to those produced by previous kernel production efforts, while those produced in chlorooctadecane and iodododecane experienced gelation issues leading to poor kernel formation and geometry.

  5. Short range part of the NN interaction: Equivalent local potentials from quark exchange kernels

    International Nuclear Information System (INIS)

    Suzuki, Y.; Hecht, K.T.

    1983-01-01

    To focus on the nature of the short range part of the NN interaction, the intrinsically nonlocal interaction among the quark constituents of colorless nucleons is converted to an equivalent local potential using resonating group kernels which can be evaluated in analytic form. The WKB approximation based on the Wigner transform of the nonlocal kernels has been used to construct the equivalent potentials without recourse to the long range part of the NN interaction. The relative importance of the various components of the exchange kernels can be examined: The results indicate the importance of the color magnetic part of the exchange kernel for the repulsive part in the (ST) = (10), (01) channels, in particular since the energy dependence of the effective local potentials seems to be set by this term. Large cancellations of color Coulombic and quark confining contributions, together with the kinetic energy and norm exchange terms, indicate that the exact nature of the equivalent local potential may be sensitive to the details of the parametrization of the underlying quark-quark interaction. The equivalent local potentials show some of the characteristics of the phenomenological short range terms of the Paris potential

  6. Diagonalization of bosonic quadratic Hamiltonians by Bogoliubov transformations

    DEFF Research Database (Denmark)

    Nam, Phan Thanh; Napiorkowski, Marcin; Solovej, Jan Philip

    2016-01-01

    We provide general conditions under which bosonic quadratic Hamiltonians on Fock spaces can be diagonalized by Bogoliubov transformations. Our results cover the case when quantum systems have infinitely many degrees of freedom and the associated one-body kinetic and pairing operators are unbounded. Our...

  7. The definition of kernel Oz

    OpenAIRE

    Smolka, Gert

    1994-01-01

    Oz is a concurrent language providing for functional, object-oriented, and constraint programming. This paper defines Kernel Oz, a semantically complete sublanguage of Oz. It was an important design requirement that Oz be definable by reduction to a lean kernel language. The definition of Kernel Oz introduces three essential abstractions: the Oz universe, the Oz calculus, and the actor model. The Oz universe is a first-order structure defining the values and constraints Oz computes with. The ...

  8. Fabrication of Uranium Oxycarbide Kernels for HTR Fuel

    International Nuclear Information System (INIS)

    Barnes, Charles; Richardson, Clay; Nagley, Scott; Hunn, John; Shaber, Eric

    2010-01-01

    Babcock and Wilcox (B and W) has been producing high quality uranium oxycarbide (UCO) kernels for Advanced Gas Reactor (AGR) fuel tests at the Idaho National Laboratory. In 2005, 350-μm, 19.7% 235U-enriched UCO kernels were produced for the AGR-1 test fuel. Following coating of these kernels and forming the coated particles into compacts, this fuel was irradiated in the Advanced Test Reactor (ATR) from December 2006 until November 2009. B and W produced 425-μm, 14% enriched UCO kernels in 2008, and these kernels were used to produce fuel for the AGR-2 experiment that was inserted in ATR in 2010. B and W also produced 500-μm, 9.6% enriched UO2 kernels for the AGR-2 experiments. Kernels of the same size and enrichment as AGR-1 were also produced for the AGR-3/4 experiment. In addition to fabricating enriched UCO and UO2 kernels, B and W has produced more than 100 kg of natural uranium UCO kernels which are being used in coating development tests. Successive lots of kernels have demonstrated consistent high quality and also allowed for fabrication process improvements. Improvements in kernel forming were made subsequent to AGR-1 kernel production. Following fabrication of AGR-2 kernels, incremental increases in sintering furnace charge size have been demonstrated. Recently, small-scale sintering tests using a small development furnace equipped with a residual gas analyzer (RGA) have increased understanding of how kernel sintering parameters affect sintered kernel properties. The steps taken to increase throughput and process knowledge have reduced kernel production costs. Studies have been performed of additional modifications toward the goal of increasing the capacity of the current fabrication line for production of first-core fuel for the Next Generation Nuclear Plant (NGNP) and providing a basis for the design of a full-scale fuel fabrication facility.

  9. Modeling animal-vehicle collisions using diagonal inflated bivariate Poisson regression.

    Science.gov (United States)

    Lao, Yunteng; Wu, Yao-Jan; Corey, Jonathan; Wang, Yinhai

    2011-01-01

    Two types of animal-vehicle collision (AVC) data are commonly adopted for AVC-related risk analysis research: reported AVC data and carcass removal data. One issue with these two data sets is that they were found by previous studies to have significant discrepancies. In order to model these two types of data together and provide a better understanding of highway AVCs, this study adopts a diagonal inflated bivariate Poisson regression method, an inflated version of the bivariate Poisson regression model, to fit the reported AVC and carcass removal data sets collected in Washington State during 2002-2006. The diagonal inflated bivariate Poisson model not only can model paired data with correlation, but can also handle under- or over-dispersed data sets. Compared with three other types of models (double Poisson, bivariate Poisson, and zero-inflated double Poisson), the diagonal inflated bivariate Poisson model demonstrates its capability of fitting two data sets with remarkable overlapping portions resulting from the same stochastic process. The diagonal inflated bivariate Poisson model therefore provides researchers a new approach to investigating AVCs from a different perspective involving the three distribution parameters (λ1, λ2 and λ3). The modeling results show the impacts of traffic elements, geometric design and geographic characteristics on the occurrences of both reported AVCs and carcass removals. It is found that increases in some associated factors, such as speed limit, annual average daily traffic, and shoulder width, will increase the numbers of reported AVCs and carcass removals. Conversely, the presence of some geometric factors, such as rolling and mountainous terrain, will decrease the number of reported AVCs. Published by Elsevier Ltd.
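    The base model can be made concrete via the usual trivariate reduction: X1 = A + C and X2 = B + C with independent Poisson variables gives correlated counts with cov(X1, X2) = λ3, and diagonal inflation then mixes extra probability onto the x1 = x2 cells. A sketch follows; the Poisson mixing distribution on the diagonal is one of several choices used for such models, not necessarily the paper's.

```python
import math

def bivariate_poisson_pmf(x1, x2, l1, l2, l3):
    """P(X1=x1, X2=x2) under the trivariate reduction X1=A+C, X2=B+C,
    with A~Pois(l1), B~Pois(l2), shared C~Pois(l3); cov(X1,X2)=l3."""
    s = 0.0
    for c in range(min(x1, x2) + 1):
        s += (math.exp(-l3) * l3 ** c / math.factorial(c)
              * math.exp(-l1) * l1 ** (x1 - c) / math.factorial(x1 - c)
              * math.exp(-l2) * l2 ** (x2 - c) / math.factorial(x2 - c))
    return s

def diag_inflated_pmf(x1, x2, l1, l2, l3, p, theta):
    """Mix extra probability onto the diagonal: with probability p the
    pair is drawn as (D, D) with D~Pois(theta), capturing pairs where
    the reported count and the carcass count coincide."""
    base = (1.0 - p) * bivariate_poisson_pmf(x1, x2, l1, l2, l3)
    if x1 == x2:
        base += p * math.exp(-theta) * theta ** x1 / math.factorial(x1)
    return base
```

In the regression setting the three λ parameters (and possibly p) would be linked to covariates such as speed limit and traffic volume; the pmf above is what the likelihood of each site-pair observation evaluates.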

  10. Anisotropic hydrodynamics with a scalar collisional kernel

    Science.gov (United States)

    Almaalol, Dekrayat; Strickland, Michael

    2018-04-01

    Prior studies of nonequilibrium dynamics using anisotropic hydrodynamics have used the relativistic Anderson-Witting scattering kernel or some variant thereof. In this paper, we make the first study of the impact of using a more realistic scattering kernel. For this purpose, we consider a conformal system undergoing transversally homogeneous and boost-invariant Bjorken expansion and take the collisional kernel to be given by the leading-order 2 ↔ 2 scattering kernel in scalar λφ⁴ theory. We consider both classical and quantum statistics to assess the impact of Bose enhancement on the dynamics. We also determine the anisotropic nonequilibrium attractor of a system subject to this collisional kernel. We find that, when the near-equilibrium relaxation times in the Anderson-Witting and scalar collisional kernels are matched, the scalar kernel results in a higher degree of momentum-space anisotropy during the system's evolution, given the same initial conditions. Additionally, we find that taking into account Bose enhancement further increases the dynamically generated momentum-space anisotropy.

  11. Object classification and detection with context kernel descriptors

    DEFF Research Database (Denmark)

    Pan, Hong; Olsen, Søren Ingvor; Zhu, Yaping

    2014-01-01

    Context information is important in object representation. By embedding context cues of image attributes into kernel descriptors, we propose a set of novel kernel descriptors called Context Kernel Descriptors (CKD) for object classification and detection. The motivation of CKD is to use the spatial consistency of image attributes or features defined within a neighboring region to improve the robustness of descriptor matching in kernel space. For feature selection, Kernel Entropy Component Analysis (KECA) is exploited to learn a subset of discriminative CKD. Different from Kernel Principal Component...

  12. Ranking Support Vector Machine with Kernel Approximation.

    Science.gov (United States)

    Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi

    2017-01-01

    Learning to rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared Hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance over state-of-the-art ranking algorithms.
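    Of the two approximations mentioned, random Fourier features are easy to sketch: a randomized feature map whose inner products approximate the Gaussian kernel, after which a linear ranker stands in for the kernelized one without forming the n x n kernel matrix. Function names and parameters below are illustrative, not the authors' code.

```python
import numpy as np

def rff_map(x, n_features=2000, sigma=1.0, seed=0):
    """Random Fourier feature map z(.) such that z(x) . z(y)
    approximates the Gaussian kernel exp(-||x - y||^2 / (2 sigma^2))
    (the Rahimi-Recht construction)."""
    rng = np.random.default_rng(seed)
    d = x.shape[1]
    w = rng.normal(0.0, 1.0 / sigma, size=(d, n_features))   # spectral samples
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)       # random phases
    return np.sqrt(2.0 / n_features) * np.cos(x @ w + b)
```

The approximation error shrinks like one over the square root of the number of features, so a few thousand features typically suffice before training the linear ranking model on the mapped data.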

  13. Ranking Support Vector Machine with Kernel Approximation

    Directory of Open Access Journals (Sweden)

    Kai Chen

    2017-01-01

Full Text Available Learning-to-rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, computational biology, and other fields. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been widely used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, learning methods for nonlinear RankSVM remain time-consuming because of the cost of computing the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation that avoids computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. The primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method trains much faster than kernel RankSVM while achieving comparable or better performance than state-of-the-art ranking algorithms.

  14. Low-energy electron dose-point kernel simulations using new physics models implemented in Geant4-DNA

    Energy Technology Data Exchange (ETDEWEB)

    Bordes, Julien, E-mail: julien.bordes@inserm.fr [CRCT, UMR 1037 INSERM, Université Paul Sabatier, F-31037 Toulouse (France); UMR 1037, CRCT, Université Toulouse III-Paul Sabatier, F-31037 (France); Incerti, Sébastien, E-mail: incerti@cenbg.in2p3.fr [Université de Bordeaux, CENBG, UMR 5797, F-33170 Gradignan (France); CNRS, IN2P3, CENBG, UMR 5797, F-33170 Gradignan (France); Lampe, Nathanael, E-mail: nathanael.lampe@gmail.com [Université de Bordeaux, CENBG, UMR 5797, F-33170 Gradignan (France); CNRS, IN2P3, CENBG, UMR 5797, F-33170 Gradignan (France); Bardiès, Manuel, E-mail: manuel.bardies@inserm.fr [CRCT, UMR 1037 INSERM, Université Paul Sabatier, F-31037 Toulouse (France); UMR 1037, CRCT, Université Toulouse III-Paul Sabatier, F-31037 (France); Bordage, Marie-Claude, E-mail: marie-claude.bordage@inserm.fr [CRCT, UMR 1037 INSERM, Université Paul Sabatier, F-31037 Toulouse (France); UMR 1037, CRCT, Université Toulouse III-Paul Sabatier, F-31037 (France)

    2017-05-01

When low-energy electrons, such as Auger electrons, interact with liquid water, they induce highly localized ionizing energy depositions over ranges comparable to cell diameters. Monte Carlo track structure (MCTS) codes are suitable tools for performing dosimetry at this level. One of the main MCTS codes, Geant4-DNA, is equipped with only two sets of cross section models for low-energy electron interactions in liquid water (“option 2” and its improved version, “option 4”). To provide Geant4-DNA users with new alternative physics models, a set of cross sections extracted from the CPA100 MCTS code has been added to Geant4-DNA. This new version is hereafter referred to as “Geant4-DNA-CPA100”. In this study, “Geant4-DNA-CPA100” was used to calculate low-energy electron dose-point kernels (DPKs) between 1 keV and 200 keV. Such kernels represent the radial energy deposited by an isotropic point source, a quantity that is useful for dosimetry calculations in nuclear medicine. In order to assess the influence of different physics models on DPK calculations, DPKs were calculated for monoenergetic electrons using the existing Geant4-DNA models (“option 2” and “option 4”), the newly integrated CPA100 models, and the PENELOPE Monte Carlo code used in step-by-step mode. Additionally, two sets of DPKs simulated with “Geant4-DNA-CPA100” were compared – the first set using Geant4's default settings, and the second using CPA100's original default settings. A maximum difference of 9.4% was found between the Geant4-DNA-CPA100 and PENELOPE DPKs. Between the two existing Geant4-DNA models, slight differences between 1 keV and 10 keV were observed. The DPKs simulated with the two existing Geant4-DNA models were always broader than those generated with “Geant4-DNA-CPA100”. The discrepancies observed between the DPKs generated using Geant4-DNA's existing models and “Geant4-DNA-CPA100” were

  15. Rare variant testing across methods and thresholds using the multi-kernel sequence kernel association test (MK-SKAT).

    Science.gov (United States)

    Urrutia, Eugene; Lee, Seunggeun; Maity, Arnab; Zhao, Ni; Shen, Judong; Li, Yun; Wu, Michael C

Analysis of rare genetic variants has focused on region-based analysis wherein a subset of the variants within a genomic region is tested for association with a complex trait. Two important practical challenges have emerged. First, it is difficult to choose which test to use. Second, it is unclear which group of variants within a region should be tested. Both choices depend on the unknown true state of nature. Therefore, we develop the Multi-Kernel SKAT (MK-SKAT), which tests across a range of rare variant tests and groupings. Specifically, we demonstrate that several popular rare variant tests are special cases of the sequence kernel association test, which compares pairwise similarity in trait value to similarity in the rare variant genotypes between subjects as measured through a kernel function. Choosing a particular test is equivalent to choosing a kernel. Similarly, choosing which group of variants to test also reduces to choosing a kernel. Thus, MK-SKAT uses perturbation to test across a range of kernels. Simulations and real data analyses show that our framework controls type I error while maintaining high power across settings: MK-SKAT loses some power compared to using the best kernel for a particular scenario, but has much greater power than poor kernel choices.
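
    To make the "choosing a test is choosing a kernel" point concrete, the sketch below builds two genotype kernels over a toy rare-variant matrix: an unweighted linear kernel and a Beta(1, 25)-weighted one (a common SKAT-style weighting that up-weights rarer variants). The data and the weighting choice are illustrative assumptions; MK-SKAT's perturbation-based combination of tests is not reproduced here:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Toy genotype matrix: n subjects x p rare variants, minor-allele counts 0/1/2.
    n, p = 100, 20
    G = rng.binomial(2, 0.02, size=(n, p)).astype(float)

    # Unweighted linear kernel: similarity of subjects' rare-variant genotypes.
    K_lin = G @ G.T

    # Beta(1, 25)-weighted linear kernel; the Beta(1, 25) density at minor
    # allele frequency m is 25 * (1 - m)^24, so rarer variants get more weight.
    maf = G.mean(axis=0) / 2.0
    w = 25.0 * (1.0 - maf) ** 24
    K_wt = (G * w) @ (G * w).T  # equals G diag(w)^2 G^T
    ```

    Each choice of weights (or of which variants to include in G) yields a different positive semi-definite kernel, and hence a different member of the family of tests that MK-SKAT searches over.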

  16. Wigner functions defined with Laplace transform kernels.

    Science.gov (United States)

    Oh, Se Baek; Petruccelli, Jonathan C; Tian, Lei; Barbastathis, George

    2011-10-24

We propose a new Wigner-type phase-space function using Laplace transform kernels, the Laplace kernel Wigner function. Whereas momentum variables are real in the traditional Wigner function, the Laplace kernel Wigner function may have complex momentum variables. Due to the properties of the Laplace transform, a broader range of signals can be represented in complex phase space. We show that the Laplace kernel Wigner function exhibits marginal properties similar to those of the traditional Wigner function. As an example, we use the Laplace kernel Wigner function to analyze evanescent waves supported by surface plasmon polaritons. © 2011 Optical Society of America

  17. Metabolic network prediction through pairwise rational kernels.

    Science.gov (United States)

    Roche-Lima, Abiel; Domaratzki, Michael; Fristensky, Brian

    2014-09-26

Metabolic networks are represented by the set of metabolic pathways. Metabolic pathways are a series of biochemical reactions, in which the product (output) of one reaction serves as the substrate (input) to another. Many pathways remain incompletely characterized, and one of the major challenges of computational biology is to obtain better models of metabolic pathways. Existing models depend on gene annotation, so errors accumulate when pathways are predicted from incorrectly annotated genes. Pairwise classification methods are supervised learning methods used to classify new pairs of entities. Some of these classification methods, e.g., pairwise Support Vector Machines (SVMs), use pairwise kernels, which describe similarity measures between two pairs of entities. Using pairwise kernels to handle sequence data requires long processing times and large storage. Rational kernels are kernels based on weighted finite-state transducers that represent similarity measures between sequences or automata. They have been used effectively in problems that handle large amounts of sequence information, such as protein essentiality, natural language processing, and machine translation. We create a new family of pairwise kernels using weighted finite-state transducers (called Pairwise Rational Kernels, PRKs) to predict metabolic pathways from a variety of biological data. PRKs take advantage of the simpler representations and faster algorithms of transducers. Because raw sequence data can be used, the predictor model avoids the errors introduced by incorrect gene annotations. We then developed several experiments with PRKs and pairwise SVMs to validate our methods using the metabolic network of Saccharomyces cerevisiae. As a result, when PRKs are used, our method executes faster in comparison with other pairwise kernels. Also, when we use PRKs combined with other simple kernels that include evolutionary information, the accuracy
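
    As background for how a pairwise kernel is assembled from a per-entity kernel, here is the standard symmetrized tensor-product construction often used with pairwise SVMs. This is a generic sketch with an RBF base kernel on toy vectors; the paper's PRKs instead compute the base similarities with rational kernels (weighted finite-state transducers), which are not reproduced here:

    ```python
    import numpy as np

    # Base kernel between single entities (an RBF, purely for illustration).
    def base_k(x, y, gamma=1.0):
        return float(np.exp(-gamma * np.sum((x - y) ** 2)))

    # Symmetrized tensor-product pairwise kernel:
    #   K((a, b), (c, d)) = k(a, c) k(b, d) + k(a, d) k(b, c)
    def pairwise_k(pair1, pair2):
        (a, b), (c, d) = pair1, pair2
        return base_k(a, c) * base_k(b, d) + base_k(a, d) * base_k(b, c)

    rng = np.random.default_rng(2)
    a, b, c, d = (rng.normal(size=3) for _ in range(4))
    v1 = pairwise_k((a, b), (c, d))
    v2 = pairwise_k((c, d), (a, b))   # symmetric in its two pair arguments
    v3 = pairwise_k((b, a), (c, d))   # and under swapping within a pair
    ```

    The symmetrization makes the kernel independent of the order within each pair, which matches tasks (such as interaction prediction) where a pair (a, b) is the same object as (b, a).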

  18. Briquetting of Palm Kernel Shell | Ugwu | Journal of Applied ...

    African Journals Online (AJOL)

    In several developing countries, briquettes from agricultural residues contribute significantly to the energy mix especially for small scale and household requirements. In this work, briquettes were produced from Palm kernel shell. This was achieved by carbonising the shell to get the charcoal followed by the pulverization of ...

  19. Performance Evaluation of Computation and Communication Kernels of the Fast Multipole Method on Intel Manycore Architecture

    KAUST Repository

    AbdulJabbar, Mustafa Abdulmajeed; Al Farhan, Mohammed; Yokota, Rio; Keyes, David E.

    2017-01-01

Manycore optimizations are essential for achieving performance worthy of anticipated exascale systems. Utilization of manycore chips is inevitable to attain the desired floating point performance of these energy-austere systems. In this work, we revisit ExaFMM, the open source Fast Multipole Method (FMM) library, in light of highly tuned shared-memory parallelization and detailed performance analysis on the new highly parallel Intel manycore architecture, Knights Landing (KNL). We assess scalability and performance gain using task-based parallelism of the FMM tree traversal. We also provide an in-depth analysis of the most computationally intensive part of the traversal kernel (i.e., the particle-to-particle (P2P) kernel), by comparing its performance across KNL and Broadwell architectures. We quantify different configurations that exploit the on-chip 512-bit vector units within different task-based threading paradigms. MPI communication-reducing and NUMA-aware approaches for the FMM’s global tree data exchange are examined with different cluster modes of KNL. By applying several algorithm- and architecture-aware optimizations for FMM, we show that the N-Body kernel on 256 threads of KNL achieves on average 2.8× speedup compared to the non-vectorized version, whereas on 56 threads of Broadwell, it achieves on average 2.9× speedup. In addition, the tree traversal kernel on KNL scales monotonically up to 256 threads with task-based programming models. The MPI-based communication-reducing algorithms show expected improvements of the data locality across the KNL on-chip network.
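
    For readers unfamiliar with the P2P kernel discussed above: it is the direct evaluation of all pairwise near-field interactions, and its data-parallel structure is exactly what wide vector units exploit. The sketch below contrasts a scalar loop with a batched formulation for a softened potential (toy data and softening; this is not ExaFMM code, and numpy batching merely stands in for SIMD vectorization):

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 200
    pos = rng.normal(size=(n, 3))   # particle positions
    q = rng.uniform(size=n)         # charges/masses
    soft = 1e-4                     # softening added to r^2 to avoid singularities

    # Scalar reference P2P: phi_i = sum_{j != i} q_j / sqrt(|r_i - r_j|^2 + soft)
    def p2p_loop(pos, q):
        phi = np.zeros(len(q))
        for i in range(len(q)):
            for j in range(len(q)):
                if i != j:
                    r2 = np.sum((pos[i] - pos[j]) ** 2)
                    phi[i] += q[j] / np.sqrt(r2 + soft)
        return phi

    # Batched P2P: all pairwise interactions at once, the data layout that
    # 512-bit vector units (and GPUs) exploit in tuned kernels.
    def p2p_vec(pos, q):
        d = pos[:, None, :] - pos[None, :, :]
        inv = 1.0 / np.sqrt((d ** 2).sum(-1) + soft)
        np.fill_diagonal(inv, 0.0)  # drop self-interaction
        return inv @ q
    ```

    Both routines compute the same potentials; the speedups reported in the record come from restructuring the batched form for the target architecture's vector width and threading model.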

  20. Performance Evaluation of Computation and Communication Kernels of the Fast Multipole Method on Intel Manycore Architecture

    KAUST Repository

    AbdulJabbar, Mustafa Abdulmajeed

    2017-07-31

Manycore optimizations are essential for achieving performance worthy of anticipated exascale systems. Utilization of manycore chips is inevitable to attain the desired floating point performance of these energy-austere systems. In this work, we revisit ExaFMM, the open source Fast Multipole Method (FMM) library, in light of highly tuned shared-memory parallelization and detailed performance analysis on the new highly parallel Intel manycore architecture, Knights Landing (KNL). We assess scalability and performance gain using task-based parallelism of the FMM tree traversal. We also provide an in-depth analysis of the most computationally intensive part of the traversal kernel (i.e., the particle-to-particle (P2P) kernel), by comparing its performance across KNL and Broadwell architectures. We quantify different configurations that exploit the on-chip 512-bit vector units within different task-based threading paradigms. MPI communication-reducing and NUMA-aware approaches for the FMM’s global tree data exchange are examined with different cluster modes of KNL. By applying several algorithm- and architecture-aware optimizations for FMM, we show that the N-Body kernel on 256 threads of KNL achieves on average 2.8× speedup compared to the non-vectorized version, whereas on 56 threads of Broadwell, it achieves on average 2.9× speedup. In addition, the tree traversal kernel on KNL scales monotonically up to 256 threads with task-based programming models. The MPI-based communication-reducing algorithms show expected improvements of the data locality across the KNL on-chip network.

  1. Thermoelectric behavior of conducting polymers: On the possibility of off-diagonal thermoelectricity

    Energy Technology Data Exchange (ETDEWEB)

    Mateeva, N; Niculescu, H; Schlenoff, J; Testardi, L

    1997-07-01

Non-cubic materials, when structurally aligned, possess sufficient anisotropy to exhibit thermoelectric effects where the electrical and thermal currents are orthogonal (off-diagonal thermoelectricity). The authors discuss the benefits of this form of thermoelectricity for devices and describe a search for suitable properties in the air-stable conducting polymers polyaniline and polypyrrole. They find the simple and general correlation that the logarithm of the electrical conductivity scales linearly with the Seebeck coefficient on doping, but with proportionality in excess of the conventional prediction for thermoelectricity. The correlation is unexpected in its universality and unfavorable for thermoelectric applications. A simple model suggests that mobile charges of both signs exist in these polymers, and this leads to reduced thermoelectric efficiency. They also briefly discuss non-air-stable polyacetylene, where ambipolar transport does not appear to occur, and where properties seem more favorable for thermoelectricity.

  2. Influence Function and Robust Variant of Kernel Canonical Correlation Analysis

    OpenAIRE

    Alam, Md. Ashad; Fukumizu, Kenji; Wang, Yu-Ping

    2017-01-01

    Many unsupervised kernel methods rely on the estimation of the kernel covariance operator (kernel CO) or kernel cross-covariance operator (kernel CCO). Both kernel CO and kernel CCO are sensitive to contaminated data, even when bounded positive definite kernels are used. To the best of our knowledge, there are few well-founded robust kernel methods for statistical unsupervised learning. In addition, while the influence function (IF) of an estimator can characterize its robustness, asymptotic ...

  3. Linked-cluster formulation of electron-hole interaction kernel in real-space representation without using unoccupied states.

    Science.gov (United States)

    Bayne, Michael G; Scher, Jeremy A; Ellis, Benjamin H; Chakraborty, Arindam

    2018-05-21

    Electron-hole or quasiparticle representation plays a central role in describing electronic excitations in many-electron systems. For charge-neutral excitation, the electron-hole interaction kernel is the quantity of interest for calculating important excitation properties such as optical gap, optical spectra, electron-hole recombination and electron-hole binding energies. The electron-hole interaction kernel can be formally derived from the density-density correlation function using both Green's function and TDDFT formalism. The accurate determination of the electron-hole interaction kernel remains a significant challenge for precise calculations of optical properties in the GW+BSE formalism. From the TDDFT perspective, the electron-hole interaction kernel has been viewed as a path to systematic development of frequency-dependent exchange-correlation functionals. Traditional approaches, such as MBPT formalism, use unoccupied states (which are defined with respect to Fermi vacuum) to construct the electron-hole interaction kernel. However, the inclusion of unoccupied states has long been recognized as the leading computational bottleneck that limits the application of this approach for larger finite systems. In this work, an alternative derivation that avoids using unoccupied states to construct the electron-hole interaction kernel is presented. The central idea of this approach is to use explicitly correlated geminal functions for treating electron-electron correlation for both ground and excited state wave functions. Using this ansatz, it is derived using both diagrammatic and algebraic techniques that the electron-hole interaction kernel can be expressed only in terms of linked closed-loop diagrams. It is proved that the cancellation of unlinked diagrams is a consequence of linked-cluster theorem in real-space representation. 
The electron-hole interaction kernel derived in this work was used to calculate excitation energies in many-electron systems and results

  4. The Linux kernel as flexible product-line architecture

    NARCIS (Netherlands)

    M. de Jonge (Merijn)

    2002-01-01

The Linux kernel source tree is huge (> 125 MB) and inflexible (because it is difficult to add new kernel components). We propose to make this architecture more flexible by assembling kernel source trees dynamically from individual kernel components. Users can then select what

  5. Optimizing Multiple Kernel Learning for the Classification of UAV Data

    Directory of Open Access Journals (Sweden)

    Caroline M. Gevaert

    2016-12-01

Full Text Available Unmanned Aerial Vehicles (UAVs) are capable of providing high-quality orthoimagery and 3D information in the form of point clouds at a relatively low cost. Their increasing popularity stresses the necessity of understanding which algorithms are especially suited for processing the data obtained from UAVs. The features that are extracted from the point cloud and imagery have different statistical characteristics and can be considered as heterogeneous, which motivates the use of Multiple Kernel Learning (MKL) for classification problems. In this paper, we illustrate the utility of applying MKL for the classification of heterogeneous features obtained from UAV data through a case study of an informal settlement in Kigali, Rwanda. Results indicate that MKL can achieve a classification accuracy of 90.6%, a 5.2% increase over a standard single-kernel Support Vector Machine (SVM). A comparison of seven MKL methods indicates that linearly-weighted kernel combinations based on simple heuristics are competitive with respect to computationally-complex, non-linear kernel combination methods. We further underline the importance of utilizing appropriate feature grouping strategies for MKL, which has not been directly addressed in the literature, and we propose a novel, automated feature grouping method that achieves a high classification accuracy for various MKL methods.
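
    A linearly-weighted kernel combination of the kind the record finds competitive can be sketched as follows. The kernel-target alignment weighting used here is one simple, well-known heuristic, chosen for illustration; the feature groups, labels, and parameters are toy assumptions, not the paper's data or its specific heuristics:

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Two heterogeneous feature groups (e.g., image-based vs. point-cloud-based).
    X1 = rng.normal(size=(60, 4))
    X2 = rng.normal(size=(60, 6))
    y = np.sign(X1[:, 0] + 0.5 * rng.normal(size=60))  # toy labels in {-1, +1}

    def rbf(X, gamma=0.5):
        sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * sq)

    K1, K2 = rbf(X1), rbf(X2)

    # Kernel-target alignment A(K, yy^T) = <K, yy^T>_F / (||K||_F ||yy^T||_F):
    # a cheap heuristic score for how well a kernel matches the labels.
    def alignment(K, y):
        Y = np.outer(y, y)
        return (K * Y).sum() / (np.linalg.norm(K) * np.linalg.norm(Y))

    # Clip at a small floor so weights stay positive, then normalize.
    w = np.maximum([alignment(K1, y), alignment(K2, y)], 1e-3)
    w = w / w.sum()

    # The linearly-weighted combination is itself a valid (PSD) kernel and can
    # be passed to any kernel classifier, e.g. an SVM with a precomputed kernel.
    K_mkl = w[0] * K1 + w[1] * K2
    ```

    Because a nonnegative combination of positive semi-definite kernels is again positive semi-definite, the combined matrix is a legitimate kernel regardless of how the weights are chosen.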

  6. Exploiting graph kernels for high performance biomedical relation extraction.

    Science.gov (United States)

    Panyam, Nagesh C; Verspoor, Karin; Cohn, Trevor; Ramamohanarao, Kotagiri

    2018-01-30

Relation extraction from biomedical publications is an important task in the area of semantic mining of text. Kernel methods for supervised relation extraction are often preferred over manual feature engineering methods when classifying highly ordered structures such as trees and graphs obtained from syntactic parsing of a sentence. Tree kernels such as the Subset Tree Kernel and Partial Tree Kernel have been shown to be effective for classifying constituency parse trees and basic dependency parse graphs of a sentence. Graph kernels such as the All Path Graph kernel (APG) and Approximate Subgraph Matching (ASM) kernel have been shown to be suitable for classifying general graphs with cycles, such as the enhanced dependency parse graph of a sentence. In this work, we present a high performance Chemical-Induced Disease (CID) relation extraction system. We present a comparative study of kernel methods for the CID task and also extend our study to the Protein-Protein Interaction (PPI) extraction task, an important biomedical relation extraction task. We discuss novel modifications to the ASM kernel to boost its performance and a method to apply graph kernels for extracting relations expressed in multiple sentences. Our system for CID relation extraction attains an F-score of 60%, without using external knowledge sources or task-specific heuristics or rules. In comparison, the state-of-the-art Chemical-Disease Relation Extraction system achieves an F-score of 56% using an ensemble of multiple machine learning methods, which is then boosted to 61% with a rule-based system employing task-specific post-processing rules. For the CID task, graph kernels outperform tree kernels substantially, and the best performance is obtained with the APG kernel, which attains an F-score of 60%, followed by the ASM kernel at 57%. The performance difference between the ASM and APG kernels for CID sentence level relation extraction is not significant. 
In our evaluation of ASM for the PPI task, ASM

  7. Off-diagonal Bethe ansatz solution of the XXX spin chain with arbitrary boundary conditions

    Energy Technology Data Exchange (ETDEWEB)

    Cao, Junpeng [Beijing National Laboratory for Condensed Matter Physics, Institute of Physics, Chinese Academy of Sciences, Beijing 100190 (China); Yang, Wen-Li, E-mail: wlyang@nwu.edu.cn [Institute of Modern Physics, Northwest University, Xian 710069 (China); Shi, Kangjie [Institute of Modern Physics, Northwest University, Xian 710069 (China); Wang, Yupeng, E-mail: yupeng@iphy.ac.cn [Beijing National Laboratory for Condensed Matter Physics, Institute of Physics, Chinese Academy of Sciences, Beijing 100190 (China)

    2013-10-01

    Employing the off-diagonal Bethe ansatz method proposed recently by the present authors, we exactly diagonalize the XXX spin chain with arbitrary boundary fields. By constructing a functional relation between the eigenvalues of the transfer matrix and the quantum determinant, the associated T–Q relation and the Bethe ansatz equations are derived.

  8. Off-diagonal Bethe ansatz solution of the XXX spin chain with arbitrary boundary conditions

    International Nuclear Information System (INIS)

    Cao, Junpeng; Yang, Wen-Li; Shi, Kangjie; Wang, Yupeng

    2013-01-01

    Employing the off-diagonal Bethe ansatz method proposed recently by the present authors, we exactly diagonalize the XXX spin chain with arbitrary boundary fields. By constructing a functional relation between the eigenvalues of the transfer matrix and the quantum determinant, the associated T–Q relation and the Bethe ansatz equations are derived

  9. Support vector machine with a Pearson VII function kernel for discriminating halophilic and non-halophilic proteins.

    Science.gov (United States)

    Zhang, Guangya; Ge, Huihua

    2013-10-01

Understanding and identifying proteins adapted to hypersaline environments is a challenging task that would help in designing stable proteins. Here, we have systematically analyzed the normalized amino acid compositions of 2121 halophilic and 2400 non-halophilic proteins. The results showed that halophilic proteins contained more Asp at the expense of Lys, Ile, Cys and Met, had fewer small and hydrophobic residues, and showed a large excess of acidic over basic amino acids. We then introduce a support vector machine method to discriminate halophilic from non-halophilic proteins, using a novel kernel based on the Pearson VII universal function. In the three validation checks, it achieved overall accuracies of 97.7%, 91.7% and 86.9%, outperforming other machine learning algorithms. We also address the influence of protein size on prediction accuracy and found that the worse performance for small proteins might be because some significant residues (Cys and Lys) were missing from them. Copyright © 2013 The Authors. Published by Elsevier Ltd. All rights reserved.
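
    For reference, the Pearson VII universal kernel (PUK) is commonly written in the form below (following Üstün et al.); whether this exact parameterization matches the paper's implementation is an assumption. Parameter values here are illustrative:

    ```python
    import numpy as np

    # Pearson VII universal kernel (PUK):
    #   K(x, z) = 1 / (1 + (2 * ||x - z|| * sqrt(2^(1/omega) - 1) / sigma)^2)^omega
    # sigma is a half-width; omega shapes the tails (omega = 1 is Lorentzian,
    # large omega approaches a Gaussian).
    def puk(x, z, sigma=1.0, omega=1.0):
        d = np.linalg.norm(np.asarray(x) - np.asarray(z))
        return (1.0 + (2.0 * d * np.sqrt(2.0 ** (1.0 / omega) - 1.0) / sigma) ** 2) ** (-omega)

    rng = np.random.default_rng(5)
    x, z = rng.normal(size=4), rng.normal(size=4)
    k_xx = puk(x, x)   # exactly 1 at zero distance
    k_xz = puk(x, z)
    k_zx = puk(z, x)   # symmetric in its arguments
    ```

    In practice such a kernel can be supplied to an SVM as a precomputed kernel matrix; the single functional form covers a family of peak shapes, which is the "universal" aspect exploited in the paper.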

  10. Strength decoupling from the giant dipole resonance upon diagonalizing a Gaussian force and a delta-force on a particle-hole basis

    International Nuclear Information System (INIS)

    Csernai, L.P.; Zimanyi, J.; Gyarmati, B.; Lovas, R.G.

    1978-01-01

The finite-range Gaussian force and delta-force have been diagonalized in a basis of 27 particle-hole states with J^π = 1^- in ^116Sn. Depending on the range of the force, 3.9-7.1% of the total transition rate has been found in the 6-9 MeV excitation energy region, which comprises the unperturbed energies of the basis states containing neutron threshold states. (Auth.)

  11. GRIM : Leveraging GPUs for Kernel integrity monitoring

    NARCIS (Netherlands)

    Koromilas, Lazaros; Vasiliadis, Giorgos; Athanasopoulos, Ilias; Ioannidis, Sotiris

    2016-01-01

    Kernel rootkits can exploit an operating system and enable future accessibility and control, despite all recent advances in software protection. A promising defense mechanism against rootkits is Kernel Integrity Monitor (KIM) systems, which inspect the kernel text and data to discover any malicious

  12. 7 CFR 51.2296 - Three-fourths half kernel.

    Science.gov (United States)

    2010-01-01

7 CFR 51.2296 (2010), Regulations of the Department of Agriculture, Agricultural Marketing Service (Standards): Three-fourths half kernel. Three-fourths half kernel means a portion of a half of a kernel which has more than...

  13. A scatter model for fast neutron beams using convolution of diffusion kernels

    International Nuclear Information System (INIS)

    Moyers, M.F.; Horton, J.L.; Boyer, A.L.

    1988-01-01

    A new model is proposed to calculate dose distributions in materials irradiated with fast neutron beams. Scattered neutrons are transported away from the point of production within the irradiated material in the forward, lateral and backward directions, while recoil protons are transported in the forward and lateral directions. The calculation of dose distributions, such as for radiotherapy planning, is accomplished by convolving a primary attenuation distribution with a diffusion kernel. The primary attenuation distribution may be quickly calculated for any given set of beam and material conditions as it describes only the magnitude and distribution of first interaction sites. The calculation of energy diffusion kernels is very time consuming but must be calculated only once for a given energy. Energy diffusion distributions shown in this paper have been calculated using a Monte Carlo type of program. To decrease beam calculation time, convolutions are performed using a Fast Fourier Transform technique. (author)
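
    The convolution step described above, with the FFT used to make it fast, can be illustrated in two dimensions. The attenuation profile and the Gaussian stand-in for the Monte Carlo generated diffusion kernel are toy assumptions chosen only to show the mechanics:

    ```python
    import numpy as np

    nx = 64
    x = np.arange(nx)

    # Toy primary attenuation distribution: exponential fall-off with depth,
    # uniform across the field (first-interaction sites only).
    primary = np.exp(-0.05 * x)[:, None] * np.ones((1, nx))

    # Toy energy-diffusion kernel: a normalized Gaussian spread standing in
    # for the Monte Carlo generated kernel described in the record.
    xx, yy = np.meshgrid(x - nx // 2, x - nx // 2, indexing="ij")
    kernel = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * 3.0 ** 2))
    kernel /= kernel.sum()

    # Dose = primary (*) kernel via the FFT convolution theorem
    # (circular convolution; ifftshift moves the kernel peak to the origin).
    dose = np.real(np.fft.ifft2(np.fft.fft2(primary) * np.fft.fft2(np.fft.ifftshift(kernel))))
    ```

    Because the kernel is normalized, the convolution redistributes the deposited energy spatially without changing its total, which is easy to verify numerically and mirrors why the kernel needs to be computed only once per energy.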

  14. Examining Potential Boundary Bias Effects in Kernel Smoothing on Equating: An Introduction for the Adaptive and Epanechnikov Kernels.

    Science.gov (United States)

    Cid, Jaime A; von Davier, Alina A

    2015-05-01

    Test equating is a method of making the test scores from different test forms of the same assessment comparable. In the equating process, an important step involves continuizing the discrete score distributions. In traditional observed-score equating, this step is achieved using linear interpolation (or an unscaled uniform kernel). In the kernel equating (KE) process, this continuization process involves Gaussian kernel smoothing. It has been suggested that the choice of bandwidth in kernel smoothing controls the trade-off between variance and bias. In the literature on estimating density functions using kernels, it has also been suggested that the weight of the kernel depends on the sample size, and therefore, the resulting continuous distribution exhibits bias at the endpoints, where the samples are usually smaller. The purpose of this article is (a) to explore the potential effects of atypical scores (spikes) at the extreme ends (high and low) on the KE method in distributions with different degrees of asymmetry using the randomly equivalent groups equating design (Study I), and (b) to introduce the Epanechnikov and adaptive kernels as potential alternative approaches to reducing boundary bias in smoothing (Study II). The beta-binomial model is used to simulate observed scores reflecting a range of different skewed shapes.
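
    The continuization step discussed above can be sketched for a toy discrete score distribution: Gaussian kernel smoothing as in standard KE, and the compact-support Epanechnikov kernel proposed as a boundary-bias-reducing alternative. The score range, probabilities, and bandwidth are illustrative assumptions, and the adaptive-bandwidth variant is not shown:

    ```python
    import numpy as np

    scores = np.arange(0, 11)        # discrete score points 0..10 (toy)
    p = np.full(11, 1.0 / 11.0)      # score probabilities (toy; uniform)
    h = 0.6                          # smoothing bandwidth

    # Gaussian-kernel continuization (the KE step): a continuous density
    # f(x) = sum_j p_j * N(x; score_j, h^2).
    def gaussian_density(x):
        k = np.exp(-((x - scores) ** 2) / (2.0 * h ** 2)) / (h * np.sqrt(2.0 * np.pi))
        return float((p * k).sum())

    # Epanechnikov alternative: kernel 0.75 * (1 - u^2) on |u| <= 1 has compact
    # support, so less probability mass leaks past the endpoints of the score
    # range (the boundary-bias concern raised in the record).
    def epanechnikov_density(x):
        u = (x - scores) / h
        k = np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0) / h
        return float((p * k).sum())
    ```

    Both functions integrate to one; the practical difference is how much density each places beyond the lowest and highest attainable scores, which is where spikes at the extremes interact with the choice of kernel.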

  15. Reciprocity relation for multichannel coupling kernels

    International Nuclear Information System (INIS)

    Cotanch, S.R.; Satchler, G.R.

    1981-01-01

    Assuming time-reversal invariance of the many-body Hamiltonian, it is proven that the kernels in a general coupled-channels formulation are symmetric, to within a specified spin-dependent phase, under the interchange of channel labels and coordinates. The theorem is valid for both Hermitian and suitably chosen non-Hermitian Hamiltonians which contain complex effective interactions. While of direct practical consequence for nuclear rearrangement reactions, the reciprocity relation is also appropriate for other areas of physics which involve coupled-channels analysis

  16. Adaptive Kernel in Meshsize Boosting Algorithm in KDE ...

    African Journals Online (AJOL)

    This paper proposes the use of adaptive kernel in a meshsize boosting algorithm in kernel density estimation. The algorithm is a bias reduction scheme like other existing schemes but uses adaptive kernel instead of the regular fixed kernels. An empirical study for this scheme is conducted and the findings are comparatively ...
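
    The adaptive-kernel ingredient mentioned in this record can be sketched with an Abramson-style variable bandwidth: a fixed-bandwidth pilot estimate sets per-sample bandwidths that widen in the tails. This shows only the adaptive kernel, not the paper's meshsize boosting scheme, and the data and rule-of-thumb bandwidth are illustrative assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(6)
    data = rng.normal(size=200)

    # Fixed (pilot) bandwidth via Silverman's rule of thumb.
    h0 = 1.06 * data.std() * len(data) ** (-1.0 / 5.0)

    def kde(x, centers, bw):
        # bw may be a scalar (fixed kernel) or a per-sample array (adaptive kernel)
        u = (x - centers) / bw
        return float(np.mean(np.exp(-0.5 * u ** 2) / (bw * np.sqrt(2.0 * np.pi))))

    # Abramson-style adaptive bandwidths: wider kernels where the pilot density
    # is low (tails), narrower where it is high (modes).
    pilot = np.array([kde(xi, data, h0) for xi in data])
    g = np.exp(np.mean(np.log(pilot)))           # geometric mean of pilot values
    h_adaptive = h0 * np.sqrt(g / pilot)

    f_fixed = kde(0.0, data, h0)
    f_adaptive = kde(0.0, data, h_adaptive)
    ```

    The adaptive estimate remains a proper density (it integrates to one) while reducing the bias that a single global bandwidth incurs in regions of very different data density.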

  17. A kernel for open source drug discovery in tropical diseases.

    Science.gov (United States)

    Ortí, Leticia; Carbajo, Rodrigo J; Pieper, Ursula; Eswar, Narayanan; Maurer, Stephen M; Rai, Arti K; Taylor, Ginger; Todd, Matthew H; Pineda-Lucena, Antonio; Sali, Andrej; Marti-Renom, Marc A

    2009-01-01

Conventional patent-based drug development incentives work badly for the developing world, where commercial markets are usually small to non-existent. For this reason, the past decade has seen extensive experimentation with alternative R&D institutions ranging from private-public partnerships to development prizes. Despite extensive discussion, however, one of the most promising avenues, open source drug discovery, has remained elusive. We argue that the stumbling block has been the absence of a critical mass of preexisting work that volunteers can improve through a series of granular contributions. Historically, open source software collaborations have almost never succeeded without such "kernels". Here, we use a computational pipeline for: (i) comparative structure modeling of target proteins, (ii) predicting the localization of ligand binding sites on their surfaces, and (iii) assessing the similarity of the predicted ligands to known drugs. Our kernel currently contains 143 and 297 protein targets from ten pathogen genomes that are predicted to bind a known drug or a molecule similar to a known drug, respectively. The kernel provides a source of potential drug targets and drug candidates around which an online open source community can nucleate. Using NMR spectroscopy, we have experimentally tested our predictions for two of these targets, confirming one and invalidating the other. The TDI kernel, which is being offered under the Creative Commons attribution share-alike license for free and unrestricted use, can be accessed on the World Wide Web at http://www.tropicaldisease.org. We hope that the kernel will facilitate collaborative efforts towards the discovery of new drugs against parasites that cause tropical diseases.

  18. A kernel version of multivariate alteration detection

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Vestergaard, Jacob Schack

    2013-01-01

    Based on the established methods kernel canonical correlation analysis and multivariate alteration detection we introduce a kernel version of multivariate alteration detection (kMAD). A case study with SPOT HRV data shows that the kMAD variates focus on extreme change observations.

  19. INVESTIGATION OF THE EFFECTS OF DIFFERENT EDGE JOINT ELEMENTS ON DIAGONAL TENSILE STRENGTH IN FURNITURE EDGE JOINTS

    Directory of Open Access Journals (Sweden)

    Arif GÜRAY

    2002-01-01

    In this work, the diagonal tensile strength of furniture edge joints made with wooden dowels, minifix fittings, and alyan screws was investigated in panel-constructed boards of Suntalam and MDF Lam. For this purpose, a diagonal tensile strength test was applied to 72 samples. According to the results, the maximum diagonal tensile strength was found in MDF Lam boards joined with alyan screws.

  20. Exact diagonalization library for quantum electron models

    Science.gov (United States)

    Iskakov, Sergei; Danilov, Michael

    2018-04-01

    We present an exact diagonalization C++ template library (EDLib) for solving quantum electron models, including the single-band finite Hubbard cluster and the multi-orbital impurity Anderson model. The observables that can be computed using EDLib are single particle Green's functions and spin-spin correlation functions. This code provides three different types of Hamiltonian matrix storage that can be chosen based on the model.
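    EDLib itself is a C++ template library; as a minimal illustration of the exact-diagonalization technique itself (not of EDLib's API), the following sketch builds the dense Hamiltonian of a small spin-1/2 Heisenberg chain and diagonalizes it. The model choice and all names are ours.

```python
import numpy as np

# Spin-1/2 operators
sx = 0.5 * np.array([[0, 1], [1, 0]], dtype=complex)
sy = 0.5 * np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = 0.5 * np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2, dtype=complex)

def site_op(op, site, n):
    """Embed a single-site operator at position `site` in an n-site chain."""
    ops = [id2] * n
    ops[site] = op
    out = ops[0]
    for o in ops[1:]:
        out = np.kron(out, o)
    return out

def heisenberg_chain(n):
    """Dense Hamiltonian H = sum_i S_i . S_{i+1} for an open chain."""
    dim = 2 ** n
    h = np.zeros((dim, dim), dtype=complex)
    for i in range(n - 1):
        for op in (sx, sy, sz):
            h += site_op(op, i, n) @ site_op(op, i + 1, n)
    return h

h = heisenberg_chain(2)
energies = np.linalg.eigvalsh(h)   # exact spectrum of the cluster
# Two-site chain: singlet at -3/4, triplet at +1/4
print(energies)
```

For realistic Hubbard or Anderson models the Hilbert space grows exponentially, which is why libraries like EDLib offer several sparse storage schemes instead of the dense matrix used here.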

  1. Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.

    Science.gov (United States)

    Kwak, Nojun

    2016-05-20

    Recently, the nonlinear projection trick (NPT) was introduced, enabling direct computation of the coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so-called kernel trick. However, NPT is inherently difficult to implement incrementally because an ever-growing kernel matrix must be maintained as additional training samples are introduced. In this paper, an incremental version of the NPT (INPT) is proposed based on the observation that the centering step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can directly be used in any incremental method to implement its kernel version. The effectiveness of the INPT is shown by applying it to implement incremental versions of kernel methods such as kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis, which are applied to problems of kernel matrix reconstruction, letter classification, and face image retrieval, respectively.
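    The basic (batch, non-incremental) projection trick can be sketched as follows: given a positive semi-definite kernel matrix K, an eigendecomposition yields explicit coordinates Y with YYᵀ = K. This is only an illustrative sketch of the underlying idea, assuming a precomputed Gram matrix; the incremental bookkeeping that defines INPT is omitted, and all names are ours.

```python
import numpy as np

def npt_coordinates(K, tol=1e-10):
    """Explicit feature-space coordinates Y with Y @ Y.T == K,
    via the eigendecomposition of a PSD kernel matrix (basic NPT idea)."""
    w, U = np.linalg.eigh(K)
    keep = w > tol                      # discard numerically zero directions
    return U[:, keep] * np.sqrt(w[keep])

# Toy RBF kernel matrix for 1-D samples
x = np.array([0.0, 0.5, 1.0, 2.0])
K = np.exp(-(x[:, None] - x[None, :]) ** 2)
Y = npt_coordinates(K)
# Inner products of the recovered coordinates reproduce the kernel matrix
print(np.max(np.abs(Y @ Y.T - K)))
```

Any linear algorithm can then be run directly on the rows of Y, which is exactly what makes a "kernel version without the kernel trick" possible.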

  2. Optimization of palm kernel shell torrefaction to produce energy densified bio-coal

    International Nuclear Information System (INIS)

    Asadullah, Mohammad; Adi, Ag Mohammad; Suhada, Nurul; Malek, Nur Hanina; Saringat, Muhammad Ilmam; Azdarpour, Amin

    2014-01-01

    Highlights: • Around 70% bio-coal yield was achieved from PKS torrefaction at 300 °C. • The higher heating value of the optimized bio-coal was 24.5 MJ/kg. • Around 94% thermal yield was achieved with 70% mass yield. • The grindability of the optimized bio-coal was comparable with coal. - Abstract: Biomass torrefaction is a thermal process, similar to a mild form of pyrolysis, at temperatures ranging from 200 to 320 °C to produce energy-densified solid fuel. The torrefied biomass is almost equivalent to coal and is termed bio-coal. During torrefaction, the highly volatile fraction of the biomass, including moisture and hemicellulose, is released as vapors, providing an energy-enriched solid fuel which is hydrophobic and brittle. In this study, bio-coal is produced from palm kernel shell (PKS) in a batch-fed reactor. The operating variables temperature, residence time and sweeping gas flow rate are optimized. Around 73% yield of bio-coal with a calorific value of 24.5 MJ/kg was achieved at the optimum temperature of 300 °C with a residence time of 20 min and a nitrogen gas flow rate of 300 mL/min. The thermal yield was calculated to be a maximum of 94% for the bio-coal produced at 300 °C. The temperature and residence time of torrefaction are found to be the most sensitive parameters in terms of product yield, calorific value and thermal yield of the bio-coal.
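    The reported thermal (energy) yield follows from the mass yield times the ratio of heating values. A quick consistency check of the abstract's numbers, assuming a raw-PKS higher heating value of about 18.2 MJ/kg (our assumption; the feed HHV is not stated in the abstract):

```python
def energy_yield(mass_yield, hhv_product, hhv_feed):
    """Thermal (energy) yield = mass yield * HHV_product / HHV_feed."""
    return mass_yield * hhv_product / hhv_feed

# Mass yield and product HHV from the abstract; feed HHV of ~18.2 MJ/kg is assumed.
y = energy_yield(mass_yield=0.70, hhv_product=24.5, hhv_feed=18.2)
print(round(y, 3))  # ~0.94, consistent with the reported thermal yield
```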

  3. Localization for off-diagonal disorder and for continuous Schroedinger operators

    International Nuclear Information System (INIS)

    Delyon, F.; Souillard, B.; Simon, B.

    1987-01-01

    We extend the proof of localization by Delyon, Levy, and Souillard to accommodate the Anderson model with off-diagonal disorder and the continuous Schroedinger equation with a random potential. (orig.)

  4. A Stochastic Proof of the Resonant Scattering Kernel and its Applications for Gen IV Reactors Type

    International Nuclear Information System (INIS)

    Becker, B.; Dagan, R.; Broeders, C.H.M.; Lohnert, G.

    2008-01-01

    Monte Carlo codes such as MCNP are widely accepted as a near-reference standard for reactor analysis. A Monte Carlo code should therefore use as few approximations as possible in order to produce 'experimental-level' calculations. In this study we deal with one of the most problematic approximations made in MCNP, in which the resonances are ignored in the secondary neutron energy distribution, namely the change of the energy and angular direction of the neutron after interaction with a heavy isotope with pronounced resonances. Efforts to account for the influence of the resonances on the scattering kernel go back to 1944, when E. Wigner and J. Wilkins developed the first temperature-dependent scattering kernel. However, only in 1998 was the full analytical solution for the doubly differential, resonance-dependent scattering kernel suggested by W. Rothenstein and R. Dagan. An independent stochastic approach is presented for the first time to confirm the above analytical kernel with a completely different methodology. Moreover, by manipulating in a subtle manner the scattering subroutine COLIDN of MCNP, it is shown that this very subroutine, as well as the relevant explanation in the MCNP manual, is to some extent inappropriate. The impact of this improved resonance-dependent scattering kernel on diverse types of reactors, in particular for the Generation IV innovative HTR core design, is shown to be significant. (authors)

  5. Non conventional energy sources and energy conservation

    International Nuclear Information System (INIS)

    Bueno M, F.

    1995-01-01

    Geographically speaking, Mexico is in an enviable position. Sun, water, biomass and geothermal fields, the main non-conventional energy sources with commercial applications, are present and in some cases plentiful in the national territory, as is the coastal tidal power that is still at the research stage in several countries. Non-conventional energy sources are an alternative that allows us to reduce the consumption of hydrocarbons or other primary energy sources; they are not in themselves energy-conservation measures, but energy replacements. At the beginning of this year, CONAE created the Directorate of Non-conventional Energy Sources, whose main objective is to promote and drive programs aimed at the application of systems based on renewable energy sources. The research centers represent technological and consultative support for CONAE; they have an infrastructure developed over several years of continuous work. Non-conventional energy sources will become a reality once their cost is equal to or lower than that of traditional generating systems. CONAE (National Commission for Energy Conservation). (Author)

  6. Kernel learning at the first level of inference.

    Science.gov (United States)

    Cawley, Gavin C; Talbot, Nicola L C

    2014-05-01

    Kernel learning methods, whether Bayesian or frequentist, typically involve multiple levels of inference, with the coefficients of the kernel expansion being determined at the first level and the kernel and regularisation parameters carefully tuned at the second level, a process known as model selection. Model selection for kernel machines is commonly performed via optimisation of a suitable model selection criterion, often based on cross-validation or theoretical performance bounds. However, if there are a large number of kernel parameters, as for instance in the case of automatic relevance determination (ARD), there is a substantial risk of over-fitting the model selection criterion, resulting in poor generalisation performance. In this paper we investigate the possibility of learning the kernel, for the Least-Squares Support Vector Machine (LS-SVM) classifier, at the first level of inference, i.e. parameter optimisation. The kernel parameters and the coefficients of the kernel expansion are jointly optimised at the first level of inference, minimising a training criterion with an additional regularisation term acting on the kernel parameters. The key advantage of this approach is that the values of only two regularisation parameters need be determined in model selection, substantially alleviating the problem of over-fitting the model selection criterion. The benefits of this approach are demonstrated using a suite of synthetic and real-world binary classification benchmark problems, where kernel learning at the first level of inference is shown to be statistically superior to the conventional approach, improves on our previous work (Cawley and Talbot, 2007) and is competitive with Multiple Kernel Learning approaches, but with reduced computational expense. Copyright © 2014 Elsevier Ltd. All rights reserved.
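    For reference, LS-SVM training at fixed kernel parameters reduces to a linear system in the expansion coefficients. The sketch below fits a bias-free RBF LS-SVM on a toy problem; first-level kernel learning as described above would additionally optimize the kernel width jointly with the coefficients, whereas here it is simply fixed. The bias-free simplification and all names are ours.

```python
import numpy as np

def rbf(X, Z, sigma=1.0):
    """Gaussian RBF kernel matrix between row-sample matrices X and Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=1e6, sigma=1.0):
    """Bias-free LS-SVM: solve (K + I/gamma) alpha = y."""
    K = rbf(X, X, sigma)
    return np.linalg.solve(K + np.eye(len(X)) / gamma, y)

def lssvm_predict(X_train, alpha, X_new, sigma=1.0):
    return rbf(X_new, X_train, sigma) @ alpha

# XOR-like toy data
X = np.array([[0., 0.], [1., 1.], [0., 1.], [1., 0.]])
y = np.array([1., 1., -1., -1.])
alpha = lssvm_fit(X, y)
preds = lssvm_predict(X, alpha, X)
print(preds)  # close to the training labels
```

Because the whole fit is a differentiable function of sigma, a training criterion with a regularisation term on the kernel parameters can in principle be minimised over sigma directly, which is the spirit of first-level kernel learning.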

  7. Global Polynomial Kernel Hazard Estimation

    DEFF Research Database (Denmark)

    Hiabu, Munir; Miranda, Maria Dolores Martínez; Nielsen, Jens Perch

    2015-01-01

    This paper introduces a new bias reducing method for kernel hazard estimation. The method is called global polynomial adjustment (GPA). It is a global correction which is applicable to any kernel hazard estimator. The estimator works well from a theoretical point of view as it asymptotically redu...

  8. Analysis of Drude model using fractional derivatives without singular kernels

    Directory of Open Access Journals (Sweden)

    Jiménez Leonardo Martínez

    2017-11-01

    We report a study exploring the fractional Drude model in the time domain, using fractional derivatives without singular kernels, namely Caputo-Fabrizio (CF), and fractional derivatives with a stretched Mittag-Leffler function. It is shown that the velocity and current density of electrons moving through a metal depend on both the time and the fractional order 0 < γ ≤ 1. Due to the non-singular fractional kernels, it is possible to consider complete memory effects in the model, which appear neither in the ordinary model nor in the fractional Drude model with the Caputo fractional derivative. A comparison is also made between these two representations of the fractional derivatives, resulting in a considerable difference when γ < 0.8.
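    As a hedged numerical illustration of the non-singular kernel, taking the common normalization M(γ) = 1, the Caputo-Fabrizio derivative D^γ f(t) = (1/(1-γ)) ∫₀ᵗ f'(s) exp(-γ(t-s)/(1-γ)) ds can be evaluated by simple quadrature. For f(t) = t this has the closed form (1/γ)(1 - exp(-γt/(1-γ))), which the sketch below (names ours) uses as a check.

```python
import numpy as np

def cf_derivative(fprime, t, gamma, n=20001):
    """Caputo-Fabrizio derivative of order 0 < gamma < 1 at time t,
    with normalization M(gamma) = 1, evaluated by the trapezoidal rule:
    D^gamma f(t) = 1/(1-gamma) * int_0^t f'(s) exp(-gamma (t-s)/(1-gamma)) ds."""
    s = np.linspace(0.0, t, n)
    vals = fprime(s) * np.exp(-gamma * (t - s) / (1.0 - gamma))
    h = s[1] - s[0]
    integral = h * (0.5 * vals[0] + vals[1:-1].sum() + 0.5 * vals[-1])
    return integral / (1.0 - gamma)

gamma, t = 0.6, 2.0
numeric = cf_derivative(lambda s: np.ones_like(s), t, gamma)  # f(t) = t, f'(s) = 1
exact = (1.0 - np.exp(-gamma * t / (1.0 - gamma))) / gamma
print(numeric, exact)  # the two values agree closely
```

Note that the exponential kernel stays bounded as s → t, which is precisely the non-singularity the abstract emphasizes.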

  9. Quantum tomography, phase-space observables and generalized Markov kernels

    International Nuclear Information System (INIS)

    Pellonpää, Juha-Pekka

    2009-01-01

    We construct a generalized Markov kernel which transforms the observable associated with the homodyne tomography into a covariant phase-space observable with a regular kernel state. Illustrative examples are given in the cases of a 'Schroedinger cat' kernel state and the Cahill-Glauber s-parametrized distributions. Also we consider an example of a kernel state when the generalized Markov kernel cannot be constructed.

  10. Free Energy Contribution Analysis Using Response Kernel Approximation: Insights into the Acylation Reaction of a Beta-Lactamase.

    Science.gov (United States)

    Asada, Toshio; Ando, Kanta; Bandyopadhyay, Pradipta; Koseki, Shiro

    2016-09-08

    A widely applicable free energy contribution analysis (FECA) method based on the quantum mechanical/molecular mechanical (QM/MM) approximation using response kernel approaches has been proposed to investigate the influences of environmental residues and/or atoms in the QM region on the free energy profile. This method can evaluate atomic contributions to the free energy along the reaction path including polarization effects on the QM region within a dramatically reduced computational time. The rate-limiting step in the deactivation of the β-lactam antibiotic cefalotin (CLS) by β-lactamase was studied using this method. The experimentally observed activation barrier was successfully reproduced by free energy perturbation calculations along the optimized reaction path that involved activation by the carboxylate moiety in CLS. It was found that the free energy profile in the QM region was slightly higher than the isolated energy and that two residues, Lys67 and Lys315, as well as water molecules deeply influenced the QM atoms associated with the bond alternation reaction in the acyl-enzyme intermediate. These facts suggested that the surrounding residues are favorable for the reactant complex and prevent the intermediate from being too stabilized to proceed to the following deacylation reaction. We have demonstrated that the free energy contribution analysis should be a useful method to investigate enzyme catalysis and to facilitate intelligent molecular design.

  11. N-body quantum scattering theory in two Hilbert spaces. VII. Real-energy limits

    International Nuclear Information System (INIS)

    Chandler, C.; Gibson, A.G.

    1994-01-01

    A study is made of the real-energy limits of approximate solutions of the Chandler--Gibson equations, as well as the real-energy limits of the approximate equations themselves. It is proved that (1) the approximate time-independent transition operator T_π(z) and an auxiliary operator M_π(z), when restricted to finite energy intervals, are trace class operators and have limits in trace norm for almost all values of the real energy; (2) the basic dynamical equation that determines the operator M_π(z), when restricted to the space of trace class operators, has a real-energy limit in trace norm for almost all values of the real energy; (3) the real-energy limit of M_π(z) is a solution of the real-energy limit equation; (4) the diagonal (on-shell) elements of the kernels of the real-energy limit of T_π(z) and of all solutions of the real-energy limit equation exactly equal the on-shell transition operator, implying that the real-energy limit equation uniquely determines the physical transition amplitude; and (5) a sequence of approximate on-shell transition operators converges strongly to the exact on-shell transition operator. These mathematically rigorous results are believed to be the most general of their type for nonrelativistic N-body quantum scattering theories.

  12. Single pass kernel k-means clustering method

    Indian Academy of Sciences (India)

    paper proposes a simple and faster version of the kernel k-means clustering ... It has been considered as an important tool ... On the other hand, kernel-based clustering methods, like kernel k-means clus- ..... able at the UCI machine learning repository (Murphy 1994). ... All the data sets have only numeric valued features.
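    For context, standard (batch, not single-pass) kernel k-means assigns each point to the cluster whose feature-space centroid is nearest, using only kernel evaluations. A minimal sketch with a precomputed kernel matrix, all names ours:

```python
import numpy as np

def kernel_kmeans(K, k, n_iter=50):
    """Batch kernel k-means on a precomputed kernel matrix K, using
    ||phi(x_i) - m_c||^2 = K_ii - 2 mean_{j in c} K_ij + mean_{j,l in c} K_jl."""
    n = K.shape[0]
    labels = np.arange(n) % k          # deterministic alternating init
    for _ in range(n_iter):
        dist = np.zeros((n, k))
        for c in range(k):
            idx = np.flatnonzero(labels == c)
            if idx.size == 0:          # guard against empty clusters
                dist[:, c] = np.inf
                continue
            dist[:, c] = (np.diag(K)
                          - 2.0 * K[:, idx].mean(axis=1)
                          + K[np.ix_(idx, idx)].mean())
        new = dist.argmin(axis=1)
        if np.array_equal(new, labels):
            break
        labels = new
    return labels

# Two well-separated 1-D groups; RBF kernel
x = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])
K = np.exp(-(x[:, None] - x[None, :]) ** 2)
labels = kernel_kmeans(K, k=2)
print(labels)  # the first three and last three points fall into the same clusters
```

A single-pass variant, as proposed in the paper, would avoid recomputing cluster statistics over the full kernel matrix on every sweep; the batch form above is only the baseline it accelerates.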

  13. Fast Approximate Joint Diagonalization Incorporating Weight Matrices

    Czech Academy of Sciences Publication Activity Database

    Tichavský, Petr; Yeredor, A.

    2009-01-01

    Roč. 57, č. 3 (2009), s. 878-891 ISSN 1053-587X R&D Projects: GA MŠk 1M0572 Institutional research plan: CEZ:AV0Z10750506 Keywords : autoregressive processes * blind source separation * nonstationary random processes Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 2.212, year: 2009 http://library.utia.cas.cz/separaty/2009/SI/tichavsky-fast approximate joint diagonalization incorporating weight matrices.pdf

  14. Direct calculation of off-diagonal matrix elements

    International Nuclear Information System (INIS)

    Killingbeck, J P; Jolicard, G

    2011-01-01

    Gauss elimination is used in a sequence of calculations which give the squares of the off-diagonal matrix elements of x between quartic oscillator eigenstates, in a modification of the original sum rule approach of Tipping et al. to the problem. New and more flexible methods are then devised and tested and are shown to permit the isolation and calculation of individual squared matrix elements of x and x².

  15. Auto-associative Kernel Regression Model with Weighted Distance Metric for Instrument Drift Monitoring

    International Nuclear Information System (INIS)

    Shin, Ho Cheol; Park, Moon Ghu; You, Skin

    2006-01-01

    Recently, many on-line approaches to instrument channel surveillance (drift monitoring and fault detection) have been reported worldwide. An on-line monitoring (OLM) method evaluates instrument channel performance by assessing its consistency with other plant indications through parametric or non-parametric models. The heart of an OLM system is the model giving an estimate of the true process parameter value against individual measurements. This model gives a process parameter estimate calculated as a function of other plant measurements, which can be used to identify small sensor drifts that would require the sensor to be manually calibrated or replaced. This paper describes an improvement of auto-associative kernel regression (AAKR) obtained by introducing a correlation-coefficient weighting on the kernel distances. The prediction performance of the developed method is compared with conventional auto-associative kernel regression.
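    A bare-bones sketch of the AAKR estimate: a query vector is compared against historical fault-free memory vectors, and the prediction is the kernel-weighted average of those memory vectors. The optional per-signal distance weights stand in for the paper's correlation-coefficient weighting; the exact weighting scheme and all names here are our illustrative assumptions.

```python
import numpy as np

def aakr_predict(X_mem, x_obs, h=1.0, signal_w=None):
    """Auto-associative kernel regression estimate of the 'true' signal values.
    X_mem: (n_mem, n_signals) historical fault-free observations.
    x_obs: (n_signals,) current (possibly drifted) measurement.
    signal_w: optional per-signal weights on the distance metric
              (the paper uses correlation-coefficient-based weights)."""
    if signal_w is None:
        signal_w = np.ones(x_obs.shape[0])
    d2 = ((X_mem - x_obs) ** 2 * signal_w).sum(axis=1)   # weighted distances
    w = np.exp(-d2 / (2.0 * h ** 2))                     # Gaussian kernel weights
    return (w[:, None] * X_mem).sum(axis=0) / w.sum()

# Toy memory matrix: two correlated channels
X_mem = np.array([[1.0, 2.0], [2.0, 4.0], [3.0, 6.0], [4.0, 8.0]])
x_obs = np.array([2.1, 3.9])          # near the second memory vector
est = aakr_predict(X_mem, x_obs, h=0.5)
print(est)
```

The difference between x_obs and est is the drift indication: a persistently biased residual on one channel flags that sensor for recalibration.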

  16. Permuting sparse rectangular matrices into block-diagonal form

    Energy Technology Data Exchange (ETDEWEB)

    Aykanat, Cevdet; Pinar, Ali; Catalyurek, Umit V.

    2002-12-09

    This work investigates the problem of permuting a sparse rectangular matrix into block-diagonal form. Block-diagonal form of a matrix grants an inherent parallelism for the solution of the underlying problem, as recently investigated in the contexts of mathematical programming, LU factorization and QR factorization. We propose graph and hypergraph models to represent the nonzero structure of a matrix, which reduce the permutation problem to those of graph partitioning by vertex separator and hypergraph partitioning, respectively. Besides proposing the models to represent sparse matrices and investigating related combinatorial problems, we provide a detailed survey of relevant literature to bridge the gap between different communities, investigate existing techniques for partitioning and propose new ones, and finally present a thorough empirical study of these techniques. Our experiments on a wide range of matrices, using the state-of-the-art graph and hypergraph partitioning tools MeTiS and PaToH, revealed that the proposed methods yield very effective solutions in terms of both solution quality and run time.
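    As a small illustrative baseline (not the graph or hypergraph partitioning models of the paper), the finest block-diagonal structure already present in a sparse matrix can be found by taking connected components of the bipartite row-column graph whose edges are the nonzeros; names ours.

```python
def block_diagonal_components(nonzeros, n_rows, n_cols):
    """Count the independent blocks of a sparse matrix via union-find on the
    bipartite graph whose edges are the nonzero entries (i, j)."""
    parent = list(range(n_rows + n_cols))   # rows 0..n_rows-1, cols offset by n_rows

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    for i, j in nonzeros:
        union(i, n_rows + j)

    # components that actually contain a nonzero entry
    touched = {find(i) for i, _ in nonzeros} | {find(n_rows + j) for _, j in nonzeros}
    return len(touched)

# Matrix with two independent blocks: rows {0,1} x cols {0,1} and row {2} x cols {2,3}
nz = [(0, 0), (0, 1), (1, 0), (2, 2), (2, 3)]
print(block_diagonal_components(nz, n_rows=3, n_cols=4))  # → 2
```

Real matrices are usually fully connected, which is why the paper must cut a small vertex separator or hyperedge set to manufacture blocks rather than merely detect them.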

  17. The effect of smelting time and composition of palm kernel shell charcoal reductant toward extractive Pomalaa nickel laterite ore in mini electric arc furnace

    Science.gov (United States)

    Sihotang, Iqbal Huda; Supriyatna, Yayat Iman; Ismail, Ika; Sulistijono

    2018-04-01

    Indonesia is a country rich in natural resources, holding the third-largest nickel laterite ore reserves in the world after New Caledonia and the Philippines. However, the upgrading of nickel laterite ore within Indonesia is still limited. Nickel laterite ore can be processed into metal by pyrometallurgical methods that typically use coal as a reductant. Coal, however, is a non-renewable energy source with fairly high pollution levels. One potential replacement is biomass, a renewable energy source. Palm kernel shells are a biomass that can be used as a reductant because of their fairly high fixed-carbon content. This research aims to smelt nickel laterite ore into metal using palm kernel shell charcoal as the reductant in a mini electric arc furnace. The results show that the best smelting time in this research is 60 minutes, with an optimal reductant charge of 2,000 grams.

  18. Mixture Density Mercer Kernels: A Method to Learn Kernels

    Data.gov (United States)

    National Aeronautics and Space Administration — This paper presents a method of generating Mercer Kernels from an ensemble of probabilistic mixture models, where each mixture model is generated from a Bayesian...

  19. Images of a Bose-Einstein condensates: diagonal dynamical Bogoliubov vacuum

    International Nuclear Information System (INIS)

    Dziarmaga, J.; Sacha, K.; Karkuszewski, Z.

    2005-01-01

    Evolution of a Bose-Einstein condensate subject to a time-dependent external perturbation can be described by a time-dependent Bogoliubov theory: a condensate initially in its ground state evolves into a time-dependent excited state which can be formally written as a time-dependent Bogoliubov vacuum annihilated by time-dependent quasiparticle annihilation operators. We prove that any Bogoliubov vacuum can be brought to a diagonal form in a time-dependent orthonormal basis. This diagonal form is tailored for simulations of quantum measurements on excited condensates. As an example we work out a model of an atomic interferometer where a trap potential is split in two parts by a potential barrier, and then atoms are released by opening the double-well trap potential. In the Gross-Pitaevskii approximation the released atoms give a high-contrast interference pattern with a repeatable position of the interference fringes. In the two-mode tight-binding approximation the effect of phase diffusion makes the position of the fringes fluctuate from experiment to experiment, but every single realisation of the experiment gives a high-quality interference pattern. The time-dependent Bogoliubov theory is a more realistic description of the experiment which goes beyond both approximations. Using the diagonal time-dependent Bogoliubov vacuum we show that in addition to position fluctuations the interference pattern is also losing its high-quality contrast. (author)

  20. Revisiting the definition of local hardness and hardness kernel.

    Science.gov (United States)

    Polanco-Ramírez, Carlos A; Franco-Pérez, Marco; Carmona-Espíndola, Javier; Gázquez, José L; Ayers, Paul W

    2017-05-17

    An analysis of the hardness kernel and local hardness is performed to propose new definitions for these quantities that follow a similar pattern to the one that characterizes the quantities associated with softness, that is, we have derived new definitions for which the integral of the hardness kernel over the whole space of one of the variables leads to local hardness, and the integral of local hardness over the whole space leads to global hardness. A basic aspect of the present approach is that global hardness keeps its identity as the second derivative of energy with respect to the number of electrons. Local hardness thus obtained depends on the first and second derivatives of energy and electron density with respect to the number of electrons. When these derivatives are approximated by a smooth quadratic interpolation of energy, the expression for local hardness reduces to the one intuitively proposed by Meneses, Tiznado, Contreras and Fuentealba. However, when one combines the first directional derivatives with smooth second derivatives one finds additional terms that allow one to differentiate local hardness for electrophilic attack from the one for nucleophilic attack. Numerical results related to electrophilic attacks on substituted pyridines, substituted benzenes and substituted ethenes are presented to show the overall performance of the new definition.
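    Schematically, the proposed pattern mirrors the softness hierarchy s(r,r') → s(r) → S. Under the abstract's description the new quantities satisfy (notation ours):

```latex
% hardness kernel -> local hardness -> global hardness,
% mirroring the softness hierarchy s(r,r') -> s(r) -> S
\eta(\mathbf{r}) = \int \eta(\mathbf{r},\mathbf{r}')\, d\mathbf{r}', \qquad
\eta = \int \eta(\mathbf{r})\, d\mathbf{r}
     = \left(\frac{\partial^{2} E}{\partial N^{2}}\right)_{v(\mathbf{r})}
```

The second equality expresses the constraint the authors insist on: global hardness keeps its identity as the second derivative of the energy with respect to the number of electrons at fixed external potential.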

  1. Integral equations with contrasting kernels

    Directory of Open Access Journals (Sweden)

    Theodore Burton

    2008-01-01

    In this paper we study integral equations of the form $x(t)=a(t)-\int^t_0 C(t,s)x(s)\,ds$ with sharply contrasting kernels typified by $C^*(t,s)=\ln(e+(t-s))$ and $D^*(t,s)=[1+(t-s)]^{-1}$. The kernel assigns a weight to $x(s)$ and these kernels have exactly opposite effects of weighting. Each type is well represented in the literature. Our first project is to show that for $a\in L^2[0,\infty)$, solutions are largely indistinguishable regardless of which kernel is used. This is a surprise and it leads us to study the essential differences. In fact, those differences become large as the magnitude of $a(t)$ increases. The form of the kernel alone projects necessary conditions concerning the magnitude of $a(t)$ which could result in bounded solutions. Thus, the next project is to determine how close we can come to proving that the necessary conditions are also sufficient. The third project is to show that solutions will be bounded for given conditions on $C$ regardless of whether $a$ is chosen large or small; this is important in real-world problems since we would like to have $a(t)$ as the sum of a bounded, but badly behaved function, and a large well-behaved function.
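    Volterra equations of this form can be integrated numerically by a simple trapezoidal marching scheme. A sketch (ours), checked against the choice C(t,s) = 1 and a(t) = t, for which differentiating gives x' = 1 - x with x(0) = 0 and hence the exact solution x(t) = 1 - exp(-t):

```python
import numpy as np

def solve_volterra(a, C, T, n=1000):
    """March the solution of x(t) = a(t) - int_0^t C(t,s) x(s) ds
    on [0, T] with the trapezoidal rule (implicit in the newest value)."""
    t = np.linspace(0.0, T, n + 1)
    h = t[1] - t[0]
    x = np.zeros(n + 1)
    x[0] = a(t[0])
    for k in range(1, n + 1):
        # trapezoid over s in [0, t_k], then solve for x[k]
        acc = 0.5 * C(t[k], t[0]) * x[0]
        acc += sum(C(t[k], t[j]) * x[j] for j in range(1, k))
        x[k] = (a(t[k]) - h * acc) / (1.0 + 0.5 * h * C(t[k], t[k]))
    return t, x

t, x = solve_volterra(a=lambda t: t, C=lambda t, s: 1.0, T=2.0)
print(abs(x[-1] - (1.0 - np.exp(-2.0))))   # small discretization error
```

Swapping in the contrasting kernels $C^*$ and $D^*$ from the abstract is a one-line change to the `C` argument, which makes the scheme a convenient way to observe the opposite weighting effects numerically.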

  2. Kernel methods in orthogonalization of multi- and hypervariate data

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2009-01-01

    A kernel version of maximum autocorrelation factor (MAF) analysis is described very briefly and applied to change detection in remotely sensed hyperspectral image (HyMap) data. The kernel version is based on a dual formulation, also termed Q-mode analysis, in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products are replaced by inner products between nonlinear mappings of the original data into a higher dimensional feature space. Via kernel substitution, also known as the kernel trick, these inner products between the mappings are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of this kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel PCA and MAF analysis handle nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space.
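    The kernel substitution step can be sketched as follows: compute a Gram matrix with a kernel function, center it (which removes the feature-space mean without ever forming the mappings), and work with its eigendecomposition as in kernel PCA. A minimal example, names ours:

```python
import numpy as np

def rbf_gram(X, sigma=1.0):
    """RBF kernel (Gram) matrix for row-sample matrix X."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def center_gram(K):
    """Center the Gram matrix: equivalent to subtracting the feature-space
    mean from each mapped sample, without knowing the mapping explicitly."""
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    return J @ K @ J

X = np.random.default_rng(1).normal(size=(20, 3))
Kc = center_gram(rbf_gram(X))
# Row/column sums of a centered Gram matrix vanish
print(np.max(np.abs(Kc.sum(axis=0))))
```

Kernel MAF differs from kernel PCA only in the quantity being extremized (spatial autocorrelation rather than variance), but both operate on exactly this kind of centered kernel matrix.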

  3. Investigation of tilted dose kernels for portal dose prediction in a-Si electronic portal imagers

    International Nuclear Information System (INIS)

    Chytyk, K.; McCurdy, B. M. C.

    2006-01-01

    The effect of beam divergence on dose calculation via Monte Carlo generated dose kernels was investigated in an amorphous silicon electronic portal imaging device (EPID). The flat-panel detector was simulated in EGSnrc with an additional 3.0 cm water buildup. The model included details of the detector's imaging cassette and the front cover upstream of it. To approximate the effect of the EPID's rear housing, a 2.1 cm air gap and 1.0 cm water slab were introduced into the simulation as equivalent backscatter material. Dose kernels were generated with an incident pencil beam of monoenergetic photons of energy 0.1, 2, 6, and 18 MeV. The orientation of the incident pencil beam was varied from 0 deg. to 14 deg. in 2 deg. increments. Dose was scored in the phosphor layer of the detector in both cylindrical (at 0 deg. ) and Cartesian (at 0 deg. -14 deg.) geometries. To reduce statistical fluctuations in the Cartesian geometry simulations at large radial distances from the incident pencil beam, the voxels were first averaged bilaterally about the pencil beam and then combined into concentric square rings of voxels. Profiles of the EPID dose kernels displayed increasing asymmetry with increasing angle and energy. A comparison of the superposition (tilted kernels) and convolution (parallel kernels) dose calculation methods via the χ-comparison test (a derivative of the γ-evaluation) in worst-case-scenario geometries demonstrated an agreement between the two methods within 0.0784 cm (one pixel width) distance-to-agreement and up to a 1.8% dose difference. More clinically typical field sizes and source-to-detector distances were also tested, yielding at most a 1.0% dose difference and the same distance-to-agreement. Therefore, the assumption of parallel dose kernels has less than a 1.8% dosimetric effect in extreme cases and less than a 1.0% dosimetric effect in most clinically relevant situations and should be suitable for most clinical dosimetric applications. The

  4. Kernel based subspace projection of near infrared hyperspectral images of maize kernels

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Arngren, Morten; Hansen, Per Waaben

    2009-01-01

    In this paper we present an exploratory analysis of hyperspectral 900-1700 nm images of maize kernels. The imaging device is a line scanning hyperspectral camera using a broadband NIR illumination. In order to explore the hyperspectral data we compare a series of subspace projection methods including principal component analysis and maximum autocorrelation factor analysis. The latter utilizes the fact that interesting phenomena in images exhibit spatial autocorrelation. However, linear projections often fail to grasp the underlying variability of the data. Therefore we propose to use kernel versions of these methods; the kernel maximum autocorrelation factor transform is found to outperform the linear methods as well as kernel principal components in producing interesting projections of the data.

  5. Sparse Event Modeling with Hierarchical Bayesian Kernel Methods

    Science.gov (United States)

    2016-01-05

    The research objective of this proposal was to develop a predictive Bayesian kernel approach to model count data based on several predictive variables. Such an approach, which we refer to as the Poisson Bayesian kernel model, is able to model the rate of occurrence of events. The kernel methods made use of: (i) the Bayesian property of improving predictive accuracy as data are dynamically obtained, and (ii) the kernel function

  6. The relative effects of fuel concentration, residual-gas fraction, gas motion, spark energy and heat losses to the electrodes on flame-kernel development in a lean-burn spark ignition engine

    Energy Technology Data Exchange (ETDEWEB)

    Aleiferis, P.G.; Taylor, A.M.K.P. [Imperial College of Science, Technology and Medicine, London (United Kingdom). Dept. of Mechanical Engineering; Ishii, K. [Honda International Technical School, Saitama (Japan); Urata, Y. [Honda R and D Co., Ltd., Tochigi (Japan). Tochigi R and D Centre

    2004-04-01

    The potential of lean combustion for the reduction in exhaust emissions and fuel consumption in spark ignition engines has long been established. However, the operating range of lean-burn spark ignition engines is limited by the level of cyclic variability in the early-flame development stage that typically corresponds to the 0-5 per cent mass fraction burned duration. In the current study, the cyclic variations in early flame development were investigated in an optical stratified-charge spark ignition engine at conditions close to stoichiometry [air-to-fuel ratio (A/F) = 15] and to the lean limit of stable operation (A/F = 22). Flame images were acquired through either a pentroof window ('tumble plane' of view) or the piston crown ('swirl plane' of view) and these were processed to calculate the intra-cycle flame-kernel radius evolution. In order to quantify the relative effects of local fuel concentration, gas motion, spark-energy release and heat losses to the electrodes on the flame-kernel growth rate, a zero-dimensional flame-kernel growth model, in conjunction with a one-dimensional spark ignition model, was employed. Comparison of the calculated flame-radius evolutions with the experimental data suggested that a variation in A/F around the spark plug of Δ(A/F) ≈ 4 or, in terms of equivalence ratio φ, a variation of Δφ ≈ 0.15 at most was large enough to account for 100 per cent of the observed cyclic variability in flame-kernel radius. A variation in the residual-gas fraction of about 20 per cent around the mean was found to account for up to 30 per cent of the variability in flame-kernel radius at the timing of 5 per cent mass fraction burned. The individual effect of 20 per cent variations in the 'mean' in-cylinder velocity at the spark plug at ignition timing was found to account for no more than 20 per cent of the measured cyclic variability in flame kernel radius. An individual effect of

  7. Evaluating the Application of Tissue-Specific Dose Kernels Instead of Water Dose Kernels in Internal Dosimetry: A Monte Carlo Study

    NARCIS (Netherlands)

    Moghadam, Maryam Khazaee; Asl, Alireza Kamali; Geramifar, Parham; Zaidi, Habib

    2016-01-01

    Purpose: The aim of this work is to evaluate the application of tissue-specific dose kernels instead of water dose kernels to improve the accuracy of patient-specific dosimetry by taking tissue heterogeneities into consideration. Materials and Methods: Tissue-specific dose point kernels (DPKs) and

  8. Spatial Modeling Of Infant Mortality Rate In South Central Timor Regency Using GWLR Method With Adaptive Bisquare Kernel And Gaussian Kernel

    Directory of Open Access Journals (Sweden)

    Teguh Prawono Sabat

    2017-08-01

    Geographically Weighted Logistic Regression (GWLR) is a regression model that accounts for spatial factors and can be used to analyse the IMR. There were 100 cases of infant mortality in South Central Timor Regency in 2015, or 12 per 1000 live births. The aim of this study was to determine the best GWLR model, comparing a fixed weighting function with an adaptive Gaussian kernel, for the infant mortality cases in South Central Timor Regency in 2015. The response variable (Y) was infant mortality cases, while the predictor variables were the percentage of first neonatal visits (KN1) (X1), the percentage of three completed neonatal visits (complete KN) (X2), the percentage of pregnant women receiving Fe tablets (X3), and the percentage of poor pre-prosperous families (X4). This was a non-reactive study, in which the surveyed individuals did not realize that they were part of a study, with the 32 sub-districts of South Central Timor Regency as the units of analysis. Data were analysed with the open-source programs Excel, R, Quantum GIS and GWR4. The best GWLR spatial model used the adaptive Gaussian kernel weighting function; the global model obtained with this function was g(x) = 0.941086 − 0.892506X4, while the local GWLR models with the adaptive bisquare kernel weighting function in the 13 districts were g(x) = 0 − 0X4. The factor affecting infant mortality cases in 13 sub-districts of South Central Timor Regency in 2015 was the percentage of poor pre-prosperous families.
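
    The two kernel weighting schemes compared above can be sketched in a few lines. This is an illustrative sketch, not the study's code: the function names and the neighbour count k are our own choices, and a full GWLR fit would use these weights inside a locally weighted logistic likelihood.

```python
import numpy as np

def gaussian_kernel_weights(distances, bandwidth):
    """Fixed Gaussian kernel: w_i = exp(-0.5 * (d_i / h)^2)."""
    d = np.asarray(distances, dtype=float)
    return np.exp(-0.5 * (d / bandwidth) ** 2)

def adaptive_bisquare_weights(distances, k):
    """Adaptive bisquare kernel: the bandwidth at each regression point
    is the distance to its k-th nearest neighbour, and the weights
    vanish beyond that distance."""
    d = np.asarray(distances, dtype=float)
    h = np.sort(d)[k - 1]                       # adaptive local bandwidth
    return np.where(d < h, (1.0 - (d / h) ** 2) ** 2, 0.0)
```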

  9. Parsimonious Wavelet Kernel Extreme Learning Machine

    Directory of Open Access Journals (Sweden)

    Wang Qin

    2015-11-01

    In this study, a parsimonious scheme for the wavelet kernel extreme learning machine (named PWKELM) was introduced by combining wavelet theory and a parsimonious algorithm into the kernel extreme learning machine (KELM). In the wavelet analysis, bases that are localized in time and frequency were used to represent various signals effectively. The wavelet kernel extreme learning machine (WELM) maximized its capability to capture the essential features in "frequency-rich" signals. The proposed parsimonious algorithm incorporated significant wavelet kernel functions iteratively by virtue of the Householder matrix, thus producing a sparse solution that eased the computational burden and improved numerical stability. The experimental results on a synthetic dataset and a gas furnace instance demonstrated that the proposed PWKELM is efficient and feasible in terms of improving generalization accuracy and real-time performance.
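
    The Morlet-style wavelet kernel commonly used in wavelet kernel machines gives a flavour of the bases "localized in time and frequency" mentioned above. The exact kernel used in PWKELM may differ; the constant 1.75 and the dilation parameter a below follow the usual translation-invariant construction, as an assumption.

```python
import numpy as np

def wavelet_kernel(x, y, a=1.0):
    """Translation-invariant wavelet kernel built from the Morlet-style
    mother wavelet h(u) = cos(1.75*u) * exp(-u^2/2), taken as a product
    over input dimensions; a is the dilation parameter."""
    u = (np.asarray(x, dtype=float) - np.asarray(y, dtype=float)) / a
    return float(np.prod(np.cos(1.75 * u) * np.exp(-0.5 * u ** 2)))
```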

  10. A CLT on the SNR of Diagonally Loaded MVDR Filters

    Science.gov (United States)

    Rubio, Francisco; Mestre, Xavier; Hachem, Walid

    2012-08-01

    This paper studies the fluctuations of the signal-to-noise ratio (SNR) of minimum variance distortionless response (MVDR) filters implementing diagonal loading in the estimation of the covariance matrix. Previous results in the signal processing literature are generalized and extended by considering both spatially as well as temporally correlated samples. Specifically, a central limit theorem (CLT) is established for the fluctuations of the SNR of the diagonally loaded MVDR filter, under both supervised and unsupervised training settings in adaptive filtering applications. Our second-order analysis is based on the Nash-Poincaré inequality and the integration by parts formula for Gaussian functionals, as well as classical tools from statistical asymptotic theory. Numerical evaluations validating the accuracy of the CLT confirm the asymptotic Gaussianity of the fluctuations of the SNR of the MVDR filter.
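
    The diagonal loading step itself is simple to sketch. The block below is a real-valued toy version (the paper works with complex-valued array data); the array size, loading factor and noise model are illustrative assumptions.

```python
import numpy as np

# Real-valued sketch of diagonal loading in an MVDR beamformer.
rng = np.random.default_rng(0)
M, N = 4, 50                          # sensors, training snapshots
s = np.ones(M) / np.sqrt(M)           # steering vector of the desired signal
X = rng.standard_normal((M, N))       # noise-only training data
R_hat = X @ X.T / N                   # sample covariance matrix

delta = 0.1                           # diagonal loading factor
R_dl = R_hat + delta * np.eye(M)      # loaded covariance estimate
w = np.linalg.solve(R_dl, s)
w = w / (s @ w)                       # distortionless constraint: w^T s = 1

# Output SNR against the true (identity) noise covariance, up to the
# signal power:
snr = (w @ s) ** 2 / (w @ w)
```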

  11. Wave function continuity and the diagonal Born-Oppenheimer correction at conical intersections.

    Science.gov (United States)

    Meek, Garrett A; Levine, Benjamin G

    2016-05-14

    We demonstrate that though exact in principle, the expansion of the total molecular wave function as a sum over adiabatic Born-Oppenheimer (BO) vibronic states makes inclusion of the second-derivative nonadiabatic energy term near conical intersections practically problematic. In order to construct a well-behaved molecular wave function that has density at a conical intersection, the individual BO vibronic states in the summation must be discontinuous. When the second-derivative nonadiabatic terms are added to the Hamiltonian, singularities in the diagonal BO corrections (DBOCs) of the individual BO states arise from these discontinuities. In contrast to the well-known singularities in the first-derivative couplings at conical intersections, these singularities are non-integrable, resulting in undefined DBOC matrix elements. Though these singularities suggest that the exact molecular wave function may not have density at the conical intersection point, there is no physical basis for this constraint. Instead, the singularities are artifacts of the chosen basis of discontinuous functions. We also demonstrate that continuity of the total molecular wave function does not require continuity of the individual adiabatic nuclear wave functions. We classify nonadiabatic molecular dynamics methods according to the constraints placed on wave function continuity and analyze their formal properties. Based on our analysis, it is recommended that the DBOC be neglected when employing mixed quantum-classical methods and certain approximate quantum dynamical methods in the adiabatic representation.

  12. Surco diagonal en el lóbulo de la oreja: ¿signo de enfermedad arterial coronaria? Diagonal earlobe crease: a sign of coronary artery disease?

    Directory of Open Access Journals (Sweden)

    Sebastián B. Lamot

    2007-08-01

    The diagonal earlobe crease is a sign theoretically related to coronary artery disease. The purpose of this study was to assess the usefulness of this sign. A total of 104 patients were examined (ages 30 to 80), grouped by age and sex. Forty-nine of them had been diagnosed with coronary artery disease by coronary angiography (> 70% obstruction of one of the major arteries) and/or myocardial perfusion imaging with Thallium-201 (fixed defects). The control group included 55 asymptomatic patients with normal electrocardiograms. The data obtained included sensitivity (61.2%), specificity (78.2%), positive predictive value (71.4%) and negative predictive value (69.3%). We found a significant relation between the presence of the diagonal earlobe crease and coronary artery disease. We consider that this sign could prove useful in clinical practice, mainly among patients aged between 30 and 60.
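
    The reported predictive values are consistent with a 2x2 table reconstructed from the two group sizes. The cell counts below are inferred from the published sensitivity and specificity; they are not reported directly in the abstract.

```python
# 2x2 table inferred from the group sizes (49 CAD patients, 55 controls)
# and the published sensitivity/specificity; the cell counts are our
# reconstruction, not reported in the abstract.
tp, fn = 30, 19      # crease present / absent among the 49 CAD patients
tn, fp = 43, 12      # crease absent / present among the 55 controls

sensitivity = tp / (tp + fn)   # 30/49
specificity = tn / (tn + fp)   # 43/55
ppv = tp / (tp + fp)           # 30/42
npv = tn / (tn + fn)           # 43/62
```

    Rounded to three decimals, these reproduce the published 61.2%, 78.2%, 71.4% and 69.3%.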

  13. Difference between standard and quasi-conformal BFKL kernels

    International Nuclear Information System (INIS)

    Fadin, V.S.; Fiore, R.; Papa, A.

    2012-01-01

    As it was recently shown, the colour singlet BFKL kernel, taken in Möbius representation in the space of impact parameters, can be written in quasi-conformal shape, which is unbelievably simple compared with the conventional form of the BFKL kernel in momentum space. It was also proved that the total kernel is completely defined by its Möbius representation. In this paper we calculated the difference between standard and quasi-conformal BFKL kernels in momentum space and discovered that it is rather simple. Therefore we come to the conclusion that the simplicity of the quasi-conformal kernel is caused mainly by using the impact parameter space.

  14. Uptake and utilization of nutrients by developing kernels of Zea mays L

    International Nuclear Information System (INIS)

    Lyznik, L.A.

    1987-01-01

    The mechanisms involved in amino acid and sugar uptake by developing maize kernels were investigated. In the pedicel region of maize kernel, the site of nutrient unloading from phloem terminals, amino acids are accumulated in considerable amounts and undergo significant interconversion. A wide spectrum of enzymatic activities involved in the metabolism of amino acids is observed in these tissues. Subsequently, amino acids are taken up by the endosperm tissue in processes which require energy and the presence of carrier proteins. Conversely, no evidence was found that energy and carriers are involved in sugar uptake. This process of sugar uptake is not inhibited by metabolic inhibitors and shows nonsaturable kinetics, but the uptake is pH-dependent. L-glucose is taken up at a significantly reduced rate in comparison to D-glucose uptake. Based on analysis of radioactivity distribution among sugar fractions after incubations of kernels with radiolabeled D-glucose, it seems that sucrose is not efficiently resynthesized from D-glucose in the endosperm tissue. Thus, the proposed mechanism of sucrose transport involving sucrose hydrolysis in the pedicel region and subsequent resynthesis in endosperm cells may not be the main pathway. The evidence that transfer cells play an active role in D-glucose transport is presented

  15. A laser optical method for detecting corn kernel defects

    Energy Technology Data Exchange (ETDEWEB)

    Gunasekaran, S.; Paulsen, M. R.; Shove, G. C.

    1984-01-01

    An opto-electronic instrument was developed to examine individual corn kernels and detect various kernel defects according to reflectance differences. A low power helium-neon (He-Ne) laser (632.8 nm, red light) was used as the light source in the instrument. Reflectance from good and defective parts of corn kernel surfaces differed by approximately 40%. Broken, chipped, and starch-cracked kernels were detected with nearly 100% accuracy; while surface-split kernels were detected with about 80% accuracy. (author)

  16. The resolution of field identification fixed points in diagonal coset theories

    International Nuclear Information System (INIS)

    Fuchs, J.; Schellekens, B.; Schweigert, C.

    1995-09-01

    The fixed point resolution problem is solved for diagonal coset theories. The primary fields into which the fixed points are resolved are described by submodules of the branching spaces, obtained as eigenspaces of the automorphisms that implement field identification. To compute the characters and the modular S-matrix we use ''orbit Lie algebras'' and ''twining characters'', which were introduced in a previous paper. The characters of the primary fields are expressed in terms of branching functions of twining characters. This allows us to express the modular S-matrix through the S-matrices of the orbit Lie algebras associated to the identification group. Our results can be extended to the larger class of ''generalized diagonal cosets''. (orig.)

  17. Kernel maximum autocorrelation factor and minimum noise fraction transformations

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2010-01-01

    in hyperspectral HyMap scanner data covering a small agricultural area, and 3) maize kernel inspection. In the cases shown, the kernel MAF/MNF transformation performs better than its linear counterpart as well as linear and kernel PCA. The leading kernel MAF/MNF variates seem to possess the ability to adapt...

  18. Diagonal Likelihood Ratio Test for Equality of Mean Vectors in High-Dimensional Data

    KAUST Repository

    Hu, Zongliang; Tong, Tiejun; Genton, Marc G.

    2017-10-27

    We propose a likelihood ratio test framework for testing normal mean vectors in high-dimensional data under two common scenarios: the one-sample test and the two-sample test with equal covariance matrices. We derive the test statistics under the assumption that the covariance matrices follow a diagonal matrix structure. In comparison with the diagonal Hotelling's tests, our proposed test statistics display some interesting characteristics. In particular, they are a summation of the log-transformed squared t-statistics rather than a direct summation of those components. More importantly, to derive the asymptotic normality of our test statistics under the null and local alternative hypotheses, we do not require the assumption that the covariance matrix follows a diagonal matrix structure. As a consequence, our proposed test methods are very flexible and can be widely applied in practice. Finally, simulation studies and a real data analysis are also conducted to demonstrate the advantages of our likelihood ratio test method.
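
    The statistic described above ("a summation of the log-transformed squared t-statistics") can be sketched for the one-sample case as follows; the exact scaling inside the logarithm is our assumption for illustration, not taken from the paper.

```python
import numpy as np

def diag_lrt_statistic(X):
    """One-sample sketch: sum over the p coordinates of log-transformed
    squared t-statistics (the scaling inside the logarithm is an
    illustrative assumption, not the paper's exact statistic)."""
    n, p = X.shape
    t = np.sqrt(n) * X.mean(axis=0) / X.std(axis=0, ddof=1)
    return float(np.sum(np.log1p(t ** 2 / (n - 1))))
```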

  20. Identification of Fusarium damaged wheat kernels using image analysis

    Directory of Open Access Journals (Sweden)

    Ondřej Jirsa

    2011-01-01

    Visual evaluation of kernels damaged by Fusarium spp. pathogens is labour intensive, and its subjective nature can lead to inconsistencies. Digital imaging technology combined with appropriate statistical methods can provide a much faster and more accurate evaluation of the proportion of visually scabby kernels. The aim of the present study was to develop a discrimination model to identify wheat kernels infected by Fusarium spp. using digital image analysis and statistical methods. Winter wheat kernels from field experiments were evaluated visually as healthy or damaged. Deoxynivalenol (DON) content was determined in individual kernels using an ELISA method. Images of individual kernels were produced using a digital camera on a dark background. Colour and shape descriptors were obtained by image analysis from the area representing the kernel. Healthy and damaged kernels differed significantly in DON content and kernel weight. Various combinations of individual shape and colour descriptors were examined during the development of the model using linear discriminant analysis. In addition to the basic descriptors of the RGB colour model (red, green, blue), very good classification was also obtained using hue from the HSL colour model (hue, saturation, luminance). The accuracy of classification using the developed discrimination model based on RGBH descriptors was 85%. The shape descriptors themselves were not specific enough to distinguish individual kernels.
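
    The discrimination step can be illustrated with a plain two-class Fisher discriminant on per-kernel colour descriptors. This is a generic sketch on synthetic data, not the study's model; the descriptor choice and class statistics are assumptions.

```python
import numpy as np

def fisher_lda_direction(X0, X1):
    """Two-class Fisher discriminant: w = (S0 + S1)^{-1} (m1 - m0),
    applied here to per-kernel descriptor vectors (e.g. mean R, G, B, hue)."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    S0 = np.cov(X0, rowvar=False)
    S1 = np.cov(X1, rowvar=False)
    return np.linalg.solve(S0 + S1, m1 - m0)

# Synthetic 'healthy' vs 'damaged' descriptors (4 colour features).
rng = np.random.default_rng(0)
healthy = rng.standard_normal((100, 4))
damaged = rng.standard_normal((100, 4)) + 3.0
w = fisher_lda_direction(healthy, damaged)
thresh = 0.5 * (healthy.mean(0) + damaged.mean(0)) @ w
```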

  1. Unified heat kernel regression for diffusion, kernel smoothing and wavelets on manifolds and its application to mandible growth modeling in CT images.

    Science.gov (United States)

    Chung, Moo K; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K

    2015-05-01

    We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel method is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, the method is applied to characterize the localized growth pattern of mandible surfaces obtained in CT images between ages 0 and 20 by regressing the length of displacement vectors with respect to a surface template. Copyright © 2015 Elsevier B.V. All rights reserved.
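
    The weighted eigenfunction expansion described above can be illustrated in one dimension, where the Laplace-Beltrami eigenfunctions reduce to cosines. This is a 1-D stand-in for the surface setting of the paper; the basis size K and diffusion time t are arbitrary choices.

```python
import numpy as np

def heat_kernel_smooth(x, y, t, K=20):
    """Heat-kernel smoothing on [0, 1]: expand the data in the Laplacian
    eigenfunctions cos(k*pi*x) and damp coefficient k by exp(-(k*pi)^2 * t),
    i.e. a weighted eigenfunction expansion with heat-kernel weights."""
    Phi = np.stack([np.cos(k * np.pi * x) for k in range(K)], axis=1)
    beta, *_ = np.linalg.lstsq(Phi, y, rcond=None)    # expansion coefficients
    damp = np.exp(-t * (np.arange(K) * np.pi) ** 2)   # heat-kernel weights
    return Phi @ (damp * beta)
```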

  2. Digital signal processing with kernel methods

    CERN Document Server

    Rojo-Alvarez, José Luis; Muñoz-Marí, Jordi; Camps-Valls, Gustavo

    2018-01-01

    A realistic and comprehensive review of joint approaches to machine learning and signal processing algorithms, with application to communications, multimedia, and biomedical engineering systems Digital Signal Processing with Kernel Methods reviews the milestones in the mixing of classical digital signal processing models and advanced kernel machines statistical learning tools. It explains the fundamental concepts from both fields of machine learning and signal processing so that readers can quickly get up to speed in order to begin developing the concepts and application software in their own research. Digital Signal Processing with Kernel Methods provides a comprehensive overview of kernel methods in signal processing, without restriction to any application field. It also offers example applications and detailed benchmarking experiments with real and synthetic datasets throughout. Readers can find further worked examples with Matlab source code on a website developed by the authors. * Presents the necess...

  3. Higher-Order Hybrid Gaussian Kernel in Meshsize Boosting Algorithm

    African Journals Online (AJOL)

    In this paper, we shall use higher-order hybrid Gaussian kernel in a meshsize boosting algorithm in kernel density estimation. Bias reduction is guaranteed in this scheme like other existing schemes but uses the higher-order hybrid Gaussian kernel instead of the regular fixed kernels. A numerical verification of this scheme ...
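
    A standard example of a higher-order kernel built from the Gaussian is the fourth-order kernel K(u) = (1/2)(3 - u^2) φ(u), which integrates to one but has a vanishing second moment. The authors' hybrid kernel may differ; this block only illustrates the bias-reduction idea behind higher-order kernels.

```python
import numpy as np

def gaussian_kernel(u):
    """Standard (second-order) Gaussian kernel."""
    return np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)

def fourth_order_gaussian_kernel(u):
    """Fourth-order kernel built from the Gaussian: it integrates to 1
    but its second moment vanishes, removing the leading O(h^2) bias
    term of a fixed-kernel density estimate."""
    return 0.5 * (3.0 - u ** 2) * gaussian_kernel(u)
```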

  4. Adaptive Kernel In The Bootstrap Boosting Algorithm In KDE ...

    African Journals Online (AJOL)

    This paper proposes the use of adaptive kernel in a bootstrap boosting algorithm in kernel density estimation. The algorithm is a bias reduction scheme like other existing schemes but uses adaptive kernel instead of the regular fixed kernels. An empirical study for this scheme is conducted and the findings are comparatively ...
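
    An adaptive kernel in KDE can be sketched with Abramson's square-root law: a fixed-kernel pilot estimate sets a local bandwidth at each data point, wider where the pilot density is low. This is a generic sketch of adaptive-kernel KDE, not the paper's bootstrap boosting scheme.

```python
import numpy as np

def adaptive_kde(data, grid, h=0.5):
    """Adaptive-kernel density estimate (Abramson-style): a fixed-kernel
    pilot estimate sets a local bandwidth at each data point."""
    def gauss(u):
        return np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)

    data = np.asarray(data, dtype=float)
    n = len(data)
    # Pilot estimate evaluated at the data points themselves.
    pilot = np.array([gauss((x - data) / h).sum() / (n * h) for x in data])
    # Local bandwidths: h_i = h * sqrt(g / pilot_i), g = geometric mean.
    g = np.exp(np.mean(np.log(pilot)))
    hi = h * np.sqrt(g / pilot)
    # Final estimate: each data point contributes with its own bandwidth.
    return np.array([np.sum(gauss((x - data) / hi) / hi) / n for x in grid])
```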

  5. Windows Vista Kernel-Mode: Functions, Security Enhancements and Flaws

    Directory of Open Access Journals (Sweden)

    Mohammed D. ABDULMALIK

    2008-06-01

    Microsoft has made substantial enhancements to the kernel of the Microsoft Windows Vista operating system. Kernel improvements are significant because the kernel provides low-level operating system functions, including thread scheduling, interrupt and exception dispatching, multiprocessor synchronization, and a set of routines and basic objects. This paper describes some of the kernel security enhancements in the 64-bit edition of Windows Vista. We also point out some weak areas (flaws) that can be attacked by malicious software, leading to compromise of the kernel.

  6. Generalization Performance of Regularized Ranking With Multiscale Kernels.

    Science.gov (United States)

    Zhou, Yicong; Chen, Hong; Lan, Rushi; Pan, Zhibin

    2016-05-01

    The regularized kernel method for the ranking problem has attracted increasing attention in machine learning. Previous regularized ranking algorithms are usually based on reproducing kernel Hilbert spaces with a single kernel. In this paper, we go beyond this framework by investigating the generalization performance of regularized ranking with multiscale kernels. A novel ranking algorithm with multiscale kernels is proposed and its representer theorem is proved. We establish the upper bound of the generalization error in terms of the complexity of hypothesis spaces. It shows that the multiscale ranking algorithm can achieve satisfactory learning rates under mild conditions. Experiments demonstrate the effectiveness of the proposed method for drug discovery and recommendation tasks.

  7. Dielectric properties of almond kernels associated with radio frequency and microwave pasteurization

    Science.gov (United States)

    Li, Rui; Zhang, Shuang; Kou, Xiaoxi; Ling, Bo; Wang, Shaojin

    2017-02-01

    To develop advanced pasteurization treatments based on radio frequency (RF) or microwave (MW) energy, dielectric properties of almond kernels were measured by using an open-ended coaxial-line probe and impedance analyzer at frequencies between 10 and 3000 MHz, moisture contents between 4.2% and 19.6% w.b. and temperatures between 20 and 90 °C. The results showed that both the dielectric constant and the loss factor of the almond kernels decreased sharply with increasing frequency over the RF range (10-300 MHz), but gradually over the measured MW range (300-3000 MHz). Both the dielectric constant and the loss factor of almond kernels increased with increasing temperature and moisture content, with the increase markedly enhanced at higher temperature and moisture levels. Quadratic polynomial equations were developed to best fit the relationship between the dielectric constant or loss factor at 27, 40, 915 or 2450 MHz and sample temperature/moisture content, with R2 greater than 0.967. The penetration depth of the electromagnetic wave into the samples decreased with increasing frequency (27-2450 MHz), moisture content (4.2-19.6% w.b.) and temperature (20-90 °C). Temperature profiles of RF-heated almond kernels at three moisture levels were obtained by experiment and by computer simulation based on the measured dielectric properties. Based on the results of this study, RF treatment has the potential to be used in practice for pasteurization of almond kernels with acceptable heating uniformity.
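
    The penetration depth mentioned above follows from the measured dielectric constant ε' and loss factor ε'' via the standard lossy-dielectric formula. The function below implements that formula; any numeric permittivity values supplied to it are hypothetical, not the paper's measurements.

```python
import math

def penetration_depth(freq_hz, eps_prime, eps_loss):
    """Power penetration depth (m) of an electromagnetic wave in a lossy
    dielectric:
    d_p = c / (2*pi*f * sqrt(2*eps' * (sqrt(1 + (eps''/eps')^2) - 1)))."""
    c = 299792458.0
    ratio = eps_loss / eps_prime
    return c / (2.0 * math.pi * freq_hz *
                math.sqrt(2.0 * eps_prime * (math.sqrt(1.0 + ratio ** 2) - 1.0)))
```

    For fixed ε' and ε'', the depth is inversely proportional to frequency, consistent with the reported decrease from 27 to 2450 MHz.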

  8. Multineuron spike train analysis with R-convolution linear combination kernel.

    Science.gov (United States)

    Tezuka, Taro

    2018-06-01

    A spike train kernel provides an effective way of decoding information represented by a spike train. Some spike train kernels have been extended to multineuron spike trains, which are simultaneously recorded spike trains obtained from multiple neurons. However, most of these multineuron extensions were carried out in a kernel-specific manner. In this paper, a general framework is proposed for extending any single-neuron spike train kernel to multineuron spike trains, based on the R-convolution kernel. Special subclasses of the proposed R-convolution linear combination kernel are explored. These subclasses have a smaller number of parameters and make optimization tractable when the size of data is limited. The proposed kernel was evaluated using Gaussian process regression for multineuron spike trains recorded from an animal brain. It was compared with the sum kernel and the population Spikernel, which are existing ways of decoding multineuron spike trains using kernels. The results showed that the proposed approach performs better than these kernels and also other commonly used neural decoding methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
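
    The R-convolution linear combination construction can be sketched by applying a single-neuron kernel per neuron and summing with weights. The binned-count Gaussian base kernel below is our own simple choice for illustration, not one of the kernels evaluated in the paper.

```python
import numpy as np

def binned_kernel(s, t, bins, sigma=1.0):
    """Base single-neuron kernel: Gaussian on binned spike counts
    (an illustrative choice of base kernel)."""
    cs, _ = np.histogram(s, bins=bins)
    ct, _ = np.histogram(t, bins=bins)
    return float(np.exp(-np.sum((cs - ct) ** 2) / (2.0 * sigma ** 2)))

def linear_combination_kernel(S, T, weights, bins):
    """Multineuron kernel: weighted sum of the base kernel applied
    neuron by neuron, an instance of the R-convolution construction."""
    return sum(w * binned_kernel(s, t, bins)
               for w, s, t in zip(weights, S, T))
```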

  9. Putting Priors in Mixture Density Mercer Kernels

    Science.gov (United States)

    Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd

    2004-01-01

    This paper presents a new methodology for automatic knowledge driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn kernel functions directly from data, rather than using predefined kernels. These data adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing for physical information to be encoded in the model. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS). The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code. The results show that the Mixture Density Mercer Kernel described here outperforms tree-based classification in distinguishing high-redshift galaxies from low-redshift galaxies by approximately 16% on test data, bagged trees by approximately 7%, and bagged trees built on a much larger sample of data by approximately 2%.

  10. Stochastic multiresonance for a fractional linear oscillator with time-delayed kernel and quadratic noise

    Science.gov (United States)

    Guo, Feng; Wang, Xue-Yuan; Zhu, Cheng-Yin; Cheng, Xiao-Feng; Zhang, Zheng-Yu; Huang, Xu-Hui

    2017-12-01

    The stochastic resonance for a fractional oscillator with time-delayed kernel and quadratic trichotomous noise is investigated. Applying linear system theory and the Laplace transform, the system output amplitude (SPA) for the fractional oscillator is obtained. It is found that the SPA is a periodic function of the kernel delay time. Stochastic multiresonance appears in the SPA versus the driving frequency, versus the noise amplitude, and versus the fractional exponent. The non-monotonic dependence of the SPA on the system parameters is also discussed.

  11. NLO corrections to the Kernel of the BKP-equations

    Energy Technology Data Exchange (ETDEWEB)

    Bartels, J. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Fadin, V.S. [Budker Institute of Nuclear Physics, Novosibirsk (Russian Federation); Novosibirskij Gosudarstvennyj Univ., Novosibirsk (Russian Federation); Lipatov, L.N. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Petersburg Nuclear Physics Institute, Gatchina, St. Petersburg (Russian Federation); Vacca, G.P. [INFN, Sezione di Bologna (Italy)

    2012-10-02

    We present results for the NLO kernel of the BKP equations for composite states of three reggeized gluons in the Odderon channel, both in QCD and in N=4 SYM. The NLO kernel consists of the NLO BFKL kernel in the color octet representation and the connected 3→3 kernel, computed in the tree approximation.

  12. Diagonal K-matrices and transfer matrix eigenspectra associated with the G2(1) R-matrix

    International Nuclear Information System (INIS)

    Yung, C.M.; Batchelor, M.T.

    1995-01-01

    We find all the diagonal K-matrices for the R-matrix associated with the minimal representation of the exceptional affine algebra G2(1). The corresponding transfer matrices are diagonalized with a variation of the analytic Bethe ansatz. We find many similarities with the case of the Izergin-Korepin R-matrix associated with the affine algebra A2(2). ((orig.))

  13. A Fast and Simple Graph Kernel for RDF

    NARCIS (Netherlands)

    de Vries, G.K.D.; de Rooij, S.

    2013-01-01

    In this paper we study a graph kernel for RDF based on constructing a tree for each instance and counting the number of paths in that tree. In our experiments this kernel shows comparable classification performance to the previously introduced intersection subtree kernel, but is significantly faster.

  14. Energy relaxation and transfer in excitonic trimer

    International Nuclear Information System (INIS)

    Herman, Pavel; Barvik, Ivan; Urbanec, Martin

    2004-01-01

    Two models describing exciton relaxation and transfer (the Redfield model in the secular approximation and Capek's model) are compared for a simple example - a symmetric trimer coupled to a phonon bath. Energy transfer within the trimer occurs via resonance interactions and coupling between the trimer and the bath occurs via modulation of the monomer energies by phonons. Two initial conditions are adopted: (1) one of higher eigenstates of the trimer is initially occupied and (2) one local site of the trimer is initially occupied. The diagonal exciton density matrix elements in the representation of eigenstates are found to be the same for both models, but this is not so for the off-diagonal density matrix elements. Only if the off-diagonal density matrix elements vanish initially (initial condition (1)), they then vanish at arbitrary times in both models. If the initial excitation is local, the off-diagonal matrix elements essentially differ

  15. Sizes of flaring kernels in various parts of the Hα line profile

    Directory of Open Access Journals (Sweden)

    K. Radziszewski

    2008-10-01

    In this paper we present new results of spectro-photometric investigations of the flaring kernels' sizes and their intensities measured simultaneously in various parts of the Hα line profile. Our investigations were based on very high temporal resolution spectral-imaging observations of solar flares collected with the Large Coronagraph (LC) and the Multi-channel Subtractive Double Pass Spectrograph with Solar Eclipse Coronal Imaging System (MSDP-SECIS) at the Białkow Observatory (University of Wrocław, Poland).

    We have found that the areas of the investigated individual flaring kernels vary in time and with wavelength, and that the intensities and areas of the Hα flaring kernels decrease systematically when observed at consecutive wavelengths toward the wings of the Hα line. Our result could be explained as an effect of the cone-shaped lower parts of the magnetic loops channelling high-energy particle beams that excite the chromospheric plasma.

  16. An SVM model with hybrid kernels for hydrological time series

    Science.gov (United States)

    Wang, C.; Wang, H.; Zhao, X.; Xie, Q.

    2017-12-01

    Support Vector Machine (SVM) models have been widely applied to the forecast of climate/weather and its impact on other environmental variables such as hydrologic response to climate/weather. When using SVM, the choice of the kernel function plays the key role. Conventional SVM models mostly use one single type of kernel function, e.g., radial basis kernel function. Provided that there are several featured kernel functions available, each having its own advantages and drawbacks, a combination of these kernel functions may give more flexibility and robustness to SVM approach, making it suitable for a wide range of application scenarios. This paper presents such a linear combination of radial basis kernel and polynomial kernel for the forecast of monthly flowrate in two gaging stations using SVM approach. The results indicate significant improvement in the accuracy of predicted series compared to the approach with either individual kernel function, thus demonstrating the feasibility and advantages of such hybrid kernel approach for SVM applications.
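
    A linear combination of radial basis and polynomial kernels is straightforward to sketch. The block below plugs such a hybrid kernel into kernel ridge regression as a simpler stand-in for the SVM regression used in the study; the mixing weight and kernel parameters are illustrative, not the paper's tuned values.

```python
import numpy as np

def hybrid_kernel(X1, X2, w=0.7, gamma=10.0, degree=2, coef0=1.0):
    """Convex combination of an RBF and a polynomial kernel:
    K = w * K_rbf + (1 - w) * K_poly. The mixing weight and kernel
    parameters are illustrative choices."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return w * np.exp(-gamma * d2) + (1.0 - w) * (X1 @ X2.T + coef0) ** degree

def kernel_ridge_fit_predict(X, y, X_new, alpha=1e-3, **kw):
    """Kernel ridge regression with the hybrid kernel, standing in for
    SVM regression."""
    K = hybrid_kernel(X, X, **kw)
    coef = np.linalg.solve(K + alpha * np.eye(len(X)), y)
    return hybrid_kernel(X_new, X, **kw) @ coef
```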

  17. Kernel based eigenvalue-decomposition methods for analysing ham

    DEFF Research Database (Denmark)

    Christiansen, Asger Nyman; Nielsen, Allan Aasbjerg; Møller, Flemming

    2010-01-01

    methods, such as PCA, MAF or MNF. We therefore investigated the applicability of kernel based versions of these transformations. This meant implementing the kernel based methods and developing new theory, since kernel based MAF and MNF are not yet described in the literature. The traditional methods only...... have two factors that are useful for segmentation and none of them can be used to segment the two types of meat. The kernel based methods have a lot of useful factors and they are able to capture the subtle differences in the images. This is illustrated in Figure 1. You can see a comparison of the most...... useful factor of PCA and kernel based PCA respectively in Figure 2. The factor of the kernel based PCA turned out to be able to segment the two types of meat and in general that factor is much more distinct, compared to the traditional factor. After the orthogonal transformation a simple thresholding...

  18. Reduced multiple empirical kernel learning machine.

    Science.gov (United States)

    Wang, Zhe; Lu, MingZhe; Gao, Daqi

    2015-02-01

    Multiple kernel learning (MKL) has been demonstrated to be flexible and effective in depicting heterogeneous data sources, since MKL can introduce multiple kernels rather than a single fixed kernel into applications. However, MKL incurs a high time and space complexity in contrast to single kernel learning, which is not desirable in real-world applications. Meanwhile, the kernel mappings of MKL generally take two forms, implicit kernel mapping and empirical kernel mapping (EKM), of which the latter has attracted less attention. In this paper, we focus on MKL with the EKM and propose a reduced multiple empirical kernel learning machine, named RMEKLM for short. To the best of our knowledge, this is the first work to reduce both the time and space complexity of MKL with EKM. Unlike existing MKL, the proposed RMEKLM adopts the Gauss elimination technique to extract a set of feature vectors, and we validate that doing so does not lose much information of the original feature space. RMEKLM then uses the extracted feature vectors to span a reduced orthonormal subspace of the feature space, which is visualized in terms of its geometric structure. It can be demonstrated that the spanned subspace is isomorphic to the original feature space, meaning that the dot product of two vectors in the original feature space equals that of the two corresponding vectors in the generated orthonormal subspace. More importantly, the proposed RMEKLM brings simpler computation and needs less storage space, especially during testing. Finally, the experimental results show that RMEKLM is efficient and effective in terms of both complexity and classification.
The contributions of this paper are as follows: (1) by mapping the input space into an orthonormal subspace, the geometry of the generated subspace is visualized; (2) this paper is the first to reduce both the time and space complexity of EKM-based MKL; (3
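
The Gauss-elimination idea in this record can be sketched in numpy. This is a hypothetical illustration, not the authors' code: elimination with partial pivoting selects numerically independent columns of an empirical kernel matrix, which are then orthonormalized to span a reduced subspace:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((30, 2))
sq = np.sum(X**2, axis=1)
K = np.exp(-(sq[:, None] + sq[None, :] - 2.0 * X @ X.T))  # empirical kernel matrix

# Gauss(-Jordan) elimination with partial pivoting: keep the columns of K
# that are numerically linearly independent.
A = K.copy()
pivots = []
for col in range(A.shape[1]):
    row = len(pivots)
    if row == A.shape[0]:
        break
    p = row + int(np.argmax(np.abs(A[row:, col])))
    if abs(A[p, col]) < 1e-8:
        continue                       # numerically dependent column, skip
    A[[row, p]] = A[[p, row]]          # pivot
    A[row] /= A[row, col]
    others = np.arange(A.shape[0]) != row
    A[others] -= np.outer(A[others, col], A[row])
    pivots.append(col)

# Orthonormal basis of the subspace spanned by the selected kernel columns;
# per the abstract's claim, dot products in this reduced subspace reproduce
# those of the original empirical feature space.
Q, _ = np.linalg.qr(K[:, pivots])
Z = Q.T @ K                            # reduced empirical kernel features
```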

  19. Kernel principal component analysis for change detection

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Morton, J.C.

    2008-01-01

    region acquired at two different time points. If change over time does not dominate the scene, the projection of the original two bands onto the second eigenvector will show change over time. In this paper a kernel version of PCA is used to carry out the analysis. Unlike ordinary PCA, kernel PCA...... with a Gaussian kernel successfully finds the change observations in a case where nonlinearities are introduced artificially....
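
The dual (Q-mode) kernel PCA procedure used in this record can be sketched on synthetic bitemporal data (all data here are hypothetical; the second band repeats the first up to noise except for a small group of change pixels):

```python
import numpy as np

# Toy two-date data: band2 is band1 plus noise, except for change pixels.
rng = np.random.default_rng(2)
band1 = rng.standard_normal(200)
band2 = band1 + 0.1 * rng.standard_normal(200)
band2[:20] += 3.0                                   # the change pixels
X = np.column_stack([band1, band2])

# Gaussian (RBF) Gram matrix between all observations.
sq = np.sum(X**2, axis=1)
d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
K = np.exp(-d2 / (2.0 * np.median(d2)))

# Centre in feature space, then eigendecompose (the dual, Q-mode form).
n = len(X)
J = np.eye(n) - np.ones((n, n)) / n
Kc = J @ K @ J
vals, vecs = np.linalg.eigh(Kc)                     # ascending eigenvalues
scores = Kc @ vecs[:, ::-1][:, :2]                  # leading two projections
```

As in the record, the nonlinearity enters only through the Gram matrix, never through an explicit feature mapping.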

  20. Semiclassical delta self-energy

    International Nuclear Information System (INIS)

    Voutier, E.

    1992-01-01

    We present a semiclassical approach to the Δ self-energy. We show that the in-medium corrections to the Δ width arising from Pauli blocking and from the coupling to the 2N-1h continuum are in good agreement with previous approaches, and particularly with the quantum Δ-h model, even for light nuclei. We separate out the different sources of the imaginary part of the self-energy. The predominant corrections come from two antagonistic origins: Pauli blocking and the contribution of the two-nucleon emission channel, the latter being model dependent. We further show that the non-diagonal spin matrix elements of the self-energy, generated by its tensor component, are mostly due to Pauli blocking. (orig.)

  1. 7 CFR 981.61 - Redetermination of kernel weight.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Redetermination of kernel weight. 981.61 Section 981... GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.61 Redetermination of kernel weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds...

  2. Solving block linear systems with low-rank off-diagonal blocks is easily parallelizable

    Energy Technology Data Exchange (ETDEWEB)

    Menkov, V. [Indiana Univ., Bloomington, IN (United States)

    1996-12-31

    An easily and efficiently parallelizable direct method is given for solving a block linear system Bx = y, where B = D + Q is the sum of a non-singular block diagonal matrix D and a matrix Q with low-rank blocks. This implicitly defines a new preconditioning method with an operation count close to the cost of calculating the matrix-vector product Qw, plus at most twice that cost. When implemented on a parallel machine the processor utilization can be as good as that of those operations. Order estimates are given for the general case, and an implementation is compared to block SSOR preconditioning.

  3. Fast scalar data buffering interface in Linux 2.6 kernel

    International Nuclear Information System (INIS)

    Homs, A.

    2012-01-01

    Key instrumentation devices like counters/timers, analog-to-digital converters and encoders provide scalar data input. Many of them allow fast acquisitions, but do not provide hardware triggering or buffering mechanisms. A Linux 2.4 kernel driver called Hook was developed at the ESRF as a generic software-triggered buffering interface. This work presents the port of the ESRF Hook interface to the Linux 2.6 kernel. The interface distinguishes two independent functional groups: trigger event generators and data channels. Devices in the first group create software events, like hardware interrupts generated by timers or external signals. On each event, one or more device channels in the second group are read and stored in kernel buffers. The event generators and data channels to be read are fully configurable before each sequence. Designed for fast acquisitions, the Hook implementation is well adapted to multi-CPU systems, where the interrupt latency is notably reduced. On heavily loaded dual-core PCs running standard (non-real-time) Linux, data can be taken at 1 kHz without losing events. Additional features include full integration into the /sys virtual file system and hot-plug device support. (author)

  4. Analysis of fast neutrons elastic moderator through exact solutions involving synthetic-kernels

    International Nuclear Information System (INIS)

    Moura Neto, C.; Chung, F.L.; Amorim, E.S.

    1979-07-01

    The computational difficulties in solving the transport equation for fast reactors can be reduced by the development of approximate models, assuming that continuous moderation holds. Two approximations were studied. The first was based on an expansion in Taylor series (the Fermi, Wigner, Greuling and Goertzel models), and the second involved the use of synthetic kernels (the Walti, Turinsky, Becker and Malaviya models). The flux obtained by the exact method is compared with the fluxes from the different models based on synthetic kernels. It can be verified that the present study is realistic for energies below the threshold for inelastic scattering, as well as in the resonance region. (Author) [pt

  5. Non-degenerate single-particle energies in the Ginocchio model

    International Nuclear Information System (INIS)

    Leviatan, A.; Kirson, M.W.

    1984-01-01

    A one-body operator expressing the breaking of the degeneracy of the single-nucleon energies is added to the pairing interaction of the Ginocchio model. This operator couples states inside the model's SD space to states outside it. The influence of this coupling on the effective interaction in the SD space and the possibility of expressing the results in terms of renormalization of parameters in the fermion hamiltonian or the IBM are investigated. The effective interaction is found to be almost diagonal in seniority, while splitting the previously-degenerate seniority multiplets. Appropriately renormalized Ginocchio and IBM hamiltonians can approximately reproduce the results, but fermion-number dependence of the hamiltonian parameters and explicit three-body interactions are needed to reproduce the computed effects exactly. (orig.)

  6. Non-degenerate single-particle energies in the Ginocchio model

    International Nuclear Information System (INIS)

    Leviatan, A.; Kirson, M.W.

    1983-07-01

    A one-body operator expressing the breaking of the degeneracy of the single-nucleon energies is added to the pairing interaction of the Ginocchio model. This operator couples states inside the model's S-D space to states outside it. The influence of this coupling on the effective interaction in the S-D space and the possibility of expressing the results in terms of renormalization of parameters in the fermion hamiltonian or the IBM are investigated. The effective interaction is found to be almost diagonal in seniority, while splitting the previously-degenerate seniority multiplets. Appropriately renormalized Ginocchio and IBM hamiltonians can approximately reproduce the results, but fermion-number dependence of the hamiltonian parameters and explicit three-body interactions are needed to reproduce the computed effects exactly. (author)

  7. Stable Kernel Representations as Nonlinear Left Coprime Factorizations

    NARCIS (Netherlands)

    Paice, A.D.B.; Schaft, A.J. van der

    1994-01-01

    A representation of nonlinear systems based on the idea of representing the input-output pairs of the system as elements of the kernel of a stable operator has been recently introduced. This has been denoted the kernel representation of the system. In this paper it is demonstrated that the kernel

  8. 7 CFR 981.60 - Determination of kernel weight.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Determination of kernel weight. 981.60 Section 981.60... Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which settlement...

  9. End-use quality of soft kernel durum wheat

    Science.gov (United States)

    Kernel texture is a major determinant of end-use quality of wheat. Durum wheat has very hard kernels. We developed soft kernel durum wheat via Ph1b-mediated homoeologous recombination. The Hardness locus was transferred from Chinese Spring to Svevo durum wheat via back-crossing. ‘Soft Svevo’ had SKC...

  10. Per-Sample Multiple Kernel Approach for Visual Concept Learning

    Directory of Open Access Journals (Sweden)

    Ling-Yu Duan

    2010-01-01

    Full Text Available Learning visual concepts from images is an important yet challenging problem in computer vision and multimedia research areas. Multiple kernel learning (MKL) methods have shown great advantages in visual concept learning. As a visual concept often exhibits great appearance variance, a canonical MKL approach may not generate satisfactory results when a uniform kernel combination is applied over the input space. In this paper, we propose a per-sample multiple kernel learning (PS-MKL) approach that takes intraclass diversity into account to improve discrimination. PS-MKL determines sample-wise kernel weights according to kernel functions and training samples. Kernel weights as well as kernel-based classifiers are jointly learned. For efficient learning, PS-MKL employs a sample selection strategy. Extensive experiments are carried out over three benchmark datasets of different characteristics, including Caltech101, WikipediaMM, and Pascal VOC'07. PS-MKL has achieved encouraging performance, comparable to the state of the art, and has outperformed a canonical MKL.

  11. Per-Sample Multiple Kernel Approach for Visual Concept Learning

    Directory of Open Access Journals (Sweden)

    Tian Yonghong

    2010-01-01

    Full Text Available Learning visual concepts from images is an important yet challenging problem in computer vision and multimedia research areas. Multiple kernel learning (MKL) methods have shown great advantages in visual concept learning. As a visual concept often exhibits great appearance variance, a canonical MKL approach may not generate satisfactory results when a uniform kernel combination is applied over the input space. In this paper, we propose a per-sample multiple kernel learning (PS-MKL) approach that takes intraclass diversity into account to improve discrimination. PS-MKL determines sample-wise kernel weights according to kernel functions and training samples. Kernel weights as well as kernel-based classifiers are jointly learned. For efficient learning, PS-MKL employs a sample selection strategy. Extensive experiments are carried out over three benchmark datasets of different characteristics, including Caltech101, WikipediaMM, and Pascal VOC'07. PS-MKL has achieved encouraging performance, comparable to the state of the art, and has outperformed a canonical MKL.
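
The per-sample weighting idea can be sketched in numpy. This is a hypothetical illustration, not the paper's formulation (in PS-MKL the weights are learned jointly with the classifier; here they are uniform, and the symmetric square-root combination rule is an illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(4)
X = rng.standard_normal((25, 3))

def rbf(X, gamma):
    sq = np.sum(X**2, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))

# Base kernels: two RBF widths and a linear kernel.
base = [rbf(X, 0.5), rbf(X, 2.0), X @ X.T]
M, n = len(base), len(X)

# One convex weight vector per sample (uniform here).
W = np.full((n, M), 1.0 / M)

# A symmetric per-sample combination rule:
#   K[i, j] = sum_m sqrt(W[i, m] * W[j, m]) * K_m[i, j]
# Each term equals diag(sqrt(W[:, m])) K_m diag(sqrt(W[:, m])), so the
# combined K remains positive semi-definite.
K = sum(np.sqrt(np.outer(W[:, m], W[:, m])) * base[m] for m in range(M))
```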

  12. Study on the scattering law and scattering kernel of hydrogen in zirconium hydride

    International Nuclear Information System (INIS)

    Jiang Xinbiao; Chen Wei; Chen Da; Yin Banghua; Xie Zhongsheng

    1999-01-01

    The nuclear analytical model for calculating the scattering law and scattering kernel for the uranium zirconium hydride reactor is described. Based on the acoustic and optical modes of zirconium hydride, its frequency distribution function f(ω) is given, and the scattering law of hydrogen in zirconium hydride is obtained with GASKET. The scattering kernel σ_l(E_0→E) of hydrogen bound in zirconium hydride is provided by the SMP code in the standard WIMS cross section library. Along with this library, WIMS is used to calculate the thermal neutron energy spectrum of the fuel cell. The results are satisfactory.

  13. Selection and properties of alternative forming fluids for TRISO fuel kernel production

    Energy Technology Data Exchange (ETDEWEB)

    Baker, M.P. [Colorado School of Mines, 1500 Illinois St., Golden, CO 80401 (United States); King, J.C., E-mail: kingjc@mines.edu [Colorado School of Mines, 1500 Illinois St., Golden, CO 80401 (United States); Gorman, B.P. [Colorado School of Mines, 1500 Illinois St., Golden, CO 80401 (United States); Marshall, D.W. [Idaho National Laboratory, 2525 N. Fremont Avenue, P.O. Box 1625, Idaho Falls, ID 83415 (United States)

    2013-01-15

    Highlights: ► Forming fluid selection criteria developed for TRISO kernel production. ► Ten candidates selected for further study. ► Density, viscosity, and surface tension measured for the first time. ► Settling velocity and heat transfer rates calculated. ► Three fluids recommended for kernel production testing. - Abstract: Current Very High Temperature Reactor (VHTR) designs incorporate TRi-structural ISOtropic (TRISO) fuel, which consists of a spherical fissile fuel kernel surrounded by layers of pyrolytic carbon and silicon carbide. An internal sol-gel process forms the fuel kernel using wet chemistry to produce uranium oxyhydroxide gel spheres by dropping a cold precursor solution into a hot column of trichloroethylene (TCE). Over time, gelation byproducts inhibit complete gelation, and the TCE must be purified or discarded. The resulting TCE waste stream contains both radioactive and hazardous materials and is thus considered a mixed hazardous waste. Changing the forming fluid to a non-hazardous alternative could greatly improve the economics of TRISO fuel kernel production. Selection criteria for a replacement forming fluid narrowed a list of ~10,800 chemicals to ten potential replacement forming fluids: 1-bromododecane, 1-bromotetradecane, 1-bromoundecane, 1-chlorooctadecane, 1-chlorotetradecane, 1-iododecane, 1-iodododecane, 1-iodohexadecane, 1-iodooctadecane, and squalane. The density, viscosity, and surface tension of each potential replacement forming fluid were measured as a function of temperature between 25 °C and 80 °C. Calculated settling velocities and heat transfer rates give an overall column height approximation. 1-Bromotetradecane, 1-chlorooctadecane, and 1-iodododecane show the greatest promise as replacements, and future tests will verify their ability to form satisfactory
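
A settling-velocity estimate of the kind mentioned in this record can be sketched with Stokes' law. The numbers below are illustrative only, not the paper's measured fluid properties:

```python
import numpy as np

def stokes_settling_velocity(d, rho_p, rho_f, mu):
    """Terminal settling velocity (m/s) of a small sphere from Stokes' law,
    v = g d^2 (rho_p - rho_f) / (18 mu); valid for Reynolds number << 1."""
    g = 9.81  # m/s^2
    return g * d**2 * (rho_p - rho_f) / (18.0 * mu)

# Hypothetical values: a 0.2 mm gel droplet of density 1500 kg/m^3 in a
# fluid of density 1000 kg/m^3 and viscosity 5 mPa s.
v = stokes_settling_velocity(d=2.0e-4, rho_p=1500.0, rho_f=1000.0, mu=5.0e-3)
reynolds = 1000.0 * v * 2.0e-4 / 5.0e-3   # check the creeping-flow assumption
column_time = 1.0 / v                     # seconds to settle through a 1 m column
```

Coupling such residence-time estimates with heat transfer rates gives the overall column height approximation the abstract refers to.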

  14. Linear and kernel methods for multi- and hypervariate change detection

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Canty, Morton J.

    2010-01-01

    . Principal component analysis (PCA) as well as maximum autocorrelation factor (MAF) and minimum noise fraction (MNF) analyses of IR-MAD images, both linear and kernel-based (which are nonlinear), may further enhance change signals relative to no-change background. The kernel versions are based on a dual...... formulation, also termed Q-mode analysis, in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products of the original data are replaced by inner products between nonlinear mappings into higher dimensional feature space. Via kernel substitution......, also known as the kernel trick, these inner products between the mappings are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of the kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel principal component...
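
The kernel trick described in this record can be made concrete with a small check: for the degree-2 polynomial kernel the explicit nonlinear mapping is available in closed form, so the Gram matrix of inner products between mappings can be compared against the kernel function evaluated directly (toy data, for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
X = rng.standard_normal((10, 2))

def phi(X):
    # Explicit feature map of the kernel k(x, y) = (x . y + 1)^2 for 2-D
    # inputs: phi(x) = (1, sqrt(2) x1, sqrt(2) x2, x1^2, x2^2, sqrt(2) x1 x2).
    x1, x2 = X[:, 0], X[:, 1]
    s = np.sqrt(2.0)
    return np.column_stack([np.ones_like(x1), s * x1, s * x2,
                            x1**2, x2**2, s * x1 * x2])

K_explicit = phi(X) @ phi(X).T       # inner products between the mappings
K_trick = (X @ X.T + 1.0) ** 2       # the same numbers via kernel substitution
```

For kernels such as the Gaussian the feature space is infinite-dimensional, which is precisely why the analysis is carried out through the Gram matrix alone.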

  15. Kernel based orthogonalization for change detection in hyperspectral images

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    function and all quantities needed in the analysis are expressed in terms of this kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel PCA and MNF analyses handle nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via...... analysis all 126 spectral bands of the HyMap are included. Changes on the ground are most likely due to harvest having taken place between the two acquisitions and solar effects (both solar elevation and azimuth have changed). Both types of kernel analysis emphasize change and unlike kernel PCA, kernel MNF...

  16. Mitigation of artifacts in RTM with migration kernel decomposition

    KAUST Repository

    Zhan, Ge; Schuster, Gerard T.

    2012-01-01

    The migration kernel for reverse-time migration (RTM) can be decomposed into four component kernels using Born scattering and migration theory. Each component kernel has a unique physical interpretation and can be interpreted differently

  17. Semi-Supervised Kernel PCA

    DEFF Research Database (Denmark)

    Walder, Christian; Henao, Ricardo; Mørup, Morten

    We present three generalisations of Kernel Principal Components Analysis (KPCA) which incorporate knowledge of the class labels of a subset of the data points. The first, MV-KPCA, penalises within class variances similar to Fisher discriminant analysis. The second, LSKPCA is a hybrid of least...... squares regression and kernel PCA. The final LR-KPCA is an iteratively reweighted version of the previous which achieves a sigmoid loss function on the labeled points. We provide a theoretical risk bound as well as illustrative experiments on real and toy data sets....

  18. Adaptive Metric Kernel Regression

    DEFF Research Database (Denmark)

    Goutte, Cyril; Larsen, Jan

    1998-01-01

    Kernel smoothing is a widely used nonparametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this paper, we propose an algorithm that adapts the input metric used in multivariate regression...... by minimising a cross-validation estimate of the generalisation error. This allows one to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms the standard...

  19. 21 CFR 176.350 - Tamarind seed kernel powder.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 3 2010-04-01 2009-04-01 true Tamarind seed kernel powder. 176.350 Section 176... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in producing...

  20. Comparison of Kernel Algorithms for Support Vector Machines (SVM) to Compare Trend Curves with Online Forex Trading Curves

    Directory of Open Access Journals (Sweden)

    irfan abbas

    2017-01-01

    Full Text Available At present, forex traders generally still receive exchange-rate figures from different sources, so they only know the rate prevailing at a given moment and find it difficult to analyze or predict future exchange-rate movements. Forex traders therefore usually rely on indicators, which are decision-making tools, to analyze and predict future values. Forex trading is the trading of one country's currency against another country's currency. Trading takes place globally between the world's financial centers, with the major banks handling the bulk of the transactions. Forex trading offers a profitable type of investment with small capital and high returns, since forex trading systems provide leverage that multiplies the invested capital when a buy/sell prediction is accurate; however, it carries a high level of risk, and losses can be avoided by knowing the right time to trade (buy or sell). Traders investing in the foreign exchange market are expected to be able to analyze circumstances and situations when predicting differences in currency exchange rates. Forex price movements form patterns (curves moving up and down) that greatly assist traders in making decisions, and the movement of the curve is used as an indicator in the decision to buy or sell. This study compares kernel types for the Support Vector Machine (SVM) in predicting the movement of the curve in live forex trading, using GBPUSD 1H data. From the results and discussion it can be concluded that the dot, multiquadric and neural kernels are not appropriate for the nonlinear forex data when following trend curves, because the curves they generate are linear (straight); the type of kernel that produces the closest curve
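
The record's finding, that a linear (dot) kernel cannot follow a curved trend while a nonlinear kernel can, admits a small numpy sketch (illustrative synthetic data; kernel ridge regression stands in for the SVM):

```python
import numpy as np

rng = np.random.default_rng(6)
t = np.linspace(0.0, 1.0, 80)[:, None]
price = np.sin(2.0 * np.pi * t[:, 0]) + 0.02 * rng.standard_normal(80)  # curved trend

kernels = {
    "dot":  lambda A, B: A @ B.T,                                    # linear
    "poly": lambda A, B: (A @ B.T + 1.0) ** 3,                       # cubic
    "rbf":  lambda A, B: np.exp(-np.subtract.outer(A[:, 0], B[:, 0])**2 / 0.02),
}

errors = {}
for name, k in kernels.items():
    K = k(t, t)
    alpha = np.linalg.solve(K + 1e-4 * np.eye(len(t)), price)        # kernel ridge fit
    errors[name] = float(np.mean((K @ alpha - price) ** 2))
```

The dot kernel can only produce a straight line through the curved series, so its fit error stays large, while the RBF kernel tracks the trend closely.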

  1. Higher-order predictions for splitting functions and coefficient functions from physical evolution kernels

    International Nuclear Information System (INIS)

    Vogt, A; Soar, G.; Vermaseren, J.A.M.

    2010-01-01

    We have studied the physical evolution kernels for nine non-singlet observables in deep-inelastic scattering (DIS), semi-inclusive e⁺e⁻ annihilation and the Drell-Yan (DY) process, and for the flavour-singlet case of the photon- and heavy-top Higgs-exchange structure functions (F₂, F_φ) in DIS. All known contributions to these kernels show an only single-logarithmic large-x enhancement at all powers of (1-x). Conjecturing that this behaviour persists to (all) higher orders, we have predicted the highest three (DY: two) double logarithms of the higher-order non-singlet coefficient functions and of the four-loop singlet splitting functions. The coefficient-function predictions can be written as exponentiations of 1/N-suppressed contributions in Mellin-N space which, however, are less predictive than the well-known exponentiation of the ln^k N terms. (orig.)

  2. Dense Medium Machine Processing Method for Palm Kernel/ Shell ...

    African Journals Online (AJOL)

    ADOWIE PERE

    Cracked palm kernel is a mixture of kernels, broken shells, dusts and other impurities. In ... machine processing method using dense medium, a separator, a shell collector and a kernel .... efficiency, ease of maintenance and uniformity of.

  3. Multivariate and semiparametric kernel regression

    OpenAIRE

    Härdle, Wolfgang; Müller, Marlene

    1997-01-01

    The paper gives an introduction to theory and application of multivariate and semiparametric kernel smoothing. Multivariate nonparametric density estimation is an often used pilot tool for examining the structure of data. Regression smoothing helps in investigating the association between covariates and responses. We concentrate on kernel smoothing using local polynomial fitting, which includes the Nadaraya-Watson estimator. Some theory on the asymptotic behavior and bandwidth selection is provided.
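
The Nadaraya-Watson estimator mentioned in this record admits a compact implementation; a minimal numpy sketch on synthetic data:

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_query, h):
    """Nadaraya-Watson estimator: a kernel-weighted local average,
    m(x) = sum_i K((x - x_i)/h) y_i / sum_i K((x - x_i)/h)."""
    u = (x_query[:, None] - x_train[None, :]) / h
    w = np.exp(-0.5 * u**2)              # Gaussian kernel weights
    return (w @ y_train) / w.sum(axis=1)

# Noisy observations of a smooth curve (illustrative data).
rng = np.random.default_rng(7)
x = rng.uniform(0.0, 1.0, 200)
y = np.cos(2.0 * np.pi * x) + 0.1 * rng.standard_normal(200)
xq = np.linspace(0.05, 0.95, 50)
y_hat = nadaraya_watson(x, y, xq, h=0.05)
```

The bandwidth h plays the role discussed in the abstract: too small a value reproduces the noise, too large a value oversmooths the curve.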

  4. Notes on the gamma kernel

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole E.

    The density function of the gamma distribution is used as shift kernel in Brownian semistationary processes modelling the timewise behaviour of the velocity in turbulent regimes. This report presents exact and asymptotic properties of the second order structure function under such a model......, and relates these to results of von Kármán and Howarth. But first it is shown that the gamma kernel is interpretable as a Green’s function....

  5. Analytic study of the off-diagonal mass generation for Yang-Mills theories in the maximal Abelian gauge

    International Nuclear Information System (INIS)

    Dudal, D.; Verschelde, H.; Gracey, J.A.; Lemes, V.E.R.; Sobreiro, R.F.; Sorella, S.P.; Sarandy, M.S.

    2004-01-01

    We investigate a dynamical mass generation mechanism for the off-diagonal gluons and ghosts in SU(N) Yang-Mills theories, quantized in the maximal Abelian gauge. Such a mass can be seen as evidence for the Abelian dominance in that gauge. It originates from the condensation of a mixed gluon-ghost operator of mass dimension two, which lowers the vacuum energy. We construct an effective potential for this operator by a combined use of the local composite operators technique with the algebraic renormalization and we discuss the gauge parameter independence of the results. We also show that it is possible to connect the vacuum energy, due to the mass dimension-two condensate discussed here, with the nontrivial vacuum energy originating from the condensate ⟨A_μ²⟩, which has attracted much attention in the Landau gauge

  6. Why the South Pacific Convergence Zone is diagonal

    OpenAIRE

    Van Der Wiel, Karin; Matthews, Adrian; Joshi, Manoj; Stevens, David

    2016-01-01

    During austral summer, the majority of precipitation over the Pacific Ocean is concentrated in the South Pacific Convergence Zone (SPCZ). The surface boundary conditions required to support the diagonally (northwest-southeast) oriented SPCZ are determined through a series of experiments with an atmospheric general circulation model. Continental configuration and orography do not have a significant influence on SPCZ orientation and strength. The key necessary boundary condition is the zonally ...

  7. Convergence of barycentric coordinates to barycentric kernels

    KAUST Repository

    Kosinka, Jiří

    2016-02-12

    We investigate the close correspondence between barycentric coordinates and barycentric kernels from the point of view of the limit process when finer and finer polygons converge to a smooth convex domain. We show that any barycentric kernel is the limit of a set of barycentric coordinates and prove that the convergence rate is quadratic. Our convergence analysis extends naturally to barycentric interpolants and mappings induced by barycentric coordinates and kernels. We verify our theoretical convergence results numerically on several examples.

  8. Convergence of barycentric coordinates to barycentric kernels

    KAUST Repository

    Kosinka, Jiří ; Barton, Michael

    2016-01-01

    We investigate the close correspondence between barycentric coordinates and barycentric kernels from the point of view of the limit process when finer and finer polygons converge to a smooth convex domain. We show that any barycentric kernel is the limit of a set of barycentric coordinates and prove that the convergence rate is quadratic. Our convergence analysis extends naturally to barycentric interpolants and mappings induced by barycentric coordinates and kernels. We verify our theoretical convergence results numerically on several examples.

  9. Hadamard Kernel SVM with applications for breast cancer outcome predictions.

    Science.gov (United States)

    Jiang, Hao; Ching, Wai-Ki; Cheung, Wai-Shun; Hou, Wenpin; Yin, Hong

    2017-12-21

    Breast cancer is one of the leading causes of death for women. It is of great necessity to develop effective methods for breast cancer detection and diagnosis. Recent studies have focused on gene-based signatures for outcome predictions. Kernel SVM, for its discriminative power in dealing with small-sample pattern recognition problems, has attracted a lot of attention. But how to select or construct an appropriate kernel for a specified problem still needs further investigation. Here we propose a novel kernel (the Hadamard kernel) in conjunction with Support Vector Machines (SVMs) to address the problem of breast cancer outcome prediction using gene expression data. The Hadamard kernel outperforms the classical kernels and the correlation kernel in terms of Area Under the ROC Curve (AUC) values on a number of real-world data sets adopted to test the performance of the different methods. Hadamard kernel SVM is effective for breast cancer predictions, in terms of both prognosis and diagnosis, and may benefit patients by guiding therapeutic options. Apart from that, it would be a valuable addition to the current SVM kernel families. We hope it will contribute to the wider biology and related communities.
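
The abstract does not define the Hadamard kernel itself, so the sketch below only illustrates the evaluation pipeline it describes: a precomputed kernel, a kernel classifier (kernel ridge standing in for the SVM), and AUC scoring via the rank-sum formula. All data and choices here are hypothetical:

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney U) formula."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = int(pos.sum()), int((~pos).sum())
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2.0) / (n_pos * n_neg)

# Toy two-class "expression" data: class 1 is shifted in the first feature.
rng = np.random.default_rng(8)
X = rng.standard_normal((60, 5))
labels = np.array([0] * 30 + [1] * 30)
X[labels == 1, 0] += 1.5

K = X @ X.T                     # any precomputed kernel matrix plugs in here
alpha = np.linalg.solve(K + 1.0 * np.eye(60), 2.0 * labels - 1.0)
scores = K @ alpha              # decision values (kernel ridge, SVM stand-in)
auc_value = auc(scores, labels)
```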

  10. Aflatoxin contamination of developing corn kernels.

    Science.gov (United States)

    Amer, M A

    2005-01-01

    Preharvest contamination of corn with aflatoxin is a serious problem. Some environmental and cultural factors responsible for infection and subsequent aflatoxin production were investigated in this study. Stage of growth and location of kernels on the corn ears were found to be among the important factors in the process of kernel infection with A. flavus and A. parasiticus. The results showed a positive correlation between stage of growth and kernel infection. Treatment of corn with aflatoxin reduced germination, protein and total nitrogen contents. Total and reducing soluble sugars increased in corn kernels in response to infection. Sucrose and protein contents were reduced by both pathogens. Shoot length, seedling fresh weight and seedling dry weight were also affected. Both pathogens induced a reduction of starch content. Healthy corn seedlings treated with aflatoxin solution were badly affected: their leaves became yellow and then turned brown with further incubation. Moreover, their total chlorophyll and protein contents showed a pronounced decrease, while total phenolic compounds increased. Histopathological studies indicated that A. flavus and A. parasiticus could colonize corn silks and invade developing kernels. Germination of A. flavus spores occurred, and hyphae spread rapidly across the silk, producing extensive growth and lateral branching. Conidiophores and conidia formed in and on the corn silk. Temperature and relative humidity greatly influenced the growth of A. flavus and A. parasiticus and aflatoxin production.

  11. Kernel Korner : The Linux keyboard driver

    NARCIS (Netherlands)

    Brouwer, A.E.

    1995-01-01

    Our Kernel Korner series continues with an article describing the Linux keyboard driver. This article is not for "Kernel Hackers" only--in fact, it will be most useful to those who wish to use their own keyboard to its fullest potential, and those who want to write programs to take advantage of the

  12. The heating of UO_2 kernels in argon gas medium on the physical properties of sintered UO_2 kernels

    International Nuclear Information System (INIS)

    Damunir; Sri Rinanti Susilowati; Ariyani Kusuma Dewi

    2015-01-01

    The effect of heating UO_2 kernels in an argon gas medium on the physical properties of sintered UO_2 kernels was investigated. The heating of the UO_2 kernels was conducted in a bed-type sinter reactor. The sample used was UO_2 kernels resulting from reduction at 800 °C for 3 hours, with a density of 8.13 g/cm"3, porosity of 0.26, O/U ratio of 2.05, diameter of 1146 μm and sphericity of 1.05. The sample was put into the sinter reactor, which was then evacuated by flowing argon gas at 180 mmHg pressure to drain the air from the reactor. After that, the cooling water and argon gas were continuously flowed at a pressure of 5 mPa with a velocity of 1.5 liter/minute. The reactor temperature was raised and varied over 1200-1500 °C for 1-4 hours. The sintered UO_2 kernels resulting from the study were analyzed in terms of their physical properties, including density, porosity, diameter, sphericity, and specific surface area. The density was analyzed using a pycnometer with CCl_4 solution. The porosity was determined using the Haynes equation. The diameter and sphericity were measured using a Dino-Lite microscope. The specific surface area was determined using a Nova-1000 surface area meter. The results showed that heating UO_2 kernels in an argon gas medium influenced the physical properties of the sintered UO_2 kernels. The product was at its relative best when heating was conducted at 1400 °C for 2 hours, producing sintered UO_2 kernels with a density of 10.14 g/ml, porosity of 7 %, diameter of 893 μm, sphericity of 1.07 and specific surface area of 4.68 m"2/g, with a solidification shrinkage of 22 %. (author)

  13. Breaking Megrelishvili protocol using matrix diagonalization

    Science.gov (United States)

    Arzaki, Muhammad; Triantoro Murdiansyah, Danang; Adi Prabowo, Satrio

    2018-03-01

    In this article we conduct a theoretical security analysis of Megrelishvili protocol—a linear algebra-based key agreement between two participants. We study the computational complexity of the Megrelishvili vector-matrix problem (MVMP) as a mathematical problem that strongly relates to the security of Megrelishvili protocol. In particular, we investigate the asymptotic upper bounds for the running time and memory requirement of the MVMP that involves a diagonalizable public matrix. Specifically, we devise a diagonalization method for solving the MVMP that is asymptotically faster than all of the previously existing algorithms. We also find an important counterintuitive result: the utilization of a primitive matrix in Megrelishvili protocol makes the protocol more vulnerable to attacks.
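
    The diagonalization idea can be illustrated in a few lines. The sketch below is a toy demonstration, not the paper's algorithm: the prime, matrices and secret exponent are all invented, and the eigendecomposition of the public matrix is assumed known. A published value w = vM^k over GF(p) is reduced, via M = PDP^-1, to a scalar discrete logarithm on an eigenvalue:

```python
# Sketch of a diagonalization attack on a Megrelishvili-style
# vector-matrix key agreement (all names/parameters are illustrative).
# Public: vector v and matrix M over GF(p); a party publishes w = v M^k.
# If M = P D P^-1 with D diagonal, then w P = (v P) D^k, so recovering
# the secret exponent k reduces to scalar discrete logarithms.

p = 101

def mat_mul(A, B):
    n, m, r = len(A), len(B[0]), len(B)
    return [[sum(A[i][t] * B[t][j] for t in range(r)) % p for j in range(m)]
            for i in range(n)]

def mat_pow(M, k):
    n = len(M)
    R = [[int(i == j) for j in range(n)] for i in range(n)]
    while k:                              # square-and-multiply
        if k & 1:
            R = mat_mul(R, M)
        M = mat_mul(M, M)
        k >>= 1
    return R

# Construct a diagonalizable public matrix M = P D P^-1 (mod p).
P = [[1, 1], [1, 2]]
P_inv = [[2, p - 1], [p - 1, 1]]          # inverse of P mod p
D = [[2, 0], [0, 3]]
M = mat_mul(mat_mul(P, D), P_inv)

v = [[5, 7]]                              # public row vector
k_secret = 29
w = mat_mul(v, mat_pow(M, k_secret))      # published value

# Attack: move to the eigenbasis and solve a scalar discrete log.
vP, wP = mat_mul(v, P), mat_mul(w, P)
d, target = D[0][0], wP[0][0] * pow(vP[0][0], -1, p) % p
k = next(e for e in range(p) if pow(d, e, p) == target)
print(k)  # recovers 29
```

For cryptographic parameters the scalar discrete logarithms would still have to be solved by a sub-exponential algorithm; the point is that diagonalization collapses the matrix problem to the scalar one.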

  14. The construction of a two-dimensional reproducing kernel function and its application in a biomedical model.

    Science.gov (United States)

    Guo, Qi; Shen, Shu-Ting

    2016-04-29

    There are two major classes of cardiac tissue models: the ionic model and the FitzHugh-Nagumo model. During computer simulation, each model entails solving a system of complex ordinary differential equations and a partial differential equation with non-flux boundary conditions. The reproducing kernel method possesses significant applications in solving partial differential equations. The derivative of the reproducing kernel function is a wavelet function, which has local properties and sensitivity to singularities; therefore, study of the application of the reproducing kernel is advantageous. We apply new mathematical theory to the numerical solution of the ventricular muscle model so as to improve its precision in comparison with other current methods. A two-dimensional reproducing kernel function in space is constructed and applied to computing the solution of a two-dimensional cardiac tissue model, by means of the difference method through time and the reproducing kernel method through space. Compared with other methods, this method holds several advantages, such as high accuracy in computing solutions, insensitivity to different time steps and a slow propagation speed of error. It is suitable for disorderly scattered node systems without meshing, and can arbitrarily change the location and density of the solution on different time layers. The reproducing kernel method has higher solution accuracy and stability in the solutions of the two-dimensional cardiac tissue model.

  15. Mitigation of artifacts in rtm with migration kernel decomposition

    KAUST Repository

    Zhan, Ge

    2012-01-01

    The migration kernel for reverse-time migration (RTM) can be decomposed into four component kernels using Born scattering and migration theory. Each component kernel has a unique physical interpretation. In this paper, we present a generalized diffraction-stack migration approach for reducing RTM artifacts via decomposition of the migration kernel. The decomposition leads to an improved understanding of migration artifacts and therefore presents us with opportunities for improving the quality of RTM images.

  16. Globally convergent optimization algorithm using conservative convex separable diagonal quadratic approximations

    NARCIS (Netherlands)

    Groenwold, A.A.; Wood, D.W.; Etman, L.F.P.; Tosserams, S.

    2009-01-01

    We implement and test a globally convergent sequential approximate optimization algorithm based on (convexified) diagonal quadratic approximations. The algorithm resides in the class of globally convergent optimization methods based on conservative convex separable approximations developed by

  17. Stability of matrices with sufficiently strong negative-dominant-diagonal submatrices

    NARCIS (Netherlands)

    Nieuwenhuis, H.J.; Schoonbeek, L.

    A well-known sufficient condition for stability of a system of linear first-order differential equations is that the matrix of the homogeneous dynamics has a negative dominant diagonal. However, this condition cannot be applied to systems of second-order differential equations. In this paper we

  18. Realized kernels in practice

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Hansen, P. Reinhard; Lunde, Asger

    2009-01-01

    Realized kernels use high-frequency data to estimate daily volatility of individual stock prices. They can be applied to either trade or quote data. Here we provide the details of how we suggest implementing them in practice. We compare the estimates based on trade and quote data for the same stock and find a remarkable level of agreement. We identify some features of the high-frequency data which are challenging for realized kernels: local trends in the data, over periods of around 10 minutes, where the prices and quotes are driven up or down. These can be associated...

  19. Spectral/spatial optical CDMA code based on Diagonal Eigenvalue Unity

    Science.gov (United States)

    Najjar, Monia; Jellali, Nabiha; Ferchichi, Moez; Rezig, Houria

    2017-11-01

    A new two-dimensional Diagonal Eigenvalue Unity (2D-DEU) code is developed for the spectral/spatial optical code division multiple access (OCDMA) system. It has a lower cross-correlation value compared to the two-dimensional diluted perfect difference (2D-DPD) and two-dimensional Extended Enhanced Double Weight (2D-Extended-EDW) codes. Also, for the same code length, the number of users that can be supported by the 2D-DEU code is higher than that provided by the other codes. The Bit Error Rate (BER) numerical analysis is developed by considering the effects of shot noise, phase induced intensity noise (PIIN), and thermal noise. The main result shows that the BER is strongly affected by PIIN at higher source power. The 2D-DEU code performance is compared with the 2D-DPD, 2D-Extended-EDW and two-dimensional multi-diagonal (2D-MD) codes. This comparison proves that the proposed 2D-DEU system outperforms the related codes.

  20. Modified Dynamical Supergravity Breaking and Off-Diagonal Super-Higgs Effects

    CERN Document Server

    Gheorghiu, Tamara; Vacaru, Sergiu

    2015-01-01

    We argue that generic off-diagonal vacuum and nonvacuum solutions for Einstein manifolds mimic physical effects in modified gravity theories (MGTs) and encode certain models of $f(R,T,...)$ gravity, Hořava-type gravity with dynamical Lorentz symmetry breaking, induced effective mass for the graviton, etc. Our main goal is to investigate the dynamical breaking of local supersymmetry determined by off-diagonal solutions in MGTs encoded as effective Einstein spaces. This includes the Deser-Zumino super-Higgs effect, for instance, for a one-loop potential in a (simple but representative) model of $\mathcal{N}=1, D=4$ supergravity. We develop and apply a new geometric technique which allows us to decouple the gravitational field equations and integrate them in very general forms, with metrics and vierbein fields depending on all spacetime coordinates via various generating and integration functions and parameters. We study how solutions in MGTs may be related to the dynamical generation of a gravitino mass and supergravity breaking.

  1. Anatomically-aided PET reconstruction using the kernel method.

    Science.gov (United States)

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T; Catana, Ciprian; Qi, Jinyi

    2016-09-21

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.
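
    The kernelized reconstruction idea above can be sketched as a tiny ML-EM loop. Everything below is an illustrative toy (random system matrix, random stand-in "anatomical" features, invented sizes), not the paper's implementation: the image is parameterized as x = Kα and the EM update is applied to the coefficients α:

```python
import numpy as np

# Toy sketch of a kernelized MLEM update with x = K @ alpha.
# A is a (random, illustrative) system matrix and y the measured counts.
rng = np.random.default_rng(0)
n_pix, n_bins = 16, 24

A = rng.random((n_bins, n_pix))            # forward projector
x_true = rng.random(n_pix) + 0.5
y = rng.poisson(A @ x_true * 50)           # Poisson measurement model

# Kernel matrix from stand-in anatomical features: Gaussian weights on
# feature distances (random features play the role of an MR image here).
feats = rng.random((n_pix, 3))
d2 = ((feats[:, None, :] - feats[None, :, :])**2).sum(-1)
K = np.exp(-d2 / (2 * 0.3**2))

alpha = np.ones(n_pix)
sens = K.T @ (A.T @ np.ones(n_bins))       # sensitivity term
for _ in range(50):                        # kernelized EM iterations
    ybar = A @ (K @ alpha) + 1e-12         # expected counts
    alpha *= (K.T @ (A.T @ (y / ybar))) / sens
x_hat = K @ alpha                          # reconstructed image
```

The multiplicative update keeps α (and hence the image) non-negative, just as in conventional ML-EM.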

  2. Embedded real-time operating system micro kernel design

    Science.gov (United States)

    Cheng, Xiao-hui; Li, Ming-qiang; Wang, Xin-zheng

    2005-12-01

    Embedded systems usually require real-time behavior. Based on an 8051 microcontroller, an embedded real-time operating system micro kernel is proposed consisting of six parts: critical section handling, task scheduling, interrupt handling, semaphore and message mailbox communication, clock management and memory management. CPU time and other resources are distributed among tasks rationally according to their importance and urgency. The design proposed here provides the position, definition, function and principle of the micro kernel. The kernel runs on the platform of an ATMEL AT89C51 microcontroller. Simulation results prove that the designed micro kernel is stable and reliable and has quick response while operating in an application system.

  3. Kernel Temporal Differences for Neural Decoding

    Science.gov (United States)

    Bae, Jihye; Sanchez Giraldo, Luis G.; Pohlmeyer, Eric A.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.

    2015-01-01

    We study the feasibility and capability of the kernel temporal difference (KTD)(λ) algorithm for neural decoding. KTD(λ) is an online, kernel-based learning algorithm, which has been introduced to estimate value functions in reinforcement learning. This algorithm combines kernel-based representations with the temporal difference approach to learning. One of our key observations is that by using strictly positive definite kernels, the algorithm's convergence can be guaranteed for policy evaluation. The algorithm's nonlinear functional approximation capabilities are shown in both simulations of policy evaluation and neural decoding problems (policy improvement). KTD can handle high-dimensional neural states containing spatial-temporal information at a reasonable computational complexity, allowing real-time applications. When the algorithm seeks a proper mapping between a monkey's neural states and desired positions of a computer cursor or a robot arm, in both open-loop and closed-loop experiments, it can effectively learn the neural state to action mapping. Finally, a visualization of the coadaptation process between the decoder and the subject shows the algorithm's capabilities in reinforcement learning brain machine interfaces. PMID:25866504

  4. Collision kernels in the eikonal approximation for Lennard-Jones interaction potential

    International Nuclear Information System (INIS)

    Zielinska, S.

    1985-03-01

    Velocity-changing collisions are conveniently described by collision kernels. These kernels depend on an interaction potential, and there is a need to evaluate them for realistic interatomic potentials. Using the collision kernels, we are able to investigate the redistribution of atomic populations caused by laser light and velocity-changing collisions. In this paper we present a method of evaluating the collision kernels in the eikonal approximation. We discuss the influence of the potential parameters Rsub(o)sup(i) and epsilonsub(o)sup(i) on the kernel width for a given atomic state. It turns out that, unlike the collision kernel for the hard-sphere model of scattering, the Lennard-Jones kernel is not as sensitive to changes of Rsub(o)sup(i). Contrary to the general tendency of approximating collision kernels by a Gaussian curve, kernels for the Lennard-Jones potential do not exhibit such behaviour. (author)

  5. Classification of maize kernels using NIR hyperspectral imaging

    DEFF Research Database (Denmark)

    Williams, Paul; Kucheryavskiy, Sergey V.

    2016-01-01

    NIR hyperspectral imaging was evaluated to classify maize kernels of three hardness categories: hard, medium and soft. Two approaches, pixel-wise and object-wise, were investigated to group kernels according to hardness. The pixel-wise classification assigned a class to every pixel from individual...... and specificity of 0.95 and 0.93). Both feature extraction methods can be recommended for classification of maize kernels on production scale....

  6. Normalizing computed tomography data reconstructed with different filter kernels: effect on emphysema quantification

    Energy Technology Data Exchange (ETDEWEB)

    Gallardo-Estrella, Leticia; Prokop, Mathias [Radboud University Nijmegen Medical Center, Geert Grooteplein 10 (route 767), P.O. Box 9101, Nijmegen (Netherlands); Lynch, David A.; Stinson, Douglas; Zach, Jordan [National Jewish Health, Denver, CO (United States); Judy, Philip F. [Brigham and Women's Hospital, Boston, MA (United States); Ginneken, Bram van; Rikxoort, Eva M. van [Radboud University Nijmegen Medical Center, Geert Grooteplein 10 (route 767), P.O. Box 9101, Nijmegen (Netherlands); Fraunhofer MEVIS, Bremen (Germany)]

    2016-02-15

    To propose and evaluate a method to reduce variability in emphysema quantification among different computed tomography (CT) reconstructions by normalizing CT data reconstructed with varying kernels. We included 369 subjects from the COPDGene study. For each subject, spirometry and a chest CT reconstructed with two kernels were obtained using two different scanners. Normalization was performed by frequency band decomposition with hierarchical unsharp masking to standardize the energy in each band to a reference value. Emphysema scores (ES), the percentage of lung voxels below -950 HU, were computed before and after normalization. Bland-Altman analysis and correlation between ES and spirometry before and after normalization were compared. Two mixed cohorts, containing data from all scanners and kernels, were created to simulate heterogeneous acquisition parameters. The average difference in ES between kernels decreased for the scans obtained with both scanners after normalization (7.7 ± 2.7 to 0.3 ± 0.7; 7.2 ± 3.8 to -0.1 ± 0.5). Correlation coefficients between ES and FEV1, and FEV1/FVC increased significantly for the mixed cohorts. Normalization of chest CT data reduces variation in emphysema quantification due to reconstruction filters and improves correlation between ES and spirometry. (orig.)

  7. Normalizing computed tomography data reconstructed with different filter kernels: effect on emphysema quantification

    International Nuclear Information System (INIS)

    Gallardo-Estrella, Leticia; Prokop, Mathias; Lynch, David A.; Stinson, Douglas; Zach, Jordan; Judy, Philip F.; Ginneken, Bram van; Rikxoort, Eva M. van

    2016-01-01

    To propose and evaluate a method to reduce variability in emphysema quantification among different computed tomography (CT) reconstructions by normalizing CT data reconstructed with varying kernels. We included 369 subjects from the COPDGene study. For each subject, spirometry and a chest CT reconstructed with two kernels were obtained using two different scanners. Normalization was performed by frequency band decomposition with hierarchical unsharp masking to standardize the energy in each band to a reference value. Emphysema scores (ES), the percentage of lung voxels below -950 HU, were computed before and after normalization. Bland-Altman analysis and correlation between ES and spirometry before and after normalization were compared. Two mixed cohorts, containing data from all scanners and kernels, were created to simulate heterogeneous acquisition parameters. The average difference in ES between kernels decreased for the scans obtained with both scanners after normalization (7.7 ± 2.7 to 0.3 ± 0.7; 7.2 ± 3.8 to -0.1 ± 0.5). Correlation coefficients between ES and FEV1, and FEV1/FVC increased significantly for the mixed cohorts. Normalization of chest CT data reduces variation in emphysema quantification due to reconstruction filters and improves correlation between ES and spirometry. (orig.)

  8. Gradient-based adaptation of general gaussian kernels.

    Science.gov (United States)

    Glasmachers, Tobias; Igel, Christian

    2005-10-01

    Gradient-based optimization of Gaussian kernel functions is considered. The gradient for the adaptation of scaling and rotation of the input space is computed to achieve invariance against linear transformations. This is done by using the exponential map as a parameterization of the kernel parameter manifold. By restricting the optimization to a constant trace subspace, the kernel size can be controlled. This is, for example, useful to prevent overfitting when minimizing radius-margin generalization performance measures. The concepts are demonstrated by training hard margin support vector machines on toy data.
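
    As a much-simplified sketch of gradient-based kernel adaptation (the paper adapts full scaling and rotation via the exponential map and radius-margin bounds; here only a single bandwidth gamma is adapted, by finite-difference ascent on kernel-target alignment, with invented toy data):

```python
import numpy as np

# Hedged sketch: gradient adaptation of one Gaussian kernel parameter.
# We ascend the kernel-target alignment A(K, yy') over the bandwidth
# gamma of K_ij = exp(-gamma * ||x_i - x_j||^2); all values are toy.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 2))
y = np.sign(X[:, 0])                       # toy binary labels

D2 = ((X[:, None] - X[None, :])**2).sum(-1)
Y = np.outer(y, y)

def alignment(gamma):
    K = np.exp(-gamma * D2)
    return (K * Y).sum() / (np.linalg.norm(K) * np.linalg.norm(Y))

gamma, lr = 1.0, 0.05
for _ in range(100):                       # finite-difference ascent step
    g = (alignment(gamma * 1.01) - alignment(gamma / 1.01)) \
        / (gamma * (1.01 - 1 / 1.01))
    gamma = max(1e-3, gamma + lr * g)      # keep the bandwidth positive
```

The same ascent generalizes to full covariance adaptation once the parameter manifold is handled with the exponential map, as in the paper.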

  9. Modelling dense relational data

    DEFF Research Database (Denmark)

    Herlau, Tue; Mørup, Morten; Schmidt, Mikkel Nørgaard

    2012-01-01

    they are not naturally suited for kernel K-means. We propose a generative Bayesian model for dense matrices which generalize kernel K-means to consider off-diagonal interactions in matrices of interactions, and demonstrate its ability to detect structure on both artificial data and two real data sets....

  10. Analog forecasting with dynamics-adapted kernels

    Science.gov (United States)

    Zhao, Zhizhen; Giannakis, Dimitrios

    2016-09-01

    Analog forecasting is a nonparametric technique introduced by Lorenz in 1969 which predicts the evolution of states of a dynamical system (or observables defined on the states) by following the evolution of the sample in a historical record of observations which most closely resembles the current initial data. Here, we introduce a suite of forecasting methods which improve traditional analog forecasting by combining ideas from kernel methods developed in harmonic analysis and machine learning and state-space reconstruction for dynamical systems. A key ingredient of our approach is to replace single-analog forecasting with weighted ensembles of analogs constructed using local similarity kernels. The kernels used here employ a number of dynamics-dependent features designed to improve forecast skill, including Takens’ delay-coordinate maps (to recover information in the initial data lost through partial observations) and a directional dependence on the dynamical vector field generating the data. Mathematically, our approach is closely related to kernel methods for out-of-sample extension of functions, and we discuss alternative strategies based on the Nyström method and the multiscale Laplacian pyramids technique. We illustrate these techniques in applications to forecasting in a low-order deterministic model for atmospheric dynamics with chaotic metastability, and interannual-scale forecasting in the North Pacific sector of a comprehensive climate model. We find that forecasts based on kernel-weighted ensembles have significantly higher skill than the conventional approach following a single analog.
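
    The core of kernel-weighted analog forecasting can be sketched compactly. The snippet below is an illustrative reduction of the idea, not the paper's dynamics-adapted construction: a plain Gaussian similarity kernel on raw states, with an invented trajectory and bandwidth:

```python
import numpy as np

# Minimal kernel-weighted analog forecast: predict tau steps ahead by
# averaging the futures of historical analogs, weighted by a Gaussian
# similarity kernel (eps and the demo data are illustrative).
def analog_forecast(history, x0, tau, eps=0.05):
    past = history[:-tau]                  # candidate analog states
    future = history[tau:]                 # their tau-step evolutions
    w = np.exp(-np.sum((past - x0)**2, axis=1) / eps)
    return (w / w.sum()) @ future          # kernel-weighted ensemble mean

# Demo: noisy trajectory on the unit circle.
t = np.linspace(0, 20 * np.pi, 4000)
rng = np.random.default_rng(1)
history = np.c_[np.cos(t), np.sin(t)] + 0.01 * rng.normal(size=(4000, 2))
pred = analog_forecast(history, history[100], tau=5)  # near history[105]
```

Replacing the raw states with delay-coordinate features and adding a directional dependence on the vector field recovers the dynamics-adapted kernels discussed above.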

  11. The role of Tre6P and SnRK1 in maize early kernel development and events leading to stress-induced kernel abortion.

    Science.gov (United States)

    Bledsoe, Samuel W; Henry, Clémence; Griffiths, Cara A; Paul, Matthew J; Feil, Regina; Lunn, John E; Stitt, Mark; Lagrimini, L Mark

    2017-04-12

    Drought stress during flowering is a major contributor to yield loss in maize. Genetic and biotechnological improvement in yield sustainability requires an understanding of the mechanisms underpinning yield loss. Sucrose starvation has been proposed as the cause for kernel abortion; however, potential targets for genetic improvement have not been identified. Field and greenhouse drought studies with maize are expensive and it can be difficult to reproduce results; therefore, an in vitro kernel culture method is presented as a proxy for drought stress occurring at the time of flowering in maize (3 days after pollination). This method is used to focus on the effects of drought on kernel metabolism, and the role of trehalose 6-phosphate (Tre6P) and the sucrose non-fermenting-1-related kinase (SnRK1) as potential regulators of this response. A precipitous drop in Tre6P is observed during the first two hours after removing the kernels from the plant, and the resulting changes in transcript abundance are indicative of an activation of SnRK1, and an immediate shift from anabolism to catabolism. Once Tre6P levels are depleted to below 1 nmol·g⁻¹ FW in the kernel, SnRK1 remained active throughout the 96 h experiment, regardless of the presence or absence of sucrose in the medium. Recovery on sucrose enriched medium results in the restoration of sucrose synthesis and glycolysis. Biosynthetic processes including the citric acid cycle and protein and starch synthesis are inhibited by excision, and do not recover even after the re-addition of sucrose. It is also observed that excision induces the transcription of the sugar transporters SUT1 and SWEET1, the sucrose hydrolyzing enzymes CELL WALL INVERTASE 2 (INCW2) and SUCROSE SYNTHASE 1 (SUSY1), the class II TREHALOSE PHOSPHATE SYNTHASES (TPS), TREHALASE (TRE), and TREHALOSE PHOSPHATE PHOSPHATASE (ZmTPPA.3), previously shown to enhance drought tolerance (Nuccio et al., Nat Biotechnol (October 2014):1-13, 2015). The impact

  12. Accurate convolution/superposition for multi-resolution dose calculation using cumulative tabulated kernels

    International Nuclear Information System (INIS)

    Lu Weiguo; Olivera, Gustavo H; Chen Mingli; Reckwerdt, Paul J; Mackie, Thomas R

    2005-01-01

    Convolution/superposition (C/S) is regarded as the standard dose calculation method in most modern radiotherapy treatment planning systems. Different implementations of C/S could result in significantly different dose distributions. This paper addresses two major implementation issues associated with collapsed cone C/S: one is how to utilize the tabulated kernels instead of analytical parametrizations and the other is how to deal with voxel size effects. Three methods that utilize the tabulated kernels are presented in this paper. These methods differ in the effective kernels used: the differential kernel (DK), the cumulative kernel (CK) or the cumulative-cumulative kernel (CCK). They result in slightly different computation times but significantly different voxel size effects. Both simulated and real multi-resolution dose calculations are presented. For simulation tests, we use arbitrary kernels and various voxel sizes with a homogeneous phantom, and assume forward energy transportation only. Simulations with voxel size up to 1 cm show that the CCK algorithm has errors within 0.1% of the maximum gold standard dose. Real dose calculations use a heterogeneous slab phantom and both the 'broad' (5 × 5 cm²) and the 'narrow' (1.2 × 1.2 cm²) tomotherapy beams. Various voxel sizes (0.5 mm, 1 mm, 2 mm, 4 mm and 8 mm) are used for dose calculations. The results show that all three algorithms have negligible differences (0.1%) for dose calculation at the fine resolution (0.5 mm voxels), but differences become significant when the voxel size increases. For the DK or CK algorithm in the broad (narrow) beam dose calculation, the dose differences between the 0.5 mm voxels and the voxels up to 8 mm (4 mm) are around 10% (7%) of the maximum dose. For the broad (narrow) beam dose calculation using the CCK algorithm, the dose differences between the 0.5 mm voxels and the voxels up to 8 mm (4 mm) are around 1% of the maximum dose. Among all three methods, the CCK algorithm
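
    The advantage of cumulative tabulated kernels over differential ones can be shown with a one-dimensional toy (the exponential kernel and voxel grid below are invented for illustration and are not the paper's C/S kernels):

```python
import numpy as np

# Illustrative 1-D comparison of sampling a differential kernel (DK)
# versus using its cumulative form (CK). With the cumulative kernel,
# the energy deposited in a voxel [a, b] is CK(b) - CK(a), which stays
# accurate as the voxel size grows.
k = lambda r: np.exp(-r)                  # toy differential kernel
CK = lambda r: 1.0 - np.exp(-r)           # its analytic cumulative form

def deposit(voxel_edges, method):
    e = np.asarray(voxel_edges, dtype=float)
    if method == "DK":                    # midpoint sample times width
        mid = 0.5 * (e[:-1] + e[1:])
        return k(mid) * np.diff(e)
    return CK(e[1:]) - CK(e[:-1])         # difference of cumulative values

coarse = np.linspace(0.0, 4.0, 5)         # four 1.0-wide voxels
total_dk = deposit(coarse, "DK").sum()    # midpoint rule drifts on coarse voxels
total_ck = deposit(coarse, "CK").sum()    # exactly CK(4), voxel-size independent
```

The same cumulative trick, applied once more along the ray, is what makes the cumulative-cumulative kernel (CCK) robust to large voxels.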

  13. Open Problem: Kernel methods on manifolds and metric spaces

    DEFF Research Database (Denmark)

    Feragen, Aasa; Hauberg, Søren

    2016-01-01

    Radial kernels are well-suited for machine learning over general geodesic metric spaces, where pairwise distances are often the only computable quantity available. We have recently shown that geodesic exponential kernels are only positive definite for all bandwidths when the input space has strong...... linear properties. This negative result hints that radial kernels are perhaps not suitable over geodesic metric spaces after all. Here, however, we present evidence that large intervals of bandwidths exist where geodesic exponential kernels have high probability of being positive definite over finite...... datasets, while still having significant predictive power. From this we formulate conjectures on the probability of a positive definite kernel matrix for a finite random sample, depending on the geometry of the data space and the spread of the sample....

  14. Investigation of Diagonal Antenna-Chassis Mode in Mobile Terminal LTE MIMO Antennas for Bandwidth Enhancement

    DEFF Research Database (Denmark)

    Zhang, Shuai; Zhao, Kun; Ying, Zhinong

    2015-01-01

    mechanism of the mismatch of these three bandwidth ranges is also explained. Furthermore, the diagonal antenna-chassis mode is also studied for MIMO elements in the adjacent and diagonal corner locations. As a practical example, a wideband collocated LTE MIMO antenna is proposed and measured. It covers......A diagonal antenna-chassis mode is investigated in long-term evolution multiple-input-multiple-output (LTE MIMO) antennas. The MIMO bandwidth is defined in this paper as the overlap range of the low-envelope correlation coefficient, high total efficiency, and -6-dB impedance matching bandwidths...... the bands of 740-960 and 1700-2700 MHz, where the total efficiencies are better than -3.4 and -1.8 dB, with envelope correlation coefficients lower than 0.5 and 0.1, respectively. The measurements agree well with the simulations. Since the proposed method only needs to modify the excitation locations of the MIMO elements on the chassis...

  15. Genetic dissection of the maize kernel development process via conditional QTL mapping for three developing kernel-related traits in an immortalized F2 population.

    Science.gov (United States)

    Zhang, Zhanhui; Wu, Xiangyuan; Shi, Chaonan; Wang, Rongna; Li, Shengfei; Wang, Zhaohui; Liu, Zonghua; Xue, Yadong; Tang, Guiliang; Tang, Jihua

    2016-02-01

    Kernel development is an important dynamic trait that determines the final grain yield in maize. To dissect the genetic basis of maize kernel development process, a conditional quantitative trait locus (QTL) analysis was conducted using an immortalized F2 (IF2) population comprising 243 single crosses at two locations over 2 years. Volume (KV) and density (KD) of dried developing kernels, together with kernel weight (KW) at different developmental stages, were used to describe dynamic changes during kernel development. Phenotypic analysis revealed that final KW and KD were determined at DAP22 and KV at DAP29. Unconditional QTL mapping for KW, KV and KD uncovered 97 QTLs at different kernel development stages, of which qKW6b, qKW7a, qKW7b, qKW10b, qKW10c, qKV10a, qKV10b and qKV7 were identified under multiple kernel developmental stages and environments. Among the 26 QTLs detected by conditional QTL mapping, conqKW7a, conqKV7a, conqKV10a, conqKD2, conqKD7 and conqKD8a were conserved between the two mapping methodologies. Furthermore, most of these QTLs were consistent with QTLs and genes for kernel development/grain filling reported in previous studies. These QTLs probably contain major genes associated with the kernel development process, and can be used to improve grain yield and quality through marker-assisted selection.

  16. Kernel-based noise filtering of neutron detector signals

    International Nuclear Information System (INIS)

    Park, Moon Ghu; Shin, Ho Cheol; Lee, Eun Ki

    2007-01-01

    This paper describes recently developed techniques for effective filtering of neutron detector signal noise. Three kinds of noise filters are proposed and their performance is demonstrated for the estimation of reactivity. The tested filters are based on the unilateral kernel filter, the unilateral kernel filter with adaptive bandwidth, and the bilateral filter, chosen to show their effectiveness in edge preservation. Filtering performance is compared with conventional low-pass and wavelet filters. The bilateral filter shows a remarkable improvement compared with the unilateral kernel and wavelet filters. The effectiveness and simplicity of the unilateral kernel filter with adaptive bandwidth is also demonstrated by applying it to the reactivity measurement performed during reactor start-up physics tests
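
    The edge-preserving behaviour of the bilateral filter can be sketched for a one-dimensional detector-like signal (the signal and all parameters below are illustrative, not the paper's reactivity data):

```python
import numpy as np

# Sketch of a 1-D bilateral filter: each sample is replaced by an
# average over its neighbours, weighted both by distance (spatial
# kernel) and by amplitude difference (range kernel). This smooths
# noise while preserving step edges.
def bilateral_1d(x, half_width=5, sigma_d=2.0, sigma_r=0.5):
    y = np.empty_like(x, dtype=float)
    for i in range(len(x)):
        lo, hi = max(0, i - half_width), min(len(x), i + half_width + 1)
        idx = np.arange(lo, hi)
        w = (np.exp(-(idx - i)**2 / (2 * sigma_d**2))        # spatial
             * np.exp(-(x[idx] - x[i])**2 / (2 * sigma_r**2)))  # range
        y[i] = np.sum(w * x[idx]) / np.sum(w)
    return y

# Noisy step signal: the edge at sample 50 survives the filtering.
rng = np.random.default_rng(0)
sig = np.r_[np.zeros(50), np.ones(50)] + 0.05 * rng.normal(size=100)
out = bilateral_1d(sig)
```

A pure Gaussian (unilateral) kernel would blur the step; the range kernel suppresses contributions from samples on the other side of the edge.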

  17. A point kernel shielding code, PKN-HP, for high energy proton incident

    Energy Technology Data Exchange (ETDEWEB)

    Kotegawa, Hiroshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1996-06-01

    A point kernel integral technique code, PKN-HP, and the related thick-target neutron yield data have been developed to calculate neutron and secondary gamma-ray dose equivalents in ordinary concrete and iron shields, for neutrons produced by 100 MeV-10 GeV protons incident on fully stopping length C, Cu and U-238 targets, in a 3-dimensional geometry. Comparisons among the calculation results of the present code, other calculation techniques, and measured values showed the usefulness of the code. (author)

  18. A trace ratio maximization approach to multiple kernel-based dimensionality reduction.

    Science.gov (United States)

    Jiang, Wenhao; Chung, Fu-lai

    2014-01-01

    Most dimensionality reduction techniques are based on a single metric or kernel, so selecting an appropriate kernel is essential for kernel-based dimensionality reduction. Multiple kernel learning for dimensionality reduction (MKL-DR) has recently been proposed to learn a kernel from a set of base kernels, which are viewed as different descriptions of the data. As MKL-DR does not involve regularization, it might be ill-posed under some conditions, which hinders its application. This paper proposes a multiple kernel learning framework for dimensionality reduction based on a regularized trace ratio, termed MKL-TR. Our method aims at learning a transformation into a lower-dimensional space and a corresponding kernel from the given base kernels, among which some may not be suitable for the given data. The solutions for the proposed framework can be found via trace ratio maximization. The experimental results demonstrate its effectiveness on benchmark text, image and sound datasets for supervised, unsupervised and semi-supervised settings. Copyright © 2013 Elsevier Ltd. All rights reserved.
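    The trace ratio maximization at the core of such methods can be sketched with the classic iterative eigendecomposition scheme (a generic numpy illustration; the paper's actual solver and the matrices A and B built from the base kernels are not reproduced here):

```python
import numpy as np

def trace_ratio(A, B, k, n_iter=100, tol=1e-12):
    """Maximise tr(V'AV) / tr(V'BV) over d x k matrices with V'V = I.

    Classic iteration: for the current ratio lam, take V as the top-k
    eigenvectors of A - lam*B, then update lam. The ratio increases
    monotonically across iterations.
    """
    d = A.shape[0]
    V = np.eye(d)[:, :k]
    lam = np.trace(V.T @ A @ V) / np.trace(V.T @ B @ V)
    for _ in range(n_iter):
        _, U = np.linalg.eigh(A - lam * B)   # eigenvalues in ascending order
        V = U[:, -k:]                        # top-k eigenvectors
        new_lam = np.trace(V.T @ A @ V) / np.trace(V.T @ B @ V)
        if abs(new_lam - lam) < tol:
            lam = new_lam
            break
        lam = new_lam
    return V, lam

# Toy "numerator" / "denominator" matrices standing in for scatter matrices.
rng = np.random.default_rng(0)
Ms = rng.normal(size=(6, 6))
A = Ms @ Ms.T                                # PSD numerator matrix
Ns = rng.normal(size=(6, 6))
B = Ns @ Ns.T + 6.0 * np.eye(6)              # well-conditioned denominator
V, lam = trace_ratio(A, B, k=2)
```

    Each iteration can only increase the ratio, since the new V maximizes tr(V'(A - lam*B)V), which is zero at the previous V.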

  19. Background field removal using a region adaptive kernel for quantitative susceptibility mapping of human brain

    Science.gov (United States)

    Fang, Jinsheng; Bao, Lijun; Li, Xu; van Zijl, Peter C. M.; Chen, Zhong

    2017-08-01

    Background field removal is an important MR phase preprocessing step for quantitative susceptibility mapping (QSM). It separates the local field induced by tissue magnetic susceptibility sources from the background field generated by sources outside a region of interest, e.g. the brain, such as the air-tissue interface. In the vicinity of air-tissue boundaries, e.g. the skull and paranasal sinuses, where large susceptibility variations exist, present background field removal methods are usually insufficient, and these regions often need to be excluded by brain mask erosion at the expense of losing the local field, and thus susceptibility measures, in these regions. In this paper, we propose an extension of the variable-kernel sophisticated harmonic artifact reduction for phase data (V-SHARP) background field removal method using a region adaptive kernel (R-SHARP), in which a scalable spherical Gaussian kernel (SGK) is employed with its kernel radius and weights adjustable according to an energy functional reflecting the magnitude of field variation. The energy functional is defined in terms of a contour and two fitting functions incorporating regularization terms, from which a curve evolution model in level set formulation is derived for energy minimization. We utilize it to detect regions with a large field gradient caused by strong susceptibility variation. In such regions, the SGK will have a small radius and high weight at the sphere center, in a manner adaptive to the voxel energy of the field perturbation. Using the proposed method, the background field generated from external sources can be effectively removed to obtain a more accurate estimate of the local field, and thus a more accurate QSM dipole inversion for mapping local tissue susceptibility sources. Numerical simulation, phantom and in vivo human brain data demonstrate improved performance of R-SHARP compared to the V-SHARP and RESHARP (regularization enabled SHARP) methods, even when the whole paranasal sinus regions

  20. Predictive Model Equations for Palm Kernel (Elaeis guneensis J ...

    African Journals Online (AJOL)

    Estimated errors of ±0.18 and ±0.2 are envisaged when applying the models for predicting palm kernel and sesame oil colours, respectively. Keywords: Palm kernel, Sesame, Oil Colour, Process Parameters, Model. Journal of Applied Science, Engineering and Technology Vol. 6 (1) 2006 pp. 34-38 ...

  1. Heat kernel analysis for Bessel operators on symmetric cones

    DEFF Research Database (Denmark)

    Möllers, Jan

    2014-01-01

    . The heat kernel is explicitly given in terms of a multivariable $I$-Bessel function on $Ω$. Its corresponding heat kernel transform defines a continuous linear operator between $L^p$-spaces. The unitary image of the $L^2$-space under the heat kernel transform is characterized as a weighted Bergmann space...

  2. A multi-scale kernel bundle for LDDMM

    DEFF Research Database (Denmark)

    Sommer, Stefan Horst; Nielsen, Mads; Lauze, Francois Bernard

    2011-01-01

    The Large Deformation Diffeomorphic Metric Mapping framework constitutes a widely used and mathematically well-founded setup for registration in medical imaging. At its heart lies the notion of the regularization kernel, and the choice of kernel greatly affects the results of registrations...

  3. Sensitivity kernels for viscoelastic loading based on adjoint methods

    Science.gov (United States)

    Al-Attar, David; Tromp, Jeroen

    2014-01-01

    Observations of glacial isostatic adjustment (GIA) allow for inferences to be made about mantle viscosity, ice sheet history and other related parameters. Typically, this inverse problem can be formulated as minimizing the misfit between the given observations and a corresponding set of synthetic data. When the number of parameters is large, solution of such optimization problems can be computationally challenging. A practical, albeit non-ideal, solution is to use gradient-based optimization. Although the gradient of the misfit required in such methods could be calculated approximately using finite differences, the necessary computation time grows linearly with the number of model parameters, and so this is often infeasible. A far better approach is to apply the `adjoint method', which allows the exact gradient to be calculated from a single solution of the forward problem, along with one solution of the associated adjoint problem. As a first step towards applying the adjoint method to the GIA inverse problem, we consider its application to a simpler viscoelastic loading problem in which gravitationally self-consistent ocean loading is neglected. The earth model considered is non-rotating, self-gravitating, compressible, hydrostatically pre-stressed, laterally heterogeneous and possesses a Maxwell solid rheology. We determine adjoint equations and Fréchet kernels for this problem based on a Lagrange multiplier method. Given an objective functional J defined in terms of the surface deformation fields, we show that its first-order perturbation can be written δJ = ∫_{M_S} K_η δln η dV + ∫_{t_0}^{t_1} ∫_{∂M} K_{σ̇} δσ̇ dS dt, where δln η = δη/η denotes relative viscosity variations in the solid regions M_S, dV is the volume element, δσ̇ is the perturbation to the time derivative of the surface load, which is defined on the earth model's surface ∂M for times [t_0, t_1], and dS is the surface element on ∂M. The `viscosity

  4. Implementation of the diagonalization-free algorithm in the self-consistent field procedure within the four-component relativistic scheme.

    Science.gov (United States)

    Hrdá, Marcela; Kulich, Tomáš; Repiský, Michal; Noga, Jozef; Malkina, Olga L; Malkin, Vladimir G

    2014-09-05

    A recently developed Thouless-expansion-based diagonalization-free approach for improving the efficiency of self-consistent field (SCF) methods (Noga and Šimunek, J. Chem. Theory Comput. 2010, 6, 2706) has been adapted to the four-component relativistic scheme and implemented within the program package ReSpect. In addition to the implementation, the method has been thoroughly analyzed, particularly with respect to cases for which it is difficult or computationally expensive to find a good initial guess. Based on this analysis, several modifications of the original algorithm, refining its stability and efficiency, are proposed. To demonstrate the robustness and efficiency of the improved algorithm, we present the results of four-component diagonalization-free SCF calculations on several heavy-metal complexes, the largest of which contains more than 80 atoms (about 6000 4-spinor basis functions). The diagonalization-free procedure is about twice as fast as the corresponding diagonalization. Copyright © 2014 Wiley Periodicals, Inc.

  5. Training Lp norm multiple kernel learning in the primal.

    Science.gov (United States)

    Liang, Zhizheng; Xia, Shixiong; Zhou, Yong; Zhang, Lei

    2013-10-01

    Some multiple kernel learning (MKL) models are usually solved by the alternating optimization method, in which one alternately solves SVMs in the dual and updates the kernel weights. Since the dual and primal optimization achieve the same aim, it is worth exploring how to perform Lp norm MKL in the primal. In this paper, we propose an Lp norm multiple kernel learning algorithm in the primal, where we resort to the alternating optimization method: one cycle solves SVMs in the primal using the preconditioned conjugate gradient method, and the other cycle learns the kernel weights. It is interesting to note that the kernel weights in our method admit analytical solutions. Most importantly, the proposed method is well suited to the manifold regularization framework in the primal, since solving LapSVMs in the primal is much more effective than solving LapSVMs in the dual. In addition, we carry out a theoretical analysis of multiple kernel learning in the primal in terms of the empirical Rademacher complexity. It is found that optimizing the empirical Rademacher complexity yields a particular type of kernel weights. Experiments on several datasets demonstrate the feasibility and effectiveness of the proposed method. Copyright © 2013 Elsevier Ltd. All rights reserved.
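    The analytical kernel-weight update mentioned above can be illustrated with the standard closed-form expression from the Lp-norm MKL literature (e.g. Kloft et al.); this is a hedged sketch of the generic formula, not necessarily the exact update derived in the paper:

```python
import numpy as np

def lp_mkl_weights(block_norms, p=2.0):
    """Closed-form kernel-weight update for Lp-norm MKL.

    Given the per-kernel block norms ||w_m|| from the current SVM solution,
    the optimal weights under the constraint ||beta||_p <= 1 are
        beta_m = ||w_m||^(2/(p+1)) / (sum_k ||w_k||^(2p/(p+1)))^(1/p),
    a standard result in the Lp-norm MKL literature.
    """
    norms = np.asarray(block_norms, dtype=float)
    num = norms ** (2.0 / (p + 1.0))
    den = np.sum(norms ** (2.0 * p / (p + 1.0))) ** (1.0 / p)
    return num / den

# Kernels whose blocks carry more of the solution get larger weights.
beta = lp_mkl_weights([0.5, 1.0, 2.0], p=2.0)
```

    By construction the returned weights satisfy ||beta||_p = 1, and kernels with larger block norms receive larger weights.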

  6. A Spectral-Texture Kernel-Based Classification Method for Hyperspectral Images

    Directory of Open Access Journals (Sweden)

    Yi Wang

    2016-11-01

    Full Text Available Classification of hyperspectral images always suffers from high dimensionality and very limited labeled samples. Recently, spectral-spatial classification has attracted considerable attention because it can achieve higher classification accuracy and smoother classification maps. In this paper, a novel spectral-spatial classification method for hyperspectral images using kernel methods is investigated. For a given hyperspectral image, the principal component analysis (PCA) transform is first performed. Then, the first principal component of the input image is segmented into non-overlapping homogeneous regions by using the entropy rate superpixel (ERS) algorithm. Next, the local spectral histogram model is applied to each homogeneous region to obtain the corresponding texture features. Because this step is performed within each homogeneous region, instead of within a fixed-size image window, the obtained local texture features are more accurate, which effectively benefits classification accuracy. In the following step, a contextual spectral-texture kernel is constructed by combining spectral information in the image and the extracted texture information using the linearity property of kernel methods. Finally, the classification map is obtained by the support vector machine (SVM) classifier using the proposed spectral-texture kernel. Experiments on two benchmark airborne hyperspectral datasets demonstrate that our method can effectively improve classification accuracy, even though only very limited training samples are available. Specifically, our method achieves from 8.26% to 15.1% higher overall accuracy than the traditional SVM classifier. The performance of our method was further compared to several state-of-the-art classification methods for hyperspectral images using objective quantitative measures and a visual qualitative evaluation.
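    The "linearity property of kernel methods" used above amounts to the fact that a non-negative weighted sum of valid kernels is again a valid kernel. A minimal numpy sketch (the mixing weight mu and the RBF choices are illustrative assumptions, not the paper's construction):

```python
import numpy as np

def rbf_gram(X, gamma):
    """RBF Gram matrix K_ij = exp(-gamma * ||x_i - x_j||^2)."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def composite_kernel(X_spec, X_text, mu=0.5, gamma_s=1.0, gamma_t=1.0):
    """Weighted sum of a spectral kernel and a texture kernel.

    A non-negative combination of positive semi-definite kernels is itself
    a valid kernel, so K can be fed directly to any kernel classifier,
    e.g. an SVM accepting a precomputed Gram matrix.
    """
    return mu * rbf_gram(X_spec, gamma_s) + (1.0 - mu) * rbf_gram(X_text, gamma_t)

# Toy spectral and texture feature matrices for the same 20 pixels/regions.
rng = np.random.default_rng(1)
K = composite_kernel(rng.normal(size=(20, 5)), rng.normal(size=(20, 8)), mu=0.6)
```

    The combined matrix stays symmetric and positive semi-definite, so the SVM optimization remains well posed.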

  7. Stochastic subset selection for learning with kernel machines.

    Science.gov (United States)

    Rhinelander, Jason; Liu, Xiaoping P

    2012-06-01

    Kernel machines have gained much popularity in applications of machine learning. Support vector machines (SVMs) are a subset of kernel machines and generalize well for classification, regression, and anomaly detection tasks. The training procedure for traditional SVMs involves solving a quadratic programming (QP) problem. The QP problem scales super-linearly in computational effort with the number of training samples and is often used for the offline batch processing of data. Kernel machines operate by retaining a subset of observed data during training. The data vectors contained within this subset are referred to as support vectors (SVs). The work presented in this paper introduces a subset selection method for the use of kernel machines in online, changing environments. Our algorithm works by using a stochastic indexing technique to select the subset of SVs used when computing the kernel expansion. The work described here is novel because it separates the selection of kernel basis functions from the training algorithm used. The subset selection algorithm presented here can be used in conjunction with any online training technique. It is important for online kernel machines to be computationally efficient due to the real-time requirements of online environments. Our algorithm is an important contribution because it scales linearly with the number of training samples and is compatible with current training techniques. Our algorithm outperforms standard techniques in terms of computational efficiency and provides increased recognition accuracy in our experiments. We provide results from experiments using both simulated and real-world data sets to verify our algorithm.

  8. ACORNS, Covariance and Correlation Matrix Diagonalization

    International Nuclear Information System (INIS)

    Szondi, E.J.

    1990-01-01

    1 - Description of program or function: The program allows the user to verify the different types of covariance/correlation matrices used in activation neutron spectrometry. 2 - Method of solution: The program performs the diagonalization of the input covariance/relative covariance/correlation matrices. The eigenvalues are then analyzed to determine the rank of the matrices. If the eigenvectors of the pertinent correlation matrix have also been calculated, the program can perform a complete factor analysis (generation of the factor matrix and its rotation in Kaiser's 'varimax' sense to select the origin of the correlations). 3 - Restrictions on the complexity of the problem: Matrix size is limited to 60 on PDP and to 100 on IBM PC/AT
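    The rank determination via eigenvalue analysis that ACORNS performs can be sketched in a few lines (an illustrative numpy fragment, not the PDP/IBM PC program itself): diagonalize the symmetric matrix and count the eigenvalues that are significant relative to the largest one.

```python
import numpy as np

def matrix_rank_by_eigen(C, rel_tol=1e-8):
    """Diagonalise a symmetric covariance/correlation matrix and count the
    eigenvalues that are significant relative to the largest one; this
    numerical rank reveals whether the matrix is degenerate."""
    w = np.linalg.eigvalsh(C)          # eigenvalues in ascending order
    return int(np.sum(w > rel_tol * w[-1]))

# A rank-2 covariance-like matrix built from two independent factors.
rng = np.random.default_rng(42)
F = rng.normal(size=(5, 2))
C = F @ F.T
```

    A rank deficit like this one signals that the reported uncertainties are driven by fewer independent factors than the matrix dimension suggests.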

  9. RTOS kernel in portable electrocardiograph

    Science.gov (United States)

    Centeno, C. A.; Voos, J. A.; Riva, G. G.; Zerbini, C.; Gonzalez, E. A.

    2011-12-01

    This paper presents the use of a Real Time Operating System (RTOS) on a portable electrocardiograph based on a microcontroller platform. All medical device digital functions are performed by the microcontroller. The electrocardiograph CPU is based on the 18F4550 microcontroller, in which an uCOS-II RTOS can be embedded. The decision associated with the kernel use is based on its benefits, the license for educational use and its intrinsic time control and peripherals management. The feasibility of its use on the electrocardiograph is evaluated based on the minimum memory requirements due to the kernel structure. The kernel's own tools were used for time estimation and evaluation of resources used by each process. After this feasibility analysis, the migration from cyclic code to a structure based on separate processes or tasks able to synchronize events is used; resulting in an electrocardiograph running on one Central Processing Unit (CPU) based on RTOS.

  10. RTOS kernel in portable electrocardiograph

    International Nuclear Information System (INIS)

    Centeno, C A; Voos, J A; Riva, G G; Zerbini, C; Gonzalez, E A

    2011-01-01

    This paper presents the use of a Real Time Operating System (RTOS) on a portable electrocardiograph based on a microcontroller platform. All medical device digital functions are performed by the microcontroller. The electrocardiograph CPU is based on the 18F4550 microcontroller, in which an uCOS-II RTOS can be embedded. The decision associated with the kernel use is based on its benefits, the license for educational use and its intrinsic time control and peripherals management. The feasibility of its use on the electrocardiograph is evaluated based on the minimum memory requirements due to the kernel structure. The kernel's own tools were used for time estimation and evaluation of resources used by each process. After this feasibility analysis, the migration from cyclic code to a structure based on separate processes or tasks able to synchronize events is used; resulting in an electrocardiograph running on one Central Processing Unit (CPU) based on RTOS.

  11. RKRD: Runtime Kernel Rootkit Detection

    Science.gov (United States)

    Grover, Satyajit; Khosravi, Hormuzd; Kolar, Divya; Moffat, Samuel; Kounavis, Michael E.

    In this paper we address the problem of protecting computer systems against stealth malware. The problem is important because the number of known types of stealth malware increases exponentially. Existing approaches have some advantages for ensuring system integrity but sophisticated techniques utilized by stealthy malware can thwart them. We propose Runtime Kernel Rootkit Detection (RKRD), a hardware-based, event-driven, secure and inclusionary approach to kernel integrity that addresses some of the limitations of the state of the art. Our solution is based on the principles of using virtualization hardware for isolation, verifying signatures coming from trusted code as opposed to malware for scalability and performing system checks driven by events. Our RKRD implementation is guided by our goals of strong isolation, no modifications to target guest OS kernels, easy deployment, minimal infrastructure impact, and minimal performance overhead. We developed a system prototype and conducted a number of experiments which show that the performance impact of our solution is negligible.

  12. Effect of high lying states on the ground and few low lying excited 0⁺ energy levels of some closed-shell nuclei

    International Nuclear Information System (INIS)

    Ayoub, N.Y.

    1980-02-01

    The ground and some excited 0⁺ (J=0, T=0, positive parity) energy levels of closed-shell nuclei are examined, in an oscillator basis, using matrix techniques. The effect of states outside the mixed (0+2)ℏω model space in ⁴He (namely configurations at 4ℏω excitation) is taken into account by renormalization, using the generalized Rayleigh-Schroedinger perturbation expressions for a mixed multi-configurational model space, where the resultant non-symmetric energy matrices are diagonalized. It is shown that the second-order renormalized 0⁺ energy spectrum is close to the corresponding energy spectrum obtained by diagonalizing the (0+2+4)ℏω ⁴He energy matrix. The effect, on the ground state and the first few low-lying excited 0⁺ energy levels, of renormalizing certain parts of the model-space energy matrix up to second order in various approximations is also studied in ⁴He and ¹⁶O. It is found that the low-lying 0⁺ energy levels in these various approximations behave similarly in both ⁴He and ¹⁶O. (author)

  13. Denoising by semi-supervised kernel PCA preimaging

    DEFF Research Database (Denmark)

    Hansen, Toke Jansen; Abrahamsen, Trine Julie; Hansen, Lars Kai

    2014-01-01

    Kernel Principal Component Analysis (PCA) has proven a powerful tool for nonlinear feature extraction, and is often applied as a pre-processing step for classification algorithms. In denoising applications Kernel PCA provides the basis for dimensionality reduction, prior to the so-called pre-imag...

  14. Single-kernel analysis of fumonisins and other fungal metabolites in maize from South African subsistence farmers.

    Science.gov (United States)

    Mogensen, J M; Sørensen, S M; Sulyok, M; van der Westhuizen, L; Shephard, G S; Frisvad, J C; Thrane, U; Krska, R; Nielsen, K F

    2011-12-01

    Fumonisins are important Fusarium mycotoxins mainly found in maize and derived products. This study analysed maize from five subsistence farmers in the former Transkei region of South Africa. Farmers had sorted kernels into good and mouldy quality. A total of 400 kernels from 10 batches were analysed; of these 100 were visually characterised as uninfected and 300 as infected. Of the 400 kernels, 15% were contaminated with 1.84-1428 mg kg⁻¹ fumonisins, and 4% (n=15) had a fumonisin content above 100 mg kg⁻¹. None of the visually uninfected maize had detectable amounts of fumonisins. The total fumonisin concentration was 0.28-1.1 mg kg⁻¹ for good-quality batches and 0.03-6.2 mg kg⁻¹ for mouldy-quality batches. The high fumonisin content in the batches was apparently caused by a small number (4%) of highly contaminated kernels, and removal of these reduced the average fumonisin content by 71%. Of the 400 kernels, 80 were screened for 186 microbial metabolites by liquid chromatography-tandem mass spectrometry, detecting 17 other fungal metabolites, including fusaric acid, equisetin, fusaproliferin, beauvericin, cyclosporins, agroclavine, chanoclavine, rugulosin and emodin. Fusaric acid in samples without fumonisins indicated the possibility of using non-toxinogenic Fusaria as biocontrol agents to reduce fumonisin exposure, as done for Aspergillus flavus. This is the first report of mycotoxin profiling in single naturally infected maize kernels. © 2011 Taylor & Francis

  15. Palm kernel cake obtained from biodiesel production in diets for goats: feeding behavior and physiological parameters.

    Science.gov (United States)

    de Oliveira, R L; de Carvalho, G G P; Oliveira, R L; Tosto, M S L; Santos, E M; Ribeiro, R D X; Silva, T M; Correia, B R; de Rufino, L M A

    2017-10-01

    The objective of this study was to evaluate the effects of the inclusion of palm kernel (Elaeis guineensis) cake in diets for goats on feeding behaviors, rectal temperature, and cardiac and respiratory frequencies. Forty crossbred Boer male, non-castrated goats (ten animals per treatment), with an average age of 90 days and an initial body weight of 15.01 ± 1.76 kg, were used. The goats were fed Tifton 85 (Cynodon spp.) hay and palm kernel cake supplemented at the rates of 0, 7, 14, and 21% of dry matter (DM). The feeding behaviors (rumination, feeding, and idling times) were observed for three 24-h periods. DM and neutral detergent fiber (NDF) intake values were estimated as the difference between the total DM and NDF contents of the feed offered and those of the orts. There was no effect of palm kernel cake inclusion in goat diets on DM intake (P > 0.05). However, palm kernel cake promoted a linear increase (P < 0.05) in [...]; palm kernel cake had no effects (P > 0.05) on the chewing, feeding, and rumination efficiency (DM and NDF) or on the physiological variables. The use of up to 21% palm kernel cake in the diet of crossbred Boer goats maintained the feeding behaviors and did not change the physiological parameters of the goats; therefore, its use is recommended in the diet of these animals.

  16. Effective and efficient Grassfinch kernel for SVM classification and its application to recognition based on image set

    International Nuclear Information System (INIS)

    Du, Genyuan; Tian, Shengli; Qiu, Yingyu; Xu, Chunyan

    2016-01-01

    This paper presents an effective and efficient kernel approach to recognizing an image set, which is represented as a point on an extended Grassmannian manifold. Several recent studies focus on the applicability of discriminant analysis on the Grassmannian manifold but fail to capture the inherent nonlinear structure of the data itself. Therefore, we propose an extension of the Grassmannian manifold to address this issue. Instead of using a linear data embedding with PCA, we develop a non-linear data embedding of such a manifold using kernel PCA. The paper makes three main contributions: 1) it introduces a non-linear data embedding of the extended Grassmannian manifold, 2) it derives a distance metric on the Grassmannian manifold, and 3) it develops an effective and efficient Grassmannian kernel for SVM classification. The extended Grassmannian manifold naturally arises in applications of recognition based on image sets, such as face and object recognition. Experiments on several standard databases show better classification accuracy. Furthermore, experimental results indicate that our proposed approach significantly reduces time complexity in comparison to graph embedding discriminant analysis.
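    To make the idea of a kernel between image sets concrete, here is the standard projection kernel on the Grassmannian (a well-known construction due to Hamm and Lee, used purely as an illustrative stand-in; it is not the extended kernel the paper develops): each image set is summarised by an orthonormal basis of its span, and the kernel measures subspace overlap.

```python
import numpy as np

def orthonormal_basis(X):
    """Orthonormal basis of the column span of X (via thin QR)."""
    Q, _ = np.linalg.qr(X)
    return Q

def projection_kernel(Y1, Y2):
    """Projection kernel between subspaces with orthonormal bases Y1, Y2:
    k(Y1, Y2) = ||Y1^T Y2||_F^2, i.e. the sum of squared cosines of the
    principal angles between the two subspaces. It is a valid
    positive-definite kernel on the Grassmannian."""
    return float(np.linalg.norm(Y1.T @ Y2, 'fro') ** 2)

# Two toy "image sets" in a 10-dimensional feature space, 3 images each.
rng = np.random.default_rng(7)
A = orthonormal_basis(rng.normal(size=(10, 3)))
B = orthonormal_basis(rng.normal(size=(10, 3)))
```

    For identical subspaces the kernel equals the subspace dimension, and it shrinks toward zero as the subspaces become orthogonal, which makes it directly usable inside an SVM.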

  17. Panel data specifications in nonparametric kernel regression

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard; Henningsen, Arne

    parametric panel data estimators to analyse the production technology of Polish crop farms. The results of our nonparametric kernel regressions generally differ from the estimates of the parametric models but they only slightly depend on the choice of the kernel functions. Based on economic reasoning, we...

  18. Scuba: scalable kernel-based gene prioritization.

    Science.gov (United States)

    Zampieri, Guido; Tran, Dinh Van; Donini, Michele; Navarin, Nicolò; Aiolli, Fabio; Sperduti, Alessandro; Valle, Giorgio

    2018-01-25

    The uncovering of genes linked to human diseases is a pressing challenge in molecular biology and precision medicine. This task is often hindered by the large number of candidate genes and by the heterogeneity of the available information. Computational methods for the prioritization of candidate genes can help to cope with these problems. In particular, kernel-based methods are a powerful resource for the integration of heterogeneous biological knowledge; however, their practical implementation is often precluded by their limited scalability. We propose Scuba, a scalable kernel-based method for gene prioritization. It implements a novel multiple kernel learning approach, based on a semi-supervised perspective and on the optimization of the margin distribution. Scuba is optimized to cope with strongly unbalanced settings where known disease genes are few and large scale predictions are required. Importantly, it is able to efficiently deal both with a large number of candidate genes and with an arbitrary number of data sources. As a direct consequence of scalability, Scuba also integrates a new efficient strategy to select optimal kernel parameters for each data source. We performed cross-validation experiments and simulated a realistic usage setting, showing that Scuba outperforms a wide range of state-of-the-art methods. Scuba achieves state-of-the-art performance and has enhanced scalability compared to existing kernel-based approaches for genomic data. This method can be useful to prioritize candidate genes, particularly when their number is large or when the input data are highly heterogeneous. The code is freely available at https://github.com/gzampieri/Scuba.

  19. MULTITASKER, Multitasking Kernel for C and FORTRAN Under UNIX

    International Nuclear Information System (INIS)

    Brooks, E.D. III

    1988-01-01

    1 - Description of program or function: MULTITASKER implements a multitasking kernel for the C and FORTRAN programming languages that runs under UNIX. The kernel provides a multitasking environment which serves two purposes. The first is to provide an efficient portable environment for the development, debugging, and execution of production multiprocessor programs. The second is to provide a means of evaluating the performance of a multitasking program on model multiprocessor hardware. The performance evaluation features require no changes in the application program source and are implemented as a set of compile- and run-time options in the kernel. 2 - Method of solution: The FORTRAN interface to the kernel is identical in function to the CRI multitasking package provided for the Cray XMP. This provides a migration path to high speed (but small N) multiprocessors once the application has been coded and debugged. With use of the UNIX m4 macro preprocessor, source compatibility can be achieved between the UNIX code development system and the target Cray multiprocessor. The kernel also provides a means of evaluating a program's performance on model multiprocessors. Execution traces may be obtained which allow the user to determine kernel overhead, memory conflicts between various tasks, and the average concurrency being exploited. The kernel may also be made to switch tasks every cpu instruction with a random execution ordering. This allows the user to look for unprotected critical regions in the program. These features, implemented as a set of compile- and run-time options, cause extra execution overhead which is not present in the standard production version of the kernel

  20. Multiple kernel boosting framework based on information measure for classification

    International Nuclear Information System (INIS)

    Qi, Chengming; Wang, Yuping; Tian, Wenjie; Wang, Qun

    2016-01-01

    The performance of kernel-based methods, such as the support vector machine (SVM), is greatly affected by the choice of kernel function. Multiple kernel learning (MKL) is a promising family of machine learning algorithms and has attracted much attention in recent years. MKL combines multiple sub-kernels to seek better results than single kernel learning. In order to improve the efficiency of SVM and MKL, in this paper the Kullback–Leibler kernel function is derived to develop SVM. The proposed method employs an improved ensemble learning framework, named KLMKB, which applies Adaboost to learn a multiple kernel-based classifier. In the experiment on hyperspectral remote sensing image classification, we employ features selected through the Optional Index Factor (OIF) to classify the satellite image. We extensively examine the performance of our approach in comparison to some relevant and state-of-the-art algorithms on a number of benchmark classification data sets and a hyperspectral remote sensing image data set. Experimental results show that our method has stable behavior and noticeable accuracy for different data sets.
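    A Kullback–Leibler kernel of the general kind described can be sketched as the exponentiated symmetrised KL divergence between two distributions (one common construction, following e.g. Moreno et al.; the exact form derived in the paper may differ):

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for discrete distributions, with a small epsilon to
    avoid division by zero and log of zero."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def kl_kernel(p, q, a=1.0):
    """Kullback-Leibler kernel: exponentiated, symmetrised KL divergence.

    k(p, q) = exp(-a * (KL(p||q) + KL(q||p))). It is symmetric, equals 1
    when p == q, and decays as the two distributions diverge.
    """
    return np.exp(-a * (kl_divergence(p, q) + kl_divergence(q, p)))

# Two toy spectral histograms.
p = np.array([0.2, 0.3, 0.5])
q = np.array([0.5, 0.3, 0.2])
```

    Because the divergence is symmetrised before exponentiation, the resulting similarity matrix is symmetric and can be plugged into an SVM or a boosting-based ensemble as described above.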

  1. Evaluation of the influence of double and triple Gaussian proton kernel models on accuracy of dose calculations for spot scanning technique.

    Science.gov (United States)

    Hirayama, Shusuke; Takayanagi, Taisuke; Fujii, Yusuke; Fujimoto, Rintaro; Fujitaka, Shinichiro; Umezawa, Masumi; Nagamine, Yoshihiko; Hosaka, Masahiro; Yasui, Keisuke; Omachi, Chihiro; Toshito, Toshiyuki

    2016-03-01

    The main purpose of this study was to present the results of beam modeling and to show how the authors systematically investigated the influence of double and triple Gaussian proton kernel models on the accuracy of dose calculations for the spot scanning technique. The accuracy of the calculations was important for treatment planning software (TPS) because the energy, spot position, and absolute dose had to be determined by the TPS for the spot scanning technique. The dose distribution was calculated by convolving the in-air fluence with the dose kernel. The dose kernel was the in-water 3D dose distribution of an infinitesimal pencil beam and consisted of an integral depth dose (IDD) and a lateral distribution. Accurate modeling of the low-dose region was important for the spot scanning technique because the dose distribution was formed by accumulating hundreds or thousands of delivered beams. The authors employed a double Gaussian function as the in-air fluence model of an individual beam. Double and triple Gaussian kernel models were also prepared for comparison. The parameters of the kernel lateral model were derived by fitting a simulated in-water lateral dose profile induced by an infinitesimal proton beam, whose emittance was zero, at various depths using Monte Carlo (MC) simulation. The fitted parameters were interpolated as a function of depth in water and stored as a separate look-up table. These stored parameters for each energy and depth in water were acquired from the look-up table when incorporating them into the TPS. The modeling process for the in-air fluence and IDD was based on the method proposed in the literature. These were derived using MC simulation and measured data. The authors compared the measured and calculated absolute doses at the center of the spread-out Bragg peak (SOBP) under various volumetric irradiation conditions to systematically investigate the influence of the two types of kernel models on the dose calculations. The authors investigated the difference
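    A double Gaussian lateral kernel of the kind compared above can be sketched as a narrow "core" plus a wide, low-weight "halo" that carries the low-dose tail (the parameter values below are illustrative assumptions; in practice w, sigma1 and sigma2 are fitted per energy and depth from MC data):

```python
import numpy as np

def double_gaussian(r, w, sigma1, sigma2):
    """Radial lateral dose profile modelled as a weighted sum of two
    normalised 2-D Gaussians: a narrow core term (weight w) plus a wide,
    low-amplitude halo term that captures the large-angle scatter tail."""
    g1 = np.exp(-r**2 / (2 * sigma1**2)) / (2 * np.pi * sigma1**2)
    g2 = np.exp(-r**2 / (2 * sigma2**2)) / (2 * np.pi * sigma2**2)
    return w * g1 + (1.0 - w) * g2

# Illustrative profile: 90% core (sigma 1 mm), 10% halo (sigma 5 mm).
r = np.linspace(0.0, 60.0, 6001)
profile = double_gaussian(r, w=0.9, sigma1=1.0, sigma2=5.0)
```

    Because each 2-D Gaussian is normalised, the radial integral 2*pi*int r*f(r) dr stays equal to 1 for any parameter choice, so adjusting the halo weight redistributes dose between the peak and the tail without changing the total.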

  2. Biasing anisotropic scattering kernels for deep-penetration Monte Carlo calculations

    International Nuclear Information System (INIS)

    Carter, L.L.; Hendricks, J.S.

    1983-01-01

    The exponential transform is often used to improve the efficiency of deep-penetration Monte Carlo calculations. This technique is usually implemented by biasing the distance-to-collision kernel of the transport equation, but leaving the scattering kernel unchanged. Dwivedi obtained significant improvements in efficiency by biasing an isotropic scattering kernel as well as the distance-to-collision kernel. This idea is extended to anisotropic scattering, particularly the highly forward Klein-Nishina scattering of gamma rays
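The distance-to-collision biasing mentioned above can be sketched in a few lines: sample flight distances from a stretched exponential and multiply the statistical weight by the ratio of true to biased densities. This toy estimator (slab transmission, with made-up cross sections) illustrates only the idea, not the paper's implementation:

```python
import math
import random

def transmission_estimate(sigma_t, thickness, p, n_histories, seed=0):
    """Estimate uncollided transmission exp(-sigma_t * thickness) through a
    slab using the exponential transform: path lengths are sampled from the
    biased density sigma_b * exp(-sigma_b * s), sigma_b = sigma_t * (1 - p),
    and surviving histories carry weight exp(-(sigma_t - sigma_b) * thickness)."""
    rng = random.Random(seed)
    sigma_b = sigma_t * (1.0 - p)  # stretched (biased) cross section, p in [0, 1)
    total_weight = 0.0
    for _ in range(n_histories):
        s = -math.log(1.0 - rng.random()) / sigma_b  # biased flight distance
        if s > thickness:  # history crosses the slab uncollided
            total_weight += math.exp(-(sigma_t - sigma_b) * thickness)
    return total_weight / n_histories
```

With p close to 1, more histories reach the far side, each with a smaller weight; that is what reduces the variance of deep-penetration tallies while keeping the estimator unbiased.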

  3. The integral first collision kernel method for gamma-ray skyshine analysis

    Energy Technology Data Exchange (ETDEWEB)

    Sheu, R.-D.; Chui, C.-S.; Jiang, S.-H. E-mail: shjiang@mx.nthu.edu.tw

    2003-12-01

    A simplified method, based on the integral of the first collision kernel, is presented for performing gamma-ray skyshine calculations for collimated sources. The first collision kernels were calculated in air for a reference air density by use of the EGS4 Monte Carlo code. These kernels can be applied to other air densities by applying density corrections. The integral first collision kernel (IFCK) method has been used to calculate two of the ANSI/ANS skyshine benchmark problems, and the results were compared with those of a number of other commonly used codes. Our results were generally in good agreement with the others but required only a small fraction of the computation time of the Monte Carlo calculations. A scheme for applying the IFCK method to a variety of source collimation geometries is also presented in this study.
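The density correction mentioned above works because attenuation depends only on the areal density (density times path length) along the flight path; a minimal sketch of an uncollided point kernel with this property (the mass attenuation coefficient below is illustrative, not a value from the paper):

```python
import math

def uncollided_point_kernel(mu_rho, rho, distance):
    """Uncollided-flux point kernel exp(-mu_rho * rho * R) / (4 * pi * R^2).

    Because the attenuation term depends only on the areal density rho * R,
    a kernel tabulated at a reference air density transfers to another
    density by rescaling the path length."""
    return math.exp(-mu_rho * rho * distance) / (4.0 * math.pi * distance**2)

# Same areal density, hence the same attenuation factor (geometry aside):
dense_short = uncollided_point_kernel(0.01, 2.0, 50.0)
thin_long = uncollided_point_kernel(0.01, 1.0, 100.0)
```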

  4. Kernel-imbedded Gaussian processes for disease classification using microarray gene expression data

    Directory of Open Access Journals (Sweden)

    Cheung Leo

    2007-02-01

    Full Text Available Abstract Background Designing appropriate machine learning methods for identifying genes that have significant discriminating power for disease outcomes has become more and more important for our understanding of diseases at the genomic level. Although many machine learning methods have been developed and applied to the area of microarray gene expression data analysis, the majority of them are based on linear models, which are not necessarily appropriate for the underlying connection between the target disease and its associated explanatory genes. Linear-model-based methods also tend to bring in false-positive significant features. Furthermore, linear-model-based algorithms often involve calculating the inverse of a matrix that may be singular when the number of potentially important genes is relatively large, which leads to numerical instability. To overcome these limitations, a few non-linear methods have recently been introduced to the area, but many of them leave two critical problems, model selection and model parameter tuning, unsolved or even untouched. In general, a unified framework that allows model parameters of both linear and non-linear models to be easily tuned is preferred in real-world applications. Kernel-induced learning methods form a class of approaches with promising potential to achieve this goal. Results A hierarchical statistical model named kernel-imbedded Gaussian process (KIGP) is developed under a unified Bayesian framework for binary disease classification problems using microarray gene expression data. In particular, based on a probit regression setting, an adaptive algorithm with a cascading structure is designed to find the appropriate kernel, to discover the potentially significant genes, and to make the optimal class prediction accordingly. A Gibbs sampler is built as the core of the algorithm to make

  5. Improved diagonal queue medical image steganography using Chaos theory, LFSR, and Rabin cryptosystem.

    Science.gov (United States)

    Jain, Mamta; Kumar, Anil; Choudhary, Rishabh Charan

    2017-06-01

    In this article, we propose an improved diagonal queue medical image steganography method for transmitting patients' secret medical data, using a chaotic standard map, a linear feedback shift register, and the Rabin cryptosystem, improving on a previous technique (Jain and Lenka in Springer Brain Inform 3:39-51, 2016). The proposed algorithm comprises four stages: generation of pseudo-random sequences (by the linear feedback shift register and the standard chaotic map), permutation and XORing using those pseudo-random sequences, encryption using the Rabin cryptosystem, and steganography using the improved diagonal queues. Security analysis has been carried out. Performance is evaluated using MSE, PSNR, and maximum embedding capacity, as well as by histogram analysis between various brain-disease stego and cover images.
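Of the building blocks listed above, the linear feedback shift register is the easiest to sketch. Here is a Fibonacci-style LFSR keystream generator (the register size and tap positions are illustrative, not the paper's):

```python
def lfsr_sequence(seed, taps, nbits, n):
    """Fibonacci LFSR: output the low bit each step; the new high bit is the
    XOR of the tapped stages (taps numbered from 1 = output stage to nbits)."""
    state = seed
    bits = []
    for _ in range(n):
        bits.append(state & 1)
        feedback = 0
        for t in taps:
            feedback ^= (state >> (nbits - t)) & 1
        state = (state >> 1) | (feedback << (nbits - 1))
    return bits

# 4-bit register with taps (4, 3): maximal period 2**4 - 1 = 15
stream = lfsr_sequence(0b1000, (4, 3), 4, 30)
```

XORing such a keystream with the permuted data is the masking step described above; real designs use much longer registers seeded from the secret key.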

  6. The Diagon/Gel Implant: A Preliminary Report of 894 Cases

    Directory of Open Access Journals (Sweden)

    Constantin Stan, MD

    2017-07-01

    Full Text Available Background: The breast has always been perceived as the emblem of femininity, and the desire for an ideal breast form has long been of interest. Methods: This preliminary article is a retrospective analysis of 894 cases of breast augmentation with Diagon/Gel breast implants covered with a micropolyurethane foam (Microthane). The surgical technique employed is a modified dual plane, which enables the use of a new anatomical implant to move the glandular parenchyma into a higher position. Results: The study extended from January 2010 to September 2015, during which no breast implant developed Baker grade III or IV capsular contracture (CC) and only a few adverse events occurred. Patients reported being highly satisfied with the final outcome, which was very natural in both form and movement. Conclusions: The new Diagon/Gel concept represents the next step in the evolution of breast implants and allows the surgeon to perform not only a breast augmentation but also a parenchymal elevation that would otherwise have required a mastopexy; we have called this breast enhancement.

  7. Kernel-based Joint Feature Selection and Max-Margin Classification for Early Diagnosis of Parkinson’s Disease

    Science.gov (United States)

    Adeli, Ehsan; Wu, Guorong; Saghafi, Behrouz; An, Le; Shi, Feng; Shen, Dinggang

    2017-01-01

    Feature selection methods usually select the most compact and relevant set of features based on their contribution to a linear regression model; these features, however, might not be the best for a non-linear classifier. This is especially crucial for tasks in which performance depends heavily on the feature selection technique, such as the diagnosis of neurodegenerative diseases. Parkinson's disease (PD) is one of the most common neurodegenerative disorders; it progresses slowly while dramatically affecting quality of life. In this paper, we use multi-modal neuroimaging data to diagnose PD by investigating the brain regions known to be affected at the early stages. We propose a joint kernel-based feature selection and classification framework. Unlike conventional feature selection techniques that select features based on their performance in the original input feature space, we select features that best benefit the classification scheme in the kernel space. We further propose kernel functions specifically designed for our non-negative feature types. We use MRI and SPECT data of 538 subjects from the PPMI database and obtain a diagnosis accuracy of 97.5%, which outperforms all baseline and state-of-the-art methods.
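The abstract does not spell out the proposed kernels for non-negative features. A standard example of a positive semi-definite kernel on non-negative vectors is the histogram intersection kernel, shown here purely as an illustration of the idea, not as the paper's kernel:

```python
import numpy as np

def intersection_kernel(X, Y):
    """Histogram intersection kernel: k(x, y) = sum_i min(x_i, y_i).

    Positive semi-definite when all features are non-negative, which makes
    it a natural fit for non-negative feature types (illustrative stand-in
    for the kernels proposed in the paper)."""
    X = np.asarray(X, dtype=float)[:, None, :]  # shape (n, 1, d)
    Y = np.asarray(Y, dtype=float)[None, :, :]  # shape (1, m, d)
    return np.minimum(X, Y).sum(axis=-1)        # Gram matrix, shape (n, m)
```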

  9. Characteristics of 201Tl myocardial SPECT and left ventriculography in patients with acute diagonal branch myocardial infarction

    International Nuclear Information System (INIS)

    Tanaka, Takeshi; Aizawa, Tadanori; Katou, Kazuzo; Ogasawara, Ken; Kirigaya, Hajime

    1993-01-01

    Characteristics of 201Tl myocardial SPECT and ventriculography were studied in 13 patients with acute diagonal branch myocardial infarction. Resting 201Tl myocardial SPECT and left ventriculography were performed in the chronic phase. In 5 patients, electrocardiogram (ECG) changes in the acute phase were not definite. In 6 patients it was difficult to identify the obstructed coronary artery with coronary angiography in the acute phase. The mean value of maximum creatine phosphokinase (CPK) was 854 (458-1,774) U/l. It therefore seems difficult to diagnose acute diagonal branch myocardial infarction with ECG and/or coronary angiography alone. In all patients, defects were noted on 201Tl SPECT. Defects were small and located in the central anterior wall but not in the septum. In 2 patients defects were noted at the apex. On left ventriculography, dyskinetic motion was noted in 10 patients; one patient showed an apical aneurysm and 3 patients showed an anterior wall aneurysm. In 3 patients the anterior wall showed akinesis. It was concluded that 201Tl myocardial SPECT is useful for detecting diagonal branch lesions. In diagonal branch myocardial infarction the defects were small and spared the septum; however, aneurysmal motion was frequently noted. (author)

  10. separation of oil palm kernel and shell mixture using soil and palm

    African Journals Online (AJOL)

    user

    shape and size of the nuts and a good industrial raw material [3]. ... Large-scale mills have automated hydro-cyclone machines with high separation efficiency, however, clay-baths and hydro cyclones are known for their high energy and water consumption .... A mixture of kernel/shell weighing 20kg were poured into pot 1 ...

  11. Performance of diagonal control structures at different operating conditions for polymer electrolyte membrane fuel cells

    Energy Technology Data Exchange (ETDEWEB)

    Serra, Maria; Husar, Attila; Feroldi, Diego; Riera, Jordi [Institut de Robotica i Informatica Industrial, Universitat Politecnica de Catalunya, Consejo Superior de Investigaciones Cientificas, C. Llorens i Artigas 4, 08028 Barcelona (Spain)

    2006-08-25

    This work is focused on the selection of operating conditions in polymer electrolyte membrane fuel cells. It analyses efficiency and controllability aspects, which change from one operating point to another. Specifically, several operating points that deliver the same amount of net power are compared, and the comparison is done at different net power levels. The study is based on a complex non-linear model, which has been linearised at the selected operating points. Different linear analysis tools are applied to the linear models and results show important controllability differences between operating points. The performance of diagonal control structures with PI controllers at different operating points is also studied. A method for the tuning of the controllers is proposed and applied. The behaviour of the controlled system is simulated with the non-linear model. Conclusions indicate a possible trade-off between controllability and optimisation of hydrogen consumption. (author)

  12. Production of Depleted UO2 Kernels for the Advanced Gas-Cooled Reactor Program for Use in TRISO Coating Development

    International Nuclear Information System (INIS)

    Collins, J.L.

    2004-01-01

    The main objective of the Depleted UO2 Kernels Production Task at Oak Ridge National Laboratory (ORNL) was to conduct two small-scale production campaigns to produce 2 kg of UO2 kernels with diameters of 500 ± 20 µm and 3.5 kg of UO2 kernels with diameters of 350 ± 10 µm for the U.S. Department of Energy Advanced Fuel Cycle Initiative Program. The final acceptance requirements for the UO2 kernels are provided in the first section of this report. The kernels were prepared for use by the ORNL Metals and Ceramics Division in a development study to perfect the triisotropic (TRISO) coating process. It was important that the kernels be strong and near theoretical density, with excellent sphericity, minimal surface roughness, and no cracking. This report gives a detailed description of the production efforts and results as well as an in-depth description of the internal gelation process and its chemistry. It describes the laboratory-scale gel-forming apparatus, optimum broth formulation and operating conditions, preparation of the acid-deficient uranyl nitrate stock solution, the system used to provide uniform broth droplet formation and control, and the process of calcining and sintering UO3·2H2O microspheres to form dense UO2 kernels. The report also describes improvements and best past practices for uranium kernel formation via the internal gelation process, which utilizes hexamethylenetetramine and urea. Improvements were made in broth formulation and in broth droplet formation and control that made it possible in many of the runs in the campaign to produce the desired 350 ± 10 µm diameter kernels and to obtain very high yields.

  13. A kernel adaptive algorithm for quaternion-valued inputs.

    Science.gov (United States)

    Paul, Thomas K; Ogunfunmi, Tokunbo

    2015-10-01

    The use of quaternion data can provide benefit in applications like robotics and image recognition, and particularly for performing transforms in 3-D space. Here, we describe a kernel adaptive algorithm for quaternions. A least mean square (LMS)-based method was used, resulting in the derivation of the quaternion kernel LMS (Quat-KLMS) algorithm. Deriving this algorithm required describing the idea of a quaternion reproducing kernel Hilbert space (RKHS), as well as kernel functions suitable for quaternions. A modified HR calculus for Hilbert spaces was used to find the gradient of cost functions defined on a quaternion RKHS. In addition, the use of widely linear (or augmented) filtering is proposed to improve performance. The benefit of the Quat-KLMS and widely linear forms in learning nonlinear transformations of quaternion data is illustrated with simulations.
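Everything in a quaternion RKHS construction rests on the non-commutative Hamilton product. A minimal sketch of that algebra (not the Quat-KLMS update itself):

```python
def qmult(p, q):
    """Hamilton product of two quaternions given as (w, x, y, z) tuples."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw * qw - px * qx - py * qy - pz * qz,
            pw * qx + px * qw + py * qz - pz * qy,
            pw * qy - px * qz + py * qw + pz * qx,
            pw * qz + px * qy - py * qx + pz * qw)

def qconj(q):
    """Quaternion conjugate, used e.g. in widely linear (augmented) filtering."""
    w, x, y, z = q
    return (w, -x, -y, -z)
```

Non-commutativity (ij = k but ji = -k) is why the modified HR calculus mentioned above is needed to take gradients of quaternion-valued cost functions.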

  14. Adaptive PVD Steganography Using Horizontal, Vertical, and Diagonal Edges in Six-Pixel Blocks

    Directory of Open Access Journals (Sweden)

    Anita Pradhan

    2017-01-01

    Full Text Available The traditional pixel value differencing (PVD) steganographic schemes are easily detected by pixel difference histogram (PDH) analysis. This problem can be addressed by adding two tricks: (i) utilizing horizontal, vertical, and diagonal edges and (ii) using adaptive quantization ranges. This paper presents an adaptive PVD technique using 6-pixel blocks, in two variants. The proposed adaptive PVD for 2×3-pixel blocks is known as variant 1, and the proposed adaptive PVD for 3×2-pixel blocks is known as variant 2. For every block in variant 1, the four corner pixels are used to hide data bits, with the middle column pixels used to detect the horizontal and diagonal edges. Similarly, for every block in variant 2, the four corner pixels are used to hide data bits, with the middle row pixels used to detect the vertical and diagonal edges. The quantization ranges are adaptive and are calculated using the correlation of the two middle column/row pixels with the four corner pixels. The technique performs better than existing adaptive PVD techniques, offering higher hiding capacity and less distortion. Furthermore, it has been proven that PDH steganalysis and RS steganalysis cannot detect the proposed technique.

  15. Improving the Bandwidth Selection in Kernel Equating

    Science.gov (United States)

    Andersson, Björn; von Davier, Alina A.

    2014-01-01

    We investigate the current bandwidth selection methods in kernel equating and propose a method based on Silverman's rule of thumb for selecting the bandwidth parameters. In kernel equating, the bandwidth parameters have previously been obtained by minimizing a penalty function. This minimization process has been criticized by practitioners…

  16. Online learning control using adaptive critic designs with sparse kernel machines.

    Science.gov (United States)

    Xu, Xin; Hou, Zhongsheng; Lian, Chuanqiang; He, Haibo

    2013-05-01

    In the past decade, adaptive critic designs (ACDs), including heuristic dynamic programming (HDP), dual heuristic programming (DHP), and their action-dependent variants, have been widely studied to realize online learning control of dynamical systems. However, because neural networks with manually designed features are commonly used to deal with continuous state and action spaces, the generalization capability and learning efficiency of previous ACDs still need to be improved. In this paper, a novel framework of ACDs with sparse kernel machines is presented by integrating kernel methods into the critic of ACDs. To improve the generalization capability as well as the computational efficiency of kernel machines, a sparsification method based on approximate linear dependence analysis is used. Using the sparse kernel machines, two kernel-based ACD algorithms, that is, kernel HDP (KHDP) and kernel DHP (KDHP), are proposed and their performance is analyzed both theoretically and empirically. Because of the representation learning and generalization capability of sparse kernel machines, KHDP and KDHP can obtain much better performance than previous HDP and DHP with manually designed neural networks. Simulation and experimental results of two nonlinear control problems, that is, a continuous-action inverted pendulum problem and a ball and plate control problem, demonstrate the effectiveness of the proposed kernel ACD methods.
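The kernel machines underlying KHDP/KDHP extend the classic kernel LMS idea. A compact KLMS sketch (without the sparsification step, which the paper adds via approximate linear dependence analysis) looks like this:

```python
import math

def gauss_kernel(a, b, sigma):
    """Gaussian kernel on scalars."""
    return math.exp(-(a - b) ** 2 / (2 * sigma ** 2))

class KLMS:
    """Kernel least-mean-squares: the predictor is a growing kernel expansion,
    and each new sample is appended as a centre with coefficient eta * error.
    (No sparsification here; ALD-based pruning would bound the dictionary.)"""

    def __init__(self, eta=0.5, sigma=0.5):
        self.eta, self.sigma = eta, sigma
        self.centres, self.coeffs = [], []

    def predict(self, x):
        return sum(a * gauss_kernel(c, x, self.sigma)
                   for c, a in zip(self.centres, self.coeffs))

    def update(self, x, y):
        error = y - self.predict(x)
        self.centres.append(x)
        self.coeffs.append(self.eta * error)
        return error
```

Run on a stream of samples of a nonlinear target, the prediction error shrinks as the dictionary of centres grows, which is the generalization behaviour the sparse kernel critic exploits.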

  17. Wheat kernel dimensions: how do they contribute to kernel weight at ...

    Indian Academy of Sciences (India)

    2011-12-02

    Dec 2, 2011 ... yield components, is greatly influenced by kernel dimensions. (KD), such as ..... six linkage gaps, and it covered 3010.70 cM of the whole genome with an ...... Ersoz E. et al. 2009 The Genetic architecture of maize flowering.

  18. Diagonalization of Bounded Linear Operators on Separable Quaternionic Hilbert Space

    International Nuclear Information System (INIS)

    Feng Youling; Cao, Yang; Wang Haijun

    2012-01-01

    Using the representation by complex-conjugate pairs, we investigate the diagonalization of a bounded linear operator on a separable infinite-dimensional right quaternionic Hilbert space. A sufficient condition for diagonalizability of quaternionic operators is derived. The result is applied to anti-Hermitian operators, which are essential for solving the Schrödinger equation in quaternionic quantum mechanics.

  19. Large-scale exact diagonalizations reveal low-momentum scales of nuclei

    Science.gov (United States)

    Forssén, C.; Carlsson, B. D.; Johansson, H. T.; Sääf, D.; Bansal, A.; Hagen, G.; Papenbrock, T.

    2018-03-01

    Ab initio methods aim to solve the nuclear many-body problem with controlled approximations. Virtually exact numerical solutions for realistic interactions can only be obtained for certain special cases such as few-nucleon systems. Here we extend the reach of exact diagonalization methods to handle model spaces with dimension exceeding 10¹⁰ on a single compute node. This allows us to perform no-core shell model (NCSM) calculations for ⁶Li in model spaces up to Nmax = 22 and to reveal the ⁴He+d halo structure of this nucleus. Still, the use of a finite harmonic-oscillator basis implies truncations in both infrared (IR) and ultraviolet (UV) length scales. These truncations impose finite-size corrections on observables computed in this basis. We perform IR extrapolations of energies and radii computed in the NCSM and with the coupled-cluster method at several fixed UV cutoffs. It is shown that this strategy enables information to be gained even from data that are not fully UV converged. IR extrapolations improve the accuracy of relevant bound-state observables for a range of UV cutoffs, thus making them profitable tools. We relate the momentum scale that governs the exponential IR convergence to the threshold energy for the first open decay channel. Using large-scale NCSM calculations we numerically verify this small-momentum scale of finite nuclei.
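IR extrapolations of the kind referred to above typically assume an exponential form, E(L) = E_inf + a·exp(-2·k_inf·L), for the energy at effective IR length L. With three equally spaced L values, the limit E_inf can be eliminated algebraically (a Shanks-type three-point formula). A sketch, with synthetic numbers standing in for NCSM output:

```python
import math

def ir_extrapolate(e1, e2, e3):
    """Three-point extrapolation of E(L) = E_inf + a * exp(-2 * k * L)
    sampled at equally spaced IR lengths L, L + d, L + 2d: the geometric
    decay of the correction gives E_inf = (e1*e3 - e2^2) / (e1 + e3 - 2*e2)."""
    return (e1 * e3 - e2 ** 2) / (e1 + e3 - 2.0 * e2)

# Synthetic "calculations" with E_inf = -31.5 (illustrative numbers only)
energies = [-31.5 + 4.0 * math.exp(-2 * 0.3 * L) for L in (8.0, 10.0, 12.0)]
```

For exactly exponential convergence the formula is exact; on real NCSM data one instead fits E_inf, a, and k_inf, but the three-point version shows why a few equally spaced IR lengths suffice to pin down the limit.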

  20. A multi-label learning based kernel automatic recommendation method for support vector machine.

    Science.gov (United States)

    Zhang, Xueying; Song, Qinbao

    2015-01-01

    Choosing an appropriate kernel is critical when classifying a new problem with a Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function than to kernel selection. Furthermore, most current kernel selection methods focus on seeking the best kernel with the highest classification accuracy via cross-validation; they are time-consuming and ignore the differences in the number of support vectors and the CPU time of SVMs with different kernels. Considering the tradeoff between classification success ratio and CPU time, multiple kernel functions may perform equally well on the same classification problem. Aiming to automatically select appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on data characteristics. For each data set, a meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge data base with the multi-label classification method. Finally, appropriate kernel functions are recommended for a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with existing kernel selection methods and the most widely used RBF kernel, SVM with the kernel function recommended by our method achieves the highest classification performance.

  1. Modelling lateral beam quality variations in pencil kernel based photon dose calculations

    International Nuclear Information System (INIS)

    Nyholm, T; Olofsson, J; Ahnesjoe, A; Karlsson, M

    2006-01-01

    Standard treatment machines for external radiotherapy are designed to yield flat dose distributions at a representative treatment depth. The common method to reach this goal is to use a flattening filter to decrease the fluence in the centre of the beam. A side effect of this filtering is that the average energy of the beam is generally lower at a distance from the central axis, a phenomenon commonly referred to as off-axis softening. The off-axis softening results in a relative change in beam quality that is almost independent of machine brand and model. Central axis dose calculations using pencil beam kernels show no drastic loss in accuracy when the off-axis beam quality variations are neglected. However, for dose calculated at off-axis positions the effect should be considered, otherwise errors of several per cent can be introduced. This work proposes a method to explicitly include the effect of off-axis softening in pencil kernel based photon dose calculations for arbitrary positions in a radiation field. Variations of pencil kernel values are modelled through a generic relation between half value layer (HVL) thickness and off-axis position for standard treatment machines. The pencil kernel integration for dose calculation is performed through sampling of energy fluence and beam quality in sectors of concentric circles around the calculation point. The method is fully based on generic data and therefore does not require any specific measurements for characterization of the off-axis softening effect, provided that the machine performance is in agreement with the assumed HVL variations. The model is verified versus profile measurements at different depths and through a model self-consistency check, using the dose calculation model to estimate HVL values at off-axis positions. A comparison between calculated and measured profiles at different depths showed a maximum relative error of 4% without explicit modelling of off-axis softening. 
The maximum relative error

  2. Using Volunteer Computing to Study Some Features of Diagonal Latin Squares

    Science.gov (United States)

    Vatutin, Eduard; Zaikin, Oleg; Kochemazov, Stepan; Valyaev, Sergey

    2017-12-01

    This study concerns several features of diagonal Latin squares (DLSs) of small order. The authors suggest an algorithm for computing the minimal and maximal numbers of transversals of DLSs. According to this algorithm, all DLSs of a particular order are generated, and for each square all its transversals and diagonal transversals are constructed. The algorithm was implemented and applied to DLSs of order at most 7 on a personal computer; the experiment for order 8 was performed in the volunteer computing project Gerasim@home. In addition, the problem of finding pairs of orthogonal DLSs of order 10 was considered and reduced to the Boolean satisfiability problem. The obtained problem turned out to be very hard, so it was decomposed into a family of subproblems. To solve it, the volunteer computing project SAT@home was used. As a result, several dozen pairs of the described kind were found.
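The transversal counts at the heart of this study are easy to reproduce by brute force for small orders. A sketch (the order-4 square below is one example of a diagonal Latin square, not data from the project):

```python
from itertools import permutations

def is_diagonal_latin_square(sq):
    """A Latin square whose main diagonal and main anti-diagonal each
    contain all n symbols."""
    n = len(sq)
    symbols = set(range(n))
    return (all(set(row) == symbols for row in sq)
            and all({sq[i][j] for i in range(n)} == symbols for j in range(n))
            and {sq[i][i] for i in range(n)} == symbols
            and {sq[i][n - 1 - i] for i in range(n)} == symbols)

def count_transversals(sq):
    """A transversal picks one cell per row, in distinct columns, with all
    values distinct; brute force over column permutations (fine for n <= 7)."""
    n = len(sq)
    return sum(1 for p in permutations(range(n))
               if len({sq[i][p[i]] for i in range(n)}) == n)

dls4 = [[0, 1, 2, 3],
        [2, 3, 0, 1],
        [3, 2, 1, 0],
        [1, 0, 3, 2]]
```

The factorial blow-up of the permutation search is exactly why orders 8 and above were pushed out to the volunteer computing projects.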

  3. Using the Intel Math Kernel Library on Peregrine

    Science.gov (United States)

    Learn how to use the Intel Math Kernel Library (MKL) with Peregrine system software. Core math functions in MKL include BLAS, LAPACK, ScaLAPACK, sparse solvers, and fast Fourier transforms.

  4. A variational master equation approach to quantum dynamics with off-diagonal coupling in a sub-Ohmic environment

    Energy Technology Data Exchange (ETDEWEB)

    Sun, Ke-Wei [School of Science, Hangzhou Dianzi University, Hangzhou 310018 (China); Division of Materials Science, Nanyang Technological University, 50 Nanyang Avenue, Singapore 639798 (Singapore); Fujihashi, Yuta; Ishizaki, Akihito [Institute for Molecular Science, National Institutes of Natural Sciences, Okazaki 444-8585 (Japan); Zhao, Yang, E-mail: YZhao@ntu.edu.sg [Division of Materials Science, Nanyang Technological University, 50 Nanyang Avenue, Singapore 639798 (Singapore)

    2016-05-28

    A master equation approach based on an optimized polaron transformation is adopted for dynamics simulation with simultaneous diagonal and off-diagonal spin-boson coupling. Two types of bath spectral density functions are considered, the Ohmic and the sub-Ohmic. The off-diagonal coupling leads asymptotically to a thermal equilibrium with a nonzero population difference P_z(t → ∞) ≠ 0, which implies localization of the system, and it also plays a role in restraining coherent dynamics in the sub-Ohmic case. Since the new method extends to the stronger-coupling regime, we can investigate the coherent-incoherent transition in the sub-Ohmic environment. Relevant phase diagrams are obtained for different temperatures. It is found that the sub-Ohmic environment allows coherent dynamics at a higher temperature than the Ohmic environment.

  5. Analysis of total hydrogen content in palm oil and palm kernel oil ...

    African Journals Online (AJOL)

    A fast and non-destructive technique based on thermal neutron moderation has been used for determining the total hydrogen content in two types of red palm oil (dzomi and amidze) and palm kernel oil produced by traditional methods in Ghana. A setup consisting of an 241Am-Be neutron source and 3He neutron ...

  6. Protein fold recognition using geometric kernel data fusion.

    Science.gov (United States)

    Zakeri, Pooya; Jeuris, Ben; Vandebril, Raf; Moreau, Yves

    2014-07-01

    Various approaches based on features extracted from protein sequences, often combined with machine learning methods, have been used in the prediction of protein folds. Finding an efficient technique for integrating these different protein features has received increasing attention. In particular, kernel methods are an interesting class of techniques for integrating heterogeneous data. Various methods have been proposed to fuse multiple kernels; most techniques for multiple kernel learning focus on learning a convex linear combination of base kernels. Beyond the limitation to linear combinations, working with such approaches can cause a loss of potentially useful information. We design several techniques to combine kernel matrices by taking more involved, geometry-inspired means of these matrices instead of convex linear combinations. We consider various sequence-based protein features, including information extracted directly from position-specific scoring matrices and local sequence alignment. We evaluate our methods for classification on the SCOP PDB-40D benchmark dataset for protein fold recognition. The best overall accuracy on the protein fold recognition test set obtained by our methods is ∼86.7%, an improvement over the results of the best existing approach. Moreover, our computational model has been developed by incorporating the functional domain composition of proteins through a hybridization model; with this hybridization model, the protein fold recognition accuracy is further improved to 89.3%. Furthermore, we investigate the performance of our approach on the protein remote homology detection problem by fusing multiple string kernels. The MATLAB code for our proposed geometric kernel fusion frameworks is publicly available at http://people.cs.kuleuven.be/∼raf.vandebril/homepage/software/geomean.php?menu=5/. © The Author 2014. Published by Oxford University Press.
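A concrete instance of a "geometry-inspired mean" of two kernel (Gram) matrices is the matrix geometric mean A # B = A^{1/2}(A^{-1/2} B A^{-1/2})^{1/2} A^{1/2}. The sketch below (plain NumPy, not the authors' MATLAB code) shows the construction for symmetric positive-definite matrices:

```python
import numpy as np

def sqrtm_psd(A):
    """Symmetric square root of a symmetric PSD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T

def geometric_mean(A, B):
    """Matrix geometric mean A # B = A^{1/2} (A^{-1/2} B A^{-1/2})^{1/2} A^{1/2},
    an alternative to convex linear combinations for fusing kernel matrices."""
    A_sqrt = sqrtm_psd(A)
    A_sqrt_inv = np.linalg.inv(A_sqrt)
    return A_sqrt @ sqrtm_psd(A_sqrt_inv @ B @ A_sqrt_inv) @ A_sqrt
```

For commuting matrices this reduces to the entrywise geometric mean of the eigenvalues, e.g. diag(4, 9) # diag(1, 16) = diag(2, 12), which shows how it interpolates multiplicatively rather than additively.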

  7. Unsupervised multiple kernel learning for heterogeneous data integration.

    Science.gov (United States)

    Mariette, Jérôme; Villa-Vialaneix, Nathalie

    2018-03-15

Recent high-throughput sequencing advances have expanded the breadth of available omics datasets, and the integrated analysis of multiple datasets obtained on the same samples has yielded important insights in a wide range of applications. However, the integration of various sources of information remains a challenge for systems biology, since the produced datasets are often of heterogeneous types, requiring generic methods that take their different specificities into account. We propose a multiple kernel framework that allows multiple datasets of various types to be integrated into a single exploratory analysis. Several solutions are provided to learn either a consensus meta-kernel or a meta-kernel that preserves the original topology of the datasets. We applied our framework to analyse two public multi-omics datasets. First, the multiple metagenomic datasets collected during the TARA Oceans expedition were explored to demonstrate that our method is able to retrieve previous findings in a single kernel PCA as well as to provide a new image of the sample structures when a larger number of datasets are included in the analysis. To perform this analysis, a generic procedure is also proposed to improve the interpretability of the kernel PCA with respect to the original data. Second, the multi-omics breast cancer datasets provided by The Cancer Genome Atlas are analysed using kernel self-organizing maps with both single- and multi-omics strategies. The comparison of these two approaches demonstrates the benefit of our integration method for improving the representation of the studied biological system. The proposed methods are available in the R package mixKernel, released on CRAN. It is fully compatible with the mixOmics package, and a tutorial describing the approach can be found on the mixOmics web site http://mixomics.org/mixkernel/. jerome.mariette@inra.fr or nathalie.villa-vialaneix@inra.fr. Supplementary data are available at Bioinformatics online.
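A minimal sketch of the consensus idea, not the mixKernel implementation itself: trace-normalize each view's Gram matrix so the datasets are on a comparable scale, average them into a meta-kernel, and run kernel PCA on the precomputed result (the data and the plain average are illustrative assumptions):

```python
import numpy as np
from sklearn.decomposition import KernelPCA

rng = np.random.default_rng(42)
n = 30                              # same samples observed in two omics "views"
X_view1 = rng.normal(size=(n, 10))  # e.g. abundance-like features
X_view2 = rng.normal(size=(n, 50))  # e.g. expression-like features

def trace_normalized_linear_kernel(X):
    K = X @ X.T
    return K / np.trace(K)          # puts all views on a common scale

# consensus meta-kernel: here a plain average; the package also offers
# topology-preserving weightings beyond this simple choice
K_meta = 0.5 * (trace_normalized_linear_kernel(X_view1)
                + trace_normalized_linear_kernel(X_view2))
embedding = KernelPCA(n_components=2, kernel="precomputed").fit_transform(K_meta)
```

`kernel="precomputed"` lets any combined Gram matrix be fed to a standard kernel PCA, which is what makes the meta-kernel approach generic across data types.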

  8. Proteome analysis of the almond kernel (Prunus dulcis).

    Science.gov (United States)

    Li, Shugang; Geng, Fang; Wang, Ping; Lu, Jiankang; Ma, Meihu

    2016-08-01

Almond (Prunus dulcis) is a popular tree nut worldwide and offers many benefits to human health. However, the importance of almond kernel proteins in nutrition and human health requires further evaluation. The present study presents a systematic evaluation of the proteins in the almond kernel using proteomic analysis. The nutrient and amino acid content in almond kernels from Xinjiang is similar to that of American varieties; however, Xinjiang varieties have a higher protein content. Two-dimensional electrophoresis analysis demonstrated a wide distribution of molecular weights and isoelectric points of almond kernel proteins. A total of 434 proteins were identified by LC-MS/MS, most of them experimentally confirmed for the first time. Gene ontology (GO) analysis of the 434 proteins indicated that they are mainly involved in metabolic processes (67.5%), cellular processes (54.1%), and single-organism processes (43.4%); their main molecular functions are catalytic activity (48.0%), binding (45.4%), and structural molecule activity (11.9%); and they are primarily distributed in the cell (59.9%), organelle (44.9%), and membrane (22.8%). The almond kernel is a source of a wide variety of proteins. This study provides important information contributing to the screening and identification of almond proteins, the understanding of almond protein function, and the development of almond protein products. © 2015 Society of Chemical Industry.

  9. Localized Multiple Kernel Learning Via Sample-Wise Alternating Optimization.

    Science.gov (United States)

    Han, Yina; Yang, Kunde; Ma, Yuanliang; Liu, Guizhong

    2014-01-01

Our objective is to train support vector machine (SVM)-based localized multiple kernel learning (LMKL) by alternating optimization between standard SVM solvers, which use the local combination of base kernels, and the sample-specific kernel weights. The advantage of alternating optimization, developed from state-of-the-art MKL, is the SVM-tied overall complexity and the simultaneous optimization of both the kernel weights and the classifier. Unfortunately, in LMKL the sample-specific character makes the updating of kernel weights a difficult nonconvex quadratic problem. In this paper, starting from a new primal-dual equivalence, the canonical objective on which state-of-the-art methods are based is first decomposed into an ensemble of objectives corresponding to each sample, namely sample-wise objectives. Then, the associated sample-wise alternating optimization method is conducted, in which the localized kernel weights can be obtained independently by solving their exclusive sample-wise objectives, either by linear programming (for the l1-norm) or in closed form (for lp-norms). At test time, the learnt kernel weights for the training data are deployed based on the nearest-neighbor rule. Hence, to guarantee their generality on the test part, we introduce neighborhood information and incorporate it into the empirical loss when deriving the sample-wise objectives. Extensive experiments on four benchmark machine learning datasets and two real-world computer vision datasets demonstrate the effectiveness and efficiency of the proposed algorithm.
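The local combination of base kernels can be sketched as follows: with sample-specific weights η (random softmax scores below, standing in for the weights a trained LMKL gating would produce), the combined Gram matrix is K[i,j] = Σ_m η[i,m] η[j,m] K_m[i,j], which remains positive semidefinite. This is a toy illustration of the kernel construction, not the paper's solver:

```python
import numpy as np

rng = np.random.default_rng(1)
n, M = 10, 3
# base kernels: Gram matrices of random feature sets (PSD by construction)
Xs = [rng.normal(size=(n, 4)) for _ in range(M)]
Ks = [X @ X.T for X in Xs]
# sample-specific weights eta[i, m], softmax-normalized per sample
logits = rng.normal(size=(n, M))
eta = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
# locally combined kernel: K[i, j] = sum_m eta[i, m] * eta[j, m] * Ks[m][i, j]
K = sum(np.outer(eta[:, m], eta[:, m]) * Ks[m] for m in range(M))
```

Each summand equals D_m K_m D_m with D_m = diag(η[:, m]), so the combined matrix is a sum of PSD terms and stays a valid kernel.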

  10. Binding energies and chemical shifts of least bound core electron excitations in cubic Asub(N)Bsub(8-N) semiconductors

    International Nuclear Information System (INIS)

    Bechstedt, F.; Enderlein, R.; Wischnewski, R.

    1981-01-01

Core electron binding energies Esup(B) with respect to the vacuum level and their chemical shifts are calculated for the least bound core levels of cations and anions of cubic Asub(N)Bsub(8-N) semiconductors. Starting from the HF binding energy of the free atom, absolute values of Esup(B) are obtained by adding core level shifts and relaxation energies. Core level shifts are calculated by means of an electrostatic model with ionic and bond charges according to Phillips' bond charge model. For the calculation of relaxation energies the linear dielectric theory of electronic polarization is applied. Valence and core electrons, and diagonal and non-diagonal screening, are taken into account. The theoretical results for chemical shifts of binding energies are compared with experimental values from XPS measurements corrected by work function data. Good agreement is obtained in all cases within the error limit of about one eV. Chemical and atomic trends of core level shifts, relaxation energies, and binding energies are discussed in terms of changes of atomic and solid state parameters. Chemical shifts and relaxation energies are predicted for various ternary Asub(N)Bsub(8-N) compounds. (author)

  11. Control Transfer in Operating System Kernels

    Science.gov (United States)

    1994-05-13

microkernel system that runs less code in the kernel address space. To realize the performance benefit of allocating stacks in unmapped kseg0 memory, the...review how I modified the Mach 3.0 kernel to use continuations. Because of Mach’s message-passing microkernel structure, interprocess communication was...critical control transfer paths, deeply-nested call chains are undesirable in any case because of the function call overhead. 4.1.3 Microkernel Operating

  12. On the performance of diagonal lattice space-time codes

    KAUST Repository

    Abediseid, Walid

    2013-11-01

There has been tremendous work on designing space-time codes for the quasi-static multiple-input multiple-output (MIMO) channel. All code designs to date focus on high performance, high rates, low-complexity encoding and decoding, or a combination of these criteria [1]-[9]. In this paper, we analyze in detail the performance limits of diagonal lattice space-time codes under lattice decoding. We present both lower and upper bounds on the average decoding error probability. We first derive a new closed-form expression for the lower bound using the so-called sphere lower bound. This bound presents the ultimate performance limit a diagonal lattice space-time code can achieve at any signal-to-noise ratio (SNR). The upper bound is then derived using the union bound, which demonstrates how the average error probability can be minimized by maximizing the minimum product distance of the code. Combining the lower and upper bounds on the average error probability yields a simple upper bound on the minimum product distance that any (complex) lattice code can achieve. In the high-SNR regime, we discuss the outage performance of such codes and provide the achievable diversity-multiplexing tradeoff under lattice decoding. © 2013 IEEE.
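The minimum product distance maximized in the union-bound argument can be checked by brute force for a tiny diagonal code; the rotation angle below is an illustrative algebraic choice, not a construction from the paper:

```python
import numpy as np
from itertools import combinations, product

def min_product_distance(codewords):
    """Minimum product distance over all codeword pairs (brute force)."""
    return min(np.prod(np.abs(x - y)) for x, y in combinations(codewords, 2))

# toy diagonal code: rotate 2-D integer (BPSK-like) points by a rotation
# chosen so that every pairwise difference has nonzero product distance
theta = 0.5 * np.arctan(2.0)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
codewords = [R @ np.array(p, dtype=float) for p in product([-1.0, 1.0], repeat=2)]
dmin = min_product_distance(codewords)   # > 0, so the code has full diversity
```

For this particular angle every pair attains the same product distance, 4/√5, so the minimum equals the maximum and the rotation is, in this toy sense, optimal for the 2-point-per-dimension constellation.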

  13. Significance of matrix diagonalization in modelling inelastic electron scattering

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Z. [University of Ulm, Ulm 89081 (Germany); Hambach, R. [University of Ulm, Ulm 89081 (Germany); University of Jena, Jena 07743 (Germany); Kaiser, U.; Rose, H. [University of Ulm, Ulm 89081 (Germany)

    2017-04-15

Electron scattering is routinely applied to investigate nanostructures, and hardware developments continue to expand the prospects of this technique. For example, imaging nanostructures with inelastically scattered electrons may allow component-sensitive images with atomic resolution. Modelling inelastic electron scattering is therefore essential for interpreting these images. The main obstacle to studying the inelastic scattering problem is its complexity. During inelastic scattering, incident electrons become entangled with the object, and the description of this process involves a multidimensional array. Since the simulation usually involves four-dimensional Fourier transforms, the computation is highly inefficient. In this work we offer one solution to handle the multidimensional problem. By transforming a high-dimensional array into a two-dimensional array, we are able to perform matrix diagonalization and approximate the original multidimensional array with its two-dimensional eigenvectors. Our procedure reduces the complicated multidimensional problem to a two-dimensional problem. In addition, it minimizes the number of two-dimensional problems. This method is very useful for studying multiple inelastic scattering. - Highlights: • 4D problems are involved in modelling inelastic electron scattering. • By means of matrix diagonalization, the 4D problems can be simplified to 2D problems. • The number of 2D problems is minimized by using this approach.
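The reduction sketched in the abstract can be illustrated numerically: flatten the two index pairs of a Hermitian four-dimensional array into a matrix, diagonalize it, and represent the array by a few two-dimensional eigenvectors. The random array below merely stands in for the physical scattering quantity:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# a Hermitian stand-in for the 4-D scattering quantity: build it from a
# random Hermitian 2-D matrix over the flattened index pairs (a,b) and (c,d)
A = rng.normal(size=(n * n, n * n)) + 1j * rng.normal(size=(n * n, n * n))
M = (A + A.conj().T) / 2            # Hermitian matrix M[(a,b), (c,d)]
T = M.reshape(n, n, n, n)           # the corresponding 4-D array T[a,b,c,d]

# diagonalize the 2-D form and keep the dominant eigenpairs; each eigenvector
# reshapes back into a 2-D array, so the 4-D problem becomes a few 2-D ones
w, V = np.linalg.eigh(M)
order = np.argsort(-np.abs(w))
k = n * n                           # keep all pairs here; truncate in practice
approx = sum(w[i] * np.outer(V[:, i], V[:, i].conj()) for i in order[:k])
modes_2d = [V[:, i].reshape(n, n) for i in order[:k]]
```

Keeping only the largest-|eigenvalue| terms gives the low-rank separable approximation; the random example above has no reason to be low-rank, but physical mixed dynamic form factors often are, which is what makes the truncation efficient.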

  14. Bivariate discrete beta Kernel graduation of mortality data.

    Science.gov (United States)

    Mazza, Angelo; Punzo, Antonio

    2015-07-01

Various parametric and nonparametric techniques have been proposed in the literature to graduate mortality data as a function of age. Nonparametric approaches, such as kernel smoothing regression, are often preferred because they do not assume any particular mortality law. Among the existing kernel smoothing approaches, the recently proposed (univariate) discrete beta kernel smoother has been shown to provide some benefits. Bivariate graduation, over age and calendar years or durations, is common practice in demography and actuarial sciences. In this paper, we generalize the discrete beta kernel smoother to the bivariate case, and we introduce an adaptive bandwidth variant that may provide additional benefits when data on exposures to the risk of death are available; furthermore, we outline a cross-validation procedure for bandwidth selection. Using simulation studies, we compare the bivariate approach proposed here with its corresponding univariate formulation and with two popular nonparametric bivariate graduation techniques, based on Epanechnikov kernels and on P-splines. To make the simulations realistic, a bivariate dataset based on probabilities of dying recorded for US males is used. The simulations confirm the gain in performance of the new bivariate approach with respect to both the univariate and the bivariate competitors.
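A one-dimensional sketch of the discrete beta kernel idea: evaluate a Beta density whose mode tracks the target age on the rescaled support, then normalize over the discrete ages. The parametrisation a = x/h + 1, b = (m − x)/h + 1 is one common choice and not necessarily the paper's exact formulation; the mortality curve is synthetic:

```python
import numpy as np
from scipy.stats import beta

def discrete_beta_weights(x, ages, h):
    """Discrete-beta-style kernel weights centred (approximately) at age x.

    The Beta density adapts its shape near the support boundaries, which is
    the main advantage over symmetric kernels for ages near 0 or the maximum.
    """
    m = ages.max()
    a, b = x / h + 1.0, (m - x) / h + 1.0
    w = beta.pdf((ages + 0.5) / (m + 1.0), a, b)
    return w / w.sum()

ages = np.arange(0, 101)
rates = 0.0005 * np.exp(0.09 * ages)         # toy Gompertz-like mortality curve
noisy = rates * np.exp(np.random.default_rng(0).normal(0, 0.1, ages.size))
smoothed = np.array([discrete_beta_weights(x, ages, h=2.0) @ noisy for x in ages])
```

The bivariate generalization of the paper replaces the single weight vector with a product (or adaptive) weighting over age and calendar year.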

  15. A framework for optimal kernel-based manifold embedding of medical image data.

    Science.gov (United States)

    Zimmer, Veronika A; Lekadir, Karim; Hoogendoorn, Corné; Frangi, Alejandro F; Piella, Gemma

    2015-04-01

Kernel-based dimensionality reduction is a widely used technique in medical image analysis. To fully unravel the underlying nonlinear manifold, the selection of an adequate kernel function and of its free parameters is critical. In practice, however, the kernel function is generally chosen as Gaussian or polynomial, and such standard kernels might not always be optimal for a given image dataset or application. In this paper, we present a study on the effect of kernel functions in nonlinear manifold embedding of medical image data. To this end, we first carry out a literature review of existing advanced kernels developed in the statistics, machine learning, and signal processing communities. In addition, we implement kernel-based formulations of well-known nonlinear dimensionality reduction techniques such as Isomap and Locally Linear Embedding, thus obtaining a unified framework for manifold embedding using kernels. Subsequently, we present a method to automatically choose a kernel function and its associated parameters from a pool of kernel candidates, with the aim of generating the most suitable manifold embeddings. Furthermore, we show how the calculated selection measures can be extended to take into account the spatial relationships in images, or used to combine several kernels to further improve the embedding results. Experiments are then carried out on various synthetic and phantom datasets for numerical assessment of the methods. Furthermore, the workflow is applied to real data, including brain manifolds and multispectral images, to demonstrate the importance of kernel selection in the analysis of high-dimensional medical images. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. Measurement of Weight of Kernels in a Simulated Cylindrical Fuel Compact for HTGR

    International Nuclear Information System (INIS)

    Kim, Woong Ki; Lee, Young Woo; Kim, Young Min; Kim, Yeon Ku; Eom, Sung Ho; Jeong, Kyung Chai; Cho, Moon Sung; Cho, Hyo Jin; Kim, Joo Hee

    2011-01-01

The TRISO-coated fuel particle for the high temperature gas-cooled reactor (HTGR) is composed of a nuclear fuel kernel and outer coating layers. The coated particles are mixed with a graphite matrix to make the HTGR fuel element. The weight of fuel kernels in an element is generally measured by chemical analysis or with a gamma-ray spectrometer. Although chemical analysis measures the weight of kernels accurately, the samples used in the analysis cannot be returned to the fabrication process, and radioactive wastes are generated during the inspection procedure. The gamma-ray spectrometer requires an elaborate reference sample to reduce measurement errors induced by differences between the geometric shapes of the test and reference samples. X-ray computed tomography (CT) is an alternative for measuring the weight of kernels in a compact nondestructively. In this study, X-ray CT is applied to measure the weight of kernels in a cylindrical compact containing simulated TRISO-coated particles with ZrO 2 kernels. The volume of kernels as well as the number of kernels in the simulated compact is measured from the 3-D density information, and the weight of kernels is calculated from either quantity. The result was reviewed by extracting the kernels from a compact and weighing them.

  17. Optimizing the performance of streaming numerical kernels on the IBM Blue Gene/P PowerPC 450 processor

    KAUST Repository

    Malas, Tareq Majed Yasin

    2012-05-21

Several emerging petascale architectures use energy-efficient processors with vectorized computational units and in-order thread processing. On these architectures the sustained performance of streaming numerical kernels, ubiquitous in the solution of partial differential equations, represents a challenge despite the regularity of memory access. Sophisticated optimization techniques are required to fully utilize the CPU. We propose a new method for constructing streaming numerical kernels using a high-level assembly synthesis and optimization framework. We describe an implementation of this method in Python targeting the IBM® Blue Gene®/P supercomputer's PowerPC® 450 core. This paper details the high-level design, construction, simulation, verification, and analysis of these kernels utilizing a subset of the CPU's instruction set. We demonstrate the effectiveness of our approach by implementing several three-dimensional stencil kernels over a variety of cached memory scenarios and analyzing the mechanically scheduled variants, including a 27-point stencil achieving a 1.7× speedup over the best previously published results. © The Author(s) 2012.
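For reference, a naive (unoptimized) 27-point stencil of the kind benchmarked above can be written directly with array slicing; the uniform 1/27 weights make it a simple box filter and are an illustrative choice, not the paper's coefficients:

```python
import numpy as np

def stencil27(a):
    """Apply a 27-point stencil to the interior of a 3-D grid.

    Uniform 1/27 weights (a 3x3x3 box filter) are an illustrative choice;
    real PDE kernels use distance-dependent coefficients. Boundary points
    are copied through unchanged.
    """
    nx, ny, nz = a.shape
    s = np.zeros((nx - 2, ny - 2, nz - 2))
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            for dk in (-1, 0, 1):
                s += a[1 + di:nx - 1 + di,
                       1 + dj:ny - 1 + dj,
                       1 + dk:nz - 1 + dk]
    out = a.copy()
    out[1:-1, 1:-1, 1:-1] = s / 27.0
    return out
```

The streaming character is visible even here: each output point touches 27 inputs with perfectly regular strides, which is exactly what the paper's assembly-synthesis framework schedules onto the PowerPC 450's in-order pipeline.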

  18. 3-D waveform tomography sensitivity kernels for anisotropic media

    KAUST Repository

    Djebbi, Ramzi

    2014-01-01

The complications in anisotropic multi-parameter inversion lie in the trade-off between the different anisotropy parameters. We compute the tomographic waveform sensitivity kernels for a VTI acoustic medium perturbation as a tool to investigate this ambiguity between the different parameters. We use dynamic ray tracing to efficiently handle the expensive computational cost for 3-D anisotropic models. Ray tracing also provides the ray direction information necessary for conditioning the sensitivity kernels to handle anisotropy. The NMO velocity and η parameter kernels showed maximum sensitivity for diving waves, which makes those parameters a relevant choice in wave equation tomography. The δ parameter kernel showed zero sensitivity; it can therefore serve as a secondary parameter to fit the amplitude in acoustic anisotropic inversion. Considering the limited penetration depth of diving waves, migration velocity analysis based kernels are introduced to fix the depth ambiguity with reflections and compute sensitivity maps in the deeper parts of the model.

  19. Optimizing the performance of streaming numerical kernels on the IBM Blue Gene/P PowerPC 450 processor

    KAUST Repository

    Malas, Tareq Majed Yasin; Ahmadia, Aron; Brown, Jed; Gunnels, John A.; Keyes, David E.

    2012-01-01

    Several emerging petascale architectures use energy-efficient processors with vectorized computational units and in-order thread processing. On these architectures the sustained performance of streaming numerical kernels, ubiquitous in the solution

  20. Implications for global energy markets: implications for non-fossil energy sources

    International Nuclear Information System (INIS)

    Grubb, Michael

    1998-01-01

This paper highlights recent developments concerning non-fossil energy and examines the impact of the Kyoto Protocol on non-fossil energy sources, as well as the implications for non-fossil sources in the implementation of the Kyoto Protocol. The current contributions of fossil and non-fossil fuels to electricity production, prospects for expansion of the established non-fossil sources, new renewables in Europe to date, renewables in Europe to 2010, and policy integration in the EU are discussed. Charts illustrating the generating capacity of renewable energy plant in Britain (1992-1996), wind energy capacity in Europe (1990-2000), and projected renewable energy contributions in the EU (wind, small hydro, photovoltaic, biomass and geothermal) are provided. (UK)

  1. The dipole form of the gluon part of the BFKL kernel

    International Nuclear Information System (INIS)

    Fadin, V.S.; Fiore, R.; Grabovsky, A.V.; Papa, A.

    2007-01-01

    The dipole form of the gluon part of the color singlet BFKL kernel in the next-to-leading order (NLO) is obtained in the coordinate representation by direct transfer from the momentum representation, where the kernel was calculated before. With this paper the transformation of the NLO BFKL kernel to the dipole form, started a few months ago with the quark part of the kernel, is completed

  2. Relativistic density matrix in the diagonal momentum representation. Bose-gas

    International Nuclear Information System (INIS)

    Makhlin, A.N.; Sinyukov, Yu.M.

    1984-01-01

A relativistically invariant treatment of the ideal Bose system, based on the diagonal momentum representation of the density matrix, is developed. The average occupation numbers and their correlators for statistical systems in arbitrary inertial frames are found on equal-time hypersurfaces. A relativistic partition function method for calculating the thermodynamic properties of gases moving as a whole is constructed.

  3. Relation between Feynman Cycles and Off-Diagonal Long-Range Order

    International Nuclear Information System (INIS)

    Ueltschi, Daniel

    2006-01-01

    The usual order parameter for Bose-Einstein condensation involves the off-diagonal correlation function of Penrose and Onsager, but an alternative is Feynman's notion of infinite cycles. We present a formula that relates both order parameters. We discuss its validity with the help of rigorous results and heuristic arguments. The conclusion is that infinite cycles do not always represent the Bose condensate

  4. Low energy neutron scattering for energy dependent cross sections. General considerations

    Energy Technology Data Exchange (ETDEWEB)

    Rothenstein, W; Dagan, R [Technion-Israel Inst. of Tech., Haifa (Israel). Dept. of Mechanical Engineering

    1996-12-01

We consider in this paper some aspects of neutron scattering at low energies by nuclei which are subject to thermal agitation. The scattering is determined by a temperature dependent joint scattering kernel, or the corresponding joint probability density, which is a function of two variables: the neutron energy after scattering and the cosine of the angle of scattering, for a specified energy and direction of motion of the neutron before the interaction takes place. This joint probability density is easy to calculate when the nucleus which causes the scattering is at rest. It can be expressed by a delta function, since there is a one-to-one correspondence between the neutron energy change and the cosine of the scattering angle. If the thermal motion of the target nucleus is taken into account, the calculation is rather more complicated. The delta function relation between the cosine of the angle of scattering and the neutron energy change is now averaged over the spectrum of velocities of the target nucleus, and becomes a joint kernel depending on both these variables. This function has a simple form if the target nucleus behaves as an ideal gas with a scattering cross section independent of energy. An energy dependent scattering cross section complicates the treatment further: an analytic expression is no longer obtained for the ideal gas temperature dependent joint scattering kernel as a function of the neutron energy after the interaction and the cosine of the scattering angle. Instead, the kernel is expressed by an inverse Fourier transform of a complex integrand, which is averaged over the velocity spectrum of the target nucleus. (Abstract Truncated)

  5. Nutritional value of high fiber co-products from the copra, palm kernel, and rice industries in diets fed to pigs.

    Science.gov (United States)

    Stein, Hans Henrik; Casas, Gloria Amparo; Abelilla, Jerubella Jerusalem; Liu, Yanhong; Sulabo, Rommel Casilda

    2015-01-01

High fiber co-products from the copra and palm kernel industries are by-products of the production of coconut oil and palm kernel oil. The co-products include copra meal, copra expellers, palm kernel meal, and palm kernel expellers. All 4 ingredients are very high in fiber, and their energy value is relatively low when fed to pigs. The protein concentration is between 14 and 22 %, and the protein has a low biological value and a very high Arg:Lys ratio. Digestibility of most amino acids is less than in soybean meal but close to that in corn. However, the digestibility of Lys is sometimes low due to Maillard reactions initiated by overheating during drying. Copra and palm kernel ingredients contain 0.5 to 0.6 % P. Most of the P in palm kernel meal and palm kernel expellers is bound to phytate, but in copra products less than one third of the P is bound to phytate. The digestibility of P is, therefore, greater in copra meal and copra expellers than in palm kernel ingredients. Inclusion of copra meal should be less than 15 % in diets fed to weanling pigs and less than 25 % in diets for growing-finishing pigs. Palm kernel meal may be included at up to 15 % in diets for weanling pigs and 25 % in diets for growing and finishing pigs. Rice bran contains the pericarp and aleurone layers of brown rice that are removed before polished rice is produced. Rice bran contains approximately 25 % neutral detergent fiber and 25 to 30 % starch. Rice bran has a greater concentration of P than most other plant ingredients, but 75 to 90 % of the P is bound in phytate. Inclusion of microbial phytase in the diets is, therefore, necessary if rice bran is used. Rice bran may contain 15 to 24 % fat, but it may also have been defatted, in which case the fat concentration is less than 5 %. Concentrations of digestible energy (DE) and metabolizable energy (ME) are slightly less in full fat rice bran than in corn, but defatted rice bran contains less than 75 % of the DE and ME in

  6. A new discrete dipole kernel for quantitative susceptibility mapping.

    Science.gov (United States)

    Milovic, Carlos; Acosta-Cabronero, Julio; Pinto, José Miguel; Mattern, Hendrik; Andia, Marcelo; Uribe, Sergio; Tejos, Cristian

    2018-09-01

Most approaches for quantitative susceptibility mapping (QSM) are based on a forward model approximation that employs a continuous Fourier transform operator to solve a differential equation system. Such a formulation, however, is prone to high-frequency aliasing. The aim of this study was to reduce such errors using an alternative dipole kernel formulation based on the discrete Fourier transform and discrete operators. The impact of this approach on forward model calculation and susceptibility inversion was evaluated in contrast to the continuous formulation, both with synthetic phantoms and with in vivo MRI data. The discrete kernel demonstrated systematically better fits to analytic field solutions, and showed fewer over-oscillations and aliasing artifacts while preserving low- and medium-frequency responses relative to those obtained with the continuous kernel. In the context of QSM estimation, the use of the proposed discrete kernel resulted in error reduction and increased sharpness. This proof-of-concept study demonstrated that discretizing the dipole kernel is advantageous for QSM. The impact on small or narrow structures such as the venous vasculature might be particularly relevant to high-resolution QSM applications with ultra-high field MRI - a topic for future investigations. The proposed dipole kernel has a straightforward implementation in existing QSM routines. Copyright © 2018 Elsevier Inc. All rights reserved.
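The contrast between the two formulations can be sketched in k-space. The unit dipole kernel is D = 1/3 − kz²/|k|²; replacing each squared frequency with the DFT symbol of a second-order central difference is one way to build a discrete-operator variant, and not necessarily the paper's exact kernel (unit voxel spacing is assumed throughout):

```python
import numpy as np

def dipole_kernel(shape, discrete=False):
    """k-space unit dipole kernel D = 1/3 - kz^2 / |k|^2 (unit voxel size).

    With discrete=True, each k_i^2 is replaced by the DFT symbol of the
    second-order central difference, 2 - 2*cos(2*pi*n_i/N_i): a sketch of
    the discrete-operator idea rather than a validated QSM kernel.
    """
    ks = []
    for N in shape:
        n = np.fft.fftfreq(N)                # cycles per sample in [-0.5, 0.5)
        ks.append((2 - 2 * np.cos(2 * np.pi * n)) if discrete
                  else (2 * np.pi * n) ** 2)
    kx2, ky2, kz2 = np.meshgrid(*ks, indexing="ij")
    k2 = kx2 + ky2 + kz2
    with np.errstate(invalid="ignore", divide="ignore"):
        D = 1.0 / 3.0 - kz2 / k2
    D[k2 == 0] = 0.0                          # remove the singular DC term
    return D
```

Both variants agree at low frequencies, since 2 − 2·cos(x) ≈ x² for small x; they differ near the Nyquist frequency, which is exactly where the continuous kernel aliases.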

  7. Genetic Analysis of Kernel Traits in Maize-Teosinte Introgression Populations

    Directory of Open Access Journals (Sweden)

    Zhengbin Liu

    2016-08-01

Full Text Available Seed traits have been targeted by human selection during the domestication of crop species as a way to increase the caloric and nutritional content of food during the transition from hunter-gatherer to early farming societies. The primary seed trait under selection was likely seed size/weight, as it is most directly related to overall grain yield. Additional seed traits involved in seed shape may also have contributed to larger grain. Maize (Zea mays ssp. mays) kernel weight has increased more than 10-fold in the 9000 years since domestication from its wild ancestor, teosinte (Z. mays ssp. parviglumis). In order to study how size and shape affect kernel weight, we analyzed kernel morphometric traits in a set of 10 maize-teosinte introgression populations using digital imaging software. We identified quantitative trait loci (QTL) for kernel area and length with moderate allelic effects that colocalize with kernel weight QTL. Several genomic regions with strong effects during maize domestication were detected, and a genetic framework for kernel traits was characterized by complex pleiotropic interactions. Our results both confirm prior reports of kernel domestication loci and identify previously uncharacterized QTL with a range of allelic effects, enabling future research into the genetic basis of these traits.

  8. SU-E-T-154: Calculation of Tissue Dose Point Kernels Using GATE Monte Carlo Simulation Toolkit to Compare with Water Dose Point Kernel

    Energy Technology Data Exchange (ETDEWEB)

    Khazaee, M [shahid beheshti university, Tehran, Tehran (Iran, Islamic Republic of); Asl, A Kamali [Shahid Beheshti University, Tehran, Iran., Tehran, Tehran (Iran, Islamic Republic of); Geramifar, P [Shariati Hospital, Tehran, Iran., Tehran, Tehran (Iran, Islamic Republic of)

    2015-06-15

Purpose: The objective of this study was to assess the use of the water dose point kernel (DPK) instead of tissue dose point kernels in convolution algorithms. To the best of our knowledge, in providing the 3D distribution of absorbed dose from a 3D distribution of activity, the human body is considered equivalent to water; as a result, tissue variations are not considered in patient-specific dosimetry. Methods: In this study GATE v7.0 was used to calculate tissue dose point kernels. The beta-emitter radionuclides considered in this simulation include Y-90, Lu-177 and P-32, which are commonly used in nuclear medicine. The comparison has been performed for dose point kernels of adipose, bone, breast, heart, intestine, kidney, liver, lung and spleen versus the water dose point kernel. Results: In order to validate the simulation, the results for the 90Y DPK in water were compared with the published results of Papadimitroulas et al (Med. Phys., 2012). The results show that the mean differences between the water DPK and the other soft-tissue DPKs range between 0.6% and 1.96% for 90Y, except for lung and bone, where the observed discrepancies are 6.3% and 12.19%, respectively. The range of DPK differences for 32P is between 1.74% for breast and 18.85% for bone. For 177Lu, the highest difference belongs to bone, at 16.91%; for the other soft tissues the smallest discrepancy is observed in kidney, at 1.68%. Conclusion: In all tissues except lung and bone, the GATE results for the dose point kernel were comparable to the water dose point kernel, which demonstrates the appropriateness of applying the water dose point kernel instead of soft tissues in the field of nuclear medicine.

  9. SU-E-T-154: Calculation of Tissue Dose Point Kernels Using GATE Monte Carlo Simulation Toolkit to Compare with Water Dose Point Kernel

    International Nuclear Information System (INIS)

    Khazaee, M; Asl, A Kamali; Geramifar, P

    2015-01-01

Purpose: The objective of this study was to assess the use of the water dose point kernel (DPK) instead of tissue dose point kernels in convolution algorithms. To the best of our knowledge, in providing the 3D distribution of absorbed dose from a 3D distribution of activity, the human body is considered equivalent to water; as a result, tissue variations are not considered in patient-specific dosimetry. Methods: In this study GATE v7.0 was used to calculate tissue dose point kernels. The beta-emitter radionuclides considered in this simulation include Y-90, Lu-177 and P-32, which are commonly used in nuclear medicine. The comparison has been performed for dose point kernels of adipose, bone, breast, heart, intestine, kidney, liver, lung and spleen versus the water dose point kernel. Results: In order to validate the simulation, the results for the 90Y DPK in water were compared with the published results of Papadimitroulas et al (Med. Phys., 2012). The results show that the mean differences between the water DPK and the other soft-tissue DPKs range between 0.6% and 1.96% for 90Y, except for lung and bone, where the observed discrepancies are 6.3% and 12.19%, respectively. The range of DPK differences for 32P is between 1.74% for breast and 18.85% for bone. For 177Lu, the highest difference belongs to bone, at 16.91%; for the other soft tissues the smallest discrepancy is observed in kidney, at 1.68%. Conclusion: In all tissues except lung and bone, the GATE results for the dose point kernel were comparable to the water dose point kernel, which demonstrates the appropriateness of applying the water dose point kernel instead of soft tissues in the field of nuclear medicine.

  10. Scientific opinion on the acute health risks related to the presence of cyanogenic glycosides in raw apricot kernels and products derived from raw apricot kernels

    DEFF Research Database (Denmark)

    Petersen, Annette

    of kernels promoted (10 and 60 kernels/day for the general population and cancer patients, respectively), exposures exceeded the ARfD 17–413 and 3–71 times in toddlers and adults, respectively. The estimated maximum quantity of apricot kernels (or raw apricot material) that can be consumed without exceeding...

  11. Non-linear multivariate and multiscale monitoring and signal denoising strategy using Kernel Principal Component Analysis combined with Ensemble Empirical Mode Decomposition method

    Science.gov (United States)

    Žvokelj, Matej; Zupan, Samo; Prebil, Ivan

    2011-10-01

    The article presents a novel non-linear multivariate and multiscale statistical process monitoring and signal denoising method which combines the strengths of the Kernel Principal Component Analysis (KPCA) non-linear multivariate monitoring approach with the benefits of Ensemble Empirical Mode Decomposition (EEMD) in handling multiscale system dynamics. The proposed method, which enables us to cope with complex, even severely non-linear, systems with a wide dynamic range, was named EEMD-based multiscale KPCA (EEMD-MSKPCA). The method is quite general in nature and could be used in different areas for various tasks, even without deep understanding of the nature of the system under consideration. Its efficiency was first demonstrated on an illustrative example, after which its applicability to bearing fault detection, diagnosis and signal denoising was tested on simulated as well as actual vibration and acoustic emission (AE) signals measured on a purpose-built large-size low-speed bearing test stand. The positive results obtained indicate that the proposed EEMD-MSKPCA method provides a promising tool for tackling non-linear multiscale data which present a convolved picture of many events occupying different regions of the time-frequency plane.
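    The KPCA building block of the method above can be sketched with plain NumPy: build an RBF Gram matrix, center it in feature space, and take its leading eigenvectors as non-linear principal components. The data, kernel width and component count are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))            # 50 samples, 3 variables (toy data)

# RBF (Gaussian) kernel Gram matrix -- the "K" of kernel PCA.
gamma = 0.5
sq = np.sum(X ** 2, axis=1)
K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))

# Center the kernel matrix in feature space: Kc = K - 1K - K1 + 1K1.
n = K.shape[0]
one = np.full((n, n), 1.0 / n)
Kc = K - one @ K - K @ one + one @ K @ one

# Non-linear principal components = leading eigenvectors of the centered Gram.
vals, vecs = np.linalg.eigh(Kc)
vals, vecs = vals[::-1], vecs[:, ::-1]  # descending eigenvalue order
scores = vecs[:, :2] * np.sqrt(np.clip(vals[:2], 0.0, None))  # 2-D projection

print(scores.shape)  # -> (50, 2)
```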

  12. Periodic Anderson model with correlated conduction electrons: Variational and exact diagonalization study

    Science.gov (United States)

    Hagymási, I.; Itai, K.; Sólyom, J.

    2012-06-01

    We investigate an extended version of the periodic Anderson model (the so-called periodic Anderson-Hubbard model) with the aim to understand the role of interaction between conduction electrons in the formation of the heavy-fermion and mixed-valence states. Two methods are used: (i) variational calculation with the Gutzwiller wave function optimizing numerically the ground-state energy and (ii) exact diagonalization of the Hamiltonian for short chains. The f-level occupancy and the renormalization factor of the quasiparticles are calculated as a function of the energy of the f orbital for a wide range of the interaction parameters. The results obtained by the two methods are in reasonably good agreement for the periodic Anderson model. The agreement is maintained even when the interaction between band electrons, Ud, is taken into account, except for the half-filled case. This discrepancy can be explained by the difference between the physics of the one- and higher-dimensional models. We find that this interaction shifts and widens the energy range of the bare f level, where heavy-fermion behavior can be observed. For large-enough Ud this range may lie even above the bare conduction band. The Gutzwiller method indicates a robust transition from Kondo insulator to Mott insulator in the half-filled model, while Ud enhances the quasiparticle mass when the filling is close to half filling.

  13. Off-diagonal series expansion for quantum partition functions

    Science.gov (United States)

    Hen, Itay

    2018-05-01

    We derive an integral-free thermodynamic perturbation series expansion for quantum partition functions which enables an analytical term-by-term calculation of the series. The expansion is carried out around the partition function of the classical component of the Hamiltonian with the expansion parameter being the strength of the off-diagonal, or quantum, portion. To demonstrate the usefulness of the technique we analytically compute to third order the partition functions of the 1D Ising model with longitudinal and transverse fields, and the quantum 1D Heisenberg model.
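    The partition-function setting of this abstract can be illustrated on the smallest possible example: a two-spin transverse-field Ising Hamiltonian diagonalized exactly with NumPy. When the off-diagonal (transverse-field) part is switched off, the exact quantum partition function must reduce to the classical Ising one, which gives a cheap consistency check; all couplings below are arbitrary:

```python
import numpy as np

# Pauli matrices and identity for one spin-1/2.
sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)

def Z(J, G, beta):
    """Exact partition function of H = -J s1z s2z - G (s1x + s2x)."""
    H = -J * np.kron(sz, sz) - G * (np.kron(sx, I2) + np.kron(I2, sx))
    return float(np.sum(np.exp(-beta * np.linalg.eigvalsh(H))))

# With the off-diagonal (quantum) part turned off, Z must equal the
# classical two-spin Ising result 2(e^{bJ} + e^{-bJ}).
beta, J = 0.7, 1.3
Z_classical = 2.0 * (np.exp(beta * J) + np.exp(-beta * J))
print(np.isclose(Z(J, 0.0, beta), Z_classical))  # -> True
```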

  14. Kernel Function Tuning for Single-Layer Neural Networks

    Czech Academy of Sciences Publication Activity Database

    Vidnerová, Petra; Neruda, Roman

    -, accepted 28.11.2017 (2018) ISSN 2278-0149 R&D Projects: GA ČR GA15-18108S Institutional support: RVO:67985807 Keywords: single-layer neural networks * kernel methods * kernel function * optimisation Subject RIV: IN - Informatics, Computer Science http://www.ijmerr.com/

  15. Broken rice kernels and the kinetics of rice hydration and texture during cooking.

    Science.gov (United States)

    Saleh, Mohammed; Meullenet, Jean-Francois

    2013-05-01

    During rice milling and processing, broken kernels are inevitably present, although to date it has been unclear how the presence of broken kernels affects rice hydration and cooked rice texture. Therefore, this work studied the effect of broken kernels in a rice sample on rice hydration and texture during cooking. Two medium-grain and two long-grain rice cultivars were harvested, dried and milled, and the broken kernels were separated from unbroken kernels. Broken rice kernels were subsequently combined with unbroken rice kernels, forming treatments of 0, 40, 150, 350 or 1000 g kg(-1) broken kernel ratio. Rice samples were then cooked and the moisture content of the cooked rice, the moisture uptake rate, and rice hardness and stickiness were measured. As the amount of broken rice kernels increased, rice sample texture became increasingly softer (P < 0.05), and hardness was negatively correlated with the percentage of broken kernels in rice samples. Differences in the proportions of broken rice in a milled rice sample play a major role in determining the texture properties of cooked rice. Variations in the moisture migration kinetics between broken and unbroken kernels caused faster hydration of the cores of broken rice kernels, with greater starch leach-out during cooking affecting the texture of the cooked rice. The texture of cooked rice can be controlled, to some extent, by varying the proportion of broken kernels in milled rice. © 2012 Society of Chemical Industry.

  16. Background field removal using a region adaptive kernel for quantitative susceptibility mapping of human brain.

    Science.gov (United States)

    Fang, Jinsheng; Bao, Lijun; Li, Xu; van Zijl, Peter C M; Chen, Zhong

    2017-08-01

    Background field removal is an important MR phase preprocessing step for quantitative susceptibility mapping (QSM). It separates the local field induced by tissue magnetic susceptibility sources from the background field generated by sources outside a region of interest, e.g. the brain, such as the air-tissue interface. In the vicinity of air-tissue boundaries, e.g. the skull and paranasal sinuses, where large susceptibility variations exist, present background field removal methods are usually insufficient, and these regions often need to be excluded by brain mask erosion at the expense of losing information on the local field, and thus on susceptibility measures, in these regions. In this paper, we propose an extension to the variable-kernel sophisticated harmonic artifact reduction for phase data (V-SHARP) background field removal method using a region adaptive kernel (R-SHARP), in which a scalable spherical Gaussian kernel (SGK) is employed with its kernel radius and weights adjustable according to an energy "functional" reflecting the magnitude of field variation. Such an energy functional is defined in terms of a contour and two fitting functions incorporating regularization terms, from which a curve evolution model in level set formulation is derived for energy minimization. We utilize it to detect regions with a large field gradient caused by strong susceptibility variation. In such regions, the SGK will have a small radius and high weight at the sphere center, in a manner adaptive to the voxel energy of the field perturbation. Using the proposed method, the background field generated from external sources can be effectively removed to obtain a more accurate estimation of the local field, and thus of the QSM dipole inversion that maps local tissue susceptibility sources. Numerical simulation, phantom and in vivo human brain data demonstrate improved performance of R-SHARP compared to the V-SHARP and RESHARP (regularization enabled SHARP) methods, even when the whole paranasal sinus regions
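    The harmonic-field idea underlying the SHARP family of methods can be demonstrated numerically: a background field that is harmonic inside the region of interest equals its (discrete) spherical mean, so subtracting the mean-filtered field annihilates the background while a local source survives. The grid, field, and radius-1 six-neighbour "sphere" below are simplifications for illustration only:

```python
import numpy as np

n = 32
z, y, x = np.meshgrid(*(np.arange(n),) * 3, indexing="ij")
background = 0.01 * x - 0.02 * z            # linear field => harmonic
local = np.zeros((n, n, n))
local[16, 16, 16] = 1.0                     # point-like local source
field = background + local

# 6-neighbour average as a crude discrete "spherical mean" (radius-1 kernel).
def sphere_mean(f):
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) +
            np.roll(f, 1, 2) + np.roll(f, -1, 2)) / 6.0

# The mean-value filter annihilates the harmonic background away from edges,
# while the local source is preserved at its own voxel.
residual = field - sphere_mean(field)
print(abs(residual[8, 8, 8]) < 1e-12)       # -> True (background removed)
```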

  17. Automatic plankton image classification combining multiple view features via multiple kernel learning.

    Science.gov (United States)

    Zheng, Haiyong; Wang, Ruchen; Yu, Zhibin; Wang, Nan; Gu, Zhaorui; Zheng, Bing

    2017-12-28

    Plankton, including phytoplankton and zooplankton, are the main source of food for organisms in the ocean and form the base of the marine food chain. As fundamental components of marine ecosystems, plankton are very sensitive to environmental changes, and the study of plankton abundance and distribution is crucial in order to understand environmental changes and protect marine ecosystems. This study was carried out to develop a widely applicable plankton classification system with high accuracy for the increasing number of various imaging devices. The literature shows that most plankton image classification systems have been limited to a single specific imaging device and a relatively narrow taxonomic scope, and a truly practical system for automatic plankton classification is still non-existent; this study is partly intended to fill this gap. Inspired by the analysis of the literature and the development of technology, we focused on the requirements of practical application and propose an automatic system for plankton image classification combining multiple view features via multiple kernel learning (MKL). On the one hand, in order to describe the biomorphic characteristics of plankton more completely and comprehensively, we combined general features with robust features, in particular by adding features like the Inner-Distance Shape Context for morphological representation. On the other hand, we divided all the features into different types from multiple views and fed them to multiple classifiers, instead of only one, by optimally combining the different kernel matrices computed from the different types of features via multiple kernel learning. Moreover, we also applied a feature selection method to choose optimal feature subsets from redundant features to suit different datasets from different imaging devices. We implemented our proposed classification system on three different datasets covering more than 20 categories from phytoplankton to zooplankton.
    The experimental results validated that our system
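    At its simplest, the multiple kernel learning idea used above amounts to a weighted convex combination of per-view Gram matrices, which is again a valid (positive semi-definite) kernel. A toy sketch with two made-up feature views and fixed weights follows; a real MKL solver would learn the weights:

```python
import numpy as np

rng = np.random.default_rng(1)
X_shape = rng.normal(size=(30, 5))   # toy "shape" view (e.g. shape context)
X_tex = rng.normal(size=(30, 8))     # toy "texture" view

def rbf_gram(X, gamma):
    """RBF Gram matrix for one feature view."""
    sq = np.sum(X ** 2, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))

K1 = rbf_gram(X_shape, 0.1)
K2 = rbf_gram(X_tex, 0.1)

# MKL at its simplest: a convex combination of per-view kernels.
w = np.array([0.7, 0.3])             # fixed here; MKL would optimize these
K = w[0] * K1 + w[1] * K2

# A convex combination of PSD kernels is again PSD.
print(np.linalg.eigvalsh(K).min() > -1e-8)  # -> True
```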

  18. Local coding based matching kernel method for image classification.

    Directory of Open Access Journals (Sweden)

    Yan Song

    Full Text Available This paper mainly focuses on how to effectively and efficiently measure visual similarity for local feature based representation. Among existing methods, metrics based on Bag of Visual Words (BoV) techniques are efficient and conceptually simple, at the expense of effectiveness. By contrast, kernel based metrics are more effective, but at the cost of greater computational complexity and increased storage requirements. We show that a unified visual matching framework can be developed to encompass both BoV and kernel based metrics, in which the local kernel plays an important role between feature pairs or between features and their reconstruction. Generally, local kernels are defined using Euclidean distance or its derivatives, based either explicitly or implicitly on an assumption of Gaussian noise. However, local features such as SIFT and HoG often follow a heavy-tailed distribution, which tends to undermine the motivation behind Euclidean metrics. Motivated by recent advances in feature coding techniques, a novel efficient local coding based matching kernel (LCMK) method is proposed. This exploits the manifold structures in Hilbert space derived from local kernels. The proposed method combines the advantages of both BoV and kernel based metrics, and achieves a linear computational complexity. This enables efficient and scalable visual matching to be performed on large scale image sets. To evaluate the effectiveness of the proposed LCMK method, we conduct extensive experiments with widely used benchmark datasets, including the 15-Scenes, Caltech101/256, and PASCAL VOC 2007 and 2011 datasets. Experimental results confirm the effectiveness of the relatively efficient LCMK method.

  19. Emotion Recognition from Single-Trial EEG Based on Kernel Fisher’s Emotion Pattern and Imbalanced Quasiconformal Kernel Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Yi-Hung Liu

    2014-07-01

    Full Text Available Electroencephalogram-based emotion recognition (EEG-ER) has received increasing attention in the fields of health care, affective computing, and brain-computer interfaces (BCI). However, satisfactory ER performance within a bi-dimensional and non-discrete emotional space using single-trial EEG data remains a challenging task. To address this issue, we propose a three-layer scheme for single-trial EEG-ER. In the first layer, a set of spectral powers of different EEG frequency bands are extracted from multi-channel single-trial EEG signals. In the second layer, the kernel Fisher’s discriminant analysis method is applied to further extract features with better discrimination ability from the EEG spectral powers. The feature vector produced by layer 2 is called a kernel Fisher’s emotion pattern (KFEP), and is sent into layer 3 for further classification, where the proposed imbalanced quasiconformal kernel support vector machine (IQK-SVM) serves as the emotion classifier. The outputs of the three-layer EEG-ER system include labels of emotional valence and arousal. Furthermore, to collect effective training and testing datasets for the current EEG-ER system, we also use an emotion-induction paradigm in which a set of pictures selected from the International Affective Picture System (IAPS) are employed as emotion induction stimuli. The performance of the proposed three-layer solution is compared with that of other EEG spectral power-based features and emotion classifiers. Results on 10 healthy participants indicate that the proposed KFEP feature performs better than other spectral power features, and IQK-SVM outperforms traditional SVM in terms of EEG-ER accuracy. Our findings also show that the proposed EEG-ER scheme achieves the highest classification accuracies for valence (82.68%) and arousal (84.79%) among all tested methods.

  20. Process for producing metal oxide kernels and kernels so obtained

    International Nuclear Information System (INIS)

    Lelievre, Bernard; Feugier, Andre.

    1974-01-01

    The process described is for producing fissile or fertile metal oxide kernels used in the fabrication of fuels for high temperature nuclear reactors. This process consists in adding to an aqueous solution of at least one metallic salt, particularly actinide nitrates, at least one chemical compound capable of releasing ammonia, and in dispersing the solution thus obtained drop by drop into a hot organic phase to gel the drops and transform them into solid particles. These particles are then washed, dried and treated to turn them into oxide kernels. The organic phase used for the gel reaction is a mixture of two organic liquids, one acting as a solvent and the other being a product capable of extracting the anions from the metallic salt of the drop at the time of gelling. Preferably an amine is used as the product capable of extracting the anions. Additionally, an alcohol that causes a partial dehydration of the drops can be employed as the solvent, thus helping to increase the resistance of the particles [fr

  1. SU-E-T-209: Independent Dose Calculation in FFF Modulated Fields with Pencil Beam Kernels Obtained by Deconvolution

    International Nuclear Information System (INIS)

    Azcona, J; Burguete, J

    2014-01-01

    Purpose: To obtain the pencil beam kernels that characterize a megavoltage photon beam generated in a FFF linac by experimental measurements, and to apply them for dose calculation in modulated fields. Methods: Several Kodak EDR2 radiographic films were irradiated with a 10 MV FFF photon beam from a Varian TrueBeam (Varian Medical Systems, Palo Alto, CA) linac, at depths of 5, 10, 15, and 20 cm in polystyrene (RW3 water equivalent phantom, PTW Freiburg, Germany). The irradiation field was a 50 mm diameter circular field, collimated with a lead block. The measured dose leads to the kernel characterization, assuming that the energy fluence exiting the linac head and further collimated originates from a point source. The three-dimensional kernel was obtained by deconvolution at each depth using the Hankel transform. A correction of the low-dose part of the kernel was performed to reproduce the experimental output factors accurately. The kernels were used to calculate modulated dose distributions in six modulated fields, which were compared through the gamma index to the absolute dose measured by film in the RW3 phantom. Results: The resulting kernels properly characterize the global beam penumbra. The output factor-based correction was carried out by adding the amount of signal necessary to reproduce the experimental output factor in steps of 2 mm, starting at a radius of 4 mm, where the kernel signal was in all cases below 10% of its maximum value. With this correction, the number of points that pass the gamma index criteria (3%, 3 mm) in the modulated fields is in all cases at least 99.6% of the total number of points. Conclusion: A system for independent dose calculation in modulated fields from FFF beams has been developed. Pencil beam kernels were obtained and their ability to accurately calculate dose in homogeneous media was demonstrated.
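    The deconvolution step can be illustrated in one dimension with a regularized Fourier-domain division (the paper's radially symmetric 2-D case uses the Hankel transform instead, but the principle is the same). The field shape, kernel, and regularization constant below are illustrative:

```python
import numpy as np

# dose = fluence (*) kernel, so in Fourier space kernel_hat = dose_hat / F.
n = 256
x = np.arange(n) - n // 2
fluence = (np.abs(x) < 25).astype(float)        # flat, hard-edged field
kernel_true = np.exp(-np.abs(x) / 4.0)          # toy pencil-beam-like kernel
kernel_true /= kernel_true.sum()

F = np.fft.fft(fluence)
dose = np.real(np.fft.ifft(F * np.fft.fft(kernel_true)))  # forward model

# Regularized (Wiener-style) division guards frequencies where F ~ 0.
eps = 1e-6
kernel_rec = np.real(np.fft.ifft(np.fft.fft(dose) * np.conj(F) /
                                 (np.abs(F) ** 2 + eps)))

print(np.max(np.abs(kernel_rec - kernel_true)) < 1e-2)  # -> True
```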

  2. Geodesic exponential kernels: When Curvature and Linearity Conflict

    DEFF Research Database (Denmark)

    Feragen, Aase; Lauze, François; Hauberg, Søren

    2015-01-01

    manifold, the geodesic Gaussian kernel is only positive definite if the Riemannian manifold is Euclidean. This implies that any attempt to design geodesic Gaussian kernels on curved Riemannian manifolds is futile. However, we show that for spaces with conditionally negative definite distances the geodesic...
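    The positive-definiteness question raised in this abstract is easy to probe numerically: build Gaussian kernel matrices from Euclidean distances (guaranteed positive semi-definite) and from geodesic arc-length distances on the unit circle, and inspect their smallest eigenvalues. Points and bandwidth are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(42)
theta = np.sort(rng.uniform(0.0, 2.0 * np.pi, 40))
pts = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # points on unit circle

# Chordal (Euclidean) distances vs. geodesic (arc-length) distances.
d_euc = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
dt = np.abs(theta[:, None] - theta[None, :])
d_geo = np.minimum(dt, 2.0 * np.pi - dt)

K_euc = np.exp(-d_euc ** 2)   # Euclidean Gaussian kernel: always PSD
K_geo = np.exp(-d_geo ** 2)   # geodesic Gaussian kernel: no PSD guarantee

print(np.linalg.eigvalsh(K_euc).min() > -1e-10)  # -> True
print(np.linalg.eigvalsh(K_geo).min())           # may dip below zero
```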

  3. Real time kernel performance monitoring with SystemTap

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    SystemTap is a dynamic method of monitoring and tracing the operation of a running Linux kernel. In this talk I will present a few practical use cases where SystemTap allowed me to turn otherwise complex userland monitoring tasks into simple kernel probes.

  4. Comparative Analysis of Kernel Methods for Statistical Shape Learning

    National Research Council Canada - National Science Library

    Rathi, Yogesh; Dambreville, Samuel; Tannenbaum, Allen

    2006-01-01

    .... In this work, we perform a comparative analysis of shape learning techniques such as linear PCA, kernel PCA, locally linear embedding and propose a new method, kernelized locally linear embedding...

  5. Minimum Information Loss Based Multi-kernel Learning for Flagellar Protein Recognition in Trypanosoma Brucei

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-12-01

    Trypanosoma brucei (T. brucei) is an important pathogenic agent of African trypanosomiasis. The flagellum is an essential and multifunctional organelle of T. brucei, so it is very important to recognize flagellar proteins among T. brucei proteins for the purposes of both biological research and drug design. In this paper, we investigate computational recognition of flagellar proteins in T. brucei by pattern recognition methods. It is argued that an optimal decision function for flagellar protein recognition can be obtained as the difference of the probability functions of flagellar and non-flagellar proteins. We propose to learn a multi-kernel classification function to approximate this optimal decision function by minimizing the information loss of the approximation, measured by the Kullback-Leibler (KL) divergence. An iterative multi-kernel classifier learning algorithm is developed to minimize the KL divergence for the problem of T. brucei flagellar protein recognition; experiments show its advantage over other T. brucei flagellar protein recognition and multi-kernel learning methods. © 2014 IEEE.
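    The Kullback-Leibler divergence used above as the information-loss measure can be sketched for discrete distributions in a few lines; the distributions here are toys, not the paper's protein data:

```python
import numpy as np

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) for discrete distributions."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0                       # 0 * log(0/q) = 0 by convention
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

p = [0.5, 0.5]
q = [0.9, 0.1]
print(round(kl_divergence(p, q), 4))   # -> 0.5108
print(kl_divergence(p, p))             # -> 0.0  (zero iff p == q)
```

Minimizing this quantity over the weights of a multi-kernel decision function is what drives the iterative learning algorithm described in the abstract.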

  6. Semi-supervised learning for ordinal Kernel Discriminant Analysis.

    Science.gov (United States)

    Pérez-Ortiz, M; Gutiérrez, P A; Carbonero-Ruz, M; Hervás-Martínez, C

    2016-12-01

    Ordinal classification considers those classification problems where the labels of the variable to predict follow a given order. Naturally, labelled data is scarce or difficult to obtain in this type of problems because, in many cases, ordinal labels are given by a user or expert (e.g. in recommendation systems). Firstly, this paper develops a new strategy for ordinal classification where both labelled and unlabelled data are used in the model construction step (a scheme which is referred to as semi-supervised learning). More specifically, the ordinal version of kernel discriminant learning is extended for this setting considering the neighbourhood information of unlabelled data, which is proposed to be computed in the feature space induced by the kernel function. Secondly, a new method for semi-supervised kernel learning is devised in the context of ordinal classification, which is combined with our developed classification strategy to optimise the kernel parameters. The experiments conducted compare 6 different approaches for semi-supervised learning in the context of ordinal classification in a battery of 30 datasets, showing (1) the good synergy of the ordinal version of discriminant analysis and the use of unlabelled data and (2) the advantage of computing distances in the feature space induced by the kernel function. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Reply to the 'Comment on "Revisiting the definition of local hardness and hardness kernel"' by C. Morell, F. Guégan, W. Lamine, and H. Chermette, Phys. Chem. Chem. Phys., 2018, 20, DOI.

    Science.gov (United States)

    Franco-Pérez, Marco; Polanco-Ramírez, Carlos A; Gázquez, José L; Ayers, Paul W

    2018-03-28

    This reply complements the comment of Guégan et al. on our recent work on the revision of the local hardness and hardness kernel concepts. Guégan et al. analyze our work using a Taylor series expansion of the energy as a functional of the electron density, to show that our procedure opens a new way to define local descriptors. In this contribution we show that the strategy we followed for the local hardness and the hardness kernel is even more general, and that it can be used to derive from a global response function its corresponding local and non-local counterparts by: (1) requiring that the integral over one of the two variables that characterize the non-local function leads to the local function, and that the integral over the local function leads to the global response index, and (2) assuming that the global and local functions are related through the electronic density, by making use of the chain rule for functional derivatives.

  8. Incorporating Non-energy Benefits into Energy Savings Performance Contracts

    Energy Technology Data Exchange (ETDEWEB)

    Larsen, Peter; Goldman, Charles; Gilligan, Donald; Singer, Terry

    2012-06-01

    This paper evaluates the issue of non-energy benefits within the context of the U.S. energy services company (ESCO) industry, a growing industry comprised of companies that provide energy savings and other benefits to customers through the use of performance-based contracting. Recent analysis has found that ESCO projects in the public/institutional sector, especially at K-12 schools, are using performance-based contracting, at the behest of the customers, to partially (but not fully) offset substantial accumulated deferred maintenance needs (e.g., asbestos removal, wiring) and measures that have very long paybacks (roof replacement). This trend is affecting the traditional economic measures policymakers use to evaluate success on a benefit-to-cost basis. Moreover, the value of non-energy benefits that can offset some or all of the cost of the non-energy measures (including operations and maintenance (O&M) savings, avoided capital costs, and tradable pollution emission allowances) is not always incorporated into a formal cost-effectiveness analysis of ESCO projects. Non-energy benefits are clearly important to customers, but state and federal laws that govern the acceptance of these types of benefits for ESCO projects vary widely (i.e., 0-100 percent of allowable savings can come from one or more non-energy categories). Clear and consistent guidance on what types of savings are recognized in energy savings agreements under performance contracts is necessary, particularly where customers are searching for deep energy efficiency gains in the building sector.

  9. Parameter optimization in the regularized kernel minimum noise fraction transformation

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Vestergaard, Jacob Schack

    2012-01-01

    Based on the original, linear minimum noise fraction (MNF) transformation and kernel principal component analysis, a kernel version of the MNF transformation was recently introduced. Inspired by previous work, we here give a simple method for finding optimal parameters in a regularized version of kernel MNF analysis. We consider the model signal-to-noise ratio (SNR) as a function of the kernel parameters and the regularization parameter. In 2-4 steps of increasingly refined grid searches we find the parameters that maximize the model SNR. An example based on data from the DLR 3K camera system is given.
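    The increasingly refined grid search described above can be sketched with a toy one-parameter "model SNR" standing in for the real kernel MNF objective; two refinement steps locate the maximizer of the stand-in function:

```python
import numpy as np

def snr(sigma):
    """Toy stand-in for the model SNR; peaks at sigma = 10**0.3."""
    return -(np.log10(sigma) - 0.3) ** 2 + 5.0

# Step 1: coarse logarithmic grid over the kernel scale parameter.
grid = np.logspace(-2, 2, 9)
best = grid[np.argmax([snr(s) for s in grid])]

# Step 2: refined grid centered on the coarse winner.
fine = best * np.logspace(-0.5, 0.5, 21)
best = fine[np.argmax([snr(s) for s in fine])]

print(round(float(np.log10(best)), 2))  # -> 0.3
```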

  10. The utilization of endopower β in commercial feed which contains palm kernel cake on performance of broiler chicken

    Science.gov (United States)

    Purba, S. S. A.; Tafsin, M.; Ginting, S. P.; Khairani, Y.

    2018-02-01

    Palm kernel cake is an agricultural waste product that can be used as a raw material in the preparation of poultry rations. The design used was a Completely Randomized Design (CRD) with 5 treatments and 4 replications. The levels of endopower β used were 0% (R0), 0.02% (R1), 0.04% (R2) and 0.06% (R3). The results showed that R0a and R0b were significantly different from R3 in terms of diet consumption, body weight gain and the feed conversion ratio. The utilization of endopower β in commercial diets containing palm kernel cake can increase body weight gain and feed consumption in broilers and improve the efficiency of feed and energy use. It is concluded that the utilization of endopower β improves the performance of broiler chickens fed a diet containing palm kernel cake.

  11. Wave functions, evolution equations and evolution kernels form light-ray operators of QCD

    International Nuclear Information System (INIS)

    Mueller, D.; Robaschik, D.; Geyer, B.; Dittes, F.M.; Horejsi, J.

    1994-01-01

    The widely used nonperturbative wave functions and distribution functions of QCD are determined as matrix elements of light-ray operators. These operators appear as the large-momentum limit of non-local hadron operators or as summed-up local operators in light-cone expansions. Nonforward one-particle matrix elements of such operators lead to new distribution amplitudes describing both hadrons simultaneously. These distribution functions depend, besides other variables, on two scaling variables. They are applied to the description of exclusive virtual Compton scattering in the Bjorken region near the forward direction and of the two-meson production process. The evolution equations for these distribution amplitudes are derived on the basis of the renormalization group equation for the considered operators. This implies that the evolution kernels also follow from the anomalous dimensions of these operators. Relations between different evolution kernels (especially the Altarelli-Parisi and the Brodsky-Lepage kernels) are derived and explicitly checked against the existing two-loop calculations of QCD. The technical basis of these results are the support and analyticity properties of the anomalous dimensions of light-ray operators, obtained with the help of the α-representation of Green's functions. (orig.)

  12. Non-colliding Brownian Motions and the Extended Tacnode Process

    Science.gov (United States)

    Johansson, Kurt

    2013-04-01

    We consider non-colliding Brownian motions with two starting points and two endpoints. The points are chosen so that the two groups of Brownian motions just touch each other, a situation that is referred to as a tacnode. The extended kernel for the determinantal point process at the tacnode point is computed using new methods and given in a different form from that obtained for a single time in previous work by Delvaux, Kuijlaars and Zhang. The form of the extended kernel is also different from that obtained for the extended tacnode kernel in another model by Adler, Ferrari and van Moerbeke. We also obtain the correlation kernel for a finite number of non-colliding Brownian motions starting at two points and ending at arbitrary points.

  13. A method for manufacturing kernels of metallic oxides and the thus obtained kernels

    International Nuclear Information System (INIS)

    Lelievre, Bernard; Feugier, Andre.

    1973-01-01

    A method is described for manufacturing fissile or fertile metal oxide kernels, consisting in adding at least one chemical compound capable of releasing ammonia to an aqueous solution of actinide nitrates, dispersing the solution thus obtained dropwise in a hot organic phase so as to gelify the drops and transform them into solid particles, then washing, drying and treating said particles so as to transform them into oxide kernels. The method is characterized in that the organic phase used in the gel-forming reaction comprises a mixture of two organic liquids, one of which acts as a solvent, whereas the other is a product capable of extracting the metal-salt anions from the drops while the gel-forming reaction is taking place. This can be applied to the so-called high temperature nuclear reactors [fr

  14. New Fukui, dual and hyper-dual kernels as bond reactivity descriptors.

    Science.gov (United States)

    Franco-Pérez, Marco; Polanco-Ramírez, Carlos-A; Ayers, Paul W; Gázquez, José L; Vela, Alberto

    2017-06-21

    We define three new linear response indices with promising applications for bond reactivity using the mathematical framework of τ-CRT (finite temperature chemical reactivity theory). The τ-Fukui kernel is defined as the ratio between the fluctuations of the average electron density at two different points in space and the fluctuations in the average electron number, and is designed to integrate to the finite-temperature definition of the electronic Fukui function. When this kernel is condensed, it can be interpreted as a site-reactivity descriptor of the boundary region between two atoms. The τ-dual kernel corresponds to the first-order response of the Fukui kernel and is designed to integrate to the finite-temperature definition of the dual descriptor; it indicates the ambiphilic reactivity of a specific bond and enriches the traditional dual descriptor by allowing one to distinguish between the electron-accepting and electron-donating processes. Finally, the τ-hyper-dual kernel is defined as the second-order derivative of the Fukui kernel and is proposed as a measure of the strength of ambiphilic bonding interactions. Although these quantities have not been proposed before, our results for the τ-Fukui kernel and the τ-dual kernel can be derived in the zero-temperature formulation of chemical reactivity theory with, among other things, the widely used parabolic interpolation model.
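    For reference, the standard zero-temperature quantities that the τ-Fukui and τ-dual kernels are designed to integrate to can be sketched in conventional conceptual-DFT notation (a sketch of the textbook definitions, not the authors' finite-temperature formulas):

```latex
% Fukui function: response of the density to a change in electron number
% at fixed external potential v(r)
f(\mathbf{r}) = \left(\frac{\partial \rho(\mathbf{r})}{\partial N}\right)_{v(\mathbf{r})}

% Dual descriptor: the next N-derivative, whose sign distinguishes
% electron-accepting from electron-donating (ambiphilic) sites
\Delta f(\mathbf{r})
= \left(\frac{\partial^{2} \rho(\mathbf{r})}{\partial N^{2}}\right)_{v(\mathbf{r})}
= \left(\frac{\partial f(\mathbf{r})}{\partial N}\right)_{v(\mathbf{r})}
```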

  15. Equilibrium Transitions from Non Renewable Energy to Renewable Energy under Capacity Constraints

    OpenAIRE

    Amigues, Jean-Pierre; Ayong Le Kama, Alain; Moreaux, Michel

    2013-01-01

    We study the transition between non-renewable and renewable energy sources with adjustment costs on the production capacity of renewable energy. Assuming constant variable marginal costs for both energy sources, convex adjustment costs and a more expensive renewable energy, we show the following. With sufficiently abundant non-renewable energy endowments, the dynamic equilibrium path is composed of an initial phase of only non-renewable energy use, followed by a transition phase substituti...

  16. Antinutritional factors and hypocholesterolemic effect of wild apricot kernel (Prunus armeniaca L.) as affected by detoxification.

    Science.gov (United States)

    Tanwar, Beenu; Modgil, Rajni; Goyal, Ankit

    2018-04-25

    The present investigation aimed to study the effect of detoxification on the nutrients and antinutrients of wild apricot kernel, followed by its hypocholesterolemic effect in male Wistar albino rats. The results revealed a non-significant (p > 0.05) effect of detoxification on the proximate composition, except for total carbohydrate and protein content. However, detoxification led to a significant (p < 0.05) reduction in … acid (76.82%), β-carotene (25.90%), dietary fiber constituents (10.51-28.92%), minerals (4.76-31.08%) and antinutritional factors (23.92-77.05%) (phenolics, tannins, trypsin inhibitor activity, saponins, phytic acid, alkaloids, flavonoids, oxalates), along with the complete removal (100%) of the bitter and potentially toxic hydrocyanic acid (HCN). The quality parameters of the kernel oil indicated no adverse effects of detoxification on free fatty acids, lipase activity, acid value and peroxide value, which remained well below the maximum permissible limits. The blood lipid profile demonstrated that the detoxified apricot kernel group exhibited significantly (p < 0.05) increased levels of HDL-cholesterol (48.79%) and triglycerides (15.09%), and decreased levels of total blood cholesterol (6.99%), LDL-C (22.95%) and VLDL-C (7.90%), compared to the raw (untreated) kernel group. Overall, it can be concluded that wild apricot kernel flour can be detoxified efficiently by a simple, safe, domestic and cost-effective method, and further has potential for formulating protein supplements and value-added food products.

  17. Optimal kernel shape and bandwidth for atomistic support of continuum stress

    International Nuclear Information System (INIS)

    Ulz, Manfred H; Moran, Sean J

    2013-01-01

    The treatment of atomistic scale interactions via molecular dynamics simulations has recently found favour for multiscale modelling within engineering. The estimation of stress at a continuum point on the atomistic scale requires a pre-defined kernel function. This kernel function derives the stress at a continuum point by averaging the contribution from atoms within a region surrounding the continuum point. This averaging volume, and therefore the associated stress at a continuum point, is highly dependent on the bandwidth and shape of the kernel. In this paper we propose an effective and entirely data-driven strategy for simultaneously computing the optimal shape and bandwidth for the kernel. We thoroughly evaluate our proposed approach on copper using three classical elasticity problems. Our evaluation yields three key findings: firstly, our technique can provide a physically meaningful estimation of kernel bandwidth; secondly, we show that a uniform kernel is preferred, thereby justifying the default selection of this kernel shape in future work; and thirdly, we can reliably estimate both of these attributes in a data-driven manner, obtaining values that lead to an accurate estimation of the stress at a continuum point. (paper)
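The kernel averaging described above can be sketched in a few lines. The snippet below is an illustrative simplification, not the paper's estimator: it assumes per-atom stress contributions are already available and uses the uniform (top-hat) kernel shape the authors found preferable, taking an unweighted average over atoms inside the bandwidth rather than the properly volume-normalized Hardy-type sum.

```python
import numpy as np

def continuum_stress(point, positions, atom_stresses, bandwidth):
    """Estimate the stress tensor at a continuum point by averaging
    per-atom stress contributions under a uniform kernel.

    point:         (3,) continuum evaluation point
    positions:     (N, 3) atomic coordinates
    atom_stresses: (N, 3, 3) per-atom virial stress contributions
    bandwidth:     radius h of the uniform kernel support
    """
    distances = np.linalg.norm(positions - point, axis=1)
    inside = distances <= bandwidth
    if not inside.any():
        return np.zeros((3, 3))
    # Uniform kernel: every atom within the support gets equal weight.
    return atom_stresses[inside].mean(axis=0)
```

The paper's contribution is choosing `bandwidth` (and the kernel shape) in a data-driven way rather than fixing them a priori; too small a bandwidth gives a noisy estimate, too large smears out genuine stress gradients.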

  18. Multivariable Christoffel-Darboux Kernels and Characteristic Polynomials of Random Hermitian Matrices

    Directory of Open Access Journals (Sweden)

    Hjalmar Rosengren

    2006-12-01

    We study multivariable Christoffel-Darboux kernels, which may be viewed as reproducing kernels for antisymmetric orthogonal polynomials, and also as correlation functions for products of characteristic polynomials of random Hermitian matrices. Using their interpretation as reproducing kernels, we obtain simple proofs of Pfaffian and determinant formulas, as well as Schur polynomial expansions, for such kernels. In subsequent work, these results are applied in combinatorics (enumeration of marked shifted tableaux) and number theory (representation of integers as sums of squares).
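For context, the classical single-variable object being generalized here is the Christoffel-Darboux kernel. For orthonormal polynomials p_0, …, p_n with leading coefficients k_j, it admits the closed form

```latex
K_n(x, y) = \sum_{j=0}^{n} p_j(x)\, p_j(y)
          = \frac{k_n}{k_{n+1}}\,
            \frac{p_{n+1}(x)\, p_n(y) - p_n(x)\, p_{n+1}(y)}{x - y},
```

with the reproducing property ∫ K_n(x, y) q(y) dμ(y) = q(x) for every polynomial q of degree at most n. It is this reproducing-kernel viewpoint that the abstract exploits in the multivariable, antisymmetric setting.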

  19. Wave equation tomography using the unwrapped phase - Analysis of the traveltime sensitivity kernels

    KAUST Repository

    Djebbi, Ramzi

    2013-01-01

    Full waveform inversion suffers from the high non-linearity of the misfit function, which causes convergence to a local minimum. On the other hand, traveltime tomography has a quasi-linear misfit function but yields low-resolution models. Wave equation tomography (WET) tries to improve on traveltime tomography by better adhering to the finite-frequency nature of the data. However, conventional WET, based on the cross-correlation lag, yields the popular hollow "banana" sensitivity kernel, indicating that the measured wavefield at a point is insensitive to perturbations along the ray-theoretical path at certain finite frequencies. Using the instantaneous traveltime, the sensitivity kernel better reflects the model-data dependency we have grown accustomed to in seismic inversion (even phase inversion). Demonstrations on synthetic data and the Marmousi model support these assertions.
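The cross-correlation lag that conventional WET measures can be sketched as follows: the traveltime shift between an observed and a synthetic trace is taken as the lag maximizing their cross-correlation. This is a generic illustration of the measurement, not the paper's instantaneous-traveltime alternative; the function name and signature are ours.

```python
import numpy as np

def traveltime_lag(observed, synthetic, dt):
    """Cross-correlation traveltime shift between two traces.

    Returns the lag (in seconds) maximizing the cross-correlation;
    a positive value means the observed trace arrives later.
    """
    xcorr = np.correlate(observed, synthetic, mode="full")
    # Lags run from -(len(synthetic)-1) to +(len(observed)-1) samples.
    lags = np.arange(-len(synthetic) + 1, len(observed))
    return lags[np.argmax(xcorr)] * dt
```

A single scalar lag per trace is what makes the resulting misfit quasi-linear, but it also discards the frequency dependence that the instantaneous-traveltime measurement retains.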

  20. A multi-resolution approach to heat kernels on discrete surfaces

    KAUST Repository

    Vaxman, Amir

    2010-07-26

    Studying the behavior of the heat diffusion process on a manifold is emerging as an important tool for analyzing the geometry of the manifold. Unfortunately, the high complexity of the computation of the heat kernel - the key to the diffusion process - limits this type of analysis to 3D models of modest resolution. We show how to use the unique properties of the heat kernel of a discrete two dimensional manifold to overcome these limitations. Combining a multi-resolution approach with a novel approximation method for the heat kernel at short times results in an efficient and robust algorithm for computing the heat kernels of detailed models. We show experimentally that our method can achieve good approximations in a fraction of the time required by traditional algorithms. Finally, we demonstrate how these heat kernels can be used to improve a diffusion-based feature extraction algorithm. © 2010 ACM.
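The heat kernel on a discrete surface is typically computed from the eigendecomposition of a mesh Laplacian, k_t(x, y) = Σ_i e^(-λ_i t) φ_i(x) φ_i(y); it is exactly this dense eigensolve whose cost motivates the multi-resolution approximation above. A dense-matrix sketch (fine only for small meshes, and assuming a symmetric Laplacian):

```python
import numpy as np

def heat_kernel(L, t):
    """Heat kernel exp(-t L) of a symmetric discrete Laplacian L.

    k_t(x, y) = sum_i exp(-lam_i * t) * phi_i[x] * phi_i[y],
    where (lam_i, phi_i) are the eigenpairs of L.
    """
    lam, phi = np.linalg.eigh(L)
    # Scale each eigenvector column by its decayed eigenvalue, then recombine.
    return (phi * np.exp(-lam * t)) @ phi.T
```

For a combinatorial graph Laplacian the constant vector lies in the nullspace, so each row of the kernel sums to one: heat is conserved as it diffuses. At t = 0 the kernel is the identity, and at short times it is sharply localized, which is precisely the regime the paper's approximation targets.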