WorldWideScience

Sample records for single scattering kernel

  1. Point kernels and superposition methods for scatter dose calculations in brachytherapy

    International Nuclear Information System (INIS)

    Carlsson, A.K.

    2000-01-01

    Point kernels have been generated and applied for calculation of scatter dose distributions around monoenergetic point sources for photon energies ranging from 28 to 662 keV. Three different approaches for dose calculations have been compared: a single-kernel superposition method, a single-kernel superposition method where the point kernels are approximated as isotropic and a novel 'successive-scattering' superposition method for improved modelling of the dose from multiply scattered photons. An extended version of the EGS4 Monte Carlo code was used for generating the kernels and for benchmarking the absorbed dose distributions calculated with the superposition methods. It is shown that dose calculation by superposition at and below 100 keV can be simplified by using isotropic point kernels. Compared to the assumption of full in-scattering made by algorithms currently in clinical use, the single-kernel superposition method improves dose calculations in a half-phantom consisting of air and water. Further improvements are obtained using the successive-scattering superposition method, which reduces the overestimates of dose close to the phantom surface usually associated with kernel superposition methods at brachytherapy photon energies. It is also shown that scatter dose point kernels can be parametrized to biexponential functions, making them suitable for use with an effective implementation of the collapsed cone superposition algorithm. (author)
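
    The closing remark about parametrizing scatter dose point kernels as biexponential functions for collapsed-cone superposition can be illustrated with a small fitting sketch. The kernel values below are synthetic stand-ins (not data from the paper), and the parameter names and starting guesses are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def biexponential(r, a1, mu1, a2, mu2):
    """Two-term exponential model for a radial scatter dose point kernel."""
    return a1 * np.exp(-mu1 * r) + a2 * np.exp(-mu2 * r)

# Synthetic stand-in for a Monte Carlo generated scatter kernel (placeholder values).
rng = np.random.default_rng(0)
r = np.linspace(0.5, 10.0, 40)                            # distance from the point source [cm]
kernel_mc = 0.8 * np.exp(-0.25 * r) + 0.2 * np.exp(-0.9 * r)
kernel_mc *= 1.0 + 0.01 * rng.standard_normal(r.size)     # mimic Monte Carlo noise

# Fit the biexponential form; the fitted parameters are what a collapsed cone
# implementation would store instead of the full tabulated kernel.
params, _ = curve_fit(biexponential, r, kernel_mc, p0=(1.0, 0.3, 0.1, 1.0))
print("fitted (a1, mu1, a2, mu2):", params)
```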

  2. SCAP-82, Single Scattering, Albedo Scattering, Point-Kernel Analysis in Complex Geometry

    International Nuclear Information System (INIS)

    Disney, R.K.; Vogtman, S.E.

    1987-01-01

    1 - Description of problem or function: SCAP solves for radiation transport in complex geometries using the single or albedo scatter point kernel method. The program is designed to calculate the neutron or gamma ray radiation level at detector points located within or outside a complex radiation scatter source geometry or a user specified discrete scattering volume. Geometry is describable by zones bounded by intersecting quadratic surfaces within an arbitrary maximum number of boundary surfaces per zone. Anisotropic point sources are describable as pointwise energy dependent distributions of polar angles on a meridian; isotropic point sources may also be specified. The attenuation function for gamma rays is an exponential function on the primary source leg and the scatter leg, with a buildup factor approximation to account for multiple scatter on the scatter leg. The neutron attenuation function is an exponential function using neutron removal cross sections on the primary source leg and scatter leg. Line or volumetric sources can be represented as a distribution of isotropic point sources, with uncollided line-of-sight attenuation and buildup calculated between each source point and the detector point. 2 - Method of solution: A point kernel method using an anisotropic or isotropic point source representation is used; line-of-sight material attenuation and inverse square spatial attenuation between the source point and scatter points, and between the scatter points and the detector point, are employed. A direct summation of individual point source results is obtained. 3 - Restrictions on the complexity of the problem: The SCAP program is written with completely flexible dimensioning, so that no restrictions are imposed on the number of energy groups or geometric zones. The geometric zone description is restricted to zones defined by boundary surfaces defined by the general quadratic equation or one of its degenerate forms. The only restriction in the program is that the total
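
    As a rough sketch of the two-leg bookkeeping described above (exponential line-of-sight attenuation and inverse-square spreading on the source leg and the scatter leg, with a buildup factor approximating multiple scatter on the scatter leg), consider the following; all coefficients, distances and the effective scattering probability are invented for illustration and are not SCAP inputs.

```python
import math

def point_kernel(source_strength, mu, distance, buildup=1.0):
    """Flux from an isotropic point source: exponential line-of-sight attenuation
    times inverse-square spreading, optionally scaled by a buildup factor."""
    return source_strength * buildup * math.exp(-mu * distance) / (4.0 * math.pi * distance**2)

# One source point -> scatter point -> detector path (illustrative numbers only).
mu_primary_leg = 0.06    # attenuation coefficient on the primary leg [1/cm] (assumed)
mu_scatter_leg = 0.06    # attenuation coefficient on the scatter leg [1/cm] (assumed)
r1, r2 = 30.0, 50.0      # source-to-scatter and scatter-to-detector distances [cm]
scatter_fraction = 1e-3  # effective probability of scattering toward the detector (assumed)
buildup = 1.8            # buildup factor approximating multiple scatter on the scatter leg

flux_at_scatter_point = point_kernel(1.0e9, mu_primary_leg, r1)
flux_at_detector = point_kernel(flux_at_scatter_point * scatter_fraction,
                                mu_scatter_leg, r2, buildup)
print(f"single-scatter flux at the detector: {flux_at_detector:.3e}")
```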

  3. Method for calculating anisotropic neutron transport using scattering kernel without polynomial expansion

    International Nuclear Information System (INIS)

    Takahashi, Akito; Yamamoto, Junji; Ebisuya, Mituo; Sumita, Kenji

    1979-01-01

    A new method for calculating anisotropic neutron transport is proposed for the angular spectral analysis of D-T fusion reactor neutronics. The method is based on the transport equation with a new type of anisotropic scattering kernel formulated by a single function I_i(μ′, μ) instead of a polynomial expansion such as Legendre polynomials. In the calculation of angular flux spectra using scattering kernels with a Legendre polynomial expansion, oscillations with negative flux are often observed; in principle, this oscillation disappears with the new method. In this work, we discuss anisotropic scattering kernels for elastic scattering and for inelastic scattering that excites discrete energy levels. The other scatterings were included in isotropic scattering kernels. An approximation method, using the first-collision source written in terms of the I_i(μ′, μ) function, is introduced to damp the 'oscillations' when one is obliged to use scattering kernels with a Legendre polynomial expansion. Calculated results with this approximation show remarkable improvement in the analysis of angular flux spectra in a slab system of lithium metal with a D-T neutron source. (author)

  4. Ideal gas scattering kernel for energy dependent cross-sections

    International Nuclear Information System (INIS)

    Rothenstein, W.; Dagan, R.

    1998-01-01

    A third, and final, paper on the calculation of the joint kernel for neutron scattering by an ideal gas in thermal agitation is presented, for the case in which the scattering cross-section is energy dependent. The kernel is a function of the neutron energy after scattering and of the cosine of the scattering angle, as in the case of the ideal gas kernel for a constant bound-atom scattering cross-section. The final expression is suitable for numerical calculations.

  5. Biasing anisotropic scattering kernels for deep-penetration Monte Carlo calculations

    International Nuclear Information System (INIS)

    Carter, L.L.; Hendricks, J.S.

    1983-01-01

    The exponential transform is often used to improve the efficiency of deep-penetration Monte Carlo calculations. This technique is usually implemented by biasing the distance-to-collision kernel of the transport equation, but leaving the scattering kernel unchanged. Dwivedi obtained significant improvements in efficiency by biasing an isotropic scattering kernel as well as the distance-to-collision kernel. This idea is extended to anisotropic scattering, particularly the highly forward Klein-Nishina scattering of gamma rays
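
    A minimal sketch of the exponential transform itself (the distance-to-collision biasing referred to above, not the anisotropic scattering-kernel biasing proposed in the paper) is given below, assuming an illustrative cross section and transform parameter.

```python
import math
import random

def biased_path_length(sigma_t, p, rng):
    """Sample a distance to collision from the stretched density
    sigma_b * exp(-sigma_b * s) with sigma_b = sigma_t * (1 - p), and return the
    sampled distance together with the weight factor that keeps the game unbiased."""
    sigma_b = sigma_t * (1.0 - p)
    s = -math.log(1.0 - rng.random()) / sigma_b
    weight = (sigma_t / sigma_b) * math.exp(-(sigma_t - sigma_b) * s)
    return s, weight

rng = random.Random(1)
sigma_t = 0.2   # total macroscopic cross section [1/cm] (illustrative)
p = 0.5         # transform parameter stretching flights along the preferred direction

samples = [biased_path_length(sigma_t, p, rng) for _ in range(100000)]
# Consistency check: the weighted mean free path should recover 1/sigma_t = 5 cm.
mean_free_path = sum(s * w for s, w in samples) / sum(w for _, w in samples)
print(f"weighted mean free path = {mean_free_path:.2f} cm (expected {1.0 / sigma_t:.2f})")
```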

  6. Thermal neutron scattering kernels for sapphire and silicon single crystals

    International Nuclear Information System (INIS)

    Cantargi, F.; Granada, J.R.; Mayer, R.E.

    2015-01-01

    Highlights: • Thermal cross section libraries for sapphire and silicon single crystals were generated. • The Debye model was used to represent the vibrational frequency spectra fed to the NJOY code. • The sapphire total cross section was measured at Centro Atómico Bariloche. • The cross section libraries were validated with available experimental data. - Abstract: Sapphire and silicon are materials usually employed as filters in facilities with thermal neutron beams. Because the corresponding thermal cross section libraries for these materials, needed in calculations performed to optimize beams for specific applications, were lacking, we present here the generation of new thermal neutron scattering kernels for them. The Debye model was used in both cases to represent the vibrational frequency spectra required to feed the NJOY nuclear data processing system and produce the corresponding libraries in ENDF and ACE formats. These libraries were validated with available experimental data, some from the literature and others obtained at the pulsed neutron source at Centro Atómico Bariloche.
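
    The Debye-model input mentioned in the highlights is just a normalized quadratic frequency spectrum up to a cutoff energy; a table of this kind is what gets handed to NJOY/LEAPR to build S(alpha, beta). The sketch below uses an arbitrary cutoff, not the values adopted for sapphire or silicon in the paper.

```python
import numpy as np

def debye_spectrum(energy_eV, debye_energy_eV):
    """Debye approximation to the vibrational frequency spectrum: rho(E) ~ E**2 up to
    the Debye cutoff energy and zero above it, normalized to unit area."""
    rho = np.where(energy_eV <= debye_energy_eV, energy_eV**2, 0.0)
    de = np.diff(energy_eV)
    area = np.sum(0.5 * (rho[1:] + rho[:-1]) * de)   # trapezoidal normalization
    return rho / area

# Illustrative cutoff only (the paper's sapphire/silicon values are not reproduced here).
energies = np.linspace(1e-4, 0.12, 200)              # phonon energy grid [eV]
rho = debye_spectrum(energies, debye_energy_eV=0.09)
print("spectrum points:", rho.size, "peak value:", rho.max())
```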

  7. Ideal Gas Resonance Scattering Kernel Routine for the NJOY Code

    International Nuclear Information System (INIS)

    Rothenstein, W.

    1999-01-01

    In a recent publication an expression for the temperature-dependent double-differential ideal gas scattering kernel is derived for the case of scattering cross sections that are energy dependent. Some tabulations and graphical representations of the characteristics of these kernels are presented in Ref. 2. They demonstrate the increased probability that neutron scattering by a heavy nuclide near one of its pronounced resonances will bring the neutron energy nearer to the resonance peak. This enhances upscattering when a neutron with energy just below that of the resonance peak collides with such a nuclide. A routine for using the new kernel has now been introduced into the NJOY code. Here, its principal features are described, followed by comparisons between scattering data obtained by the new kernel and the standard ideal gas kernel, when such comparisons are meaningful (i.e., for constant values of the scattering cross section at 0 K). The new ideal gas kernel for variable σ_s^0(E) at 0 K leads to the correct Doppler-broadened σ_s^T(E) at temperature T.

  8. Graphical analyses of connected-kernel scattering equations

    International Nuclear Information System (INIS)

    Picklesimer, A.

    1982-10-01

    Simple graphical techniques are employed to obtain a new (simultaneous) derivation of a large class of connected-kernel scattering equations. This class includes the Rosenberg, Bencze-Redish-Sloan, and connected-kernel multiple scattering equations as well as a host of generalizations of these and other equations. The graphical method also leads to a new, simplified form for some members of the class and elucidates the general structural features of the entire class

  9. Graphical analyses of connected-kernel scattering equations

    International Nuclear Information System (INIS)

    Picklesimer, A.

    1983-01-01

    Simple graphical techniques are employed to obtain a new (simultaneous) derivation of a large class of connected-kernel scattering equations. This class includes the Rosenberg, Bencze-Redish-Sloan, and connected-kernel multiple scattering equations as well as a host of generalizations of these and other equations. The basic result is the application of graphical methods to the derivation of interaction-set equations. This yields a new, simplified form for some members of the class and elucidates the general structural features of the entire class

  10. Calculation of the thermal neutron scattering kernel using the synthetic model. Pt. 2. Zero-order energy transfer kernel

    International Nuclear Information System (INIS)

    Drozdowicz, K.

    1995-01-01

    A comprehensive unified description of the application of Granada's Synthetic Model to slow-neutron scattering by molecular systems is continued. Detailed formulae for the zero-order energy transfer kernel are presented, based on the general formalism of the model. An explicit analytical formula for the total scattering cross section as a function of the incident neutron energy is also obtained. Expressions of the free gas model for the zero-order scattering kernel and for the total scattering kernel are considered as a sub-case of the Synthetic Model. (author). 10 refs

  11. Study on the scattering law and scattering kernel of hydrogen in zirconium hydride

    International Nuclear Information System (INIS)

    Jiang Xinbiao; Chen Wei; Chen Da; Yin Banghua; Xie Zhongsheng

    1999-01-01

    The nuclear analytical model for calculating the scattering law and scattering kernel for the uranium zirconium hydride reactor is described. In light of the acoustic and optical model of zirconium hydride, its frequency distribution function f(ω) is given and the scattering law of hydrogen in zirconium hydride is obtained with GASKET. The scattering kernel σ_l(E_0→E) of hydrogen bound in zirconium hydride is provided by the SMP code in the standard WIMS cross section library. With this library, WIMS is used to calculate the thermal neutron energy spectrum of the fuel cell. The results are satisfactory.

  12. Analytic scattering kernels for neutron thermalization studies

    International Nuclear Information System (INIS)

    Sears, V.F.

    1990-01-01

    Current plans call for the inclusion of a liquid hydrogen or deuterium cold source in the NRU replacement vessel. This report is part of an ongoing study of neutron thermalization in such a cold source. Here, we develop a simple analytical model for the scattering kernel of monatomic and diatomic liquids. We also present the results of extensive numerical calculations based on this model for liquid hydrogen, liquid deuterium, and mixtures of the two. These calculations demonstrate the dependence of the scattering kernel on the incident and scattered-neutron energies, the behavior near rotational thresholds, the dependence on the centre-of-mass pair correlations, the dependence on the ortho concentration, and the dependence on the deuterium concentration in H2/D2 mixtures. The total scattering cross sections are also calculated and compared with available experimental results.

  13. Kernel integration scatter model for parallel beam gamma camera and SPECT point source response

    International Nuclear Information System (INIS)

    Marinkovic, P.M.

    2001-01-01

    Scatter correction is a prerequisite for quantitative single photon emission computed tomography (SPECT). In this paper a kernel integration scatter model for the parallel beam gamma camera and SPECT point source response, based on the Klein-Nishina formula, is proposed. This method models the primary photon distribution as well as first-order Compton scattering. It also includes a correction for multiple scattering by applying a point isotropic single medium buildup factor for the path segment between the point of scatter and the point of detection. Gamma ray attenuation in the imaged object, based on a known μ-map distribution, is considered as well. The intrinsic spatial resolution of the camera is approximated by a simple Gaussian function. The collimator is modeled simply using acceptance angles derived from its physical dimensions; any gamma ray satisfying this angle is passed through the collimator to the crystal. Septal penetration and scatter in the collimator were not included in the model. The method was validated by comparison with Monte Carlo MCNP-4a numerical phantom simulations and excellent results were obtained. Physical phantom experiments to confirm the method are planned. (author)
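
    The first-scatter weighting in this model comes from the Klein-Nishina differential cross section; for reference, the textbook formula (not code from the paper) can be evaluated as follows.

```python
import math

R_E = 2.8179403262e-13  # classical electron radius [cm]

def klein_nishina(theta_rad, photon_energy_keV):
    """Klein-Nishina differential cross section per electron, d(sigma)/d(Omega) in cm^2/sr,
    for an unpolarized photon Compton-scattered through angle theta."""
    k = photon_energy_keV / 511.0                          # energy in electron rest-mass units
    ratio = 1.0 / (1.0 + k * (1.0 - math.cos(theta_rad)))  # E'/E of the scattered photon
    return 0.5 * R_E**2 * ratio**2 * (ratio + 1.0 / ratio - math.sin(theta_rad)**2)

# Example: weight of a 140 keV (Tc-99m) photon deflected by 45 degrees.
print(f"{klein_nishina(math.radians(45.0), 140.0):.3e} cm^2/sr")
```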

  14. Scattering kernels and cross sections working group

    International Nuclear Information System (INIS)

    Russell, G.; MacFarlane, B.; Brun, T.

    1998-01-01

    Topics addressed by this working group are: (1) immediate needs of the cold-moderator community and how to fill them; (2) synthetic scattering kernels; (3) very simple synthetic scattering functions; (4) measurements of interest; and (5) general issues. Brief summaries are given for each of these topics

  15. Proof of the formula for the ideal gas scattering kernel for nuclides with strongly energy dependent scattering cross sections

    International Nuclear Information System (INIS)

    Rothenstein, W.

    2004-01-01

    The current study is a sequel to a paper by Rothenstein and Dagan [Ann. Nucl. Energy 25 (1998) 209] in which the ideal-gas-based kernel for scatterers with internal structure was introduced. This double differential kernel includes the neutron energy after scattering as well as the cosine of the scattering angle for isotopes with strong scattering resonances. A new mathematical formalism enables the inclusion of the new kernel in NJOY [MacFarlane, R.E., Muir, D.W., 1994. The NJOY Nuclear Data Processing System Version 91 (LA-12740-M)]. Moreover, the computational time of the new kernel is reduced significantly, making it feasible for practical application. The completeness of the new kernel is proven mathematically and demonstrated numerically. Modifications necessary to remove the existing inconsistency in the secondary energy distribution in NJOY are presented.

  16. Scatter kernel estimation with an edge-spread function method for cone-beam computed tomography imaging

    International Nuclear Information System (INIS)

    Li Heng; Mohan, Radhe; Zhu, X Ronald

    2008-01-01

    The clinical applications of kilovoltage x-ray cone-beam computed tomography (CBCT) have been compromised by the limited quality of CBCT images, which typically is due to a substantial scatter component in the projection data. In this paper, we describe an experimental method of deriving the scatter kernel of a CBCT imaging system. The estimated scatter kernel can be used to remove the scatter component from the CBCT projection images, thus improving the quality of the reconstructed image. The scattered radiation was approximated as depth-dependent, pencil-beam kernels, which were derived using an edge-spread function (ESF) method. The ESF geometry was achieved with a half-beam block created by a 3 mm thick lead sheet placed on a stack of slab solid-water phantoms. Measurements for ten water-equivalent thicknesses (WET) ranging from 0 cm to 41 cm were taken with (half-blocked) and without (unblocked) the lead sheet, and corresponding pencil-beam scatter kernels or point-spread functions (PSFs) were then derived without assuming any empirical trial function. The derived scatter kernels were verified with phantom studies. Scatter correction was then incorporated into the reconstruction process to improve image quality. For a 32 cm diameter cylinder phantom, the flatness of the reconstructed image was improved from 22% to 5%. When the method was applied to CBCT images for patients undergoing image-guided therapy of the pelvis and lung, the variation in selected regions of interest (ROIs) was reduced from >300 HU to <100 HU. We conclude that the scatter reduction technique utilizing the scatter kernel effectively suppresses the artifact caused by scatter in CBCT.
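
    The correction step implied by this abstract (estimate the scatter in each projection by convolving an estimate of the primary with the measured pencil-beam kernel, then subtract it before reconstruction) can be sketched as below. The Gaussian kernel shape, its parameters and the iteration count are placeholders for illustration; the paper derives its kernels from edge-spread measurements as a function of water-equivalent thickness.

```python
import numpy as np
from scipy.signal import fftconvolve

def toy_scatter_kernel(size_px, sigma_px, scatter_to_primary):
    """Placeholder pencil-beam scatter kernel: a broad Gaussian whose integral equals
    the assumed scatter-to-primary ratio (stand-in for an ESF-derived kernel)."""
    y, x = np.indices((size_px, size_px)) - (size_px - 1) / 2.0
    k = np.exp(-(x**2 + y**2) / (2.0 * sigma_px**2))
    return k / k.sum() * scatter_to_primary

def scatter_correct(measured, kernel, n_iter=5):
    """Iteratively estimate the primary projection: scatter = primary convolved with
    the kernel, primary = measured minus scatter."""
    primary = measured.copy()
    for _ in range(n_iter):
        scatter = fftconvolve(primary, kernel, mode="same")
        primary = np.clip(measured - scatter, 0.0, None)
    return primary

measured = np.ones((64, 64))                       # toy flat projection
kernel = toy_scatter_kernel(31, sigma_px=8.0, scatter_to_primary=0.3)
primary = scatter_correct(measured, kernel)
print("estimated scatter fraction:", 1.0 - primary.mean() / measured.mean())
```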

  17. COMPUTATIONAL EFFICIENCY OF A MODIFIED SCATTERING KERNEL FOR FULL-COUPLED PHOTON-ELECTRON TRANSPORT PARALLEL COMPUTING WITH UNSTRUCTURED TETRAHEDRAL MESHES

    Directory of Open Access Journals (Sweden)

    JONG WOON KIM

    2014-04-01

    In this paper, we introduce a modified scattering kernel approach that avoids the unnecessarily repeated calculations involved in the scattering source calculation, and use it with parallel computing to effectively reduce the computation time. Its computational efficiency was tested for three-dimensional fully coupled photon-electron transport problems using our computer program, which solves the multi-group discrete ordinates transport equation by the discontinuous finite element method with unstructured tetrahedral meshes for complicated geometrical problems. The numerical tests show that the elapsed time per iteration can be improved by a factor of 17 to 42 using the modified scattering kernel, not only in single-CPU calculations but also in parallel computing with several CPUs.

  18. ZZ THERMOS, Multigroup P0 to P5 Thermal Scattering Kernels from ENDF/B Scattering Law Data

    International Nuclear Information System (INIS)

    McCrosson, F.J.; Finch, D.R.

    1975-01-01

    1 - Description of problem or function: Number of groups: 30-group THERMOS thermal scattering kernels. Nuclides: Molecular H2O, Molecular D2O, Graphite, Polyethylene, Benzene, Zr bound in ZrHx, H bound in ZrHx, Beryllium-9, Beryllium Oxide, Uranium Dioxide. Origin: ENDF/B library. Weighting Spectrum: yes. These data are 30-group THERMOS thermal scattering kernels for P0 to P5 Legendre orders for every temperature of every material from S(alpha,beta) data stored in the ENDF/B library. These scattering kernels were generated using the FLANGE2 computer code (NESC Abstract 368). To test the kernels, the integral properties of each set of kernels were determined by a precision integration of the diffusion length equation and compared to experimental measurements of these properties. In general, the agreement was very good. Details of the methods used and results obtained are contained in the reference. The scattering kernels are organized into a two-volume magnetic tape library from which they may be retrieved easily for use in any 30-group THERMOS library. The contents of the tapes are as follows (Material: ZA / Temperatures (K)):
    - Molecular H2O: 100.0 / 296, 350, 400, 450, 500, 600
    - Molecular D2O: 101.0 / 296, 350, 400, 450, 500, 600
    - Graphite: 6000.0 / 296, 400, 500, 600, 700, 800
    - Polyethylene: 205.0 / 296, 350
    - Benzene: 106.0 / 296, 350, 400, 450, 500, 600
    - Zr bound in ZrHx: 203.0 / 296, 400, 500, 600, 700, 800
    - H bound in ZrHx: 230.0 / 296, 400, 500, 600, 700, 800
    - Beryllium-9: 4009.0 / 296, 400, 500, 600, 700, 800
    - Beryllium Oxide: 200.0 / 296, 400, 500, 600, 700, 800
    - Uranium Dioxide: 207.0 / 296, 400, 500, 600, 700, 800
    2 - Method of solution: Kernel generation is performed by direct integration of the thermal scattering law data to obtain the differential scattering cross sections for each Legendre order. The integral parameter calculation is done by precision integration of the diffusion length equation for several moderator absorption cross sections followed by a

  19. Absorption line profiles in a moving atmosphere - A single scattering linear perturbation theory

    Science.gov (United States)

    Hays, P. B.; Abreu, V. J.

    1989-01-01

    An integral equation is derived which linearly relates Doppler perturbations in the spectrum of atmospheric absorption features to the wind system which creates them. The perturbation theory is developed using a single scattering model, which is validated against a multiple scattering calculation. The nature and basic properties of the kernels in the integral equation are examined. It is concluded that the kernels are well behaved and that wind velocity profiles can be recovered using standard inversion techniques.

  20. Development of Cold Neutron Scattering Kernels for Advanced Moderators

    International Nuclear Information System (INIS)

    Granada, J. R.; Cantargi, F.

    2010-01-01

    The development of scattering kernels for a number of molecular systems was performed, including a set of hydrogenous methylated aromatics such as toluene, mesitylene, and mixtures of these. In order to partially validate those new libraries, we compared predicted total cross sections with experimental data obtained in our laboratory. In addition, we have introduced a new model to describe the interaction of slow neutrons with solid methane in phase II (the stable phase below T = 20.4 K at atmospheric pressure). Very recently, a new scattering kernel to describe the interaction of slow neutrons with solid deuterium was also developed. The main dynamical characteristics of that system are contained in the formalism; the elastic processes involving coherent and incoherent contributions are fully described, as well as the spin-correlation effects.

  1. The collapsed cone algorithm for (192)Ir dosimetry using phantom-size adaptive multiple-scatter point kernels.

    Science.gov (United States)

    Tedgren, Åsa Carlsson; Plamondon, Mathieu; Beaulieu, Luc

    2015-07-07

    The aim of this work was to investigate how dose distributions calculated with the collapsed cone (CC) algorithm depend on the size of the water phantom used in deriving the point kernel for multiple scatter. A research version of the CC algorithm equipped with a set of selectable point kernels for multiple-scatter dose that had initially been derived in water phantoms of various dimensions was used. The new point kernels were generated using EGSnrc in spherical water phantoms of radii 5 cm, 7.5 cm, 10 cm, 15 cm, 20 cm, 30 cm and 50 cm. Dose distributions derived with CC in water phantoms of different dimensions and in a CT-based clinical breast geometry were compared to Monte Carlo (MC) simulations using the Geant4-based brachytherapy specific MC code Algebra. Agreement with MC within 1% was obtained when the dimensions of the phantom used to derive the multiple-scatter kernel were similar to those of the calculation phantom. Doses are overestimated at phantom edges when kernels are derived in larger phantoms and underestimated when derived in smaller phantoms (by around 2% to 7% depending on distance from source and phantom dimensions). CC agrees well with MC in the high dose region of a breast implant and is superior to TG43 in determining skin doses for all multiple-scatter point kernel sizes. Increased agreement between CC and MC is achieved when the point kernel is comparable to breast dimensions. The investigated approximation in multiple scatter dose depends on the choice of point kernel in relation to phantom size and yields a significant fraction of the total dose only at distances of several centimeters from a source/implant which correspond to volumes of low doses. The current implementation of the CC algorithm utilizes a point kernel derived in a comparatively large (radius 20 cm) water phantom. A fixed point kernel leads to predictable behaviour of the algorithm with the worst case being a source/implant located well within a patient

  2. A Stochastic Proof of the Resonant Scattering Kernel and its Applications for Gen IV Reactors Type

    International Nuclear Information System (INIS)

    Becker, B.; Dagan, R.; Broeders, C.H.M.; Lohnert, G.

    2008-01-01

    Monte Carlo codes such as MCNP are widely accepted as near-reference tools for reactor analysis. A Monte Carlo code should therefore use as few approximations as possible in order to produce 'experimental-level' calculations. In this study we deal with one of the most problematic approximations made in MCNP, in which the resonances are ignored for the secondary neutron energy distribution, namely the change of the energy and angular direction of the neutron after interaction with a heavy isotope with pronounced resonances. The endeavour of exploring the influence of the resonances on the scattering kernel goes back to 1944, when E. Wigner and J. Wilkins developed the first temperature-dependent scattering kernel. However, only in 1998 was the full analytical solution for the double differential resonance-dependent scattering kernel suggested by W. Rothenstein and R. Dagan. An independent stochastic approach is presented for the first time to confirm the above analytical kernel with a completely different methodology. Moreover, by subtly manipulating the scattering subroutine COLIDN of MCNP, it is shown that this subroutine, as well as the relevant explanation in the MCNP manual, is to some extent inappropriate. The impact of this improved resonance-dependent scattering kernel on diverse types of reactors, in particular for the Generation IV innovative HTR core design, is shown to be significant. (authors)

  3. Kernel Function Tuning for Single-Layer Neural Networks

    Czech Academy of Sciences Publication Activity Database

    Vidnerová, Petra; Neruda, Roman

    -, accepted 28.11. 2017 (2018) ISSN 2278-0149 R&D Projects: GA ČR GA15-18108S Institutional support: RVO:67985807 Keywords : single-layer neural networks * kernel methods * kernel function * optimisation Subject RIV: IN - Informatics, Computer Science http://www.ijmerr.com/

  4. The slab albedo problem for the triplet scattering kernel with modified F{sub N} method

    Energy Technology Data Exchange (ETDEWEB)

    Tuereci, Demet [Ministry of Education, 75th year Anatolia High School, Ankara (Turkey)

    2016-12-15

    The one-speed, time-independent neutron transport equation for slab geometry with a quadratically anisotropic scattering kernel is considered. The albedo and the transmission factor are calculated by the modified F{sub N} method. The obtained numerical results are listed for different scattering coefficients.

  5. Development of nondestructive screening methods for single kernel characterization of wheat

    DEFF Research Database (Denmark)

    Nielsen, J.P.; Pedersen, D.K.; Munck, L.

    2003-01-01

    The development of nondestructive screening methods for single seed protein, vitreousness, density, and hardness index has been studied for single kernels of European wheat. A single kernel procedure was applied involving image analysis, near-infrared transmittance (NIT) spectroscopy, laboratory ... predictability. However, by applying an averaging approach, in which single seed replicate measurements are mathematically simulated, a very good NIT prediction model was achieved. This suggests that the single seed NIT spectra contain hardness information, but that a single seed hardness method with higher ...

  6. Topics in bound-state dynamical processes: semiclassical eigenvalues, reactive scattering kernels and gas-surface scattering models

    International Nuclear Information System (INIS)

    Adams, J.E.

    1979-05-01

    The difficulty of applying the WKB approximation to problems involving arbitrary potentials has been confronted. Recent work has produced a convenient expression for the potential correction term. However, this approach does not yield a unique correction term and hence cannot be used to construct the proper modification. An attempt is made to overcome the uniqueness difficulties by imposing a criterion which permits identification of the correct modification. Sections of this work are: semiclassical eigenvalues for potentials defined on a finite interval; reactive scattering exchange kernels; a unified model for elastic and inelastic scattering from a solid surface; and selective absorption on a solid surface

  7. Effect of the single-scattering phase function on light transmission through disordered media with large inhomogeneities

    International Nuclear Information System (INIS)

    Marinyuk, V V; Sheberstov, S V

    2017-01-01

    We calculate the total transmission coefficient (transmittance) of a disordered medium with large (compared to the light wavelength) inhomogeneities. To model highly forward scattering in the medium we take advantage of the Gegenbauer kernel phase function. In a subdiffusion thickness range, the transmittance is shown to be sensitive to the specific form of the single-scattering phase function. The effect reveals itself at grazing angles of incidence and originates from small-angle multiple scattering of light. Our results are in good agreement with numerical solutions to the radiative transfer equation. (paper)

  8. Slab albedo for linearly and quadratically anisotropic scattering kernel with modified F{sub N} method

    Energy Technology Data Exchange (ETDEWEB)

    Tuereci, R. Goekhan [Kirikkale Univ. (Turkey). Kirikkale Vocational School; Tuereci, D. [Ministry of Education, Ankara (Turkey). 75th year Anatolia High School

    2017-11-15

    The one-speed, time-independent neutron transport equation for a homogeneous medium is solved with anisotropic scattering that includes both the linearly and the quadratically anisotropic scattering kernels. Having written Case's eigenfunctions and the orthogonality relations among these eigenfunctions, the slab albedo problem is investigated numerically using the modified F{sub N} method. Selected numerical results are presented in tables.

  9. Multiple kernel learning using single stage function approximation for binary classification problems

    Science.gov (United States)

    Shiju, S.; Sumitra, S.

    2017-12-01

    In this paper, multiple kernel learning (MKL) is formulated as a supervised classification problem. We deal with binary classification data, and hence the data modelling problem involves the computation of two decision boundaries, one related to kernel learning and the other to the input data. In our approach, they are found with the aid of a single cost function by constructing a global reproducing kernel Hilbert space (RKHS) as the direct sum of the RKHSs corresponding to the decision boundaries of kernel learning and input data, and searching for the function in the global RKHS that can be represented as the direct sum of the decision boundaries under consideration. In our experimental analysis, the proposed model showed superior performance in comparison with the existing two-stage function approximation formulation of MKL, in which the decision functions of kernel learning and input data are found separately using two different cost functions. This is because the single-stage representation helps transfer knowledge between the computation procedures for finding the decision boundaries of kernel learning and input data, which in turn boosts the generalisation capacity of the model.
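
    The direct-sum construction corresponds, at the Gram-matrix level, to adding kernels. The sketch below shows only the standard fixed-weight composite-kernel baseline (closer to the two-stage setting the paper compares against) rather than the single-stage optimization proposed here; the weights, bandwidths and toy data are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def rbf_gram(X1, X2, gamma):
    """RBF Gram matrix between two sample sets."""
    sq_dist = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dist)

# Toy binary classification data (placeholders).
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 5))
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0.3).astype(int)

# Summing Gram matrices corresponds to working in the direct sum of the individual RKHSs.
K = 0.6 * rbf_gram(X, X, gamma=0.5) + 0.4 * rbf_gram(X, X, gamma=5.0)
clf = SVC(kernel="precomputed").fit(K, y)
print("training accuracy with the composite kernel:", clf.score(K, y))
```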

  10. Impact of the Improved Resonance Scattering Kernel on HTR Calculations

    International Nuclear Information System (INIS)

    Becker, B.; Dagan, R.; Broeders, C.H.M.; Lohnert, G.

    2008-01-01

    The importance of an advanced neutron scattering model for heavy isotopes with strong energy-dependent cross sections, such as the pronounced resonances of U-238, has been discussed in various publications where the full double differential scattering kernel was derived. In this study we quantify the effect of the new scattering model for specific innovative types of High Temperature Reactor (HTR) systems, which commonly exhibit a higher degree of heterogeneity and higher fuel temperatures, hence increasing the importance of the secondary neutron energy distribution. In particular, the impact on the multiplication factor (k-infinity) and the Doppler reactivity coefficient is presented in view of the packing factors and operating temperatures. A considerable reduction of k-infinity (up to 600 pcm) and an increased Doppler reactivity (up to 10%) is observed. An increase of up to 2.3% in the Pu-239 inventory can be noticed at 90 MWd/tHM burnup due to enhanced neutron absorption of U-238. Those effects are more pronounced for design cases in which the neutron flux spectrum is hardened towards the resolved resonance range. (authors)

  11. Anisotropic hydrodynamics with a scalar collisional kernel

    Science.gov (United States)

    Almaalol, Dekrayat; Strickland, Michael

    2018-04-01

    Prior studies of nonequilibrium dynamics using anisotropic hydrodynamics have used the relativistic Anderson-Witting scattering kernel or some variant thereof. In this paper, we make the first study of the impact of using a more realistic scattering kernel. For this purpose, we consider a conformal system undergoing transversally homogeneous and boost-invariant Bjorken expansion and take the collisional kernel to be given by the leading-order 2 ↔ 2 scattering kernel in scalar λφ^4 theory. We consider both classical and quantum statistics to assess the impact of Bose enhancement on the dynamics. We also determine the anisotropic nonequilibrium attractor of a system subject to this collisional kernel. We find that, when the near-equilibrium relaxation times in the Anderson-Witting and scalar collisional kernels are matched, the scalar kernel results in a higher degree of momentum-space anisotropy during the system's evolution, given the same initial conditions. Additionally, we find that taking into account Bose enhancement further increases the dynamically generated momentum-space anisotropy.

  12. A Calculation of the Angular Moments of the Kernel for a Monatomic Gas Scatterer

    Energy Technology Data Exchange (ETDEWEB)

    Haakansson, Rune

    1964-07-15

    B. Davison has given in an unpublished paper a method of calculating the moments of the monatomic gas scattering kernel. We present here this method and apply it to calculate the first four moments. Numerical results for these moments for the masses M = 1 and 3.6 are also given.

  13. Proof and implementation of the stochastic formula for ideal gas, energy dependent scattering kernel

    International Nuclear Information System (INIS)

    Becker, B.; Dagan, R.; Lohnert, G.

    2009-01-01

    The ideal gas scattering kernel for heavy nuclei with pronounced resonances was developed [Rothenstein, W., Dagan, R., 1998. Ann. Nucl. Energy 25, 209-222], proved and implemented [Rothenstein, W., 2004. Ann. Nucl. Energy 31, 9-23] in the data processing code NJOY [MacFarlane, R.E., Muir, D.W., 1994. The NJOY Nuclear Data Processing System Version 91, LA-12740-M], from which the scattering probability tables were prepared [Dagan, R., 2005. Ann. Nucl. Energy 32, 367-377]. Those tables were introduced to the well known MCNP code [X-5 Monte Carlo Team. MCNP - A General Monte Carlo N-Particle Transport Code version 5, LA-UR-03-1987] via the 'mt' input cards, in the same manner as is done for light nuclei in the thermal energy range. In this study we present an alternative methodology for solving the double differential energy-dependent scattering kernel which is based solely on stochastic considerations as far as the scattering probabilities are concerned. The solution scheme is based on an alternative rejection scheme suggested by Rothenstein [Rothenstein, W., ENS conference 1994, Tel Aviv]. Based on comparison with the above mentioned analytical (probability S(α,β) tables) approach, it is confirmed that the suggested rejection scheme provides accurate results. The uncertainty concerning the magnitude of the bias due to the enhanced multiple rejections during the sampling procedure is shown to lie within 1-2 standard deviations for all practical cases that were analysed.

  14. A scatter model for fast neutron beams using convolution of diffusion kernels

    International Nuclear Information System (INIS)

    Moyers, M.F.; Horton, J.L.; Boyer, A.L.

    1988-01-01

    A new model is proposed to calculate dose distributions in materials irradiated with fast neutron beams. Scattered neutrons are transported away from the point of production within the irradiated material in the forward, lateral and backward directions, while recoil protons are transported in the forward and lateral directions. The calculation of dose distributions, such as for radiotherapy planning, is accomplished by convolving a primary attenuation distribution with a diffusion kernel. The primary attenuation distribution may be quickly calculated for any given set of beam and material conditions as it describes only the magnitude and distribution of first interaction sites. The calculation of energy diffusion kernels is very time consuming but must be calculated only once for a given energy. Energy diffusion distributions shown in this paper have been calculated using a Monte Carlo type of program. To decrease beam calculation time, convolutions are performed using a Fast Fourier Transform technique. (author)
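
    The central operation here, convolving a quickly computed primary (first-interaction) distribution with a precomputed energy diffusion kernel using FFTs, can be sketched in two dimensions as follows. The attenuation coefficient, grid and the exponential stand-in kernel are illustrative assumptions; in the paper the diffusion kernels are computed once per energy with a Monte Carlo program.

```python
import numpy as np
from scipy.signal import fftconvolve

# Toy primary attenuation distribution: a broad beam entering the top of a 2-D grid,
# with first-interaction sites falling off exponentially with depth.
mu = 0.08                                    # effective attenuation coefficient [1/cm] (assumed)
voxel = 0.25                                 # voxel size [cm]
depth = np.arange(128) * voxel
primary = np.tile((mu * np.exp(-mu * depth))[:, None], (1, 128))

# Stand-in energy diffusion kernel (isotropic exponential fall-off, purely for shape).
y, x = np.indices((31, 31)) - 15
kernel = np.exp(-0.6 * np.hypot(y, x) * voxel)
kernel /= kernel.sum()

# Dose distribution = primary attenuation distribution convolved with the kernel (FFT-based).
dose = fftconvolve(primary, kernel, mode="same")
print("dose grid:", dose.shape, "max:", float(dose.max()))
```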

  15. Magnetic resonance imaging of single rice kernels during cooking

    NARCIS (Netherlands)

    Mohoric, A.; Vergeldt, F.J.; Gerkema, E.; Jager, de P.A.; Duynhoven, van J.P.M.; Dalen, van G.; As, van H.

    2004-01-01

    The RARE imaging method was used to monitor the cooking of single rice kernels in real time and with high spatial resolution in three dimensions. The imaging sequence is optimized for rapid acquisition of signals with short relaxation times using centered out RARE. Short scan time and high spatial

  16. Single corn kernel wide-line NMR oil analysis for breeding purpose

    Energy Technology Data Exchange (ETDEWEB)

    Wilmers, M C.C.; Rettori, C; Vargas, H; Barberis, G E [Universidade Estadual de Campinas (Brazil). Inst. de Fisica; da Silva, W J [Universidade Estadual de Campinas (Brazil). Inst. de Biologia

    1978-12-01

    The wide-line NMR technique was used to determine the oil content in single corn seeds. Using distinct radio frequency (RF) power levels, a systematic study was done on kernels with about 10% moisture, and also on artificially dried seeds with approximately 5% moisture. For non-dried seeds the NMR spectra clearly showed the presence of three resonances with different RF saturation factors. For dried seeds, the oil concentration determined by NMR was highly correlated (r = 0.997) with that determined by a gravimetric method. The highest discrepancy between the two methods was found to be about 1.3%. When relative measurements are required, as in the case of single kernels for a recurrent selection program, precision in the individual selected kernel will be about 2.5%. Applying this technique, a first cycle of recurrent selection using S1 lines for low and high oil content was performed in an open-pollinated variety. The gain from selection was 12.0 and 14.1% in the populations for high and low oil contents, respectively.

  17. Preliminary scattering kernels for ethane and triphenylmethane at cryogenic temperatures

    Science.gov (United States)

    Cantargi, F.; Granada, J. R.; Damián, J. I. Márquez

    2017-09-01

    Two potential cold moderator materials were studied: ethane and triphenylmethane. The first one, ethane (C2H6), is an organic compound which is very interesting from the neutronic point of view, in some respects better than liquid methane for producing subthermal neutrons, not only because it remains in the liquid phase through a wider temperature range (Tf = 90.4 K, Tb = 184.6 K), but also because of its high protonic density together with a frequency spectrum with a low rotational energy band. The other material, triphenylmethane, is a hydrocarbon with formula C19H16 which has already been proposed as a good candidate for a cold moderator. Following one of the main research topics of the Neutron Physics Department of Centro Atómico Bariloche, we present here two ways to estimate the frequency spectrum needed to feed the NJOY nuclear data processing system in order to generate the scattering law of each desired material. For ethane, molecular dynamics computer simulations were done, while for triphenylmethane existing experimental and calculated data were used to produce a new scattering kernel. With these models, cross section libraries were generated and applied to neutron spectra calculations.

  18. Single pass kernel k-means clustering method

    Indian Academy of Sciences (India)

    The paper proposes a simple and faster version of the kernel k-means clustering ... It has been considered as an important tool ... On the other hand, kernel-based clustering methods, like kernel k-means clustering, ... available at the UCI machine learning repository (Murphy 1994). ... All the data sets have only numeric valued features.
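
    For orientation, the baseline that this record refers to is standard (batch) kernel k-means, which needs only the Gram matrix; a compact version is sketched below. This is the textbook algorithm, not the single-pass variant proposed in the paper.

```python
import numpy as np

def kernel_kmeans(K, n_clusters, n_iter=50, seed=0):
    """Standard batch kernel k-means on a precomputed Gram matrix K (n x n).
    Returns cluster labels."""
    n = K.shape[0]
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, n_clusters, size=n)
    diag = np.diag(K)
    for _ in range(n_iter):
        dist = np.empty((n, n_clusters))
        for c in range(n_clusters):
            members = labels == c
            size = members.sum()
            if size == 0:
                dist[:, c] = np.inf
                continue
            # ||phi(x_i) - mean_c||^2 expressed with kernel entries only
            dist[:, c] = (diag
                          - 2.0 * K[:, members].sum(axis=1) / size
                          + K[np.ix_(members, members)].sum() / size**2)
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
    return labels

# Toy usage with an RBF kernel on random 2-D points (placeholder data).
X = np.random.default_rng(1).normal(size=(60, 2))
K = np.exp(-0.5 * ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
print(kernel_kmeans(K, n_clusters=3)[:10])
```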

  19. Mitigation of artifacts in rtm with migration kernel decomposition

    KAUST Repository

    Zhan, Ge; Schuster, Gerard T.

    2012-01-01

    The migration kernel for reverse-time migration (RTM) can be decomposed into four component kernels using Born scattering and migration theory. Each component kernel has a unique physical interpretation and can be interpreted differently

  20. Preliminary scattering kernels for ethane and triphenylmethane at cryogenic temperatures

    Directory of Open Access Journals (Sweden)

    Cantargi F.

    2017-01-01

    Two potential cold moderator materials were studied: ethane and triphenylmethane. The first one, ethane (C2H6), is an organic compound which is very interesting from the neutronic point of view, in some respects better than liquid methane for producing subthermal neutrons, not only because it remains in the liquid phase through a wider temperature range (Tf = 90.4 K, Tb = 184.6 K), but also because of its high protonic density together with a frequency spectrum with a low rotational energy band. The other material, triphenylmethane, is a hydrocarbon with formula C19H16 which has already been proposed as a good candidate for a cold moderator. Following one of the main research topics of the Neutron Physics Department of Centro Atómico Bariloche, we present here two ways to estimate the frequency spectrum needed to feed the NJOY nuclear data processing system in order to generate the scattering law of each desired material. For ethane, molecular dynamics computer simulations were done, while for triphenylmethane existing experimental and calculated data were used to produce a new scattering kernel. With these models, cross section libraries were generated and applied to neutron spectra calculations.

  1. Sigma set scattering equations in nuclear reaction theory

    International Nuclear Information System (INIS)

    Kowalski, K.L.; Picklesimer, A.

    1982-01-01

    The practical applications of partially summed versions of the Rosenberg equations involving only special subsets (sigma sets) of the physical amplitudes are investigated with special attention to the Pauli principle. The requisite properties of the transformations from the pair labels to the set of partitions labeling the sigma set of asymptotic channels are established. New, well-defined, scattering integral equations for the antisymmetrized transition operators are found which possess much less coupling among the physically distinct channels than hitherto expected for equations with kernels of equal complexity. In several cases of physical interest in nuclear physics, a single connected-kernel equation is obtained for the relevant antisymmetrized elastic scattering amplitude

  2. Calculation of the scattering kernel for thermal neutrons in H2O and D2O

    International Nuclear Information System (INIS)

    Leal, L.C.; Assis, J.T. de

    1981-01-01

    A computer code, using the Nelkin and Butler models for the calculation of the scattering kernel, was developed. Calculations of the thermal neutron flux in a homogeneous and infinite medium with a 1/v absorber in 30 energy groups were done and compared with experimental data. The reactor parameters calculated by the Hammer code (in the original version and with the new library generated by the authors' code) are presented. (E.G.) [pt

  3. Cold moderator scattering kernels

    International Nuclear Information System (INIS)

    MacFarlane, R.E.

    1989-01-01

    New thermal-scattering-law files in ENDF format have been developed for solid methane, liquid methane, liquid ortho- and para-hydrogen, and liquid ortho- and para-deuterium using up-to-date models that include such effects as incoherent elastic scattering in the solid, diffusion and hindered vibrations and rotations in the liquids, and spin correlations for the hydrogen and deuterium. These files were generated with the new LEAPR module of the NJOY Nuclear Data Processing System. Other modules of this system were used to produce cross sections for these moderators in the correct format for the continuous-energy Monte Carlo code (MCNP) being used for cold-moderator-design calculations at the Los Alamos Neutron Scattering Center (LANSCE). 20 refs., 14 figs

  4. Emotion Recognition from Single-Trial EEG Based on Kernel Fisher’s Emotion Pattern and Imbalanced Quasiconformal Kernel Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Yi-Hung Liu

    2014-07-01

    Electroencephalogram-based emotion recognition (EEG-ER) has received increasing attention in the fields of health care, affective computing, and brain-computer interfaces (BCI). However, satisfactory ER performance within a bi-dimensional and non-discrete emotional space using single-trial EEG data remains a challenging task. To address this issue, we propose a three-layer scheme for single-trial EEG-ER. In the first layer, a set of spectral powers of different EEG frequency bands are extracted from multi-channel single-trial EEG signals. In the second layer, the kernel Fisher's discriminant analysis method is applied to further extract features with better discrimination ability from the EEG spectral powers. The feature vector produced by layer 2 is called a kernel Fisher's emotion pattern (KFEP), and is sent into layer 3 for further classification, where the proposed imbalanced quasiconformal kernel support vector machine (IQK-SVM) serves as the emotion classifier. The outputs of the three-layer EEG-ER system include labels of emotional valence and arousal. Furthermore, to collect effective training and testing datasets for the current EEG-ER system, we also use an emotion-induction paradigm in which a set of pictures selected from the International Affective Picture System (IAPS) are employed as emotion induction stimuli. The performance of the proposed three-layer solution is compared with that of other EEG spectral power-based features and emotion classifiers. Results on 10 healthy participants indicate that the proposed KFEP feature performs better than other spectral power features, and IQK-SVM outperforms traditional SVM in terms of EEG-ER accuracy. Our findings also show that the proposed EEG-ER scheme achieves the highest classification accuracies for valence (82.68%) and arousal (84.79%) among all tested methods.

  5. Mitigation of artifacts in rtm with migration kernel decomposition

    KAUST Repository

    Zhan, Ge

    2012-01-01

    The migration kernel for reverse-time migration (RTM) can be decomposed into four component kernels using Born scattering and migration theory. Each component kernel has a unique physical interpretation and can be interpreted differently. In this paper, we present a generalized diffraction-stack migration approach for reducing RTM artifacts via decomposition of migration kernel. The decomposition leads to an improved understanding of migration artifacts and, therefore, presents us with opportunities for improving the quality of RTM images.

  6. Kernel Machine SNP-set Testing under Multiple Candidate Kernels

    Science.gov (United States)

    Wu, Michael C.; Maity, Arnab; Lee, Seunggeun; Simmons, Elizabeth M.; Harmon, Quaker E.; Lin, Xinyi; Engel, Stephanie M.; Molldrem, Jeffrey J.; Armistead, Paul M.

    2013-01-01

    Joint testing for the cumulative effect of multiple single nucleotide polymorphisms grouped on the basis of prior biological knowledge has become a popular and powerful strategy for the analysis of large scale genetic association studies. The kernel machine (KM) testing framework is a useful approach that has been proposed for testing associations between multiple genetic variants and many different types of complex traits by comparing pairwise similarity in phenotype between subjects to pairwise similarity in genotype, with similarity in genotype defined via a kernel function. An advantage of the KM framework is its flexibility: choosing different kernel functions allows for different assumptions concerning the underlying model and can allow for improved power. In practice, it is difficult to know which kernel to use a priori since this depends on the unknown underlying trait architecture and selecting the kernel which gives the lowest p-value can lead to inflated type I error. Therefore, we propose practical strategies for KM testing when multiple candidate kernels are present based on constructing composite kernels and based on efficient perturbation procedures. We demonstrate through simulations and real data applications that the procedures protect the type I error rate and can lead to substantially improved power over poor choices of kernels and only modest differences in power versus using the best candidate kernel. PMID:23471868

  7. Assessing the measurement of aerosol single scattering albedo by Cavity Attenuated Phase-Shift Single Scattering Monitor (CAPS PMssa)

    Science.gov (United States)

    Perim de Faria, Julia; Bundke, Ulrich; Onasch, Timothy B.; Freedman, Andrew; Petzold, Andreas

    2016-04-01

    The necessity to quantify the direct impact of aerosol particles on climate forcing is already well known; assessing this impact requires continuous and systematic measurements of the aerosol optical properties. Two of the main parameters that need to be accurately measured are the aerosol optical depth and single scattering albedo (SSA, defined as the ratio of particulate scattering to extinction). The measurement of single scattering albedo commonly involves the measurement of two optical parameters, the scattering and the absorption coefficients. Although there are well established technologies to measure both of these parameters, the use of two separate instruments with different principles and uncertainties represents a potential source of significant errors and biases. Based on the recently developed cavity attenuated phase shift particle extinction monitor (CAPS PMex) instrument, the CAPS PMssa instrument combines the CAPS technology to measure particle extinction with an integrating sphere capable of simultaneously measuring the scattering coefficient of the same sample. The scattering channel is calibrated to the extinction channel, such that the accuracy of the single scattering albedo measurement is only a function of the accuracy of the extinction measurement and the nephelometer truncation losses. This gives the instrument an accurate and direct measurement of the single scattering albedo. In this study, we assess the measurements of both the extinction and scattering channels of the CAPS PMssa through intercomparisons with Mie theory, as a fundamental comparison, and with proven technologies, such as integrating nephelometers and filter-based absorption monitors. For comparison, we use two nephelometers, a TSI 3563 and an Aurora 4000, and two measurements of the absorption coefficient, using a Particulate Soot Absorption Photometer (PSAP) and a Multi Angle Absorption Photometer (MAAP). We also assess the indirect absorption coefficient

  8. Introducing single-crystal scattering and optical potentials into MCNPX: Predicting neutron emission from a convoluted moderator

    Energy Technology Data Exchange (ETDEWEB)

    Gallmeier, F.X., E-mail: gallmeierfz@ornl.gov [Spallation Neutron Source, Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States); Iverson, E.B.; Lu, W. [Spallation Neutron Source, Oak Ridge National Laboratory, Oak Ridge, TN 37831 (United States); Baxter, D.V. [Center for the Exploration of Energy and Matter, Indiana University, Bloomington, IN 47408 (United States); Muhrer, G.; Ansell, S. [European Spallation Source, ESS AB, Lund (Sweden)

    2016-04-01

    Neutron transport simulation codes are indispensable tools for the design and construction of modern neutron scattering facilities and instrumentation. Recently, it has become increasingly clear that some neutron instrumentation has started to exploit physics that is not well-modeled by the existing codes. In particular, the transport of neutrons through single crystals and across interfaces in MCNP(X), Geant4, and other codes ignores scattering from oriented crystals and refractive effects, and yet these are essential phenomena for the performance of monochromators and ultra-cold neutron transport respectively (to mention but two examples). In light of these developments, we have extended the MCNPX code to include a single-crystal neutron scattering model and neutron reflection/refraction physics. We have also generated silicon scattering kernels for single crystals of definable orientation. As a first test of these new tools, we have chosen to model the recently developed convoluted moderator concept, in which a moderating material is interleaved with layers of perfect crystals to provide an exit path for neutrons moderated to energies below the crystal's Bragg cut–off from locations deep within the moderator. Studies of simple cylindrical convoluted moderator systems of 100 mm diameter and composed of polyethylene and single crystal silicon were performed with the upgraded MCNPX code and reproduced the magnitude of effects seen in experiments compared to homogeneous moderator systems. Applying different material properties for refraction and reflection, and by replacing the silicon in the models with voids, we show that the emission enhancements seen in recent experiments are primarily caused by the transparency of the silicon and void layers. Finally we simulated the convoluted moderator experiments described by Iverson et al. and found satisfactory agreement between the measurements and the simulations performed with the tools we have developed.

  9. Comparison of tungsten carbide and stainless steel ball bearings for grinding single maize kernels in a reciprocating grinder

    Science.gov (United States)

    Reciprocating grinders can grind single maize kernels by shaking the kernel in a vial with a ball bearing. This process results in a grind quality that is not satisfactory for many experiments. Tungsten carbide ball bearings are nearly twice as dense as steel, so we compared their grinding performa...

  10. Compton-scatter tissue densitometry: calculation of single and multiple scatter photon fluences

    International Nuclear Information System (INIS)

    Battista, J.J.; Bronskill, M.J.

    1978-01-01

    The accurate measurement of in vivo electron densities by the Compton-scatter method is limited by attenuations and multiple scattering in the patient. Using analytic and Monte Carlo calculation methods, the Clarke tissue density scanner has been modelled for incident monoenergetic photon energies from 300 to 2000 keV and for mean scattering angles of 30 to 130 degrees. For a single detector focussed to a central position in a uniform water phantom (25 x 25 x 25 cm³) it has been demonstrated that: (1) Multiple scatter contamination is an inherent limitation of the Compton-scatter method of densitometry which can be minimised, but not eliminated, by improving the energy resolution of the scattered radiation detector. (2) The choice of the incident photon energy is a compromise between the permissible radiation dose to the patient and the tolerable level of multiple scatter contamination. For a mean scattering angle of 40 degrees, the intrinsic multiple-single scatter ratio decreases from 64 to 35%, and the radiation dose (per measurement) increases from 1.0 to 4.1 rad, as the incident photon energy increases from 300 to 2000 keV. These doses apply to a sampled volume of approximately 0.3 cm³ and an electron density precision of 0.5%. (3) The forward scatter densitometer configuration is optimum, minimising both the dose and the multiple scatter contamination. For an incident photon energy of 1250 keV, the intrinsic multiple-single scatter ratio reduces from 122 to 27%, and the dose reduces from 14.3 to 1.2 rad, as the mean scattering angle decreases from 130 to 30 degrees. These calculations have been confirmed by experimental measurements. (author)
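
    The trade-off between incident energy and scattering angle can be made concrete with the Compton formula for the singly scattered photon energy; the short sketch below (standard kinematics, not taken from the paper) evaluates it at the energies and angles discussed above.

        # Energy of a photon after a single Compton scatter at angle theta.
        from math import cos, radians

        ME_C2 = 511.0  # electron rest energy, keV

        def compton_energy(e_kev, theta_deg):
            return e_kev / (1.0 + (e_kev / ME_C2) * (1.0 - cos(radians(theta_deg))))

        for e0 in (300.0, 1250.0, 2000.0):
            print(e0, round(compton_energy(e0, 40.0), 1), round(compton_energy(e0, 130.0), 1))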

  11. Primary and scattering contributions to beta scaled dose point kernels by means of Monte Carlo simulations

    International Nuclear Information System (INIS)

    Valente, Mauro; Botta, Francesca; Pedroli, Guido

    2012-01-01

    Beta-emitters have proved to be appropriate for radioimmunotherapy. The dosimetric characterization of each radionuclide has to be carefully investigated. One usual and practical dosimetric approach is the calculation of the dose distribution from a unit point source emitting particles according to any radionuclide of interest, which is known as the dose point kernel. Absorbed dose distributions are due to primary and radiation scattering contributions. This work presents a method capable of producing dose distributions for nuclear medicine dosimetry by means of Monte Carlo methods. Dedicated subroutines have been developed in order to separately compute the primary and scattering contributions to the total absorbed dose, performing particle transport down to 1 keV or lower. The suitability of the calculation method was first verified for monoenergetic sources, with satisfactory results, and it was then applied to the characterization of different beta-minus radionuclides of interest in nuclear medicine for radioimmunotherapy. (author)

  12. The effect of STDP temporal kernel structure on the learning dynamics of single excitatory and inhibitory synapses.

    Directory of Open Access Journals (Sweden)

    Yotam Luz

    Spike-Timing Dependent Plasticity (STDP) is characterized by a wide range of temporal kernels. However, much of the theoretical work has focused on a specific kernel - the "temporally asymmetric Hebbian" learning rules. Previous studies linked excitatory STDP to positive feedback that can account for the emergence of response selectivity. Inhibitory plasticity was associated with negative feedback that can balance the excitatory and inhibitory inputs. Here we study the possible computational role of the temporal structure of the STDP. We represent the STDP as a superposition of two processes: potentiation and depression. This allows us to model a wide range of experimentally observed STDP kernels, from Hebbian to anti-Hebbian, by varying a single parameter. We investigate STDP dynamics of a single excitatory or inhibitory synapse in a purely feed-forward architecture. We derive a mean-field Fokker-Planck dynamics for the synaptic weight and analyze the effect of the STDP structure on the fixed points of the mean field dynamics. We find a phase transition along the Hebbian to anti-Hebbian parameter from a phase that is characterized by a unimodal distribution of the synaptic weight, in which the STDP dynamics is governed by negative feedback, to a phase with positive feedback characterized by a bimodal distribution. The critical point of this transition depends on general properties of the STDP dynamics and not on the fine details. Namely, the dynamics is affected by the pre-post correlations only via a single number that quantifies its overlap with the STDP kernel. We find that by manipulating the STDP temporal kernel, negative feedback can be induced in excitatory synapses and positive feedback in inhibitory ones. Moreover, there is an exact symmetry between inhibitory and excitatory plasticity, i.e., for every STDP rule for an inhibitory synapse there exists an STDP rule for an excitatory synapse such that their dynamics are identical.
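
    One simple way to realize such a one-parameter family of kernels (an illustrative parametrization, not the authors' exact functional form) is to write the kernel as the difference of a potentiation and a depression process whose temporal offset is varied:

        import numpy as np

        # dt = t_post - t_pre (ms); mu > 0 gives a Hebbian-like kernel
        # (potentiation when pre precedes post), mu < 0 an anti-Hebbian-like one.
        def stdp_kernel(dt, mu=10.0, sigma=10.0, a_plus=1.0, a_minus=1.0):
            potentiation = a_plus * np.exp(-(dt - mu) ** 2 / (2 * sigma ** 2))
            depression = a_minus * np.exp(-(dt + mu) ** 2 / (2 * sigma ** 2))
            return potentiation - depression

        dt = np.linspace(-50.0, 50.0, 11)
        print(np.round(stdp_kernel(dt, mu=10.0), 3))    # Hebbian-like
        print(np.round(stdp_kernel(dt, mu=-10.0), 3))   # anti-Hebbian-like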

  13. A relationship between Gel'fand-Levitan and Marchenko kernels

    International Nuclear Information System (INIS)

    Kirst, T.; Von Geramb, H.V.; Amos, K.A.

    1989-01-01

    An integral equation which relates the output kernels of the Gel'fand-Levitan and Marchenko inverse scattering equations is specified. Structural details of this integral equation are studied when the S-matrix is a rational function, and the output kernels are separable in terms of Bessel, Hankel and Jost solutions. 4 refs

  14. Single-kernel analysis of fumonisins and other fungal metabolites in maize from South African subsistence farmers.

    Science.gov (United States)

    Mogensen, J M; Sørensen, S M; Sulyok, M; van der Westhuizen, L; Shephard, G S; Frisvad, J C; Thrane, U; Krska, R; Nielsen, K F

    2011-12-01

    Fumonisins are important Fusarium mycotoxins mainly found in maize and derived products. This study analysed maize from five subsistence farmers in the former Transkei region of South Africa. Farmers had sorted kernels into good and mouldy quality. A total of 400 kernels from 10 batches were analysed; of these 100 were visually characterised as uninfected and 300 as infected. Of the 400 kernels, 15% were contaminated with 1.84-1428 mg kg⁻¹ fumonisins, and 4% (n=15) had a fumonisin content above 100 mg kg⁻¹. None of the visually uninfected maize had detectable amounts of fumonisins. The total fumonisin concentration was 0.28-1.1 mg kg⁻¹ for good-quality batches and 0.03-6.2 mg kg⁻¹ for mouldy-quality batches. The high fumonisin content in the batches was apparently caused by a small number (4%) of highly contaminated kernels, and removal of these reduced the average fumonisin content by 71%. Of the 400 kernels, 80 were screened for 186 microbial metabolites by liquid chromatography-tandem mass spectrometry, detecting 17 other fungal metabolites, including fusaric acid, equisetin, fusaproliferin, beauvericin, cyclosporins, agroclavine, chanoclavine, rugulosin and emodin. Fusaric acid in samples without fumonisins indicated the possibility of using non-toxinogenic Fusaria as biocontrol agents to reduce fumonisin exposure, as done for Aspergillus flavus. This is the first report of mycotoxin profiling in single naturally infected maize kernels. © 2011 Taylor & Francis

  15. Evaluation of the Single-precision Floating-point Vector Add Kernel Using the Intel FPGA SDK for OpenCL

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Zheming [Argonne National Lab. (ANL), Argonne, IL (United States); Yoshii, Kazutomo [Argonne National Lab. (ANL), Argonne, IL (United States); Finkel, Hal [Argonne National Lab. (ANL), Argonne, IL (United States); Cappello, Franck [Argonne National Lab. (ANL), Argonne, IL (United States)

    2017-04-20

    Open Computing Language (OpenCL) is a high-level language that enables software programmers to explore Field Programmable Gate Arrays (FPGAs) for application acceleration. The Intel FPGA software development kit (SDK) for OpenCL allows a user to specify applications at a high level and explore the performance of low-level hardware acceleration. In this report, we present the FPGA performance and power consumption results of the single-precision floating-point vector add OpenCL kernel using the Intel FPGA SDK for OpenCL on the Nallatech 385A FPGA board. The board features an Arria 10 FPGA. We evaluate the FPGA implementations using the compute unit duplication and kernel vectorization optimization techniques. On the Nallatech 385A FPGA board, the maximum compute kernel bandwidth we achieve is 25.8 GB/s, approximately 76% of the peak memory bandwidth. The power consumption of the FPGA device when running the kernels ranges from 29W to 42W.

  16. Collision kernels in the eikonal approximation for Lennard-Jones interaction potential

    International Nuclear Information System (INIS)

    Zielinska, S.

    1985-03-01

    The velocity changing collisions are conveniently described by collisional kernels. These kernels depend on an interaction potential and there is a necessity for evaluating them for realistic interatomic potentials. Using the collision kernels, we are able to investigate the redistribution of atomic populations caused by the laser light and velocity changing collisions. In this paper we present the method of evaluating the collision kernels in the eikonal approximation. We discuss the influence of the potential parameters R_0^i, ε_0^i on kernel width for a given atomic state. It turns out that, unlike the collision kernel for the hard sphere model of scattering, the Lennard-Jones kernel is not as sensitive to changes of R_0^i. Contrary to the general tendency of approximating collisional kernels by the Gaussian curve, kernels for the Lennard-Jones potential do not exhibit such a behaviour. (author)
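
    For reference, the standard Lennard-Jones 6-12 form written in terms of the parameters named above (well depth ε_0 and position R_0 of the minimum) is shown below; this is textbook material included only to make those parameters concrete.

        # V(r) has its minimum, -epsilon0, at r = r0.
        def lennard_jones(r, epsilon0, r0):
            return epsilon0 * ((r0 / r) ** 12 - 2.0 * (r0 / r) ** 6)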

  17. Higher-order predictions for splitting functions and coefficient functions from physical evolution kernels

    International Nuclear Information System (INIS)

    Vogt, A; Soar, G.; Vermaseren, J.A.M.

    2010-01-01

    We have studied the physical evolution kernels for nine non-singlet observables in deep-inelastic scattering (DIS), semi-inclusive e⁺e⁻ annihilation and the Drell-Yan (DY) process, and for the flavour-singlet case of the photon- and heavy-top Higgs-exchange structure functions (F₂, F_φ) in DIS. All known contributions to these kernels show only a single-logarithmic large-x enhancement at all powers of (1-x). Conjecturing that this behaviour persists to (all) higher orders, we have predicted the highest three (DY: two) double logarithms of the higher-order non-singlet coefficient functions and of the four-loop singlet splitting functions. The coefficient-function predictions can be written as exponentiations of 1/N-suppressed contributions in Mellin-N space which, however, are less predictive than the well-known exponentiation of the ln^k N terms. (orig.)

  18. The single scattering properties of the aerosol particles as aggregated spheres

    International Nuclear Information System (INIS)

    Wu, Y.; Gu, X.; Cheng, T.; Xie, D.; Yu, T.; Chen, H.; Guo, J.

    2012-01-01

    The light scattering and absorption properties of anthropogenic aerosol particles such as soot aggregates are complicated in their temporal and spatial distribution, which introduces uncertainty into the radiative forcing of global climate change. In order to study the single scattering properties of anthropogenic aerosol particles, the structures of these aerosols, such as soot particles and soot-containing mixtures with sulfate or organic matter, are simulated using a parallel diffusion limited aggregation (DLA) algorithm based on transmission electron microscope (TEM) images. Then, the single scattering properties of randomly oriented aerosols, such as the scattering matrix, single scattering albedo (SSA), and asymmetry parameter (AP), are computed using the superposition T-matrix method. Comparisons of the single scattering properties of these specific types of clusters with different morphological and chemical factors, such as fractal parameters, aspect ratio, monomer radius, mixture mode and refractive index, indicate that these different factors can each have a significant influence on the single scattering properties of these aerosols. The results show that the aspect ratio of the circumscribed shape has a relatively small effect on the single scattering properties, as the differences in both SSA and AP are less than 0.1. However, mixture modes of soot clusters with larger sulfate particles have remarkably important effects on the scattering and absorption properties of aggregated spheres, and the SSA of soot-containing mixtures increases in proportion to the ratio of larger weakly absorbing attachments. Therefore, these complex aerosols from man-made pollution cannot be neglected in aerosol retrievals. The study of the single scattering properties of these kinds of aggregated spheres is important and helpful for remote sensing observations and atmospheric radiation balance computations.

  19. Robust Kernel (Cross-) Covariance Operators in Reproducing Kernel Hilbert Space toward Kernel Methods

    OpenAIRE

    Alam, Md. Ashad; Fukumizu, Kenji; Wang, Yu-Ping

    2016-01-01

    To the best of our knowledge, there are no general well-founded robust methods for statistical unsupervised learning. Most of the unsupervised methods explicitly or implicitly depend on the kernel covariance operator (kernel CO) or kernel cross-covariance operator (kernel CCO). They are sensitive to contaminated data, even when using bounded positive definite kernels. First, we propose robust kernel covariance operator (robust kernel CO) and robust kernel cross-covariance operator (robust kern...

  20. Single pass kernel k-means clustering method

    Indian Academy of Sciences (India)

    In unsupervised classification, the kernel k-means clustering method has been shown to perform better than the conventional k-means clustering method in ...
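
    For orientation, a bare-bones batch kernel k-means (not the single-pass variant this paper proposes; the RBF kernel and random initialization are assumptions) looks like this:

        import numpy as np

        def rbf_kernel(X, gamma=1.0):
            sq = np.sum(X ** 2, axis=1)
            d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
            return np.exp(-gamma * d2)

        def kernel_kmeans(K, k, n_iter=50, seed=0):
            rng = np.random.default_rng(seed)
            n = K.shape[0]
            labels = rng.integers(0, k, size=n)
            for _ in range(n_iter):
                dist = np.full((n, k), np.inf)
                for c in range(k):
                    mask = labels == c
                    m = mask.sum()
                    if m == 0:
                        continue
                    # squared distance to the cluster centre in feature space
                    dist[:, c] = (np.diag(K)
                                  - 2.0 * K[:, mask].sum(axis=1) / m
                                  + K[np.ix_(mask, mask)].sum() / m ** 2)
                new_labels = dist.argmin(axis=1)
                if np.array_equal(new_labels, labels):
                    break
                labels = new_labels
            return labels

        X = np.random.rand(200, 2)
        print(kernel_kmeans(rbf_kernel(X, gamma=5.0), k=3)[:20])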

  1. Evaluation of a scattering correction method for high energy tomography

    Science.gov (United States)

    Tisseur, David; Bhatia, Navnina; Estre, Nicolas; Berge, Léonie; Eck, Daniel; Payan, Emmanuel

    2018-01-01

    One of the main drawbacks of Cone Beam Computed Tomography (CBCT) is the contribution of photons scattered by the object and the detector. Scattered photons are deflected from their original path after their interaction with the object. This additional contribution of scattered photons results in increased measured intensities, since the scattered intensity simply adds to the transmitted intensity. The effect is seen as an overestimation of the measured intensity, corresponding to an underestimation of absorption, and results in artifacts such as cupping, shading and streaks in the reconstructed images. Moreover, the scattered radiation introduces a bias in quantitative tomographic reconstruction (for example, atomic number and volumic mass measurement with the dual-energy technique). The effect can be significant, and difficult to correct, in the MeV energy range with large objects, due to the higher Scatter-to-Primary Ratio (SPR). Additionally, incident high-energy photons scattered by the Compton effect are more forward directed and hence more likely to reach the detector. Moreover, in the MeV energy range the contribution of photons produced by pair production and the Bremsstrahlung process also becomes important. We propose an evaluation of a scattering correction technique based on the Scatter Kernel Superposition (SKS) method. The algorithm uses continuously thickness-adapted kernels. Analytical parameterizations of the scatter kernels are derived in terms of material thickness, to form continuously thickness-adapted kernel maps used to correct the projections. This approach has proved to be efficient in producing better sampling of the kernels with respect to the object thickness. The technique is applicable over a wide range of imaging conditions, which gives users an additional advantage. Moreover, since no extra hardware is required, this approach forms a major advantage especially in those cases where
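
    The core of the SKS idea can be sketched in a few lines: a scatter source derived from the primary projection is convolved with kernels whose width depends on the local object thickness, and the contributions are summed. The Gaussian kernels, the two thickness groups and the amplitude below are placeholder assumptions, not the parameterization used in the paper.

        import numpy as np
        from scipy.signal import fftconvolve

        def gaussian_kernel(sigma_px, size=51):
            ax = np.arange(size) - size // 2
            xx, yy = np.meshgrid(ax, ax)
            k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma_px ** 2))
            return k / k.sum()

        def estimate_scatter(primary, thickness_cm, amplitude=0.05):
            scatter = np.zeros_like(primary)
            # thickness-adapted superposition: one kernel per thickness group
            for t_lo, t_hi, sigma in [(0.0, 10.0, 5.0), (10.0, np.inf, 12.0)]:
                mask = (thickness_cm >= t_lo) & (thickness_cm < t_hi)
                scatter += fftconvolve(amplitude * primary * mask,
                                       gaussian_kernel(sigma), mode="same")
            return scatter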

  2. Kernel PLS Estimation of Single-trial Event-related Potentials

    Science.gov (United States)

    Rosipal, Roman; Trejo, Leonard J.

    2004-01-01

    Nonlinear kernel partial least squares (KPLS) regression is a novel smoothing approach to nonparametric regression curve fitting. We have developed a KPLS approach to the estimation of single-trial event-related potentials (ERPs). For improved accuracy of estimation, we also developed a local KPLS method for situations in which there exists prior knowledge about the approximate latency of individual ERP components. To assess the utility of the KPLS approach, we compared non-local KPLS and local KPLS smoothing with other nonparametric signal processing and smoothing methods. In particular, we examined wavelet denoising, smoothing splines, and localized smoothing splines. We applied these methods to the estimation of simulated mixtures of human ERPs and ongoing electroencephalogram (EEG) activity using a dipole simulator (BESA). In this scenario we considered ongoing EEG to represent spatially and temporally correlated noise added to the ERPs. This simulation provided a reasonable but simplified model of real-world ERP measurements. For estimation of the simulated single-trial ERPs, local KPLS provided a level of accuracy that was comparable with or better than the other methods. We also applied the local KPLS method to the estimation of human ERPs recorded in an experiment on cognitive fatigue. For these data, the local KPLS method provided a clear improvement in visualization of single-trial ERPs as well as their averages. The local KPLS method may serve as a new alternative for the estimation of single-trial ERPs and the improvement of ERP averages.

  3. An obstructive sleep apnea detection approach using kernel density classification based on single-lead electrocardiogram.

    Science.gov (United States)

    Chen, Lili; Zhang, Xi; Wang, Hui

    2015-05-01

    Obstructive sleep apnea (OSA) is a common sleep disorder that often remains undiagnosed, leading to an increased risk of developing cardiovascular diseases. Polysomnography (PSG) is currently used as the gold standard for screening OSA. However, because it is time-consuming, expensive and causes discomfort, alternative techniques based on a reduced set of physiological signals have been proposed to solve this problem. This study proposes a convenient non-parametric kernel density-based approach for detection of OSA using single-lead electrocardiogram (ECG) recordings. Selected physiologically interpretable features are extracted from segmented RR intervals, which are obtained from the ECG signals. These features are fed into the kernel density classifier to detect apnea events, and the bandwidths for the density of each class (normal or apnea) are automatically chosen through an iterative bandwidth selection algorithm. To validate the proposed approach, RR intervals were extracted from the ECG signals of 35 subjects obtained from a sleep apnea database (http://physionet.org/cgi-bin/atm/ATM). The results indicate that the kernel density classifier, with two features for apnea event detection, achieves a mean accuracy of 82.07%, with a mean sensitivity of 83.23% and a mean specificity of 80.24%. Compared with other existing methods, the proposed kernel density approach achieves comparably good performance while using fewer features without significantly losing discriminant power, which indicates that it could be widely used for home-based screening or diagnosis of OSA.
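
    A stripped-down version of such a kernel density classifier (using SciPy's Gaussian KDE with its default bandwidth instead of the paper's iterative bandwidth selection, and with equal class priors assumed) can be written as:

        import numpy as np
        from scipy.stats import gaussian_kde

        def train_kde_classifier(X_normal, X_apnea):
            # gaussian_kde expects arrays of shape (n_features, n_samples)
            return gaussian_kde(X_normal.T), gaussian_kde(X_apnea.T)

        def classify(kde_normal, kde_apnea, X, prior_apnea=0.5):
            p_normal = kde_normal(X.T) * (1.0 - prior_apnea)
            p_apnea = kde_apnea(X.T) * prior_apnea
            return (p_apnea > p_normal).astype(int)  # 1 = apnea segment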

  4. Multigroup computation of the temperature-dependent Resonance Scattering Model (RSM) and its implementation

    Energy Technology Data Exchange (ETDEWEB)

    Ghrayeb, S. Z. [Dept. of Mechanical and Nuclear Engineering, Pennsylvania State Univ., 230 Reber Building, Univ. Park, PA 16802 (United States); Ouisloumen, M. [Westinghouse Electric Company, 1000 Westinghouse Drive, Cranberry Township, PA 16066 (United States); Ougouag, A. M. [Idaho National Laboratory, MS-3860, PO Box 1625, Idaho Falls, ID 83415 (United States); Ivanov, K. N.

    2012-07-01

    A multi-group formulation for the exact neutron elastic scattering kernel is developed. This formulation is intended for implementation into a lattice physics code. The correct accounting for the crystal lattice effects influences the estimated values for the probability of neutron absorption and scattering, which in turn affect the estimation of core reactivity and burnup characteristics. A computer program has been written to test the formulation for various nuclides. Results of the multi-group code have been verified against the correct analytic scattering kernel. In both cases neutrons were started at various energies and temperatures and the corresponding scattering kernels were tallied. (authors)

  5. Modifying infrared scattering effects of single yeast cells with plasmonic metal mesh

    Science.gov (United States)

    Malone, Marvin A.; Prakash, Suraj; Heer, Joseph M.; Corwin, Lloyd D.; Cilwa, Katherine E.; Coe, James V.

    2010-11-01

    The scattering effects in the infrared (IR) spectra of single, isolated bread yeast cells (Saccharomyces cerevisiae) on a ZnSe substrate and in metal microchannels have been probed by Fourier transform infrared imaging microspectroscopy. Absolute extinction [(3.4±0.6)×10⁻⁷ cm² at 3178 cm⁻¹], scattering, and absorption cross sections for a single yeast cell and a vibrational absorption spectrum have been determined by comparing it to the scattering properties of single, isolated latex microspheres (polystyrene, 5.0 μm in diameter) on ZnSe, which are well modeled by the Mie scattering theory. Single yeast cells were then placed into the holes of the IR plasmonic mesh, i.e., metal films with arrays of subwavelength holes, yielding "scatter-free" IR absorption spectra, which have undistorted vibrational lineshapes and a rising generic IR absorption baseline. Absolute extinction, scattering, and absorption spectral profiles were determined for a single, ellipsoidal yeast cell to characterize the interplay of these effects.

  6. Microscopic description of the collisions between nuclei. [Generator coordinate kernels

    Energy Technology Data Exchange (ETDEWEB)

    Canto, L F; Brink, D M [Oxford Univ. (UK). Dept. of Theoretical Physics

    1977-03-21

    The equivalence of the generator coordinate method and the resonating group method is used in the derivation of two new methods to describe the scattering of spin-zero fragments. Both these methods use generator coordinate kernels, but avoid the problem of calculating the generator coordinate weight function in the asymptotic region. The scattering of two α-particles is studied as an illustration.

  7. Single particle analysis with a 360° light scattering photometer

    International Nuclear Information System (INIS)

    Bartholdi, M.F.

    1979-06-01

    Light scattering by single spherical homogeneous particles in the diameter range 1 to 20 μm and relative refractive index 1.20 is measured. Particle size of narrowly dispersed populations is determined and a multi-modal dispersion of five components is completely analyzed. A 360° light scattering photometer for analysis of single particles has been designed and developed. A fluid stream containing single particles intersects a focused laser beam at the primary focal point of an ellipsoidal reflector ring. The light scattered at angles θ = 2.5° to 177.5° at φ = 0° and 180° is reflected onto a circular array of photodiodes. The ellipsoidal reflector is situated in a chamber filled with fluid matching that of the stream to minimize refracting and reflecting interfaces. The detector array consists of 60 photodiodes, each subtending 3° in scattering angle, on 6° centers around 360°. 32 measurements on individual particles can be acquired at rates of 500 particles per second. The intensity and angular distribution of light scattered by spherical particles are indicative of size and relative refractive index. Calculations, using Lorenz-Mie theory, of differential scattering patterns integrated over angles corresponding to the detector geometry determined the instrument response to particle size. From this the expected resolution and experimental procedures are determined. Ultimately, the photometer will be utilized for identification and discrimination of biological cells based on the sensitivity of light scattering to size, shape, refractive index differences, internal granularity, and other internal morphology. This study has demonstrated the utility of the photometer and indicates potential for application to light scattering studies of biological cells.

  8. Dimensional feature weighting utilizing multiple kernel learning for single-channel talker location discrimination using the acoustic transfer function.

    Science.gov (United States)

    Takashima, Ryoichi; Takiguchi, Tetsuya; Ariki, Yasuo

    2013-02-01

    This paper presents a method for discriminating the location of the sound source (talker) using only a single microphone. In a previous work, the single-channel approach for discriminating the location of the sound source was discussed, where the acoustic transfer function from a user's position is estimated by using a hidden Markov model of clean speech in the cepstral domain. In this paper, each cepstral dimension of the acoustic transfer function is newly weighted, in order to obtain the cepstral dimensions having information that is useful for classifying the user's position. Then, this paper proposes a feature-weighting method for the cepstral parameter using multiple kernel learning, defining the base kernels for each cepstral dimension of the acoustic transfer function. The user's position is trained and classified by support vector machine. The effectiveness of this method has been confirmed by sound source (talker) localization experiments performed in different room environments.

  9. Low energy neutron scattering for energy dependent cross sections. General considerations

    Energy Technology Data Exchange (ETDEWEB)

    Rothenstein, W; Dagan, R [Technion-Israel Inst. of Tech., Haifa (Israel). Dept. of Mechanical Engineering

    1996-12-01

    We consider in this paper some aspects related to neutron scattering at low energies by nuclei which are subject to thermal agitation. The scattering is determined by a temperature dependent joint scattering kernel, or the corresponding joint probability density, which is a function of two variables, the neutron energy after scattering, and the cosine of the angle of scattering, for a specified energy and direction of motion of the neutron, before the interaction takes place. This joint probability density is easy to calculate, when the nucleus which causes the scattering of the neutron is at rest. It can be expressed by a delta function, since there is a one to one correspondence between the neutron energy change, and the cosine of the scattering angle. If the thermal motion of the target nucleus is taken into account, the calculation is rather more complicated. The delta function relation between the cosine of the angle of scattering and the neutron energy change is now averaged over the spectrum of velocities of the target nucleus, and becomes a joint kernel depending on both these variables. This function has a simple form, if the target nucleus behaves as an ideal gas, which has a scattering cross section independent of energy. An energy dependent scattering cross section complicates the treatment further. An analytic expression is no longer obtained for the ideal gas temperature dependent joint scattering kernel as a function of the neutron energy after the interaction and the cosine of the scattering angle. Instead the kernel is expressed by an inverse Fourier Transform of a complex integrand, which is averaged over the velocity spectrum of the target nucleus. (Abstract Truncated)
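
    In the stationary-target limit referred to above, the relation between outgoing energy and scattering angle collapses to the familiar one-to-one formula of two-body kinematics; a minimal statement of it (standard result, with A the target-to-neutron mass ratio and mu_cm the centre-of-mass scattering cosine):

        # Elastic scattering off a nucleus at rest: the outgoing energy is
        # fixed once the centre-of-mass scattering cosine is fixed; thermal
        # motion of the target smears this into the joint kernel discussed above.
        def outgoing_energy(e_in, mu_cm, A):
            return e_in * (1.0 + A ** 2 + 2.0 * A * mu_cm) / (1.0 + A) ** 2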

  10. New statistical model of inelastic fast neutron scattering

    International Nuclear Information System (INIS)

    Stancicj, V.

    1975-07-01

    A new statistical model for treating fast neutron inelastic scattering has been proposed, using the general expressions of the double differential cross section in the impulse approximation. The use of the Fermi-Dirac distribution of nucleons makes it possible to derive an analytical expression for the fast neutron inelastic scattering kernel, including the angular momentum coupling. The values of the inelastic fast neutron cross section calculated from the derived expression of the scattering kernel are in good agreement with experiment. The main advantage of the derived expressions is their simplicity for practical calculations.

  11. Finite frequency traveltime sensitivity kernels for acoustic anisotropic media: Angle dependent bananas

    KAUST Repository

    Djebbi, Ramzi

    2013-08-19

    Anisotropy is an inherent character of the Earth subsurface. It should be considered for modeling and inversion. The acoustic VTI wave equation approximates the wave behavior in anisotropic media, and especially its kinematic characteristics. To analyze which parts of the model would affect the traveltime for anisotropic traveltime inversion methods, especially for wave equation tomography (WET), we derive the sensitivity kernels for anisotropic media using the VTI acoustic wave equation. A Born scattering approximation is first derived using the Fourier domain acoustic wave equation as a function of perturbations in three anisotropy parameters. Using the instantaneous traveltime, which unwraps the phase, we compute the kernels. These kernels resemble those for isotropic media, with the η kernel directionally dependent. They also have a maximum sensitivity along the geometrical ray, which is more realistic compared to the cross-correlation based kernels. Focusing on diving waves, which are used more often, especially recently in waveform inversion, we show sensitivity kernels in anisotropic media for this case.

  12. Finite frequency traveltime sensitivity kernels for acoustic anisotropic media: Angle dependent bananas

    KAUST Repository

    Djebbi, Ramzi; Alkhalifah, Tariq Ali

    2013-01-01

    Anisotropy is an inherent character of the Earth subsurface. It should be considered for modeling and inversion. The acoustic VTI wave equation approximates the wave behavior in anisotropic media, and especially its kinematic characteristics. To analyze which parts of the model would affect the traveltime for anisotropic traveltime inversion methods, especially for wave equation tomography (WET), we derive the sensitivity kernels for anisotropic media using the VTI acoustic wave equation. A Born scattering approximation is first derived using the Fourier domain acoustic wave equation as a function of perturbations in three anisotropy parameters. Using the instantaneous traveltime, which unwraps the phase, we compute the kernels. These kernels resemble those for isotropic media, with the η kernel directionally dependent. They also have a maximum sensitivity along the geometrical ray, which is more realistic compared to the cross-correlation based kernels. Focusing on diving waves, which are used more often, especially recently in waveform inversion, we show sensitivity kernels in anisotropic media for this case.

  13. Generalization Performance of Regularized Ranking With Multiscale Kernels.

    Science.gov (United States)

    Zhou, Yicong; Chen, Hong; Lan, Rushi; Pan, Zhibin

    2016-05-01

    The regularized kernel method for the ranking problem has attracted increasing attention in machine learning. Previous regularized ranking algorithms are usually based on reproducing kernel Hilbert spaces with a single kernel. In this paper, we go beyond this framework by investigating the generalization performance of regularized ranking with multiscale kernels. A novel ranking algorithm with multiscale kernels is proposed and its representer theorem is proved. We establish the upper bound of the generalization error in terms of the complexity of hypothesis spaces. It shows that the multiscale ranking algorithm can achieve satisfactory learning rates under mild conditions. Experiments demonstrate the effectiveness of the proposed method for drug discovery and recommendation tasks.

  14. Unsupervised multiple kernel learning for heterogeneous data integration.

    Science.gov (United States)

    Mariette, Jérôme; Villa-Vialaneix, Nathalie

    2018-03-15

    Recent high-throughput sequencing advances have expanded the breadth of available omics datasets, and the integrated analysis of multiple datasets obtained on the same samples has allowed important insights to be gained in a wide range of applications. However, the integration of various sources of information remains a challenge for systems biology, since the datasets produced are often of heterogeneous types, and generic methods are needed to take their different specificities into account. We propose a multiple kernel framework that allows multiple datasets of various types to be integrated into a single exploratory analysis. Several solutions are provided to learn either a consensus meta-kernel or a meta-kernel that preserves the original topology of the datasets. We applied our framework to analyse two public multi-omics datasets. First, the multiple metagenomic datasets collected during the TARA Oceans expedition were explored to demonstrate that our method is able to retrieve previous findings in a single kernel PCA as well as to provide a new image of the sample structures when a larger number of datasets are included in the analysis. To perform this analysis, a generic procedure is also proposed to improve the interpretability of the kernel PCA with respect to the original data. Second, the multi-omics breast cancer datasets provided by The Cancer Genome Atlas are analysed using kernel Self-Organizing Maps with both single- and multi-omics strategies. The comparison of these two approaches demonstrates the benefit of our integration method in improving the representation of the studied biological system. The proposed methods are available in the R package mixKernel, released on CRAN. It is fully compatible with the mixOmics package and a tutorial describing the approach can be found on the mixOmics web site http://mixomics.org/mixkernel/. jerome.mariette@inra.fr or nathalie.villa-vialaneix@inra.fr. Supplementary data are available at Bioinformatics online.

  15. Primary and scattering contributions to beta scaled dose point kernels by means of Monte Carlo simulations; Contribuicoes primaria e espalhada para dosimetria beta calculadas pelo dose point kernels empregando simulacoes pelo Metodo Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Valente, Mauro [CONICET - Consejo Nacional de Investigaciones Cientificas y Tecnicas de La Republica Argentina (Conicet), Buenos Aires, AR (Brazil); Botta, Francesca; Pedroli, Guido [European Institute of Oncology, Milan (Italy). Medical Physics Department; Perez, Pedro, E-mail: valente@famaf.unc.edu.ar [Universidad Nacional de Cordoba, Cordoba (Argentina). Fac. de Matematica, Astronomia y Fisica (FaMAF)

    2012-07-01

    Beta-emitters have proved to be appropriate for radioimmunotherapy. The dosimetric characterization of each radionuclide has to be carefully investigated. One usual and practical dosimetric approach is the calculation of the dose distribution from a unit point source emitting particles according to any radionuclide of interest, which is known as the dose point kernel. Absorbed dose distributions are due to primary and radiation scattering contributions. This work presents a method capable of producing dose distributions for nuclear medicine dosimetry by means of Monte Carlo methods. Dedicated subroutines have been developed in order to separately compute the primary and scattering contributions to the total absorbed dose, performing particle transport down to 1 keV or lower. The suitability of the calculation method was first verified for monoenergetic sources, with satisfactory results, and it was then applied to the characterization of different beta-minus radionuclides of interest in nuclear medicine for radioimmunotherapy. (author)

  16. Exploring abiotic stress on asynchronous protein metabolism in single kernels of wheat studied by NMR spectroscopy and chemometrics

    DEFF Research Database (Denmark)

    Winning, H.; Viereck, N.; Wollenweber, B.

    2009-01-01

    ... was to examine the implications of different drought treatments on the protein fractions in grains of winter wheat using ¹H nuclear magnetic resonance spectroscopy followed by chemometric analysis. Triticum aestivum L. cv. Vinjett was studied in a semi-field experiment and subjected to drought episodes either ... at terminal spikelet, during grain-filling or at both stages. Drought at the vegetative growth stage had little effect on the parameters investigated. For the first time, ¹H HR-MAS NMR spectra of grains taken during grain-filling were analysed by an advanced multiway model. In addition to the results from the chemical protein analysis and the ¹H HR-MAS NMR spectra of single kernels ... Principal component trajectories of the total protein content and the protein fractions of flour as well as the ¹H NMR spectra of single wheat kernels, wheat flour, and wheat methanol extracts were analysed to elucidate the metabolic development...

  17. Analog forecasting with dynamics-adapted kernels

    Science.gov (United States)

    Zhao, Zhizhen; Giannakis, Dimitrios

    2016-09-01

    Analog forecasting is a nonparametric technique introduced by Lorenz in 1969 which predicts the evolution of states of a dynamical system (or observables defined on the states) by following the evolution of the sample in a historical record of observations which most closely resembles the current initial data. Here, we introduce a suite of forecasting methods which improve traditional analog forecasting by combining ideas from kernel methods developed in harmonic analysis and machine learning and state-space reconstruction for dynamical systems. A key ingredient of our approach is to replace single-analog forecasting with weighted ensembles of analogs constructed using local similarity kernels. The kernels used here employ a number of dynamics-dependent features designed to improve forecast skill, including Takens’ delay-coordinate maps (to recover information in the initial data lost through partial observations) and a directional dependence on the dynamical vector field generating the data. Mathematically, our approach is closely related to kernel methods for out-of-sample extension of functions, and we discuss alternative strategies based on the Nyström method and the multiscale Laplacian pyramids technique. We illustrate these techniques in applications to forecasting in a low-order deterministic model for atmospheric dynamics with chaotic metastability, and interannual-scale forecasting in the North Pacific sector of a comprehensive climate model. We find that forecasts based on kernel-weighted ensembles have significantly higher skill than the conventional approach following a single analog.
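
    A bare-bones kernel-weighted analog forecast (delay-coordinate embedding with a plain Gaussian similarity kernel; the embedding length, bandwidth and lead time below are arbitrary choices, and none of the dynamics-adapted refinements of the paper are included) can be sketched as:

        import numpy as np

        def analog_forecast(history, query, lead=1, embed=3, eps=1.0):
            """history: 1-D record; query: the last `embed` observations."""
            n = len(history) - embed - lead + 1
            library = np.array([history[i:i + embed] for i in range(n)])
            targets = np.array([history[i + embed - 1 + lead] for i in range(n)])
            d2 = np.sum((library - np.asarray(query)) ** 2, axis=1)
            weights = np.exp(-d2 / eps)
            return np.sum(weights * targets) / np.sum(weights)

        t = np.arange(500) * 0.2
        series = np.sin(t) + 0.05 * np.random.randn(t.size)
        print(analog_forecast(series[:-1], series[-4:-1], lead=1))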

  18. Estimation of biological parameters of marine organisms using linear and nonlinear acoustic scattering model-based inversion methods.

    Science.gov (United States)

    Chu, Dezhang; Lawson, Gareth L; Wiebe, Peter H

    2016-05-01

    The linear inversion commonly used in fisheries and zooplankton acoustics assumes a constant inversion kernel and ignores the uncertainties associated with the shape and behavior of the scattering targets, as well as other relevant animal parameters. Here, errors of the linear inversion due to uncertainty associated with the inversion kernel are quantified. A scattering model-based nonlinear inversion method is presented that takes into account the nonlinearity of the inverse problem and is able to estimate simultaneously animal abundance and the parameters associated with the scattering model inherent to the kernel. It uses sophisticated scattering models to estimate first, the abundance, and second, the relevant shape and behavioral parameters of the target organisms. Numerical simulations demonstrate that the abundance, size, and behavior (tilt angle) parameters of marine animals (fish or zooplankton) can be accurately inferred from the inversion by using multi-frequency acoustic data. The influence of the singularity and uncertainty in the inversion kernel on the inversion results can be mitigated by examining the singular values for linear inverse problems and employing a non-linear inversion involving a scattering model-based kernel.

  19. Seasonal variation of the single scattering albedo of the Jungfraujoch aerosol

    Energy Technology Data Exchange (ETDEWEB)

    Collaud Coen, M.; Weingartner, E.; Corrigan, C.; Baltensperger, U.

    2003-03-01

    The single scattering albedo (ω₀) represents the fraction of the light extinction due to scattering. It is therefore a key parameter for estimating the aerosol direct radiative forcing. The seasonal and diurnal variation of the single scattering albedo was calculated for the Jungfraujoch dry aerosol, which is representative of clean remote continental conditions. The values of ω₀ vary between 0.7 and 0.9 depending on the season and on the wavelength. (author)

  20. Dose point kernels for beta-emitting radioisotopes

    International Nuclear Information System (INIS)

    Prestwich, W.V.; Chan, L.B.; Kwok, C.S.; Wilson, B.

    1986-01-01

    Knowledge of the dose point kernel corresponding to a specific radionuclide is required to calculate the spatial dose distribution produced in a homogeneous medium by a distributed source. Dose point kernels for commonly used radionuclides have been calculated previously using as a basis monoenergetic dose point kernels derived by numerical integration of a model transport equation. That treatment neglects fluctuations in energy deposition, an effect which has later been incorporated in dose point kernels calculated using Monte Carlo methods. This work describes new calculations of dose point kernels using the Monte Carlo results as a basis. An analytic representation of the monoenergetic dose point kernels has been developed. This provides a convenient method both for calculating the dose point kernel associated with a given beta spectrum and for incorporating the effect of internal conversion. An algebraic expression for allowed beta spectra has been obtained through an extension of the Bethe-Bacher approximation, and tested against the exact expression. Simplified expressions for first-forbidden shape factors have also been developed. A comparison of the calculated dose point kernel for ³²P with experimental data indicates good agreement, with a significant improvement over the earlier results in this respect. An analytic representation of the dose point kernel associated with the spectrum of a single beta group has been formulated. 9 references, 16 figures, 3 tables
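
    The superposition step described above amounts to weighting monoenergetic kernels by the beta spectrum; a schematic version is given below (the monoenergetic kernel and the spectrum are passed in as stand-ins, since the paper's analytic representations are not reproduced here).

        import numpy as np

        def beta_dose_point_kernel(r, energies, spectrum, k_mono):
            """k_beta(r) = integral N(E) * k_mono(r, E) dE / integral N(E) dE."""
            stack = np.array([n * k_mono(r, e) for e, n in zip(energies, spectrum)])
            return np.trapz(stack, energies, axis=0) / np.trapz(spectrum, energies)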

  1. A Heterogeneous Multi-core Architecture with a Hardware Kernel for Control Systems

    DEFF Research Database (Denmark)

    Li, Gang; Guan, Wei; Sierszecki, Krzysztof

    2012-01-01

    Rapid industrialisation has resulted in a demand for improved embedded control systems with features such as predictability, high processing performance and low power consumption. Software kernel implementation on a single processor is becoming more difficult to satisfy those constraints... Second, a heterogeneous multi-core architecture is investigated, focusing on its performance in relation to hard real-time constraints and predictable behavior. Third, the hardware implementation of HARTEX is designated to support the heterogeneous multi-core architecture. This hardware kernel has several advantages over a similar kernel implemented in software: higher-speed processing capability, parallel computation, and separation between the kernel itself and the applications being run. A microbenchmark has been used to compare the hardware kernel with the software kernel, and compare...

  2. An algorithm to determine backscattering ratio and single scattering albedo

    Digital Repository Service at National Institute of Oceanography (India)

    Suresh, T.; Desa, E.; Matondkar, S.G.P.; Mascarenhas, A.A.M.Q.; Nayak, S.R.; Naik, P.

    Algorithms to determine the inherent optical properties of water, backscattering probability and single scattering albedo at 490 and 676 nm from the apparent optical property, remote sensing reflectance are presented here. The measured scattering...

  3. Reduced multiple empirical kernel learning machine.

    Science.gov (United States)

    Wang, Zhe; Lu, MingZhe; Gao, Daqi

    2015-02-01

    Multiple kernel learning (MKL) has been demonstrated to be flexible and effective in depicting heterogeneous data sources, since MKL can introduce multiple kernels rather than a single fixed kernel into applications. However, MKL incurs a high time and space complexity in contrast to single kernel learning, which is not desirable in real-world applications. Meanwhile, it is known that the kernel mapping ways of MKL generally have two forms, implicit kernel mapping and empirical kernel mapping (EKM), of which the latter has attracted less attention. In this paper, we focus on MKL with EKM, and propose a reduced multiple empirical kernel learning machine, named RMEKLM for short. To the best of our knowledge, it is the first work to reduce both the time and space complexity of MKL with EKM. Different from existing MKL, the proposed RMEKLM adopts the Gauss Elimination technique to extract a set of feature vectors, and it is validated that doing so does not lose much information of the original feature space. RMEKLM then adopts the extracted feature vectors to span a reduced orthonormal subspace of the feature space, which is visualized in terms of its geometric structure. It can be demonstrated that the spanned subspace is isomorphic to the original feature space, which means that the dot product of two vectors in the original feature space is equal to that of the two corresponding vectors in the generated orthonormal subspace. More importantly, the proposed RMEKLM brings simpler computation and meanwhile needs less storage space, especially in the processing of testing. Finally, the experimental results show that RMEKLM achieves an efficient and effective performance in terms of both complexity and classification. The contributions of this paper can be given as follows: (1) by mapping the input space into an orthonormal subspace, the geometry of the generated subspace is visualized; (2) this paper first reduces both the time and space complexity of the EKM-based MKL; (3

  4. Occurrence of 'super soft' wheat kernel texture in hexaploid and tetraploid wheats

    Science.gov (United States)

    Wheat kernel texture is a key trait that governs milling performance, flour starch damage, flour particle size, flour hydration properties, and baking quality. Kernel texture is commonly measured using the Perten Single Kernel Characterization System (SKCS). The SKCS returns texture values (Hardness...

  5. Effective exchange potentials for electronically inelastic scattering

    International Nuclear Information System (INIS)

    Schwenke, D.W.; Staszewska, G.; Truhlar, D.G.

    1983-01-01

    We propose new methods for solving the electron scattering close coupling equations employing equivalent local exchange potentials in place of the continuum multiconfiguration Hartree-Fock-type exchange kernels. The local exchange potentials are Hermitian. They have the correct symmetry for any symmetries of excited electronic states included in the close coupling expansion, and they have the same limit at very high energy as previously employed exchange potentials. Comparison of numerical calculations employing the new exchange potentials with the results obtained with the standard nonlocal exchange kernels shows that the new exchange potentials are more accurate than the local exchange approximations previously available for electronically inelastic scattering. We anticipate that the new approximations will be most useful for intermediate-energy electronically inelastic electron-molecule scattering.

  6. Single spin asymmetries in semi-inclusive deep inelastic scattering

    International Nuclear Information System (INIS)

    Mulders, P.J.

    1998-01-01

    In this talk I want to illustrate the many possibilities for studying the structure of hadrons in hard scattering processes by giving a number of examples involving increasing complexity in the demands for particle polarization, particle identification or polarimetry. In particular the single spin asymmetries will be discussed. The measurements discussed in this talk are restricted to lepton-hadron scattering, but can be found in various other hard processes such as Drell-Yan scattering or e⁺e⁻ annihilation. (author)

  7. Quantum scattering at low energies

    DEFF Research Database (Denmark)

    Derezinski, Jan; Skibsted, Erik

    2009-01-01

    For a class of negative slowly decaying potentials, including V(x) := −γ|x|^(−μ) with 0 < μ < 2, we study the quantum mechanical scattering theory in the low-energy regime. Using appropriate modifiers of the Isozaki–Kitada type we show that scattering theory is well behaved on the whole continuous spectrum of the Hamiltonian, including the energy 0. We show that the modified scattering matrices S(λ) are well-defined and strongly continuous down to the zero energy threshold. Similarly, we prove that the modified wave matrices and generalized eigenfunctions are norm continuous down to the zero energy if we use... of the kernel of S(λ) experiences an abrupt change in passing from positive energies λ to the limiting energy λ=0. This change corresponds to the behaviour of the classical orbits. Under stronger conditions one can extract the leading term of the asymptotics of the kernel of S(λ) at its singularities.

  8. An SVM model with hybrid kernels for hydrological time series

    Science.gov (United States)

    Wang, C.; Wang, H.; Zhao, X.; Xie, Q.

    2017-12-01

    Support Vector Machine (SVM) models have been widely applied to the forecast of climate/weather and its impact on other environmental variables, such as the hydrologic response to climate/weather. When using SVM, the choice of the kernel function plays the key role. Conventional SVM models mostly use a single type of kernel function, e.g., the radial basis kernel function. Provided that several featured kernel functions are available, each having its own advantages and drawbacks, a combination of these kernel functions may give more flexibility and robustness to the SVM approach, making it suitable for a wide range of application scenarios. This paper presents such a linear combination of a radial basis kernel and a polynomial kernel for the forecast of monthly flowrate at two gaging stations using the SVM approach. The results indicate a significant improvement in the accuracy of the predicted series compared to the approach with either individual kernel function, thus demonstrating the feasibility and advantages of such a hybrid kernel approach for SVM applications.
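
    In scikit-learn, such a hybrid kernel can be passed to SVR as a callable; the weight and kernel parameters below are illustrative placeholders, not the values used in the study.

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.metrics.pairwise import rbf_kernel, polynomial_kernel

        def hybrid_kernel(X, Y, w=0.7, gamma=0.1, degree=2):
            # linear combination of a radial basis and a polynomial kernel
            return w * rbf_kernel(X, Y, gamma=gamma) + (1.0 - w) * polynomial_kernel(X, Y, degree=degree)

        X = np.random.rand(60, 4)   # placeholder predictors (e.g. lagged flows)
        y = np.random.rand(60)      # placeholder monthly flowrate
        model = SVR(kernel=hybrid_kernel, C=10.0).fit(X, y)
        print(model.predict(X[:3]))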

  9. Neutron Transport in Finite Random Media with Pure-Triplet Scattering

    International Nuclear Information System (INIS)

    Sallaha, M.; Hendi, A.A.

    2008-01-01

    The solution of the one-speed neutron transport equation in a finite slab random medium with pure-triplet anisotropic scattering is studied. The stochastic medium is assumed to consist of two randomly mixed immiscible fluids. The cross section and the scattering kernel are treated as discrete random variables, which obey the same statistics as Markovian processes and exponential chord length statistics. The medium boundaries are considered to have specular reflectivities, with angular-dependent externally incident flux. The deterministic solution is obtained by using the Pomraning-Eddington approximation. Numerical results are calculated for the average reflectivity and average transmissivity for different values of the single scattering albedo, varying the parameters which characterize the random medium. Compared with the results obtained by Adams et al. for isotropic scattering, which are based on the Monte Carlo technique, good agreement is found.

  10. A general framework and review of scatter correction methods in cone beam CT. Part 2: Scatter estimation approaches

    International Nuclear Information System (INIS)

    Ruehrnschopf, Ernst-Peter; Klingenbeck, Klaus

    2011-01-01

    The main components of scatter correction procedures are scatter estimation and a scatter compensation algorithm. This paper completes a previous paper where a general framework for scatter compensation was presented under the prerequisite that a scatter estimation method is already available. In the current paper, the authors give a systematic review of the variety of scatter estimation approaches. Scatter estimation methods are based on measurements, mathematical-physical models, or combinations of both. For completeness, the authors present an overview of measurement-based methods, but the main topic is the theoretically more demanding models, such as analytical, Monte Carlo, and hybrid models. Further classifications are 3D image-based and 2D projection-based approaches. The authors present a system-theoretic framework, which allows one to proceed top-down from a general 3D formulation, by successive approximations, to efficient 2D approaches. A widely useful method is the beam-scatter-kernel superposition approach. Together with the review of standard methods, the authors discuss their limitations and how to take into account the issues of object dependency, spatial variance, deformation of scatter kernels, and external and internal absorbers. Open questions for further investigation are indicated. Finally, the authors comment on some special issues and applications, such as bow-tie filter, offset detector, truncated data, and dual-source CT.

  11. Snow particles extracted from X-ray computed microtomography imagery and their single-scattering properties

    Science.gov (United States)

    Ishimoto, Hiroshi; Adachi, Satoru; Yamaguchi, Satoru; Tanikawa, Tomonori; Aoki, Teruo; Masuda, Kazuhiko

    2018-04-01

    Sizes and shapes of snow particles were determined from X-ray computed microtomography (micro-CT) images, and their single-scattering properties were calculated at visible and near-infrared wavelengths using a Geometrical Optics Method (GOM). We analyzed seven snow samples, including fresh and aged artificial snow and natural snow obtained from field samples. Individual snow particles were numerically extracted, and the shape of each snow particle was defined by applying a rendering method. The size distribution and specific surface area distribution were estimated from the geometrical properties of the snow particles, and an effective particle radius was derived for each snow sample. The GOM calculations at wavelengths of 0.532 and 1.242 μm revealed that the realistic snow particles had scattering phase functions similar to those of previously modeled irregularly shaped particles. Furthermore, distinct dendritic particles had a characteristic scattering phase function and asymmetry factor. The single-scattering properties of particles of effective radius r_eff were compared with the size-averaged single-scattering properties. We found that the particles of radius r_eff could be used as representative particles for calculating the average single-scattering properties of the snow. Furthermore, the single-scattering properties of the micro-CT particles were compared to those of the particle shape models used in our current snow retrieval algorithm. For the single-scattering phase function, the results of the micro-CT particles were consistent with those of a conceptual two-shape model. However, the particle size dependence differed for the single-scattering albedo and asymmetry factor.

  12. Multineuron spike train analysis with R-convolution linear combination kernel.

    Science.gov (United States)

    Tezuka, Taro

    2018-06-01

    A spike train kernel provides an effective way of decoding information represented by a spike train. Some spike train kernels have been extended to multineuron spike trains, which are simultaneously recorded spike trains obtained from multiple neurons. However, most of these multineuron extensions were carried out in a kernel-specific manner. In this paper, a general framework is proposed for extending any single-neuron spike train kernel to multineuron spike trains, based on the R-convolution kernel. Special subclasses of the proposed R-convolution linear combination kernel are explored. These subclasses have a smaller number of parameters and make optimization tractable when the size of data is limited. The proposed kernel was evaluated using Gaussian process regression for multineuron spike trains recorded from an animal brain. It was compared with the sum kernel and the population Spikernel, which are existing ways of decoding multineuron spike trains using kernels. The results showed that the proposed approach performs better than these kernels and also other commonly used neural decoding methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
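
    The general construction can be illustrated with a minimal sketch: a single-neuron base kernel is evaluated between all neuron pairs and the results are combined linearly with a weight matrix, whose structure defines the subclasses with fewer parameters. The Gaussian base kernel and the weights below are illustrative assumptions, not the kernels evaluated in the paper.

        import numpy as np

        def gaussian_spike_kernel(s, t, tau=0.01):
            # A simple single-neuron base kernel: sum of Gaussian overlaps over
            # all pairs of spike times (one of many possible choices).
            if len(s) == 0 or len(t) == 0:
                return 0.0
            d = np.subtract.outer(np.asarray(s, float), np.asarray(t, float))
            return float(np.exp(-d**2 / (2 * tau**2)).sum())

        def linear_combination_kernel(X, Y, W, base=gaussian_spike_kernel):
            # X, Y: lists of spike-time arrays, one per neuron.
            # W: weight matrix over neuron pairs; a diagonal W gives a simple
            # neuron-matched subclass, and W should be positive semidefinite
            # for the combination to remain a valid kernel.
            n = len(X)
            return sum(W[i, j] * base(X[i], Y[j])
                       for i in range(n) for j in range(n) if W[i, j] != 0.0)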

  13. Multiple kernel SVR based on the MRE for remote sensing water depth fusion detection

    Science.gov (United States)

    Wang, Jinjin; Ma, Yi; Zhang, Jingyu

    2018-03-01

    Remote sensing is an important means of water depth detection in coastal shallow waters and reefs. Support vector regression (SVR) is a machine learning method that is widely used for data regression, and in this paper it is applied to multispectral remote sensing bathymetry. To address the problem that a single-kernel SVR method has a large error in shallow-water depth inversion, the mean relative error (MRE) at different water depths is used as a decision fusion factor for the single-kernel SVR methods, and a multi-kernel SVR fusion method based on the MRE is put forward. Taking the North Island of the Xisha Islands in China as the experimental area, comparison experiments against the single-kernel SVR methods and the traditional multi-band bathymetric method are carried out. The results show that: 1) in the range of 0 to 25 m, the mean absolute error (MAE) of the multi-kernel SVR fusion method is 1.5 m and its MRE is 13.2%; 2) compared with the four single-kernel SVR methods, the MRE of the fusion method is reduced by 1.2%, 1.9%, 3.4%, and 1.8%, respectively, and compared with the traditional multi-band method it is reduced by 1.9%; 3) in the 0-5 m depth section, compared with the single-kernel methods and the multi-band method, the MRE of the fusion method is reduced by 13.5% to 44.4%, and the distribution of points is more concentrated around the line y = x.
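
    A minimal sketch of the decision-fusion step follows, assuming scikit-learn's SVR, four standard kernels, and a single validation MRE per model; the paper's depth-section-wise MRE weighting and its specific band inputs are not reproduced.

        import numpy as np
        from sklearn.svm import SVR

        KERNELS = ["rbf", "linear", "poly", "sigmoid"]   # four single-kernel SVRs

        def fit_models(X_train, y_train):
            return [SVR(kernel=k).fit(X_train, y_train) for k in KERNELS]

        def mre(y_true, y_pred):
            return np.mean(np.abs(y_pred - y_true) / np.maximum(np.abs(y_true), 1e-6))

        def fuse_predictions(models, X_val, y_val, X_test):
            # Weight each single-kernel prediction by the inverse of its
            # validation MRE, so kernels that do better dominate the fusion.
            w = np.array([1.0 / max(mre(y_val, m.predict(X_val)), 1e-6) for m in models])
            w /= w.sum()
            preds = np.stack([m.predict(X_test) for m in models])
            return w @ preds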

  14. Locally linear approximation for Kernel methods : the Railway Kernel

    OpenAIRE

    Muñoz, Alberto; González, Javier

    2008-01-01

    In this paper we present a new kernel, the Railway Kernel, that works properly for general (nonlinear) classification problems, with the interesting property that it acts locally as a linear kernel. In this way, we avoid potential problems due to the use of a general purpose kernel, like the RBF kernel, such as the high dimension of the induced feature space. As a consequence, following our methodology the number of support vectors is much lower and, therefore, the generalization capab...

  15. Four-particle scattering with three-particle interactions

    International Nuclear Information System (INIS)

    Adhikari, S.K.

    1979-01-01

    The four-particle scattering formalism proposed independently by Alessandrini, by Mitra et al., by Rosenberg, and by Takahashi and Mishima is extended to include a possible three-particle interaction. The kernel of the new equations contains both two- and three-body connected parts and becomes four-body connected after one iteration. On the other hand, the kernel of the original equations in the absence of three-particle interactions does not have a two-body connected part. We also write scattering equations for the transition operators connecting the two-body fragmentation channels. They are generalizations of the Sloan equations in the presence of three-particle interactions. We indicate how to include approximately the effect of a weak three-particle interaction in a practical four-particle scattering calculation

  16. Reflectance of Biological Turbid Tissues under Wide Area Illumination: Single Backward Scattering Approach

    Directory of Open Access Journals (Sweden)

    Guennadi Saiko

    2014-01-01

    Various scenarios of light propagation paths in turbid media (single backward scattering, multiple backward scattering, banana shape) are discussed and their contributions to reflectance spectra are estimated. It has been found that single backward or multiple forward scattering quasi-1D paths can be the major contributors to reflected spectra in the wide area illumination scenario. Such a single backward scattering (SBS) approximation allows the development of an analytical approach that can take into account refractive-index-mismatched boundary conditions and multilayer geometry and can be used for real-time spectral processing. The SBS approach can potentially be applied for distances between the transport and reduced scattering domains. Its validation against the Kubelka-Munk model, path integrals, and the diffusion approximation of radiation transport theory is discussed.

  17. A Hierarchical Volumetric Shadow Algorithm for Single Scattering

    OpenAIRE

    Baran, Ilya; Chen, Jiawen; Ragan-Kelley, Jonathan Millar; Durand, Fredo; Lehtinen, Jaakko

    2010-01-01

    Volumetric effects such as beams of light through participating media are an important component in the appearance of the natural world. Many such effects can be faithfully modeled by a single scattering medium. In the presence of shadows, rendering these effects can be prohibitively expensive: current algorithms are based on ray marching, i.e., integrating the illumination scattered towards the camera along each view ray, modulated by visibility to the light source at each sample. Visibility...
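
    The ray-marching baseline that the record describes can be sketched in a few lines. The homogeneous medium, the constant (isotropic) phase contribution folded into the scattering coefficient, and the visible(point, light_pos) shadow query are simplifying assumptions for illustration; the paper's contribution is precisely a hierarchical acceleration of the visibility term, which is not shown here.

        import numpy as np

        def single_scatter_ray(cam_pos, ray_dir, light_pos, visible,
                               sigma_s=0.10, sigma_t=0.12, t_max=50.0, n_steps=128):
            # Integrate in-scattered radiance along one view ray: at each sample,
            # add light scattered toward the camera, attenuated by transmittance
            # along the view ray and gated by a shadow-ray visibility query.
            dt = t_max / n_steps
            radiance, transmittance = 0.0, 1.0
            for i in range(n_steps):
                p = cam_pos + (i + 0.5) * dt * ray_dir
                transmittance *= np.exp(-sigma_t * dt)
                if visible(p, light_pos):
                    r = np.linalg.norm(light_pos - p)
                    light = np.exp(-sigma_t * r) / max(r * r, 1e-6)
                    radiance += transmittance * sigma_s * light * dt
            return radiance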

  18. Elastic scattering of electrons from singly ionized argon

    International Nuclear Information System (INIS)

    Griffin, D.C.; Pindzola, M.S.

    1996-01-01

    Recently, Greenwood et al. [Phys. Rev. Lett. 75, 1062 (1995)] reported measurements of large-angle elastic scattering of electrons from singly ionized argon at an energy of 3.3 eV. They compared their results for the differential cross section with cross sections determined using phase shifts obtained from two different scattering potentials and found large discrepancies between theory and experiment at large angles. They state that these differences may be due to the effects of polarization of the target, which are not included in their calculations, as well as inaccurate representations of electron exchange in the local scattering potentials that are employed to determine the phase shifts. In order to test these proposed explanations of the discrepancies, we have carried out calculations of elastic scattering from Ar+ using the R-matrix method. We compare both a single-state calculation, which does not include polarization, and a 17-state calculation, in which the effects of dipole polarizability are included through the use of polarization pseudostates within the close-coupling expansion, to each other and with the measurements. We find some differences between the two calculations at intermediate scattering angles, but very close agreement at angles above 100°. Although the calculated cross sections agree with experiment between 120° and 135°, large discrepancies persist at angles above 135°. We conclude that the differences between the measurements and theory cannot be explained on the basis of an inaccurate representation of electron exchange or polarization of the target. copyright 1996 The American Physical Society

  19. Extreme Scale FMM-Accelerated Boundary Integral Equation Solver for Wave Scattering

    KAUST Repository

    AbdulJabbar, Mustafa Abdulmajeed; Al Farhan, Mohammed; Al-Harthi, Noha A.; Chen, Rui; Yokota, Rio; Bagci, Hakan; Keyes, David E.

    2018-01-01

    We present an extreme-scale FMM-accelerated boundary integral equation solver for wave scattering, which uses FMM as a matrix-vector multiplication inside the GMRES iterative method. Our FMM Helmholtz kernels treat nontrivial singular and near-field integration points. We implement highly optimized kernels for both shared and distributed memory

  20. A framework for dense triangular matrix kernels on various manycore architectures

    KAUST Repository

    Charara, Ali

    2017-06-06

    We present a new high-performance framework for dense triangular Basic Linear Algebra Subroutines (BLAS) kernels, i.e., triangular matrix-matrix multiplication (TRMM) and triangular solve (TRSM), on various manycore architectures. This is an extension of a previous work on a single GPU by the same authors, presented at the EuroPar'16 conference, in which we demonstrated the effectiveness of recursive formulations in enhancing the performance of these kernels. In this paper, the performance of triangular BLAS kernels on a single GPU is further enhanced by implementing customized in-place CUDA kernels for TRMM and TRSM, which are called at the bottom of the recursion. In addition, a multi-GPU implementation of TRMM and TRSM is proposed and we show an almost linear performance scaling, as the number of GPUs increases. Finally, the algorithmic recursive formulation of these triangular BLAS kernels is in fact oblivious to the targeted hardware architecture. We, therefore, port these recursive kernels to homogeneous x86 hardware architectures by relying on the vendor optimized BLAS implementations. Results reported on various hardware architectures highlight a significant performance improvement against state-of-the-art implementations. These new kernels are freely available in the KAUST BLAS (KBLAS) open-source library at https://github.com/ecrc/kblas.

  1. A framework for dense triangular matrix kernels on various manycore architectures

    KAUST Repository

    Charara, Ali; Keyes, David E.; Ltaief, Hatem

    2017-01-01

    We present a new high-performance framework for dense triangular Basic Linear Algebra Subroutines (BLAS) kernels, i.e., triangular matrix-matrix multiplication (TRMM) and triangular solve (TRSM), on various manycore architectures. This is an extension of a previous work on a single GPU by the same authors, presented at the EuroPar'16 conference, in which we demonstrated the effectiveness of recursive formulations in enhancing the performance of these kernels. In this paper, the performance of triangular BLAS kernels on a single GPU is further enhanced by implementing customized in-place CUDA kernels for TRMM and TRSM, which are called at the bottom of the recursion. In addition, a multi-GPU implementation of TRMM and TRSM is proposed and we show an almost linear performance scaling, as the number of GPUs increases. Finally, the algorithmic recursive formulation of these triangular BLAS kernels is in fact oblivious to the targeted hardware architecture. We, therefore, port these recursive kernels to homogeneous x86 hardware architectures by relying on the vendor optimized BLAS implementations. Results reported on various hardware architectures highlight a significant performance improvement against state-of-the-art implementations. These new kernels are freely available in the KAUST BLAS (KBLAS) open-source library at https://github.com/ecrc/kblas.
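
    The recursive formulation that both records build on can be summarized in a short sketch: the triangular multiply is split so that most of the work becomes large general matrix-matrix products, with a small triangular base case at the bottom. This NumPy version only illustrates the recursion; the in-place CUDA kernels, data layout, and tuning of KBLAS are of course not captured.

        import numpy as np

        def trmm_lower(A, B, cutoff=64):
            # B := A @ B with A lower triangular, done recursively:
            # split A as [[A11, 0], [A21, A22]] and update the halves of B so
            # that the bulk of the flops lands in one large GEMM per level.
            n = A.shape[0]
            if n <= cutoff:
                B[:] = np.tril(A) @ B          # small triangular base case
                return B
            k = n // 2
            trmm_lower(A[k:, k:], B[k:], cutoff)   # B2 := A22 @ B2
            B[k:] += A[k:, :k] @ B[:k]             # B2 += A21 @ B1  (GEMM)
            trmm_lower(A[:k, :k], B[:k], cutoff)   # B1 := A11 @ B1
            return B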

  2. Application of Van Hove theory to fast neutron inelastic scattering

    International Nuclear Information System (INIS)

    Stanicicj, V.

    1974-11-01

    The Van Hove general theory of the double differential scattering cross section has been used to derive particular expressions for the inelastic fast-neutron scattering kernel and scattering cross section. Since the energies of the incoming neutrons considered are less than 10 MeV, the Fermi gas model of nucleons can be used. In this case it was easy to derive an analytical expression for the time-dependent correlation function of the nucleus. Further, by using an impulse approximation and a short-collision-time approach, it was possible to derive analytical expressions for the scattering kernel and scattering cross section for fast-neutron inelastic scattering. The obtained expressions have been used for the Fe nucleus, and a surprisingly good agreement with experiment has been shown. The main advantage of this theory lies in its simplicity for practical calculations and for theoretical investigations of nuclear processes

  3. An Extreme Learning Machine Based on the Mixed Kernel Function of Triangular Kernel and Generalized Hermite Dirichlet Kernel

    Directory of Open Access Journals (Sweden)

    Senyue Zhang

    2016-01-01

    Because the kernel function of an extreme learning machine (ELM) is strongly correlated with its performance, a novel extreme learning machine based on a generalized triangle Hermitian kernel function was proposed in this paper. First, the generalized triangle Hermitian kernel function was constructed using the product of a triangular kernel and a generalized Hermite Dirichlet kernel, and the proposed kernel function was proved to be a valid kernel function for the extreme learning machine. Then, the learning methodology of the extreme learning machine based on the proposed kernel function was presented. The biggest advantage of the proposed kernel is that its kernel parameter values are chosen only from the natural numbers, which can greatly shorten the computational time of parameter optimization and retain more of the structural information of the sample data. Experiments were performed on a number of binary classification, multiclass classification, and regression datasets from the UCI benchmark repository. The experimental results demonstrated that the robustness and generalization performance of the proposed method outperform those of other extreme learning machines with different kernels. Furthermore, the learning speed of the proposed method is faster than that of support vector machine (SVM) methods.
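
    For orientation, the kernel-ELM training that such a method builds on reduces to a single regularized linear solve on the kernel matrix. The sketch below uses a placeholder RBF kernel, since the closed forms of the triangular and generalized Hermite Dirichlet kernels are not reproduced here; any valid kernel with integer parameters could be dropped in instead.

        import numpy as np

        def rbf_kernel(X, Z, gamma=0.1):
            # Placeholder kernel; the paper's triangular x generalized Hermite
            # Dirichlet product kernel would be substituted here.
            d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=2)
            return np.exp(-gamma * d2)

        def kelm_fit(X, T, C=100.0, kernel=rbf_kernel):
            # Kernel extreme learning machine: the output weights have the
            # closed form beta = (Omega + I/C)^(-1) T, where Omega is the
            # kernel matrix on the training data, so training is one solve.
            omega = kernel(X, X)
            beta = np.linalg.solve(omega + np.eye(len(X)) / C, T)
            return lambda X_new: kernel(X_new, X) @ beta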

  4. Determining Complex Structures using Docking Method with Single Particle Scattering Data

    Directory of Open Access Journals (Sweden)

    Haiguang Liu

    2017-04-01

    Protein complexes are critical for many molecular functions. Due to the intrinsic flexibility and dynamics of complexes, their structures are more difficult to determine using conventional experimental methods than those of individual subunits. One of the major challenges is the crystallization of protein complexes. Using X-ray free electron lasers (XFELs), it is possible to collect scattering signals from non-crystalline protein complexes, but data interpretation is more difficult because of unknown orientations. Here, we propose a hybrid approach to determine protein complex structures by combining XFEL single particle scattering data with computational docking methods. Using simulated data, we demonstrate that a small set of single particle scattering data collected at random orientations can be used to distinguish the native complex structure from decoys generated using docking algorithms. The results also indicate that a small set of single particle scattering data is superior to a spherically averaged intensity profile in distinguishing complex structures. Given that XFEL experimental data are difficult to acquire and of low abundance, this hybrid approach should find wide application in data interpretation.

  5. Multi-parameter Analysis and Inversion for Anisotropic Media Using the Scattering Integral Method

    KAUST Repository

    Djebbi, Ramzi

    2017-10-24

    The main goal in seismic exploration is to identify locations of hydrocarbon reservoirs and give insights on where to drill new wells. Therefore, estimating an Earth model that represents the right physics of the Earth's subsurface is crucial in identifying these targets. Recent seismic data, with long offsets and wide azimuth features, are more sensitive to anisotropy. Accordingly, multiple anisotropic parameters need to be extracted from the data recorded on the surface to properly describe the model. I study the prospect of applying a scattering integral approach for multi-parameter inversion for a transversely isotropic model with a vertical axis of symmetry. I mainly analyze the sensitivity kernels to understand the sensitivity of seismic data to anisotropy parameters. Then, I use a frequency-domain scattering integral approach to invert for the optimal parameterization. The scattering integral approach is based on the explicit computation of the sensitivity kernels. I present a new method to compute the traveltime sensitivity kernels for wave equation tomography using the unwrapped phase. I show that the new kernels are a better alternative to conventional cross-correlation/Rytov kernels. I also derive and analyze the sensitivity kernels for a transversely isotropic model with a vertical axis of symmetry. The kernels' structure, for various opening/scattering angles, highlights the trade-off regions between the parameters. For surface-recorded data, I show that the normal move-out velocity vn, η and δ parameterization is suitable for a simultaneous inversion of diving waves and reflections. Moreover, when seismic data are inverted hierarchically, the horizontal velocity vh, η and ϵ is the parameterization with the least trade-off. In the frequency domain, the hierarchical inversion approach is naturally implemented using frequency continuation, which makes the vh, η and ϵ parameterization attractive. I formulate the multi-parameter inversion using the

  6. Accurate palm vein recognition based on wavelet scattering and spectral regression kernel discriminant analysis

    Science.gov (United States)

    Elnasir, Selma; Shamsuddin, Siti Mariyam; Farokhi, Sajad

    2015-01-01

    Palm vein recognition (PVR) is a promising new biometric that has been applied successfully as a method of access control by many organizations, and it has even further potential in the field of forensics. The palm vein pattern has highly discriminative features that are difficult to forge because of its subcutaneous position in the palm. Despite considerable progress and a few practical issues, providing accurate palm vein readings has remained an unsolved issue in biometrics. We propose a robust and more accurate PVR method based on the combination of wavelet scattering (WS) with spectral regression kernel discriminant analysis (SRKDA). As the dimension of the WS-generated features is quite large, SRKDA is required to reduce the extracted features to enhance the discrimination. The results based on two public databases, the PolyU Hyperspectral Palmprint database and the PolyU Multispectral Palmprint database, show the high performance of the proposed scheme in comparison with state-of-the-art methods. The proposed approach scored a 99.44% identification rate and a 99.90% verification rate [equal error rate (EER) = 0.1%] for the hyperspectral database and a 99.97% identification rate and a 99.98% verification rate (EER = 0.019%) for the multispectral database.

  7. Combined Kernel-Based BDT-SMO Classification of Hyperspectral Fused Images

    Directory of Open Access Journals (Sweden)

    Fenghua Huang

    2014-01-01

    To solve the poor generalization and flexibility problems that single-kernel SVM classifiers have when classifying combined spectral and spatial features, this paper proposes a solution to improve the classification accuracy and efficiency of hyperspectral fused images: (1) different radial basis kernel functions (RBFs) are employed for spectral and textural features, and a new combined radial basis kernel function (CRBF) is proposed by combining them in a weighted manner; (2) the binary decision tree-based multiclass SMO (BDT-SMO) is used in the classification of hyperspectral fused images; (3) experiments are carried out in which the single radial basis function (SRBF)-based BDT-SMO classifier and the CRBF-based BDT-SMO classifier are used, respectively, to classify the land usage of hyperspectral fused images, and genetic algorithms (GA) are used to optimize the kernel parameters of the classifiers. The results show that, compared with SRBF, CRBF-based BDT-SMO classifiers display greater classification accuracy and efficiency.
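
    The weighted kernel combination at the heart of the CRBF idea can be sketched as follows; the weight, the two gamma values, and the use of scikit-learn's precomputed-kernel SVC in place of a BDT-SMO implementation are illustrative assumptions.

        import numpy as np
        from sklearn.svm import SVC

        def rbf(X, Z, gamma):
            d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=2)
            return np.exp(-gamma * d2)

        def combined_kernel(spec_a, tex_a, spec_b, tex_b, w=0.6, g_spec=0.5, g_tex=2.0):
            # Weighted sum of one RBF on spectral features and another on
            # textural features; the sum of two valid kernels with nonnegative
            # weights is again a valid kernel.
            return w * rbf(spec_a, spec_b, g_spec) + (1.0 - w) * rbf(tex_a, tex_b, g_tex)

        # Training with a precomputed Gram matrix (hypothetical feature arrays):
        # K_train = combined_kernel(spec_tr, tex_tr, spec_tr, tex_tr)
        # clf = SVC(kernel="precomputed").fit(K_train, labels)
        # K_test = combined_kernel(spec_te, tex_te, spec_tr, tex_tr)
        # predictions = clf.predict(K_test)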

  8. A Novel Extreme Learning Machine Classification Model for e-Nose Application Based on the Multiple Kernel Approach.

    Science.gov (United States)

    Jian, Yulin; Huang, Daoyu; Yan, Jia; Lu, Kun; Huang, Ying; Wen, Tailai; Zeng, Tanyue; Zhong, Shijie; Xie, Qilong

    2017-06-19

    A novel classification model, named the quantum-behaved particle swarm optimization (QPSO)-based weighted multiple kernel extreme learning machine (QWMK-ELM), is proposed in this paper. Experimental validation is carried out with two different electronic nose (e-nose) datasets. Unlike existing multiple kernel extreme learning machine (MK-ELM) algorithms, the combination coefficients of base kernels are regarded as external parameters of single-hidden layer feedforward neural networks (SLFNs). The combination coefficients of base kernels, the model parameters of each base kernel, and the regularization parameter are optimized by QPSO simultaneously before implementing the kernel extreme learning machine (KELM) with the composite kernel function. Four types of common single kernel functions (Gaussian kernel, polynomial kernel, sigmoid kernel, and wavelet kernel) are utilized to constitute different composite kernel functions. Moreover, the method is also compared with other existing classification methods: extreme learning machine (ELM), kernel extreme learning machine (KELM), k-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), radial basis function neural network (RBFNN), and probabilistic neural network (PNN). The results have demonstrated that the proposed QWMK-ELM outperforms the aforementioned methods, not only in precision, but also in efficiency for gas classification.

  9. Microwave single-scattering properties of randomly oriented soft-ice hydrometeors

    Directory of Open Access Journals (Sweden)

    D. Casella

    2008-11-01

    Large ice hydrometeors are usually present in intense convective clouds and may significantly affect the upwelling radiances that are measured by satellite-borne microwave radiometers, especially at millimeter-wavelength frequencies. Thus, interpretation of these measurements (e.g., for precipitation retrieval) requires knowledge of the single-scattering properties of ice particles. On the other hand, the shape and internal structure of these particles (especially the larger ones) are very complex and variable, and therefore it is necessary to resort to simplifying assumptions in order to compute their single-scattering parameters.

    In this study, we use the discrete dipole approximation (DDA) to compute the absorption and scattering efficiencies and the asymmetry factor of two kinds of quasi-spherical and non-homogeneous soft-ice particles in the frequency range 50–183 GHz. Particles of the first kind are modeled as quasi-spherical ice particles having randomly distributed spherical air inclusions. Particles of the second kind are modeled as random aggregates of ice spheres having random radii. In both cases, particle densities and dimensions are consistent with the snow hydrometeor category that is utilized by the University of Wisconsin – Non-hydrostatic Modeling System (UW-NMS) cloud-mesoscale model. Then, we compare our single-scattering results for randomly oriented soft-ice hydrometeors with corresponding ones that make use of (a) effective-medium equivalent spheres, (b) solid-ice equivalent spheres, and (c) randomly oriented aggregates of ice cylinders. Finally, we extend to our particles the scattering formulas that have been developed by other authors for randomly oriented aggregates of ice cylinders.

  10. Validation of Born Traveltime Kernels

    Science.gov (United States)

    Baig, A. M.; Dahlen, F. A.; Hung, S.

    2001-12-01

    Most inversions for Earth structure using seismic traveltimes rely on linear ray theory to translate observed traveltime anomalies into seismic velocity anomalies distributed throughout the mantle. However, ray theory is not an appropriate tool to use when velocity anomalies have scale lengths less than the width of the Fresnel zone. In the presence of these structures, we need to turn to a scattering theory in order to adequately describe all of the features observed in the waveform. By coupling the Born approximation to ray theory, the first-order dependence of the cross-correlated traveltimes on heterogeneity (described by the Fréchet derivative or, more colourfully, the banana-doughnut kernel) may be determined. To determine for what range of parameters these banana-doughnut kernels outperform linear ray theory, we generate several random media specified by their statistical properties, namely the RMS slowness perturbation and the scale length of the heterogeneity. Acoustic waves are numerically generated from a point source using a 3-D pseudo-spectral wave propagation code. These waves are then recorded at a variety of propagation distances from the source, introducing a third parameter to the problem: the number of wavelengths traversed by the wave. When all of the heterogeneity has scale lengths larger than the width of the Fresnel zone, ray theory does as good a job at predicting the cross-correlated traveltime as the banana-doughnut kernels do. Below this limit, wavefront healing becomes a significant effect and ray theory ceases to be effective even though the kernels remain relatively accurate provided the heterogeneity is weak. The study of wave propagation in random media is of more general interest, and we will also show how our measurements of the velocity shift and the variance of traveltime compare to various theoretical predictions in a given regime.

  11. Collinear limits beyond the leading order from the scattering equations

    Energy Technology Data Exchange (ETDEWEB)

    Nandan, Dhritiman; Plefka, Jan; Wormsbecher, Wadim [Institut für Physik and IRIS Adlershof, Humboldt-Universität zu Berlin,Zum Großen Windkanal 6, D-12489 Berlin (Germany)

    2017-02-08

    The structure of tree-level scattering amplitudes for collinear massless bosons is studied beyond their leading splitting function behavior. These near-collinear limits at sub-leading order are best studied using the Cachazo-He-Yuan (CHY) formulation of the S-matrix based on the scattering equations. We compute the collinear limits for gluons, gravitons and scalars. It is shown that the CHY integrand for an n-particle gluon scattering amplitude in the collinear limit at sub-leading order is expressed as a convolution of an (n−1)-particle gluon integrand and a collinear kernel integrand, which is universal. Our representation is shown to obey recently proposed amplitude relations in which the collinear gluons of same helicity are replaced by a single graviton. Finally, we extend our analysis to effective field theories and study the collinear limit of the non-linear sigma model, Einstein-Maxwell-Scalar and Yang-Mills-Scalar theory.

  12. A new kernel discriminant analysis framework for electronic nose recognition

    International Nuclear Information System (INIS)

    Zhang, Lei; Tian, Feng-Chun

    2014-01-01

    Highlights: • This paper proposes a new discriminant analysis framework for feature extraction and recognition. • The principle of the proposed NDA is derived mathematically. • The NDA framework is coupled with kernel PCA for classification. • The proposed KNDA is compared with state-of-the-art e-Nose recognition methods. • The proposed KNDA shows the best performance in e-Nose experiments. - Abstract: Electronic nose (e-Nose) technology based on metal oxide semiconductor gas sensor arrays is widely studied for the detection of gas components. This paper proposes a new discriminant analysis framework (NDA) for dimension reduction and e-Nose recognition. In the NDA, the between-class and within-class Laplacian scatter matrices are designed from sample to sample to characterize the between-class separability and the within-class compactness, by seeking a discriminant matrix that simultaneously maximizes the between-class Laplacian scatter and minimizes the within-class Laplacian scatter. In view of the linear separability in the high-dimensional kernel mapping space and the dimension reduction of principal component analysis (PCA), an effective kernel PCA plus NDA method (KNDA) is proposed for rapid detection of gas mixture components by an e-Nose. The NDA framework is derived in this paper, together with the specific implementation of the proposed KNDA method in the training and recognition process. The KNDA is examined on e-Nose datasets of six kinds of gas components and compared with state-of-the-art e-Nose classification methods. Experimental results demonstrate that the proposed KNDA method shows the best performance, with an average recognition rate and a total recognition rate of 94.14% and 95.06%, which makes it a promising approach to feature extraction and multi-class recognition in e-Nose applications.
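
    To make the scatter-matrix step concrete, the sketch below builds plain between-class and within-class scatter matrices and solves the generalized eigenproblem for the discriminant directions. The paper's sample-to-sample Laplacian weighting and the kernel-PCA preprocessing stage are deliberately omitted, so this is only the skeleton on which such a framework is built.

        import numpy as np
        from scipy.linalg import eigh

        def discriminant_directions(X, y, n_components=2):
            # Classic discriminant-analysis core: maximize between-class scatter
            # while minimizing within-class scatter via Sb w = lambda Sw w.
            classes = np.unique(y)
            mu = X.mean(axis=0)
            d = X.shape[1]
            Sb = np.zeros((d, d))
            Sw = np.zeros((d, d))
            for c in classes:
                Xc = X[y == c]
                mc = Xc.mean(axis=0)
                Sb += len(Xc) * np.outer(mc - mu, mc - mu)
                Sw += (Xc - mc).T @ (Xc - mc)
            vals, vecs = eigh(Sb, Sw + 1e-6 * np.eye(d))   # generalized eigenproblem
            order = np.argsort(vals)[::-1][:n_components]
            return vecs[:, order]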

  13. Scatter measurement and correction method for cone-beam CT based on single grating scan

    Science.gov (United States)

    Huang, Kuidong; Shi, Wenlong; Wang, Xinyu; Dong, Yin; Chang, Taoqi; Zhang, Hua; Zhang, Dinghua

    2017-06-01

    In cone-beam computed tomography (CBCT) systems based on flat-panel detector imaging, the presence of scatter significantly reduces the quality of slices. Based on the concept of collimation, this paper presents a scatter measurement and correction method based on a single grating scan. First, according to the characteristics of CBCT imaging, the scanning method using a single grating and the design requirements of the grating are analyzed and worked out. Second, by analyzing the composition of object projection images and object-and-grating projection images, a processing method for the scatter image at a single projection angle is proposed. In addition, to avoid an additional scan, this paper proposes an angle interpolation method for scatter images to reduce the scan cost. Finally, the experimental results show that the scatter images obtained by this method are accurate and reliable, and the effect of scatter correction is obvious. When the additional object-and-grating projection images are collected and interpolated at intervals of 30 deg, the scatter correction error of slices can still be controlled within 3%.
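
    The angle-interpolation step lends itself to a short sketch: scatter images measured at sparse angles (e.g., every 30 deg with the grating in place) are blended pixel-wise to estimate the scatter image at any projection angle before subtraction. The linear blend below is an assumed interpolation scheme standing in for whatever the authors actually use.

        import numpy as np

        def interpolate_scatter(scatter_imgs, measured_deg, target_deg):
            # Estimate the scatter image at target_deg by linearly blending the
            # two nearest measured scatter images, avoiding a second full scan.
            measured_deg = np.asarray(measured_deg, dtype=float)
            order = np.argsort(measured_deg)
            measured_deg = measured_deg[order]
            imgs = [np.asarray(scatter_imgs[i], dtype=float) for i in order]
            hi = min(np.searchsorted(measured_deg, target_deg), len(measured_deg) - 1)
            lo = max(hi - 1, 0)
            if hi == lo or measured_deg[hi] == measured_deg[lo]:
                return imgs[hi]
            t = (target_deg - measured_deg[lo]) / (measured_deg[hi] - measured_deg[lo])
            t = float(np.clip(t, 0.0, 1.0))
            return (1.0 - t) * imgs[lo] + t * imgs[hi]

        # usage: corrected = raw_projection - interpolate_scatter(scatter_imgs, [0, 30, 60], 17.5)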

  14. A full-angle Monte-Carlo scattering technique including cumulative and single-event Rutherford scattering in plasmas

    Science.gov (United States)

    Higginson, Drew P.

    2017-11-01

    We describe and justify a full-angle scattering (FAS) method to faithfully reproduce the accumulated differential angular Rutherford scattering probability distribution function (pdf) of particles in a plasma. The FAS method splits the scattering events into two regions. At small angles it is described by cumulative scattering events resulting, via the central limit theorem, in a Gaussian-like pdf; at larger angles it is described by single-event scatters and retains a pdf that follows the form of the Rutherford differential cross-section. The FAS method is verified using discrete Monte-Carlo scattering simulations run at small timesteps to include each individual scattering event. We identify the FAS regime of interest as that where the ratio of the temporal/spatial scale of interest to the slowing-down time/length is from 10^-3 to 0.3-0.7; the upper limit corresponds to a Coulomb logarithm of 20-2, respectively. Two test problems, high-velocity interpenetrating plasma flows and keV-temperature ion equilibration, are used to highlight systems where including FAS is important to capture the relevant physics.
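
    A hedged sketch of the two-region idea follows: below a split angle, the accumulated small-angle deflection over a timestep is drawn from a Gaussian (the central-limit regime), while with some probability a single large-angle event is drawn from a Rutherford-like 1/theta^3 tail. The branch probability and the tail form used here are illustrative free parameters, not the paper's calibrated values.

        import numpy as np

        def sample_deflection(theta_rms, theta_split, p_tail=0.05, rng=None):
            # Returns one sampled deflection angle (radians) for a timestep.
            rng = np.random.default_rng() if rng is None else rng
            if rng.random() > p_tail:
                # cumulative small-angle branch: Gaussian-like pdf
                return abs(rng.normal(0.0, theta_rms))
            # single-event branch: inverse-CDF sample of p(theta) ~ theta^-3
            # on the interval [theta_split, pi]
            u = rng.random()
            a, b = theta_split**-2, np.pi**-2
            return (a - u * (a - b)) ** -0.5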

  15. A Heterogeneous Multi-core Architecture with a Hardware Kernel for Control Systems

    DEFF Research Database (Denmark)

    Li, Gang; Guan, Wei; Sierszecki, Krzysztof

    2012-01-01

    Rapid industrialisation has resulted in a demand for improved embedded control systems with features such as predictability, high processing performance and low power consumption. It is becoming difficult for a software kernel implementation on a single processor to satisfy those constraints.... This paper presents a multi-core architecture incorporating a hardware kernel on FPGAs, intended for high performance applications in the control engineering domain. First, the hardware kernel is investigated on the basis of a component-based real-time kernel HARTEX (Hard Real-Time Executive for Control Systems...

  16. Multiple kernel boosting framework based on information measure for classification

    International Nuclear Information System (INIS)

    Qi, Chengming; Wang, Yuping; Tian, Wenjie; Wang, Qun

    2016-01-01

    The performance of kernel-based methods, such as the support vector machine (SVM), is greatly affected by the choice of kernel function. Multiple kernel learning (MKL) is a promising family of machine learning algorithms and has attracted much attention in recent years. MKL combines multiple sub-kernels to seek better results than single kernel learning. In order to improve the efficiency of SVM and MKL, in this paper the Kullback–Leibler kernel function is derived to develop the SVM. The proposed method employs an improved ensemble learning framework, named KLMKB, which applies Adaboost to learning multiple kernel-based classifiers. In the experiment on hyperspectral remote sensing image classification, we employ features selected through the Optimum Index Factor (OIF) to classify the satellite image. We extensively examine the performance of our approach in comparison with some relevant and state-of-the-art algorithms on a number of benchmark classification data sets and a hyperspectral remote sensing image data set. Experimental results show that our method has stable behavior and noticeable accuracy for different data sets.

  17. Influence of wheat kernel physical properties on the pulverizing process.

    Science.gov (United States)

    Dziki, Dariusz; Cacak-Pietrzak, Grażyna; Miś, Antoni; Jończyk, Krzysztof; Gawlik-Dziki, Urszula

    2014-10-01

    The physical properties of wheat kernels were determined and related to pulverizing performance by correlation analysis. Nineteen samples of wheat cultivars with a similar level of protein content (11.2-12.8% w.b.), obtained from an organic farming system, were used for the analysis. The kernels (moisture content 10% w.b.) were pulverized using a laboratory hammer mill equipped with a round-hole 1.0 mm screen. The specific grinding energy ranged from 120 kJ kg(-1) to 159 kJ kg(-1). On the basis of the data obtained, many significant correlations were found between kernel physical properties and the pulverizing process of the wheat kernel; in particular, the wheat kernel hardness index (obtained on the basis of the Single Kernel Characterization System) and vitreousness correlated significantly and positively with the grinding energy indices and the mass fraction of coarse particles (> 0.5 mm). Among the kernel mechanical properties determined on the basis of the uniaxial compression test, only the rupture force was correlated with the impact grinding results. The results also showed positive and significant relationships between kernel ash content and grinding energy requirements. On the basis of the wheat physical properties, a multiple linear regression was proposed for predicting the average particle size of the pulverized kernel.

  18. Single and multiple electromagnetic scattering by dielectric obstacles from a resonance perspective

    International Nuclear Information System (INIS)

    Riley, D.J.

    1987-03-01

    A new application of the singularity expansion method (SEM) is explored. This application combines the classical theory of wave propagation through a multiple-scattering environment and the SEM. Because the SEM is generally considered to be a theory for describing surface currents on conducting scatterers, extensions are made which permit, under certain conditions, a singularity expansion representation for the electromagnetic field scattered by a dielectric scatterer. Application of this expansion is then made to the multiple-scattering case using both single and multiple interactions. A resonance scattering tensor form is used for the SEM description, which leads to an associated tensor form for the solution to the multiple-scattering problem with each SEM pole effect appearing explicitly. The coherent field is determined for both spatial and SEM parameter random variations. A numerical example for the case of an ensemble of dielectric spheres which possess frequency-dependent loss is also made. Accurate resonance expansions for the single-scattering problem are derived, and resonance trajectories based on the Debye relaxation model for the refractive index are introduced. Application of these resonance expansions is then made to the multiple-scattering results for a slab containing a distribution of spheres with varying radii. Conditions are discussed which describe when the hybrid theory is appropriate. 53 refs., 21 figs., 9 tabs

  19. Batched Triangular Dense Linear Algebra Kernels for Very Small Matrix Sizes on GPUs

    KAUST Repository

    Charara, Ali; Keyes, David E.; Ltaief, Hatem

    2017-01-01

    Batched dense linear algebra kernels are becoming ubiquitous in scientific applications, ranging from tensor contractions in deep learning to data compression in hierarchical low-rank matrix approximation. Within a single API call, these kernels are capable of simultaneously launching up to thousands of similar matrix computations, removing the expensive overhead of multiple API calls while increasing the occupancy of the underlying hardware. A challenge is that for the existing hardware landscape (x86, GPUs, etc.), only a subset of the required batched operations is implemented by the vendors, with limited support for very small problem sizes. We describe the design and performance of a new class of batched triangular dense linear algebra kernels on very small data sizes using single and multiple GPUs. By deploying two-sided recursive formulations, stressing the register usage, maintaining data locality, reducing thread synchronization and fusing successive kernel calls, the new batched kernels outperform existing state-of-the-art implementations.

  20. Batched Triangular Dense Linear Algebra Kernels for Very Small Matrix Sizes on GPUs

    KAUST Repository

    Charara, Ali

    2017-03-06

    Batched dense linear algebra kernels are becoming ubiquitous in scientific applications, ranging from tensor contractions in deep learning to data compression in hierarchical low-rank matrix approximation. Within a single API call, these kernels are capable of simultaneously launching up to thousands of similar matrix computations, removing the expensive overhead of multiple API calls while increasing the occupancy of the underlying hardware. A challenge is that for the existing hardware landscape (x86, GPUs, etc.), only a subset of the required batched operations is implemented by the vendors, with limited support for very small problem sizes. We describe the design and performance of a new class of batched triangular dense linear algebra kernels on very small data sizes using single and multiple GPUs. By deploying two-sided recursive formulations, stressing the register usage, maintaining data locality, reducing thread synchronization and fusing successive kernel calls, the new batched kernels outperform existing state-of-the-art implementations.

  1. Single-Kernel FT-NIR Spectroscopy for Detecting Supersweet Corn (Zea mays L. Saccharata Sturt) Seed Viability with Multivariate Data Analysis

    Directory of Open Access Journals (Sweden)

    Guangjun Qiu

    2018-03-01

    The viability and vigor of crop seeds are crucial indicators for evaluating seed quality, and high-quality seeds can increase agricultural yield. The conventional methods for assessing seed viability are time consuming, destructive, and labor intensive. Therefore, a rapid and nondestructive technique for testing seed viability has great potential benefits for agriculture. In this study, single-kernel Fourier transform near-infrared (FT-NIR) spectroscopy with a wavelength range of 1000–2500 nm was used to distinguish viable and nonviable supersweet corn seeds. Various preprocessing algorithms coupled with partial least squares discriminant analysis (PLS-DA) were implemented to test the performance of classification models. The FT-NIR spectroscopy technique successfully differentiated viable seeds from seeds that were nonviable due to overheating or artificial aging. Correct classification rates for both heat-damaged kernels and artificially aged kernels reached 98.0%. The comprehensive model could also attain an accuracy of 98.7% when combining heat-damaged samples and artificially aged samples into one category. Overall, the FT-NIR technique with multivariate data analysis methods showed great potential capacity in rapidly and nondestructively detecting seed viability in supersweet corn.

  2. Single-Kernel FT-NIR Spectroscopy for Detecting Supersweet Corn (Zea mays L. Saccharata Sturt) Seed Viability with Multivariate Data Analysis.

    Science.gov (United States)

    Qiu, Guangjun; Lü, Enli; Lu, Huazhong; Xu, Sai; Zeng, Fanguo; Shui, Qin

    2018-03-28

    The viability and vigor of crop seeds are crucial indicators for evaluating seed quality, and high-quality seeds can increase agricultural yield. The conventional methods for assessing seed viability are time consuming, destructive, and labor intensive. Therefore, a rapid and nondestructive technique for testing seed viability has great potential benefits for agriculture. In this study, single-kernel Fourier transform near-infrared (FT-NIR) spectroscopy with a wavelength range of 1000-2500 nm was used to distinguish viable and nonviable supersweet corn seeds. Various preprocessing algorithms coupled with partial least squares discriminant analysis (PLS-DA) were implemented to test the performance of classification models. The FT-NIR spectroscopy technique successfully differentiated viable seeds from seeds that were nonviable due to overheating or artificial aging. Correct classification rates for both heat-damaged kernels and artificially aged kernels reached 98.0%. The comprehensive model could also attain an accuracy of 98.7% when combining heat-damaged samples and artificially aged samples into one category. Overall, the FT-NIR technique with multivariate data analysis methods showed great potential capacity in rapidly and nondestructively detecting seed viability in supersweet corn.
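
    A minimal PLS-DA skeleton of the kind described in both records, assuming scikit-learn's PLSRegression, a 0/1 viability encoding, and a 0.5 decision threshold; the spectral preprocessing algorithms compared in the paper are omitted.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        def plsda_fit(spectra, labels, n_components=10):
            # Regress the binary viability label on the NIR spectra with PLS.
            model = PLSRegression(n_components=n_components)
            model.fit(spectra, labels.astype(float))
            return model

        def plsda_predict(model, spectra):
            # Threshold the continuous PLS prediction to recover class labels.
            return (model.predict(spectra).ravel() > 0.5).astype(int)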

  3. Data-variant kernel analysis

    CERN Document Server

    Motai, Yuichi

    2015-01-01

    Describes and discusses the variants of kernel analysis methods for data types that have been intensely studied in recent years. This book covers kernel analysis topics ranging from the fundamental theory of kernel functions to its applications. The book surveys the current status, popular trends, and developments in kernel analysis studies. The author discusses multiple kernel learning algorithms and how to choose the appropriate kernels during the learning phase. Data-Variant Kernel Analysis is a new pattern analysis framework for different types of data configurations. The chapters include

  4. Racing to learn: statistical inference and learning in a single spiking neuron with adaptive kernels.

    Science.gov (United States)

    Afshar, Saeed; George, Libin; Tapson, Jonathan; van Schaik, André; Hamilton, Tara J

    2014-01-01

    This paper describes the Synapto-dendritic Kernel Adapting Neuron (SKAN), a simple spiking neuron model that performs statistical inference and unsupervised learning of spatiotemporal spike patterns. SKAN is the first proposed neuron model to investigate the effects of dynamic synapto-dendritic kernels and demonstrate their computational power even at the single neuron scale. The rule-set defining the neuron is simple: there are no complex mathematical operations such as normalization, exponentiation or even multiplication. The functionalities of SKAN emerge from the real-time interaction of simple additive and binary processes. Like a biological neuron, SKAN is robust to signal and parameter noise, and can utilize both in its operations. At the network scale neurons are locked in a race with each other with the fastest neuron to spike effectively "hiding" its learnt pattern from its neighbors. The robustness to noise, high speed, and simple building blocks not only make SKAN an interesting neuron model in computational neuroscience, but also make it ideal for implementation in digital and analog neuromorphic systems which is demonstrated through an implementation in a Field Programmable Gate Array (FPGA). Matlab, Python, and Verilog implementations of SKAN are available at: http://www.uws.edu.au/bioelectronics_neuroscience/bens/reproducible_research.

  5. Optimal numerical methods for determining the orientation averages of single-scattering properties of atmospheric ice crystals

    International Nuclear Information System (INIS)

    Um, Junshik; McFarquhar, Greg M.

    2013-01-01

    The optimal orientation averaging scheme (regular lattice grid scheme or quasi Monte Carlo (QMC) method), the minimum number of orientations, and the corresponding computing time required to calculate the average single-scattering properties (i.e., asymmetry parameter (g), single-scattering albedo (ω o ), extinction efficiency (Q ext ), scattering efficiency (Q sca ), absorption efficiency (Q abs ), and scattering phase function at scattering angles of 90° (P 11 (90°)), and 180° (P 11 (180°))) within a predefined accuracy level (i.e., 1.0%) were determined for four different nonspherical atmospheric ice crystal models (Gaussian random sphere, droxtal, budding Bucky ball, and column) with maximum dimension D=10μm using the Amsterdam discrete dipole approximation at λ=0.55, 3.78, and 11.0μm. The QMC required fewer orientations and less computing time than the lattice grid. The calculations of P 11 (90°) and P 11 (180°) required more orientations than the calculations of integrated scattering properties (i.e., g, ω o , Q ext , Q sca , and Q abs ) regardless of the orientation average scheme. The fewest orientations were required for calculating g and ω o . The minimum number of orientations and the corresponding computing time for single-scattering calculations decreased with an increase of wavelength, whereas they increased with the surface-area ratio that defines particle nonsphericity. -- Highlights: •The number of orientations required to calculate the average single-scattering properties of nonspherical ice crystals is investigated. •Single-scattering properties of ice crystals are calculated using ADDA. •Quasi Monte Carlo method is more efficient than lattice grid method for scattering calculations. •Single-scattering properties of ice crystals depend on a newly defined parameter called surface area ratio
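
    To illustrate the quasi Monte Carlo option that the study finds more efficient, the sketch below draws low-discrepancy Euler angles with SciPy's Halton generator (SciPy >= 1.7 assumed) and averages a user-supplied single-scattering property over them; property_fn is a hypothetical stand-in for one discrete-dipole run at a fixed orientation.

        import numpy as np
        from scipy.stats import qmc

        def qmc_orientations(n):
            # Low-discrepancy samples of (alpha, cos(beta), gamma) cover
            # orientation space more evenly than a regular lattice grid.
            u = qmc.Halton(d=3, scramble=False).random(n)
            alpha = 2.0 * np.pi * u[:, 0]
            beta = np.arccos(1.0 - 2.0 * u[:, 1])
            gamma = 2.0 * np.pi * u[:, 2]
            return np.column_stack([alpha, beta, gamma])

        def orientation_average(property_fn, orientations):
            # Average a single-scattering property (e.g., g or Qext) over the set.
            return np.mean([property_fn(a, b, c) for a, b, c in orientations], axis=0)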

  6. Temporary electron localization and scattering in disordered single strands of DNA

    International Nuclear Information System (INIS)

    Caron, Laurent; Sanche, Leon

    2006-01-01

    We present a theoretical study of the effect of structural and base sequence disorders on the transport properties of nonthermal electron scattering within and from single strands of DNA. The calculations are based on our recently developed formalism to treat multiple elastic scattering from simplified pseudomolecular DNA subunits. Structural disorder is shown to increase both the elastic scattering cross section and the attachment probability on the bases at low energy. Sequence disorder, however, has no significant effect

  7. Plane-dependent ML scatter scaling: 3D extension of the 2D simulated single scatter (SSS) estimate

    Science.gov (United States)

    Rezaei, Ahmadreza; Salvo, Koen; Vahle, Thomas; Panin, Vladimir; Casey, Michael; Boada, Fernando; Defrise, Michel; Nuyts, Johan

    2017-08-01

    Scatter correction is typically done using a simulation of the single scatter, which is then scaled to account for multiple scatters and other possible model mismatches. This scaling factor is determined by fitting the simulated scatter sinogram to the measured sinogram, using only counts measured along LORs that do not intersect the patient body, i.e. ‘scatter-tails’. Extending previous work, we propose to scale the scatter with a plane dependent factor, which is determined as an additional unknown in the maximum likelihood (ML) reconstructions, using counts in the entire sinogram rather than only the ‘scatter-tails’. The ML-scaled scatter estimates are validated using a Monte-Carlo simulation of a NEMA-like phantom, a phantom scan with typical contrast ratios of a 68Ga-PSMA scan, and 23 whole-body 18F-FDG patient scans. On average, we observe a 12.2% change in the total amount of tracer activity of the MLEM reconstructions of our whole-body patient database when the proposed ML scatter scales are used. Furthermore, reconstructions using the ML-scaled scatter estimates are found to eliminate the typical ‘halo’ artifacts that are often observed in the vicinity of high focal uptake regions.

  8. Formalism for neutron cross section covariances in the resonance region using kernel approximation

    Energy Technology Data Exchange (ETDEWEB)

    Oblozinsky, P.; Cho,Y-S.; Matoon,C.M.; Mughabghab,S.F.

    2010-04-09

    We describe analytical formalism for estimating neutron radiative capture and elastic scattering cross section covariances in the resolved resonance region. We use capture and scattering kernels as the starting point and show how to get average cross sections in broader energy bins, derive analytical expressions for cross section sensitivities, and deduce cross section covariances from the resonance parameter uncertainties in the recently published Atlas of Neutron Resonances. The formalism elucidates the role of resonance parameter correlations which become important if several strong resonances are located in one energy group. Importance of potential scattering uncertainty as well as correlation between potential scattering and resonance scattering is also examined. Practical application of the formalism is illustrated on 55Mn(n,γ) and 55Mn(n,el).

  9. Quasi-Dual-Packed-Kerneled Au49 (2,4-DMBT)27 Nanoclusters and the Influence of Kernel Packing on the Electrochemical Gap.

    Science.gov (United States)

    Liao, Lingwen; Zhuang, Shengli; Wang, Pu; Xu, Yanan; Yan, Nan; Dong, Hongwei; Wang, Chengming; Zhao, Yan; Xia, Nan; Li, Jin; Deng, Haiteng; Pei, Yong; Tian, Shi-Kai; Wu, Zhikun

    2017-10-02

    Although face-centered cubic (fcc), body-centered cubic (bcc), hexagonal close-packed (hcp), and other structured gold nanoclusters have been reported, it was unclear whether gold nanoclusters with mix-packed (fcc and non-fcc) kernels exist, and the correlation between kernel packing and the properties of gold nanoclusters is unknown. A Au 49 (2,4-DMBT) 27 nanocluster with a shell electron count of 22 has now been synthesized and structurally resolved by single-crystal X-ray crystallography, which revealed that Au 49 (2,4-DMBT) 27 contains a unique Au 34 kernel consisting of one quasi-fcc-structured Au 21 and one non-fcc-structured Au 13 unit (where 2,4-DMBTH=2,4-dimethylbenzenethiol). Further experiments revealed that the kernel packing greatly influences the electrochemical gap (EG) and the fcc structure has a larger EG than the investigated non-fcc structure. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  10. Anatomical image-guided fluorescence molecular tomography reconstruction using kernel method

    Science.gov (United States)

    Baikejiang, Reheman; Zhao, Yue; Fite, Brett Z.; Ferrara, Katherine W.; Li, Changqing

    2017-01-01

    Fluorescence molecular tomography (FMT) is an important in vivo imaging modality to visualize physiological and pathological processes in small animals. However, FMT reconstruction is ill-posed and ill-conditioned due to strong optical scattering in deep tissues, which results in poor spatial resolution. It is well known that FMT image quality can be improved substantially by applying the structural guidance in the FMT reconstruction. An approach to introducing anatomical information into the FMT reconstruction is presented using the kernel method. In contrast to conventional methods that incorporate anatomical information with a Laplacian-type regularization matrix, the proposed method introduces the anatomical guidance into the projection model of FMT. The primary advantage of the proposed method is that it does not require segmentation of targets in the anatomical images. Numerical simulations and phantom experiments have been performed to demonstrate the proposed approach’s feasibility. Numerical simulation results indicate that the proposed kernel method can separate two FMT targets with an edge-to-edge distance of 1 mm and is robust to false-positive guidance and inhomogeneity in the anatomical image. For the phantom experiments with two FMT targets, the kernel method has reconstructed both targets successfully, which further validates the proposed kernel method. PMID:28464120
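
    The kernel idea can be sketched as follows: each voxel's fluorescence is written as a weighted combination of voxels with similar anatomical-image features, x = K alpha, so the forward model becomes (A K) alpha = b and the anatomical guidance enters without any segmentation. The Gaussian similarity, the neighbor count, and the dense construction below are illustrative assumptions rather than the authors' implementation.

        import numpy as np

        def build_kernel_matrix(anat_features, n_neighbors=8, sigma=1.0):
            # anat_features: (n_voxels, n_features) array taken from the
            # anatomical image. Keep only each voxel's strongest neighbors,
            # then row-normalize so that x = K @ alpha is a local average.
            d2 = ((anat_features[:, None, :] - anat_features[None, :, :]) ** 2).sum(axis=-1)
            K = np.exp(-d2 / (2.0 * sigma**2))
            thresh = np.sort(K, axis=1)[:, -n_neighbors][:, None]
            K = np.where(K >= thresh, K, 0.0)
            return K / K.sum(axis=1, keepdims=True)

        # Reconstruction then solves (A @ K) alpha = b for alpha; the image is x = K @ alpha.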

  11. Genetic dissection of the maize kernel development process via conditional QTL mapping for three developing kernel-related traits in an immortalized F2 population.

    Science.gov (United States)

    Zhang, Zhanhui; Wu, Xiangyuan; Shi, Chaonan; Wang, Rongna; Li, Shengfei; Wang, Zhaohui; Liu, Zonghua; Xue, Yadong; Tang, Guiliang; Tang, Jihua

    2016-02-01

    Kernel development is an important dynamic trait that determines the final grain yield in maize. To dissect the genetic basis of maize kernel development process, a conditional quantitative trait locus (QTL) analysis was conducted using an immortalized F2 (IF2) population comprising 243 single crosses at two locations over 2 years. Volume (KV) and density (KD) of dried developing kernels, together with kernel weight (KW) at different developmental stages, were used to describe dynamic changes during kernel development. Phenotypic analysis revealed that final KW and KD were determined at DAP22 and KV at DAP29. Unconditional QTL mapping for KW, KV and KD uncovered 97 QTLs at different kernel development stages, of which qKW6b, qKW7a, qKW7b, qKW10b, qKW10c, qKV10a, qKV10b and qKV7 were identified under multiple kernel developmental stages and environments. Among the 26 QTLs detected by conditional QTL mapping, conqKW7a, conqKV7a, conqKV10a, conqKD2, conqKD7 and conqKD8a were conserved between the two mapping methodologies. Furthermore, most of these QTLs were consistent with QTLs and genes for kernel development/grain filling reported in previous studies. These QTLs probably contain major genes associated with the kernel development process, and can be used to improve grain yield and quality through marker-assisted selection.

  12. Generation of gamma-ray streaming kernels through cylindrical ducts via Monte Carlo method

    International Nuclear Information System (INIS)

    Kim, Dong Su

    1992-02-01

    Since radiation streaming through penetrations is often the critical consideration in protection against exposure of personnel in a nuclear facility, it has been of great concern in radiation shielding design and analysis. Several methods have been developed and applied to the analysis of radiation streaming in the past, such as the ray analysis method, the single scattering method, the albedo method, and the Monte Carlo method. However, they may only be used for order-of-magnitude calculations and where sufficient margin is available, except for the Monte Carlo method, which is accurate but requires a lot of computing time. This study developed a Monte Carlo method and constructed a data library of solutions, obtained with the Monte Carlo method, for radiation streaming through a straight cylindrical duct in concrete walls due to a broad, mono-directional, monoenergetic gamma-ray beam of unit intensity. The solution, named the plane streaming kernel, is the average dose rate at the duct outlet and was evaluated for 20 source energies from 0 to 10 MeV, 36 source incident angles from 0 to 70 degrees, 5 duct radii from 10 to 30 cm, and 16 wall thicknesses from 0 to 100 cm. It was demonstrated that the average dose rate due to an isotropic point source at arbitrary positions can be well approximated using the plane streaming kernels with acceptable error. Thus, the library of plane streaming kernels can be used for accurate and efficient analysis of radiation streaming through a straight cylindrical duct in concrete walls due to arbitrary distributions of gamma-ray sources.

  13. A Visual Approach to Investigating Shared and Global Memory Behavior of CUDA Kernels

    KAUST Repository

    Rosen, Paul

    2013-01-01

    We present an approach to investigate the memory behavior of a parallel kernel executing on thousands of threads simultaneously within the CUDA architecture. Our top-down approach allows for quickly identifying any significant differences between the execution of the many blocks and warps. As interesting warps are identified, we allow further investigation of memory behavior by visualizing the shared memory bank conflicts and global memory coalescence, first with an overview of a single warp with many operations and, subsequently, with a detailed view of a single warp and a single operation. We demonstrate the strength of our approach in the context of a parallel matrix transpose kernel and a parallel 1D Haar Wavelet transform kernel. © 2013 The Author(s) Computer Graphics Forum © 2013 The Eurographics Association and Blackwell Publishing Ltd.

  14. A Visual Approach to Investigating Shared and Global Memory Behavior of CUDA Kernels

    KAUST Repository

    Rosen, Paul

    2013-06-01

    We present an approach to investigate the memory behavior of a parallel kernel executing on thousands of threads simultaneously within the CUDA architecture. Our top-down approach allows for quickly identifying any significant differences between the execution of the many blocks and warps. As interesting warps are identified, we allow further investigation of memory behavior by visualizing the shared memory bank conflicts and global memory coalescence, first with an overview of a single warp with many operations and, subsequently, with a detailed view of a single warp and a single operation. We demonstrate the strength of our approach in the context of a parallel matrix transpose kernel and a parallel 1D Haar Wavelet transform kernel. © 2013 The Author(s) Computer Graphics Forum © 2013 The Eurographics Association and Blackwell Publishing Ltd.

  15. Option Valuation with Volatility Components, Fat Tails, and Non-Monotonic Pricing Kernels

    DEFF Research Database (Denmark)

    Babaoglu, Kadir; Christoffersen, Peter; Heston, Steven L.

    We nest multiple volatility components, fat tails and a U-shaped pricing kernel in a single option model and compare their contribution to describing returns and option data. All three features lead to statistically significant model improvements. A U-shaped pricing kernel is economically most im...

  16. Analysis of fast neutrons elastic moderator through exact solutions involving synthetic-kernels

    International Nuclear Information System (INIS)

    Moura Neto, C.; Chung, F.L.; Amorim, E.S.

    1979-07-01

    The computational difficulties in solving the transport equation for fast reactors can be reduced by developing approximate models, assuming that continuous moderation holds. Two approximations were studied: the first is based on an expansion in Taylor series (the Fermi, Wigner, Greuling and Goertzel models), and the second involves the use of synthetic kernels (the Walti, Turinsky, Becker and Malaviya models). The flux obtained by the exact method is compared with the fluxes from the different models based on synthetic kernels. It can be verified that the present study is realistic for energies smaller than the threshold for inelastic scattering, as well as in the resonance region. (Author) [pt

  17. Effective single scattering albedo estimation using regional climate model

    CSIR Research Space (South Africa)

    Tesfaye, M

    2011-09-01

    Full Text Available In this study, by modifying the optical parameterization of the Regional Climate Model (RegCM), the authors have computed and compared the Effective Single-Scattering Albedo (ESSA), which is representative of the VIS spectral region. The arid, semi...

  18. Approximate kernel competitive learning.

    Science.gov (United States)

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

    Kernel competitive learning has been successfully used to achieve robust clustering. However, kernel competitive learning (KCL) is not scalable for large-scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be computed and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large-scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL) method, which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis of why the proposed approximation works for kernel competitive learning, and furthermore, we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) method based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates the approximate kernel competitive learning for large-scale clustering. The empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. Also, the proposed methods achieve more effective clustering performance in terms of clustering precision against related approximate clustering approaches. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. Single-photon switch: Controllable scattering of photons inside a one-dimensional resonator waveguide

    Science.gov (United States)

    Zhou, L.; Gong, Z. R.; Liu, Y. X.; Sun, C. P.; Nori, F.

    2010-03-01

    We analyze the coherent transport of a single photon, which propagates in a one-dimensional coupled-resonator waveguide and is scattered by a controllable two-level system located inside one of the resonators of this waveguide. Our approach, which uses discrete coordinates, unifies low and high energy effective theories for single-photon scattering. We show that the controllable two-level system can behave as a quantum switch for the coherent transport of a single photon. This study may inspire new electro-optical single-photon quantum devices. We also suggest an experimental setup based on superconducting transmission line resonators and qubits. References: L. Zhou, Z.R. Gong, Y.X. Liu, C.P. Sun, F. Nori, Controllable scattering of photons inside a one-dimensional resonator waveguide, Phys. Rev. Lett. 101, 100501 (2008). L. Zhou, H. Dong, Y.X. Liu, C.P. Sun, F. Nori, Quantum super-cavity with atomic mirrors, Phys. Rev. A 78, 063827 (2008).

  20. Covariant meson-baryon scattering with chiral and large Nc constraints

    International Nuclear Information System (INIS)

    Lutz, M.F.M.; Kolomeitsev, E.E.

    2001-05-01

    We give a review of recent progress on the application of the relativistic chiral SU(3) Lagrangian to meson-baryon scattering. It is shown that a combined chiral and 1/N_c expansion of the Bethe-Salpeter interaction kernel leads to a good description of the kaon-nucleon, antikaon-nucleon and pion-nucleon scattering data typically up to laboratory momenta of p_lab ≅ 500 MeV. We solve the covariant coupled channel Bethe-Salpeter equation with the interaction kernel truncated to chiral order Q^3, where we include only those terms which are leading in the large-N_c limit of QCD. (orig.)

  1. Role of electron-electron scattering on spin transport in single layer graphene

    Directory of Open Access Journals (Sweden)

    Bahniman Ghosh

    2014-01-01

    Full Text Available In this work, the effect of electron-electron scattering on spin transport in single layer graphene is studied using semi-classical Monte Carlo simulation. The D’yakonov-P’erel mechanism is considered for spin relaxation. It is found that electron-electron scattering causes the spin relaxation length to decrease by 35% at 300 K. The reason for this decrease is that the ensemble spin is modified by each e-e collision, and the e-e scattering rate is greater than the phonon scattering rate at room temperature, which changes the spin relaxation profile.

  2. Classification With Truncated Distance Kernel.

    Science.gov (United States)

    Huang, Xiaolin; Suykens, Johan A K; Wang, Shuning; Hornegger, Joachim; Maier, Andreas

    2018-05-01

    This brief proposes a truncated distance (TL1) kernel, which results in a classifier that is nonlinear in the global region but is linear in each subregion. With this kernel, the subregion structure can be trained using all the training data and local linear classifiers can be established simultaneously. The TL1 kernel has good adaptiveness to nonlinearity and is suitable for problems which require different nonlinearities in different areas. Though the TL1 kernel is not positive semidefinite, some classical kernel learning methods are still applicable, which means that the TL1 kernel can be directly used in standard toolboxes by replacing the kernel evaluation. In numerical experiments, the TL1 kernel with a pregiven parameter achieves similar or better performance than the radial basis function kernel with the parameter tuned by cross validation, implying that the TL1 kernel is a promising nonlinear kernel for classification tasks.
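
    As an illustration of how such a non-positive-semidefinite kernel can be dropped into a standard toolbox, the sketch below builds a truncated L1-distance Gram matrix and feeds it to an SVM as a precomputed kernel. The kernel form k(x, z) = max(rho - ||x - z||_1, 0), the value of rho, and the toy data are assumptions for illustration, not taken from the paper.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def tl1_gram(X, Z, rho):
    """Truncated L1-distance kernel: k(x, z) = max(rho - ||x - z||_1, 0)."""
    d1 = np.abs(X[:, None, :] - Z[None, :, :]).sum(axis=2)
    return np.maximum(rho - d1, 0.0)

X, y = make_moons(n_samples=400, noise=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

rho = 0.7 * X.shape[1]   # assumed heuristic: rho proportional to the dimension
# Note: the Gram matrix is indefinite, so solver convergence is not formally
# guaranteed, although libsvm accepts any precomputed matrix in practice.
clf = SVC(kernel="precomputed").fit(tl1_gram(X_tr, X_tr, rho), y_tr)
print("test accuracy:", clf.score(tl1_gram(X_te, X_tr, rho), y_te))
```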

  3. Early Detection of Aspergillus parasiticus Infection in Maize Kernels Using Near-Infrared Hyperspectral Imaging and Multivariate Data Analysis

    Directory of Open Access Journals (Sweden)

    Xin Zhao

    2017-01-01

    Full Text Available Fungal infection in maize kernels is a major concern worldwide due to toxic metabolites such as mycotoxins, so it is necessary to develop appropriate techniques for early detection of fungal infection in maize kernels. Thirty-six sterilised maize kernels were inoculated each day with Aspergillus parasiticus from one to seven days, and seven groups (D1, D2, D3, D4, D5, D6, D7) were determined based on the incubation time. Another 36 sterilised kernels without fungal inoculation were taken as the control (DC). Hyperspectral images of all kernels were acquired within the spectral range of 921–2529 nm. Background, labels and bad pixels were removed using principal component analysis (PCA) and masking. Separability computation for discrimination of fungal contamination levels indicated that the model based on the data of the germ region of individual kernels performed more effectively than that based on the whole kernels. Moreover, samples with a two-day interval were separable. Thus, four groups, DC, D1–2 (consisting of D1 and D2), D3–4 (D3 and D4), and D5–7 (D5, D6, and D7), were defined for subsequent classification. Two separate sample sets were prepared to verify the influence of germ orientation on the classification model, that is, germ up versus a 1:1 mixture of germ up and germ down. Two smoothing preprocessing methods (Savitzky-Golay smoothing, moving average smoothing) and three scatter-correction methods (normalization, standard normal variate, and multiple scatter correction) were compared according to the performance of the classification model built by support vector machines (SVM). The best model for kernels with germ up showed promising results with accuracies of 97.92% and 91.67% for the calibration and validation data sets, respectively, while the accuracies of the best model for the mixed kernels were 95.83% and 84.38%. Moreover, five wavelengths (1145, 1408, 1935, 2103, and 2383 nm) were selected as the key
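
    A minimal sketch of the preprocessing-plus-classification pipeline the abstract describes, using standard normal variate (SNV) correction of each spectrum followed by an SVM; the array shapes, labels and SVM settings are illustrative placeholders rather than the study's actual configuration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def snv(spectra):
    """Standard normal variate: centre and scale every spectrum individually."""
    mean = spectra.mean(axis=1, keepdims=True)
    std = spectra.std(axis=1, keepdims=True)
    return (spectra - mean) / std

# Placeholder data: one mean NIR spectrum per kernel germ region,
# labels 0-3 standing in for the DC, D1-2, D3-4 and D5-7 groups.
rng = np.random.default_rng(0)
spectra = rng.normal(size=(288, 256))
labels = rng.integers(0, 4, size=288)

X_tr, X_te, y_tr, y_te = train_test_split(snv(spectra), labels, random_state=0)
model = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_tr, y_tr)
print("validation accuracy:", model.score(X_te, y_te))
```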

  4. Implementation of pencil kernel and depth penetration algorithms for treatment planning of proton beams

    International Nuclear Information System (INIS)

    Russell, K.R.; Saxner, M.; Ahnesjoe, A.; Montelius, A.; Grusell, E.; Dahlgren, C.V.

    2000-01-01

    The implementation of two algorithms for calculating dose distributions for radiation therapy treatment planning of intermediate energy proton beams is described. A pencil kernel algorithm and a depth penetration algorithm have been incorporated into a commercial three-dimensional treatment planning system (Helax-TMS, Helax AB, Sweden) to allow conformal planning techniques using irregularly shaped fields, proton range modulation, range modification and dose calculation for non-coplanar beams. The pencil kernel algorithm is developed from the Fermi-Eyges formalism and Moliere multiple-scattering theory with range straggling corrections applied. The depth penetration algorithm is based on the energy loss in the continuous slowing down approximation with simple correction factors applied to the beam penumbra region and has been implemented for fast, interactive treatment planning. Modelling of the effects of air gaps and range modifying device thickness and position are implicit to both algorithms. Measured and calculated dose values are compared for a therapeutic proton beam in both homogeneous and heterogeneous phantoms of varying complexity. Both algorithms model the beam penumbra as a function of depth in a homogeneous phantom with acceptable accuracy. Results show that the pencil kernel algorithm is required for modelling the dose perturbation effects from scattering in heterogeneous media. (author)
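
    For orientation, the Fermi-Eyges result that underlies such pencil-kernel algorithms is usually expressed through moments of the linear scattering power T(z); the form below is the standard textbook one and may differ in detail (for example in the range straggling correction) from the implementation described here.

```latex
% Fermi-Eyges moments of the linear scattering power T(z) (standard form).
\begin{equation}
  A_{n}(z) = \int_{0}^{z} (z-z')^{\,n}\,T(z')\,\mathrm{d}z', \qquad n = 0, 1, 2,
\end{equation}
\begin{equation}
  \sigma_{\theta}^{2}(z) = A_{0}(z), \qquad
  \sigma_{x\theta}(z)    = A_{1}(z), \qquad
  \sigma_{x}^{2}(z)      = A_{2}(z),
\end{equation}
% so the lateral profile of a pencil beam at depth z is modelled as a Gaussian of
% variance A_2(z), further broadened by range straggling corrections.
```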

  5. Possibility of single biomolecule imaging with coherent amplification of weak scattering x-ray photons.

    Science.gov (United States)

    Shintake, Tsumoru

    2008-10-01

    The number of photons produced by coherent x-ray scattering from a single biomolecule is very small because of its extremely small elastic-scattering cross section and low damage threshold. Even with a high x-ray flux of 3 x 10^12 photons per 100-nm-diameter spot and an ultrashort pulse of 10 fs driven by a future x-ray free electron laser (x-ray FEL), it has been predicted that only a few hundred photons will be produced from the scattering of a single lysozyme molecule. In observations of scattered x rays on a detector, the transfer of energy from wave to matter is accompanied by the quantization of the photon energy. Unfortunately, x rays have a high photon energy of 12 keV at wavelengths of 1 Å, which is required for atomic resolution imaging. Therefore, the number of photoionization events is small, which limits the resolution of imaging of a single biomolecule. In this paper, I propose a method: instead of directly observing the photons scattered from the sample, we amplify the scattered waves by superimposing an intense coherent reference pump wave on them and record the resulting interference pattern on a planar x-ray detector. Using a nanosized gold particle as a reference pump wave source, we can collect 10^4-10^5 photons in single shot imaging where the signal from a single biomolecule is amplified and recorded as two-dimensional diffraction intensity data. An iterative phase retrieval technique can be used to recover the phase information and reconstruct the image of the single biomolecule and the gold particle at the same time. In order to precisely reconstruct a faint image of the single biomolecule at Angstrom resolution, whose intensity is much lower than that of the bright gold particle, I propose a technique that combines iterative phase retrieval on the reference pump wave and digital Fourier transform holography on the sample. By using a large number of holography data, the three-dimensional electron density map can be assembled.
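
    The amplification exploited here comes from the interference cross term between the strong reference wave scattered by the gold particle and the weak wave scattered by the molecule; schematically, in our notation,

```latex
% Interference of a strong reference wave E_ref with a weak signal wave E_sig
% on the detector (schematic; our notation).
\begin{equation}
  I = \bigl|E_{\mathrm{ref}} + E_{\mathrm{sig}}\bigr|^{2}
    = \bigl|E_{\mathrm{ref}}\bigr|^{2}
    + 2\,\mathrm{Re}\!\left(E_{\mathrm{ref}}^{*}E_{\mathrm{sig}}\right)
    + \bigl|E_{\mathrm{sig}}\bigr|^{2}.
\end{equation}
% The cross term exceeds |E_sig|^2 by the factor |E_ref|/|E_sig|, which is the
% amplification supplied by the reference particle.
```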

  6. Exact Heat Kernel on a Hypersphere and Its Applications in Kernel SVM

    Directory of Open Access Journals (Sweden)

    Chenchao Zhao

    2018-01-01

    Full Text Available Many contemporary statistical learning methods assume a Euclidean feature space. This paper presents a method for defining similarity based on hyperspherical geometry and shows that it often improves the performance of support vector machine compared to other competing similarity measures. Specifically, the idea of using heat diffusion on a hypersphere to measure similarity has been previously proposed and tested by Lafferty and Lebanon [1], demonstrating promising results based on a heuristic heat kernel obtained from the zeroth order parametrix expansion; however, how well this heuristic kernel agrees with the exact hyperspherical heat kernel remains unknown. This paper presents a higher order parametrix expansion of the heat kernel on a unit hypersphere and discusses several problems associated with this expansion method. We then compare the heuristic kernel with an exact form of the heat kernel expressed in terms of a uniformly and absolutely convergent series in high-dimensional angular momentum eigenmodes. Being a natural measure of similarity between sample points dwelling on a hypersphere, the exact kernel often shows superior performance in kernel SVM classifications applied to text mining, tumor somatic mutation imputation, and stock market analysis.
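
    For reference, the exact heat kernel on the unit hypersphere is commonly written as a series over angular momentum eigenmodes with Gegenbauer polynomials C_l; the normalisation below follows one standard convention and is quoted for orientation, not verbatim from the paper.

```latex
% Exact heat kernel on the unit hypersphere S^{d-1} (one common normalisation).
\begin{equation}
  K_t(\mathbf{x},\mathbf{y})
  = \frac{1}{\lvert S^{d-1}\rvert}
    \sum_{\ell=0}^{\infty}
    e^{-\ell(\ell+d-2)\,t}\,
    \frac{2\ell+d-2}{d-2}\,
    C_{\ell}^{(d-2)/2}\!\left(\mathbf{x}\cdot\mathbf{y}\right),
  \qquad \mathbf{x},\mathbf{y}\in S^{d-1}.
\end{equation}
```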

  7. Concentric layered Hermite scatterers

    Science.gov (United States)

    Astheimer, Jeffrey P.; Parker, Kevin J.

    2018-05-01

    The long wavelength limit of scattering from spheres has a rich history in optics, electromagnetics, and acoustics. Recently it was shown that a common integral kernel pertains to formulations of weak spherical scatterers in both acoustics and electromagnetic regimes. Furthermore, the relationship between backscattered amplitude and wavenumber k was shown to follow power laws higher than the Rayleigh scattering k^2 power law, when the inhomogeneity had a material composition that conformed to a Gaussian weighted Hermite polynomial. Although this class of scatterers, called Hermite scatterers, is plausible, it may be simpler to manufacture scatterers with a core surrounded by one or more layers. In this case the inhomogeneous material property conforms to a piecewise continuous constant function. We demonstrate that the necessary and sufficient conditions for supra-Rayleigh scattering power laws in this case can be stated simply by considering moments of the inhomogeneous function and its spatial transform. This development opens an additional path for construction of, and use of, scatterers with unique power law behavior.

  8. Neutron cross sections of cryogenic materials: a synthetic kernel for molecular solids

    International Nuclear Information System (INIS)

    Granada, J.R.; Gillette, V.H.; Petriw, S.; Cantargi, F.; Pepe, M.E.; Sbaffoni, M.M.

    2004-01-01

    A new synthetic scattering function aimed at the description of the interaction of thermal neutrons with molecular solids has been developed. At low incident neutron energies, both lattice modes and molecular rotations are specifically accounted for, through an expansion of the scattering law in few phonon terms. Simple representations of the molecular dynamical modes are used, in order to produce a fairly accurate description of neutron scattering kernels and cross sections with a minimum set of input data. As the neutron energies become much larger than that corresponding to the characteristic Debye temperature and to the rotational energies of the molecular solid, the 'phonon formulation' transforms into the traditional description for molecular gases. (orig.)
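
    Such synthetic kernels ultimately enter the standard double-differential thermal scattering cross section, written in the incoherent approximation in terms of the scattering law S(α, β); the ENDF-style form below is quoted for context (σ_b is the bound scattering cross section, A the scatterer-to-neutron mass ratio, μ the scattering cosine) and is not taken from the paper itself.

```latex
% Standard (ENDF-style) incoherent thermal scattering cross section; quoted for
% context, not from the paper.
\begin{equation}
  \frac{d^{2}\sigma}{d\Omega\,dE'}
  = \frac{\sigma_b}{4\pi k_B T}\,\sqrt{\frac{E'}{E}}\;
    e^{-\beta/2}\,S(\alpha,\beta),
  \qquad
  \alpha = \frac{E + E' - 2\mu\sqrt{EE'}}{A\,k_B T},
  \qquad
  \beta = \frac{E' - E}{k_B T}.
\end{equation}
```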

  9. Proteome analysis of the almond kernel (Prunus dulcis).

    Science.gov (United States)

    Li, Shugang; Geng, Fang; Wang, Ping; Lu, Jiankang; Ma, Meihu

    2016-08-01

    Almond (Prunus dulcis) is a popular tree nut worldwide and offers many benefits to human health. However, the importance of almond kernel proteins in nutrition and human health requires further evaluation. The present study presents a systematic evaluation of the proteins in the almond kernel using proteomic analysis. The nutrient and amino acid content in almond kernels from Xinjiang is similar to that of American varieties; however, Xinjiang varieties have a higher protein content. Two-dimensional electrophoresis analysis demonstrated a wide distribution of molecular weights and isoelectric points of almond kernel proteins. A total of 434 proteins were identified by LC-MS/MS, and most were proteins that were experimentally confirmed for the first time. Gene ontology (GO) analysis of the 434 proteins indicated that they are involved primarily in biological processes including metabolic processes (67.5%), cellular processes (54.1%), and single-organism processes (43.4%); the main molecular functions of almond kernel proteins are catalytic activity (48.0%), binding (45.4%) and structural molecule activity (11.9%); and the proteins are primarily distributed in the cell (59.9%), organelle (44.9%), and membrane (22.8%). The almond kernel is a source of a wide variety of proteins. This study provides important information contributing to the screening and identification of almond proteins, the understanding of almond protein function, and the development of almond protein products. © 2015 Society of Chemical Industry.

  10. Evaluation of scatter correction using a single isotope for simultaneous emission and transmission data

    International Nuclear Information System (INIS)

    Yang, J.; Kuikka, J.T.; Vanninen, E.; Laensimies, E.; Kauppinen, T.; Patomaeki, L.

    1999-01-01

    Photon scatter is one of the most important factors degrading the quantitative accuracy of SPECT images. Many scatter correction methods have been proposed, including the single-isotope method previously proposed by us. Aim: We evaluate a scatter correction method that improves image quality by acquiring emission and transmission data simultaneously in a single-isotope scan. Method: To evaluate the proposed scatter correction method, a contrast and linearity phantom was studied. Four female patients with fibromyalgia (FM) syndrome and four with chronic back pain (BP) were imaged. Grey-to-cerebellum (G/C) and grey-to-white matter (G/W) ratios were determined by one skilled operator for 12 regions of interest (ROIs) in each subject. Results: The linearity of the activity response was improved after the scatter correction (r=0.999). The y-intercept value of the regression line was 0.036.

  11. Integral equations with difference kernels on finite intervals

    CERN Document Server

    Sakhnovich, Lev A

    2015-01-01

    This book focuses on solving integral equations with difference kernels on finite intervals. The corresponding problem on the semiaxis was previously solved by N. Wiener–E. Hopf and by M.G. Krein. The problem on finite intervals, though significantly more difficult, may be solved using our method of operator identities. This method is also actively employed in inverse spectral problems, operator factorization and nonlinear integral equations. Applications of the obtained results to optimal synthesis, light scattering, diffraction, and hydrodynamics problems are discussed in this book, which also describes how the theory of operators with difference kernels is applied to stable processes and used to solve the famous M. Kac problems on stable processes. In this second edition these results are extensively generalized and include the case of all Levy processes. We present the convolution expression for the well-known Ito formula of the generator operator, a convolution expression that has proven to be fruitful...

  12. Estimates of the Spectral Aerosol Single Sea Scattering Albedo and Aerosol Radiative Effects during SAFARI 2000

    Science.gov (United States)

    Bergstrom, Robert W.; Pilewskie, Peter; Schmid, Beat; Russell, Philip B.

    2003-01-01

    Using measurements of the spectral solar radiative flux and optical depth for 2 days (24 August and 6 September 2000) during the SAFARI 2000 intensive field experiment and a detailed radiative transfer model, we estimate the spectral single scattering albedo of the aerosol layer. The single scattering albedo is similar on the 2 days even though the optical depth for the aerosol layer was quite different. The aerosol single scattering albedo was between 0.85 and 0.90 at 350 nm, decreasing to 0.6 in the near infrared. The magnitude and decrease with wavelength of the single scattering albedo are consistent with the absorption properties of small black carbon particles. We estimate the uncertainty in the single scattering albedo due to the uncertainty in the measured fractional absorption and optical depths. The uncertainty in the single scattering albedo is significantly less on the high-optical-depth day (6 September) than on the low-optical-depth day (24 August). On the high-optical-depth day, the uncertainty in the single scattering albedo is 0.02 in the midvisible whereas on the low-optical-depth day the uncertainty is 0.08 in the midvisible. On both days, the uncertainty becomes larger in the near infrared. We compute the radiative effect of the aerosol by comparing calculations with and without the aerosol. The effect at the top of the atmosphere (TOA) is to cool the atmosphere by 13 W/sq m on 24 August and 17 W/sq m on 6 September. The effect on the downward flux at the surface is a reduction of 57 W/sq m on 24 August and 200 W/sq m on 6 September. The aerosol effect on the downward flux at the surface is in good agreement with the results reported from the Indian Ocean Experiment (INDOEX).
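
    For context, the retrieved quantity is defined per wavelength as the ratio of scattering to total extinction, and the quoted aerosol radiative effect is simply the flux difference between calculations with and without the aerosol layer:

```latex
% Spectral single scattering albedo and aerosol radiative effect (definitions).
\begin{equation}
  \omega_{0}(\lambda)
  = \frac{\sigma_{\mathrm{sca}}(\lambda)}
         {\sigma_{\mathrm{sca}}(\lambda)+\sigma_{\mathrm{abs}}(\lambda)}
  = \frac{\tau_{\mathrm{sca}}(\lambda)}{\tau_{\mathrm{ext}}(\lambda)},
  \qquad
  \Delta F = F_{\mathrm{with\ aerosol}} - F_{\mathrm{without\ aerosol}}.
\end{equation}
```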

  13. Resonance estimates for single spin asymmetries in elastic electron-nucleon scattering

    International Nuclear Information System (INIS)

    Barbara Pasquini; Marc Vanderhaeghen

    2004-01-01

    We discuss the target and beam normal spin asymmetries in elastic electron-nucleon scattering which depend on the imaginary part of two-photon exchange processes between electron and nucleon. We express this imaginary part as a phase space integral over the doubly virtual Compton scattering tensor on the nucleon. We use unitarity to model the doubly virtual Compton scattering tensor in the resonance region in terms of γ* N → π N electroabsorption amplitudes. Taking those amplitudes from a phenomenological analysis of pion electroproduction observables, we present results for beam and target normal single spin asymmetries for elastic electron-nucleon scattering for beam energies below 1 GeV and in the 1-3 GeV region, where several experiments are performed or are in progress

  14. Subsampling Realised Kernels

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Hansen, Peter Reinhard; Lunde, Asger

    2011-01-01

    In a recent paper we have introduced the class of realised kernel estimators of the increments of quadratic variation in the presence of noise. We showed that this estimator is consistent and derived its limit distribution under various assumptions on the kernel weights. In this paper we extend our...... that subsampling is impotent, in the sense that subsampling has no effect on the asymptotic distribution. Perhaps surprisingly, for the efficient smooth kernels, such as the Parzen kernel, we show that subsampling is harmful as it increases the asymptotic variance. We also study the performance of subsampled...

  15. Comparison of analytic source models for head scatter factor calculation and planar dose calculation for IMRT

    International Nuclear Information System (INIS)

    Yan Guanghua; Liu, Chihray; Lu Bo; Palta, Jatinder R; Li, Jonathan G

    2008-01-01

    The purpose of this study was to choose an appropriate head scatter source model for the fast and accurate independent planar dose calculation for intensity-modulated radiation therapy (IMRT) with MLC. The performance of three different head scatter source models regarding their ability to model head scatter and facilitate planar dose calculation was evaluated. A three-source model, a two-source model and a single-source model were compared in this study. In the planar dose calculation algorithm, in-air fluence distribution was derived from each of the head scatter source models while considering the combination of Jaw and MLC opening. Fluence perturbations due to tongue-and-groove effect, rounded leaf end and leaf transmission were taken into account explicitly. The dose distribution was calculated by convolving the in-air fluence distribution with an experimentally determined pencil-beam kernel. The results were compared with measurements using a diode array and passing rates with 2%/2 mm and 3%/3 mm criteria were reported. It was found that the two-source model achieved the best agreement on head scatter factor calculation. The three-source model and single-source model underestimated head scatter factors for certain symmetric rectangular fields and asymmetric fields, but similar good agreement could be achieved when monitor back scatter effect was incorporated explicitly. All the three source models resulted in comparable average passing rates (>97%) when the 3%/3 mm criterion was selected. The calculation with the single-source model and two-source model was slightly faster than the three-source model due to their simplicity
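
    A minimal sketch of the final dose step described above, convolving an in-air fluence map (open field plus a small transmission value outside the aperture) with a pencil-beam kernel; the double-Gaussian kernel and grid used here are placeholders for the experimentally determined kernel, not the paper's commissioning data.

```python
import numpy as np
from scipy.signal import fftconvolve

# 0.1 cm grid; a 10 x 10 cm open field with 1.5 % leaf transmission outside.
grid = np.arange(-12.0, 12.0, 0.1)
xx, yy = np.meshgrid(grid, grid)
fluence = np.where((np.abs(xx) <= 5.0) & (np.abs(yy) <= 5.0), 1.0, 0.015)

# Placeholder pencil-beam kernel: sum of two Gaussians standing in for the
# experimentally determined kernel (narrow primary core plus broad scatter tail).
def gaussian2d(sigma):
    g = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return g / g.sum()

kernel = 0.9 * gaussian2d(0.25) + 0.1 * gaussian2d(2.5)

# Planar dose = in-air fluence convolved with the pencil-beam kernel.
dose = fftconvolve(fluence, kernel, mode="same")
print("central-axis dose (relative):", dose[dose.shape[0] // 2, dose.shape[1] // 2])
```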

  16. Comparison of analytic source models for head scatter factor calculation and planar dose calculation for IMRT

    Energy Technology Data Exchange (ETDEWEB)

    Yan Guanghua [Department of Nuclear and Radiological Engineering, University of Florida, Gainesville, FL 32611 (United States); Liu, Chihray; Lu Bo; Palta, Jatinder R; Li, Jonathan G [Department of Radiation Oncology, University of Florida, Gainesville, FL 32610-0385 (United States)

    2008-04-21

    The purpose of this study was to choose an appropriate head scatter source model for the fast and accurate independent planar dose calculation for intensity-modulated radiation therapy (IMRT) with MLC. The performance of three different head scatter source models regarding their ability to model head scatter and facilitate planar dose calculation was evaluated. A three-source model, a two-source model and a single-source model were compared in this study. In the planar dose calculation algorithm, in-air fluence distribution was derived from each of the head scatter source models while considering the combination of Jaw and MLC opening. Fluence perturbations due to tongue-and-groove effect, rounded leaf end and leaf transmission were taken into account explicitly. The dose distribution was calculated by convolving the in-air fluence distribution with an experimentally determined pencil-beam kernel. The results were compared with measurements using a diode array and passing rates with 2%/2 mm and 3%/3 mm criteria were reported. It was found that the two-source model achieved the best agreement on head scatter factor calculation. The three-source model and single-source model underestimated head scatter factors for certain symmetric rectangular fields and asymmetric fields, but similar good agreement could be achieved when monitor back scatter effect was incorporated explicitly. All the three source models resulted in comparable average passing rates (>97%) when the 3%/3 mm criterion was selected. The calculation with the single-source model and two-source model was slightly faster than the three-source model due to their simplicity.

  17. Fermionic NNLO contributions to Bhabha scattering

    International Nuclear Information System (INIS)

    Actis, S.; Riemann, T.; Czakon, M.; Uniwersytet Slaski, Katowice; Gluza, J.

    2007-10-01

    We derive the two-loop corrections to Bhabha scattering from heavy fermions using dispersion relations. The double-box contributions are expressed by three kernel functions. Convoluting the perturbative kernels with fermionic threshold functions or with hadronic data allows to determine numerical results for a small electron mass m_e, combined with arbitrary values of the fermion mass m_f in the loop, m_e^2 << m_f^2, or with hadronic insertions. We present numerical results for m_f = m_μ, m_τ, m_top at typical small- and large-angle kinematics ranging from 1 GeV to 500 GeV. (orig.)

  18. Single Crystal Diffuse Neutron Scattering

    Directory of Open Access Journals (Sweden)

    Richard Welberry

    2018-01-01

    Full Text Available Diffuse neutron scattering has become a valuable tool for investigating local structure in materials ranging from organic molecular crystals containing only light atoms to piezo-ceramics that frequently contain heavy elements. Although neutron sources will never be able to compete with X-rays in terms of the available flux, the special properties of neutrons, viz. the ability to explore inelastic scattering events, the fact that scattering lengths do not vary systematically with atomic number, and their ability to scatter from magnetic moments, provide strong motivation for developing neutron diffuse scattering methods. In this paper, we compare three different instruments that have been used by us to collect neutron diffuse scattering data. Two of these are on a spallation source and one on a reactor source.

  19. Single-kernel analysis of fumonisins and other fungal metabolites in maize from South African subsistence farmers

    DEFF Research Database (Denmark)

    Mogensen, Jesper Mølgaard; Sørensen, S.M.; Sulyok, M.

    2011-01-01

    Fumonisins are important Fusarium mycotoxins mainly found in maize and derived products. This study analysed maize from five subsistence farmers in the former Transkei region of South Africa. Farmers had sorted kernels into good and mouldy quality. A total of 400 kernels from 10 batches were...... analysed; of these 100 were visually characterised as uninfected and 300 as infected. Of the 400 kernels, 15% were contaminated with 1.84-1428 mg kg(-1) fumonisins, and 4% (n = 15) had a fumonisin content above 100 mg kg(-1). None of the visually uninfected maize had detectable amounts of fumonisins....... The total fumonisin concentration was 0.28-1.1 mg kg(-1) for good-quality batches and 0.03-6.2 mg kg(-1) for mouldy-quality batches. The high fumonisin content in the batches was apparently caused by a small number (4%) of highly contaminated kernels, and removal of these reduced the average fumonisin...

  20. Kernel abortion in maize. II. Distribution of 14C among kernel carbohydrates

    International Nuclear Information System (INIS)

    Hanft, J.M.; Jones, R.J.

    1986-01-01

    This study was designed to compare the uptake and distribution of 14C among fructose, glucose, sucrose, and starch in the cob, pedicel, and endosperm tissues of maize (Zea mays L.) kernels induced to abort by high temperature with those that develop normally. Kernels cultured in vitro at 30 and 35°C were transferred to [14C]sucrose media 10 days after pollination. Kernels cultured at 35°C aborted prior to the onset of linear dry matter accumulation. Significant uptake into the cob, pedicel, and endosperm of radioactivity associated with the soluble and starch fractions of the tissues was detected after 24 hours in culture on labeled media. After 8 days in culture on [14C]sucrose media, 48 and 40% of the radioactivity associated with the cob carbohydrates was found in the reducing sugars at 30 and 35°C, respectively. Of the total carbohydrates, a higher percentage of label was associated with sucrose and a lower percentage with fructose and glucose in pedicel tissue of kernels cultured at 35°C compared to kernels cultured at 30°C. These results indicate that sucrose was not cleaved to fructose and glucose as rapidly during the unloading process in the pedicel of kernels induced to abort by high temperature. Kernels cultured at 35°C had a much lower proportion of label associated with endosperm starch (29%) than did kernels cultured at 30°C (89%). Kernels cultured at 35°C had a correspondingly higher proportion of 14C in endosperm fructose, glucose, and sucrose.

  1. Light scattering microscopy measurements of single nuclei compared with GPU-accelerated FDTD simulations

    International Nuclear Information System (INIS)

    Stark, Julian; Rothe, Thomas; Kienle, Alwin; Kieß, Steffen; Simon, Sven

    2016-01-01

    Single cell nuclei were investigated using two-dimensional angularly and spectrally resolved scattering microscopy. We show that even for a qualitative comparison of experimental and theoretical data, the standard Mie model of a homogeneous sphere proves to be insufficient. Hence, an accelerated finite-difference time-domain method using a graphics processor unit and domain decomposition was implemented to analyze the experimental scattering patterns. The measured cell nuclei were modeled as single spheres with randomly distributed spherical inclusions of different size and refractive index representing the nucleoli and clumps of chromatin. Taking into account the nuclear heterogeneity of a large number of inclusions yields a qualitative agreement between experimental and theoretical spectra and illustrates the impact of the nuclear micro- and nanostructure on the scattering patterns. (paper)

  2. Light scattering microscopy measurements of single nuclei compared with GPU-accelerated FDTD simulations.

    Science.gov (United States)

    Stark, Julian; Rothe, Thomas; Kieß, Steffen; Simon, Sven; Kienle, Alwin

    2016-04-07

    Single cell nuclei were investigated using two-dimensional angularly and spectrally resolved scattering microscopy. We show that even for a qualitative comparison of experimental and theoretical data, the standard Mie model of a homogeneous sphere proves to be insufficient. Hence, an accelerated finite-difference time-domain method using a graphics processor unit and domain decomposition was implemented to analyze the experimental scattering patterns. The measured cell nuclei were modeled as single spheres with randomly distributed spherical inclusions of different size and refractive index representing the nucleoli and clumps of chromatin. Taking into account the nuclear heterogeneity of a large number of inclusions yields a qualitative agreement between experimental and theoretical spectra and illustrates the impact of the nuclear micro- and nanostructure on the scattering patterns.

  3. Optimized Kernel Entropy Components.

    Science.gov (United States)

    Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau

    2017-06-01

    This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to a sorting of the importance of kernel eigenvectors by entropy instead of variance, as in the kernel principal components analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by means of compacting the information in very few features (often in just one or two). The proposed method produces features which have higher expressive power. In particular, it is based on the independent component analysis framework, and introduces an extra rotation to the eigen decomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both the methods is the selection of the kernel parameter, since it critically affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. The results of both the methods are illustrated in different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, the most successful rule for estimating the kernel parameter is based on maximum likelihood, and OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
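
    A minimal sketch of the entropy-based component ranking at the core of KECA, in which eigenpairs of the Gaussian Gram matrix are sorted by their Renyi-entropy contribution lambda_i * (1^T e_i)^2; the OKECA rotation and its gradient-ascent optimisation are omitted, and the kernel width is an arbitrary placeholder.

```python
import numpy as np
from scipy.spatial.distance import cdist

def keca_components(X, sigma, n_components):
    """Rank kernel eigenpairs by their Renyi-entropy contribution
    lambda_i * (1^T e_i)^2 and project onto the top-ranked ones."""
    K = np.exp(-cdist(X, X, "sqeuclidean") / (2.0 * sigma**2))
    eigvals, eigvecs = np.linalg.eigh(K)                  # ascending order
    contrib = eigvals * (eigvecs.sum(axis=0) ** 2)        # entropy contributions
    top = np.argsort(contrib)[::-1][:n_components]
    return eigvecs[:, top] * np.sqrt(np.abs(eigvals[top]))   # entropy components

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
Z = keca_components(X, sigma=2.0, n_components=2)
print(Z.shape)   # (200, 2)
```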

  4. Soft Sensing of Key State Variables in Fermentation Process Based on Relevance Vector Machine with Hybrid Kernel Function

    Directory of Open Access Journals (Sweden)

    Xianglin ZHU

    2014-06-01

    Full Text Available To resolve the difficulty of online detection of some important state variables in fermentation processes with traditional instruments, a soft sensing modeling method based on the relevance vector machine (RVM) with a hybrid kernel function is presented. Based on the characteristic analysis of two commonly-used kernel functions, that is, the local Gaussian kernel function and the global polynomial kernel function, a hybrid kernel function combining the merits of the Gaussian kernel function and the polynomial kernel function is constructed. To design the optimal parameters of this kernel function, the particle swarm optimization (PSO) algorithm is applied. The proposed modeling method is used to predict the value of cell concentration in the Lysine fermentation process. Simulation results show that the presented hybrid-kernel RVM model has better accuracy and performance than the single-kernel RVM model.
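
    A minimal sketch of such a hybrid kernel as a convex combination of a local Gaussian (RBF) kernel and a global polynomial kernel; in the paper the weight and kernel parameters are tuned by PSO, whereas here they are fixed placeholder values, and the resulting Gram matrix could be passed to any kernel machine that accepts precomputed kernels.

```python
import numpy as np
from sklearn.metrics.pairwise import polynomial_kernel, rbf_kernel

def hybrid_kernel(X, Z, weight=0.7, gamma=0.5, degree=2, coef0=1.0):
    """Convex combination of a local Gaussian kernel and a global polynomial kernel."""
    k_local = rbf_kernel(X, Z, gamma=gamma)
    k_global = polynomial_kernel(X, Z, degree=degree, gamma=1.0, coef0=coef0)
    return weight * k_local + (1.0 - weight) * k_global

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))
K = hybrid_kernel(X, X)
print(K.shape)   # (50, 50) Gram matrix
```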

  5. Single- and coupled-channel radial inverse scattering with supersymmetric transformations

    International Nuclear Information System (INIS)

    Baye, Daniel; Sparenberg, Jean-Marc; Pupasov-Maksimov, Andrey M; Samsonov, Boris F

    2014-01-01

    The present status of the three-dimensional inverse-scattering method with supersymmetric transformations is reviewed for the coupled-channel case. We first revisit in a pedagogical way the single-channel case, where the supersymmetric approach is shown to provide a complete, efficient and elegant solution to the inverse-scattering problem for the radial Schrödinger equation with short-range interactions. A special emphasis is put on the differences between conservative and non-conservative transformations, i.e. transformations that do or do not conserve the behaviour of solutions of the radial Schrödinger equation at the origin. In particular, we show that for the zero initial potential, a non-conservative transformation is always equivalent to a pair of conservative transformations. These single-channel results are illustrated on the inversion of the neutron–proton triplet eigenphase shifts for the S- and D-waves. We then summarize and extend our previous works on the coupled-channel case, i.e. on systems of coupled radial Schrödinger equations, and stress remaining difficulties and open questions of this problem by putting it in perspective with the single-channel case. We mostly concentrate on two-channel examples to illustrate general principles while keeping mathematics as simple as possible. In particular, we discuss the important difference between the equal-threshold and different-threshold problems. For equal thresholds, conservative transformations can provide non-diagonal Jost and scattering matrices. Iterations of such transformations in the two-channel case are studied and shown to lead to practical algorithms for inversion. A convenient particular technique where the mixing parameter can be fitted without modifying the eigenphases is developed with iterations of pairs of conjugate transformations. This technique is applied to the neutron–proton triplet S–D scattering matrix, for which exactly-solvable matrix potential models are constructed

  6. On Higgs-exchange DIS, physical evolution kernels and fourth-order splitting functions at large x

    International Nuclear Information System (INIS)

    Soar, G.; Vogt, A.; Vermaseren, J.A.M.

    2009-12-01

    We present the coefficient functions for deep-inelastic scattering (DIS) via the exchange of a scalar φ directly coupling only to gluons, such as the Higgs boson in the limit of a very heavy top quark and n f effectively massless light flavours, to the third order in perturbative QCD. The two-loop results are employed to construct the next-to-next-to-leading order physical evolution kernels for the system (F 2 ,F φ ) of flavour-singlet structure functions. The practical relevance of these kernels as an alternative to MS factorization is bedevilled by artificial double logarithms at small values of the scaling variable x, where the large top-mass limit ceases to be appropriate. However, they show an only single-logarithmic enhancement at large x. Conjecturing that this feature persists to the next order also in the present singlet case, the three-loop coefficient functions facilitate exact predictions (backed up by their particular colour structure) of the double-logarithmic contributions to the fourth-order singlet splitting functions, i.e., of the terms (1-x) a ln k (1-x) with k=4,5,6 and k=3,4,5, respectively, for the off-diagonal and diagonal quantities to all powers a in (1-x). (orig.)

  7. Quantitative and Isolated Measurement of Far-Field Light Scattering by a Single Nanostructure

    Science.gov (United States)

    Kim, Donghyeong; Jeong, Kwang-Yong; Kim, Jinhyung; Ee, Ho-Seok; Kang, Ju-Hyung; Park, Hong-Gyu; Seo, Min-Kyo

    2017-11-01

    Light scattering by nanostructures has facilitated research on various optical phenomena and applications by interfacing the near fields and free-propagating radiation. However, direct quantitative measurement of far-field scattering by a single nanostructure on the wavelength scale or less is highly challenging. Conventional back-focal-plane imaging covers only a limited solid angle determined by the numerical aperture of the objectives and suffers from optical aberration and distortion. Here, we present a quantitative measurement of the differential far-field scattering cross section of a single nanostructure over the full hemisphere. In goniometer-based far-field scanning with a high signal-to-noise ratio of approximately 27.4 dB, weak scattering signals are efficiently isolated and detected under total-internal-reflection illumination. Systematic measurements reveal that the total and differential scattering cross sections of a Au nanorod are determined by the plasmonic Fabry-Perot resonances and the phase-matching conditions to the free-propagating radiation, respectively. We believe that our angle-resolved far-field measurement scheme provides a way to investigate and evaluate the physical properties and performance of nano-optical materials and phenomena.

  8. The Conserved and Unique Genetic Architecture of Kernel Size and Weight in Maize and Rice.

    Science.gov (United States)

    Liu, Jie; Huang, Juan; Guo, Huan; Lan, Liu; Wang, Hongze; Xu, Yuancheng; Yang, Xiaohong; Li, Wenqiang; Tong, Hao; Xiao, Yingjie; Pan, Qingchun; Qiao, Feng; Raihan, Mohammad Sharif; Liu, Haijun; Zhang, Xuehai; Yang, Ning; Wang, Xiaqing; Deng, Min; Jin, Minliang; Zhao, Lijun; Luo, Xin; Zhou, Yang; Li, Xiang; Zhan, Wei; Liu, Nannan; Wang, Hong; Chen, Gengshen; Li, Qing; Yan, Jianbing

    2017-10-01

    Maize ( Zea mays ) is a major staple crop. Maize kernel size and weight are important contributors to its yield. Here, we measured kernel length, kernel width, kernel thickness, hundred kernel weight, and kernel test weight in 10 recombinant inbred line populations and dissected their genetic architecture using three statistical models. In total, 729 quantitative trait loci (QTLs) were identified, many of which were identified in all three models, including 22 major QTLs that each can explain more than 10% of phenotypic variation. To provide candidate genes for these QTLs, we identified 30 maize genes that are orthologs of 18 rice ( Oryza sativa ) genes reported to affect rice seed size or weight. Interestingly, 24 of these 30 genes are located in the identified QTLs or within 1 Mb of the significant single-nucleotide polymorphisms. We further confirmed the effects of five genes on maize kernel size/weight in an independent association mapping panel with 540 lines by candidate gene association analysis. Lastly, the function of ZmINCW1 , a homolog of rice GRAIN INCOMPLETE FILLING1 that affects seed size and weight, was characterized in detail. ZmINCW1 is close to QTL peaks for kernel size/weight (less than 1 Mb) and contains significant single-nucleotide polymorphisms affecting kernel size/weight in the association panel. Overexpression of this gene can rescue the reduced weight of the Arabidopsis ( Arabidopsis thaliana ) homozygous mutant line in the AtcwINV2 gene (Arabidopsis ortholog of ZmINCW1 ). These results indicate that the molecular mechanisms affecting seed development are conserved in maize, rice, and possibly Arabidopsis. © 2017 American Society of Plant Biologists. All Rights Reserved.

  9. Formal solutions of inverse scattering problems. III

    International Nuclear Information System (INIS)

    Prosser, R.T.

    1980-01-01

    The formal solutions of certain three-dimensional inverse scattering problems presented in papers I and II of this series [J. Math. Phys. 10, 1819 (1969); 17 1175 (1976)] are obtained here as fixed points of a certain nonlinear mapping acting on a suitable Banach space of integral kernels. When the scattering data are sufficiently restricted, this mapping is shown to be a contraction, thereby establishing the existence, uniqueness, and continuous dependence on the data of these formal solutions

  10. A novel adaptive kernel method with kernel centers determined by a support vector regression approach

    NARCIS (Netherlands)

    Sun, L.G.; De Visser, C.C.; Chu, Q.P.; Mulder, J.A.

    2012-01-01

    The optimality of the kernel number and kernel centers plays a significant role in determining the approximation power of nearly all kernel methods. However, the process of choosing optimal kernels is always formulated as a global optimization task, which is hard to accomplish. Recently, an

  11. Monte Carlo Modelling of Single-Crystal Diffuse Scattering from Intermetallics

    Directory of Open Access Journals (Sweden)

    Darren J. Goossens

    2016-02-01

    Full Text Available Single-crystal diffuse scattering (SCDS) reveals detailed structural insights into materials. In particular, it is sensitive to two-body correlations, whereas traditional Bragg peak-based methods are sensitive to single-body correlations. This means that diffuse scattering is sensitive to ordering that persists for just a few unit cells: nanoscale order, sometimes referred to as “local structure”, which is often crucial for understanding a material and its function. Metals and alloys were early candidates for SCDS studies because of the availability of large single crystals. While great progress has been made in areas like ab initio modelling and molecular dynamics, a place remains for Monte Carlo modelling of model crystals because of its ability to model very large systems; this is important when correlations are relatively long (though still finite) in range. This paper briefly outlines, and gives examples of, some Monte Carlo methods appropriate for the modelling of SCDS from metallic compounds, and considers data collection as well as analysis. Even if the interest in the material is driven primarily by magnetism or transport behaviour, an understanding of the local structure can underpin such studies and give an indication of nanoscale inhomogeneity.
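
    A minimal sketch of the kind of occupancy Monte Carlo plus diffuse-intensity calculation used in such studies: an Ising-like binary occupancy model on a small periodic lattice, equilibrated with Metropolis moves, with the diffuse intensity taken as the squared Fourier transform of the occupancy deviation. The lattice size, interaction constant, and neglect of scattering lengths and atomic displacements are simplifying assumptions, not a model of any particular intermetallic.

```python
import numpy as np

rng = np.random.default_rng(0)
n, J, steps = 64, 0.4, 200_000          # lattice size, pair interaction, MC steps
occ = rng.choice([-1, 1], size=(n, n))  # +/-1 site occupancies (two species)

def local_energy(s, i, j):
    """Nearest-neighbour pair energy of site (i, j) on a periodic lattice."""
    nb = s[(i - 1) % n, j] + s[(i + 1) % n, j] + s[i, (j - 1) % n] + s[i, (j + 1) % n]
    return J * s[i, j] * nb

# Metropolis flips (kT = 1) drive the short-range order producing diffuse scattering.
for _ in range(steps):
    i, j = rng.integers(n, size=2)
    dE = -2.0 * local_energy(occ, i, j)
    if dE < 0 or rng.random() < np.exp(-dE):
        occ[i, j] *= -1

# Diffuse intensity ~ |FT of the occupancy deviation|^2; subtracting the mean
# removes the Bragg contribution. Real calculations would also include
# scattering lengths and local displacements.
delta = occ - occ.mean()
intensity = np.abs(np.fft.fftshift(np.fft.fft2(delta)))**2
print(intensity.shape)
```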

  12. Protein Subcellular Localization with Gaussian Kernel Discriminant Analysis and Its Kernel Parameter Selection.

    Science.gov (United States)

    Wang, Shunfang; Nie, Bing; Yue, Kun; Fei, Yu; Li, Wenjia; Xu, Dongshu

    2017-12-15

    Kernel discriminant analysis (KDA) is a dimension reduction and classification algorithm based on the nonlinear kernel trick, which can be used in a novel way to treat high-dimensional and complex biological data before classification processes such as protein subcellular localization. Kernel parameters have a great impact on the performance of the KDA model. Specifically, for KDA with the popular Gaussian kernel, selecting the scale parameter is still a challenging problem. Thus, this paper introduces the KDA method and proposes a new method for Gaussian kernel parameter selection based on the fact that the differences between reconstruction errors of edge normal samples and those of interior normal samples should be maximized for certain suitable kernel parameters. Experiments with various standard data sets of protein subcellular localization show that the overall accuracy of protein classification prediction with KDA is much higher than that without KDA. Meanwhile, the kernel parameter of KDA has a great impact on the efficiency, and the proposed method can produce an optimum parameter, which makes the new algorithm not only perform as effectively as the traditional ones, but also reduce the computational time and thus improve efficiency.

  13. Neutron Scattering in Hydrogenous Moderators, Studied by Time Dependent Reaction Rate Method

    Energy Technology Data Exchange (ETDEWEB)

    Larsson, L G; Moeller, E; Purohit, S N

    1966-03-15

    The moderation and absorption of a neutron burst in water, poisoned with the non-1/v absorbers cadmium and gadolinium, have been followed on the time scale by multigroup calculations, using scattering kernels for the proton gas and the Nelkin model. The time dependent reaction rate curves for each absorber display clear differences for the two models, and the separation between the curves does not depend much on the absorber concentration. An experimental method for the measurement of infinite medium reaction rate curves in a limited geometry has been investigated. This method makes the measurement of the time dependent reaction rate generally useful for thermalization studies in a small geometry of a liquid hydrogenous moderator, provided that the experiment is coupled to programs for the calculation of scattering kernels and time dependent neutron spectra. Good agreement has been found between the reaction rate curve, measured with cadmium in water, and a calculated curve, where the Haywood kernel has been used.

  14. Unified connected theory of few-body reaction mechanisms in N-body scattering theory

    Science.gov (United States)

    Polyzou, W. N.; Redish, E. F.

    1978-01-01

    A unified treatment of different reaction mechanisms in nonrelativistic N-body scattering is presented. The theory is based on connected kernel integral equations that are expected to become compact for reasonable constraints on the potentials. The operators T_±^ab(A) are approximate transition operators that describe the scattering proceeding through an arbitrary reaction mechanism A. These operators are uniquely determined by a connected kernel equation and satisfy an optical theorem consistent with the choice of reaction mechanism. Connected kernel equations relating T_±^ab(A) to the full T_±^ab allow correction of the approximate solutions for any ignored process to any order. This theory gives a unified treatment of all few-body reaction mechanisms with the same dynamic simplicity of a model calculation, but can include complicated reaction mechanisms involving overlapping configurations where it is difficult to formulate models.

  15. Adaptive Shape Kernel-Based Mean Shift Tracker in Robot Vision System

    Directory of Open Access Journals (Sweden)

    Chunmei Liu

    2016-01-01

    Full Text Available This paper proposes an adaptive shape kernel-based mean shift tracker using a single static camera for the robot vision system. The question that we address in this paper is how to construct a kernel shape that is adaptive to the object shape. We perform a nonlinear manifold learning technique to obtain the low-dimensional shape space, which is trained by training data with the same view as the tracking video. The proposed kernel searches the shape in the low-dimensional shape space obtained by nonlinear manifold learning and constructs the adaptive kernel shape in the high-dimensional shape space. It can improve the mean shift tracker's performance in tracking the object position and object contour and in avoiding background clutter. In the experimental part, we take a walking human as an example to validate that our method is accurate and robust in tracking the human position and describing the human contour.
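
    For orientation, the sketch below shows the standard fixed-bandwidth, isotropic-kernel mean shift update that trackers of this family iterate; the paper's contribution, the adaptive kernel shape learned in a low-dimensional shape space, is not reproduced here, and the data and bandwidth are placeholders.

```python
import numpy as np

def mean_shift_mode(points, start, bandwidth=1.0, iters=30, tol=1e-5):
    """Standard mean shift: move an estimate toward the local density mode
    using Gaussian kernel weights over the sample points."""
    x = np.asarray(start, dtype=float)
    for _ in range(iters):
        d2 = ((points - x) ** 2).sum(axis=1)
        w = np.exp(-d2 / (2.0 * bandwidth**2))
        x_new = (w[:, None] * points).sum(axis=0) / w.sum()
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

rng = np.random.default_rng(0)
samples = rng.normal(loc=[3.0, -1.0], scale=0.5, size=(500, 2))
print(mean_shift_mode(samples, start=[0.0, 0.0]))   # converges near (3, -1)
```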

  16. Adaptive Shape Kernel-Based Mean Shift Tracker in Robot Vision System

    Science.gov (United States)

    2016-01-01

    This paper proposes an adaptive shape kernel-based mean shift tracker using a single static camera for the robot vision system. The question that we address in this paper is how to construct a kernel shape that is adaptive to the object shape. We perform a nonlinear manifold learning technique to obtain the low-dimensional shape space, which is trained by training data with the same view as the tracking video. The proposed kernel searches the shape in the low-dimensional shape space obtained by nonlinear manifold learning and constructs the adaptive kernel shape in the high-dimensional shape space. It can improve the mean shift tracker's performance in tracking the object position and object contour and in avoiding background clutter. In the experimental part, we take a walking human as an example to validate that our method is accurate and robust in tracking the human position and describing the human contour. PMID:27379165

  17. Transverse momentum in double parton scattering. Factorisation, evolution and matching

    Energy Technology Data Exchange (ETDEWEB)

    Buffing, Maarten G.A.; Diehl, Markus [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Kasemets, Tomas [Nikhef, Amsterdam (Netherlands). Theory Group; VU Univ. Amsterdam (Netherlands)

    2017-08-15

    We give a description of double parton scattering with measured transverse momenta in the final state, extending the formalism for factorisation and resummation developed by Collins, Soper and Sterman for the production of colourless particles. After a detailed analysis of their colour structure, we derive and solve evolution equations in rapidity and renormalisation scale for the relevant soft factors and double parton distributions. We show how in the perturbative regime, transverse momentum dependent double parton distributions can be expressed in terms of simpler nonperturbative quantities and compute several of the corresponding perturbative kernels at one-loop accuracy. We then show how the coherent sum of single and double parton scattering can be simplified for perturbatively large transverse momenta, and we discuss to which order resummation can be performed with presently available results. As an auxiliary result, we derive a simple form for the square root factor in the Collins construction of transverse momentum dependent parton distributions.

  18. Transverse momentum in double parton scattering. Factorisation, evolution and matching

    International Nuclear Information System (INIS)

    Buffing, Maarten G.A.; Diehl, Markus; Kasemets, Tomas

    2017-08-01

    We give a description of double parton scattering with measured transverse momenta in the final state, extending the formalism for factorisation and resummation developed by Collins, Soper and Sterman for the production of colourless particles. After a detailed analysis of their colour structure, we derive and solve evolution equations in rapidity and renormalisation scale for the relevant soft factors and double parton distributions. We show how in the perturbative regime, transverse momentum dependent double parton distributions can be expressed in terms of simpler nonperturbative quantities and compute several of the corresponding perturbative kernels at one-loop accuracy. We then show how the coherent sum of single and double parton scattering can be simplified for perturbatively large transverse momenta, and we discuss to which order resummation can be performed with presently available results. As an auxiliary result, we derive a simple form for the square root factor in the Collins construction of transverse momentum dependent parton distributions.

  19. 7 CFR 981.7 - Edible kernel.

    Science.gov (United States)

    2010-01-01

    7 CFR 981.7, Definitions: Edible kernel. Edible kernel means a kernel, piece, or particle of almond kernel that is not inedible. [41 FR 26852, June 30, 1976]

  20. First experimental observation of double-photon Compton scattering using single gamma detector

    International Nuclear Information System (INIS)

    Sandhu, B.S.; Saddi, M.B.; Singh, B.; Ghumman, B.S.

    2003-01-01

    Full text: The phenomenon of double-photon Compton scattering has been successfully observed using a single gamma detector, a technique that avoids the complicated slow-fast coincidence set-up used until now for observing this higher-order process. The doubly differential collision cross-section, integrated over the direction of one of the two final photons with the direction of the other kept fixed, has been measured experimentally for 0.662 MeV incident gamma photons. The energy spectra of the detected photons appear as a long tail to the single-photon Compton line on the lower side of the full-energy peak in the recorded scattered-energy spectrum. The present results are in agreement with the theory of this process

  1. Parameter Selection Method for Support Vector Regression Based on Adaptive Fusion of the Mixed Kernel Function

    Directory of Open Access Journals (Sweden)

    Hailun Wang

    2017-01-01

    Full Text Available The support vector regression algorithm is widely used in fault diagnosis of rolling bearings. A new model parameter selection method for support vector regression, based on adaptive fusion of a mixed kernel function, is proposed in this paper. The mixed kernel function is chosen as the kernel function of the support vector regression. The fusion coefficients of the mixed kernel function, the kernel function parameters, and the regression parameters are combined into a state vector, so that the model selection problem is transformed into a nonlinear system state estimation problem. A 5th-degree cubature Kalman filter is used to estimate the parameters. In this way, the adaptive selection of the mixed kernel function weighting coefficients, the kernel parameters, and the regression parameters is realized. Compared with a single kernel function, with unscented Kalman filter (UKF) support vector regression algorithms, and with genetic algorithms, the decision regression function obtained by the proposed method has better generalization ability and higher prediction accuracy.
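
    A minimal sketch of the mixed-kernel idea only, assuming scikit-learn's support for callable kernels in SVR: a convex combination of an RBF and a polynomial kernel is passed to the regressor. The weighting coefficient and kernel parameters are fixed by hand here; the joint estimation with a 5th-degree cubature Kalman filter described in the paper is not reproduced.

        import numpy as np
        from sklearn.svm import SVR

        def mixed_kernel(X, Y, rho=0.6, gamma=0.5, degree=2, coef0=1.0):
            """Convex combination of an RBF kernel and a polynomial kernel."""
            d2 = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2 * X @ Y.T
            rbf = np.exp(-gamma * d2)
            poly = (X @ Y.T + coef0) ** degree
            return rho * rbf + (1.0 - rho) * poly

        # Toy regression problem: a noisy sine curve.
        rng = np.random.default_rng(1)
        X = np.linspace(0, 2 * np.pi, 80)[:, None]
        y = np.sin(X).ravel() + 0.1 * rng.standard_normal(80)
        model = SVR(kernel=mixed_kernel, C=10.0).fit(X, y)
        print(model.predict(X[:5]))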

  2. Kernel versions of some orthogonal transformations

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    Kernel versions of orthogonal transformations such as principal components are based on a dual formulation also termed Q-mode analysis in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products of the original data are replaced by inner products between nonlinear mappings into higher dimensional feature space. Via kernel substitution also known as the kernel trick these inner products between the mappings are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of this kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel principal component analysis (PCA) and kernel minimum noise fraction (MNF) analyses handle nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function...
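
    A minimal sketch of the kernel trick described above, assuming a Gaussian kernel and toy data: the Gram matrix is double-centred and its leading eigenvectors give the kernel principal component scores.

        import numpy as np

        def kernel_pca(X, n_components=2, gamma=1.0):
            """Kernel PCA: eigendecomposition of the centred Gram matrix."""
            d2 = np.sum(X**2, axis=1)[:, None] + np.sum(X**2, axis=1)[None, :] - 2 * X @ X.T
            K = np.exp(-gamma * d2)                 # Gaussian kernel Gram matrix
            n = K.shape[0]
            J = np.eye(n) - np.ones((n, n)) / n
            Kc = J @ K @ J                          # centring in feature space
            vals, vecs = np.linalg.eigh(Kc)
            idx = np.argsort(vals)[::-1][:n_components]
            # Component scores: eigenvectors scaled by the square roots of the eigenvalues.
            return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

        X = np.random.default_rng(2).normal(size=(100, 5))
        print(kernel_pca(X).shape)   # (100, 2)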

  3. Model Selection in Kernel Ridge Regression

    DEFF Research Database (Denmark)

    Exterkate, Peter

    Kernel ridge regression is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts. This paper investigates the influence of the choice of kernel and the setting of tuning parameters on forecast accuracy. We review several popular kernels, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. We interpret the latter two kernels in terms of their smoothing properties, and we relate the tuning parameters associated to all these kernels to smoothness measures of the prediction function and to the signal-to-noise ratio. Based on these interpretations, we provide guidelines for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study confirms the practical usefulness of these rules of thumb. Finally, the flexible and smooth functional forms provided by the Gaussian and Sinc kernels makes them widely...
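
    A minimal sketch of the recommended tuning strategy, assuming scikit-learn's KernelRidge with a Gaussian (RBF) kernel; the small grid of penalty and bandwidth values is illustrative, not the grid used in the paper.

        import numpy as np
        from sklearn.kernel_ridge import KernelRidge
        from sklearn.model_selection import GridSearchCV

        rng = np.random.default_rng(3)
        X = rng.uniform(-3, 3, size=(150, 1))
        y = np.sinc(X).ravel() + 0.1 * rng.standard_normal(150)

        # Cross-validation over a small grid of the ridge penalty and the kernel width.
        grid = {"alpha": [1e-3, 1e-2, 1e-1, 1.0], "gamma": [0.1, 0.5, 1.0, 2.0]}
        search = GridSearchCV(KernelRidge(kernel="rbf"), grid, cv=5).fit(X, y)
        print(search.best_params_)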

  4. Penetuan Bilangan Iodin pada Hydrogenated Palm Kernel Oil (HPKO) dan Refined Bleached Deodorized Palm Kernel Oil (RBDPKO)

    OpenAIRE

    Sitompul, Monica Angelina

    2015-01-01

    The iodine value of several Hydrogenated Palm Kernel Oil (HPKO) and Refined Bleached Deodorized Palm Kernel Oil (RBDPKO) samples has been determined by titration. The analysis gave iodine values of 0.16 g I2/100 g for Hydrogenated Palm Kernel Oil (A), 0.20 g I2/100 g for Hydrogenated Palm Kernel Oil (B), and 0.24 g I2/100 g for Hydrogenated Palm Kernel Oil (C), and of 17.51 g I2/100 g for Refined Bleached Deodorized Palm Kernel Oil (A), Refined Bleached Deodorized Palm Kernel ...

  5. Visibility Restoration for Single Hazy Image Using Dual Prior Knowledge

    Directory of Open Access Journals (Sweden)

    Mingye Ju

    2017-01-01

    Full Text Available Single-image haze removal is a challenging task due to its severely ill-posed nature. In this paper, we propose a novel single-image algorithm that improves the detail and color of such degraded images. More concretely, we redefine a more reliable atmospheric scattering model (ASM) based on our previous work and the atmospheric point spread function (APSF). Further, by taking the spatial distribution of haze density into consideration, we design a scene-wise APSF kernel prediction mechanism to eliminate the multiple-scattering effect. With the redefined ASM and the designed APSF, combined with existing prior knowledge, the complex dehazing problem can be converted into a one-dimensional search problem, which allows us to directly obtain the scene transmission and thereby recover visually realistic results via the proposed ASM. Experimental results verify that our algorithm outperforms several state-of-the-art dehazing techniques in terms of robustness, effectiveness, and efficiency.
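
    A minimal sketch of the classical atmospheric scattering model the paper builds on, I(x) = J(x)t(x) + A(1 - t(x)): given an estimated transmission map and airlight, the scene radiance is recovered by inverting the model. Estimating t and A, and the APSF handling of multiple scattering, is the hard part and is not shown; the constant transmission and airlight below are illustrative assumptions.

        import numpy as np

        def recover_scene(I, t, A, t_min=0.1):
            """Invert the atmospheric scattering model I = J*t + A*(1 - t)."""
            t = np.clip(t, t_min, 1.0)          # avoid division by near-zero transmission
            J = (I - A) / t[..., None] + A
            return np.clip(J, 0.0, 1.0)

        # Toy usage: a synthetic hazy image with transmission 0.4 and airlight 0.8.
        rng = np.random.default_rng(4)
        J_true = rng.uniform(0.0, 1.0, size=(4, 4, 3))
        t_map = np.full((4, 4), 0.4)
        I_hazy = J_true * t_map[..., None] + 0.8 * (1.0 - t_map[..., None])
        print(np.allclose(recover_scene(I_hazy, t_map, 0.8), J_true))   # True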

  6. A new treatment of nonlocality in scattering process

    Science.gov (United States)

    Upadhyay, N. J.; Bhagwat, A.; Jain, B. K.

    2018-01-01

    Nonlocality in the scattering potential leads to an integro-differential equation, in which nonlocality enters through an integral over the nonlocal potential kernel. The resulting Schrödinger equation is usually handled by approximating the (r, r′) dependence of the nonlocal kernel. The present work proposes a novel method to solve the integro-differential equation. The method, using the mean value theorem of integral calculus, converts the nonhomogeneous term into a homogeneous term. The effective local potential in this equation turns out to be energy independent, but depends on the relative angular momentum. This method is accurate and valid for any form of nonlocality. As illustrative examples, the total and differential cross sections for neutron scattering off 12C, 56Fe and 100Mo nuclei are calculated with this method in the low-energy region (up to 10 MeV) and are found to be in reasonable accord with the experiments.

  7. 7 CFR 981.8 - Inedible kernel.

    Science.gov (United States)

    2010-01-01

    7 CFR 981.8, Definitions: Inedible kernel. Inedible kernel means a kernel, piece, or particle of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or...

  8. The construction of a two-dimensional reproducing kernel function and its application in a biomedical model.

    Science.gov (United States)

    Guo, Qi; Shen, Shu-Ting

    2016-04-29

    There are two major classes of cardiac tissue models: the ionic model and the FitzHugh-Nagumo model. During computer simulation, each model entails solving a system of complex ordinary differential equations and a partial differential equation with non-flux boundary conditions. The reproducing kernel method possesses significant applications in solving partial differential equations. The derivative of the reproducing kernel function is a wavelet function, which has local properties and sensitivity to singularities. Therefore, study of the application of the reproducing kernel is advantageous. The aim is to apply new mathematical theory to the numerical solution of the ventricular muscle model so as to improve its precision in comparison with other methods presently in use. A two-dimensional reproducing kernel function in space is constructed and applied to computing the solution of the two-dimensional cardiac tissue model, by means of the difference method through time and the reproducing kernel method through space. Compared with other methods, this method holds several advantages such as high accuracy in computing solutions, insensitivity to different time steps and a slow propagation speed of error. It is suitable for disorderly scattered node systems without meshing, and can arbitrarily change the location and density of the solution on different time layers. The reproducing kernel method has higher solution accuracy and stability in solving the two-dimensional cardiac tissue model.

  9. Irreducible kernels and nonperturbative expansions in a theory with pure m -> m interaction

    International Nuclear Information System (INIS)

    Iagolnitzer, D.

    1983-01-01

    Recent results on the structure of the S matrix at the m-particle threshold (m ≥ 2) in a simplified m → m scattering theory with no subchannel interaction are extended to the Green function F on the basis of off-shell unitarity, through an adequate mathematical extension of some results of Fredholm theory: local two-sheeted or infinite-sheeted structure of F around s = (mμ)², depending on the parity of (m−1)(ν−1) (where μ > 0 is the mass and ν is the dimension of space-time), off-shell definition of the irreducible kernel U [which is the analogue of the K matrix in the two different parity cases (m−1)(ν−1) odd or even] and a related local expansion of F, for (m−1)(ν−1) even, in powers of σ^β ln σ, where σ = (mμ)² − s. It is shown that each term in this expansion is the dominant contribution to a Feynman-type integral in which each vertex is a kernel U. The links between the kernel U and Bethe-Salpeter-type kernels G of the theory are exhibited in both parity cases, as are the links between the above expansion of F and local expansions, in the Bethe-Salpeter-type framework, of F_λ in terms of Feynman-type integrals in which each vertex is a kernel G and which include both dominant and subdominant contributions. (orig.)

  10. 7 CFR 981.408 - Inedible kernel.

    Science.gov (United States)

    2010-01-01

    7 CFR 981.408, Administrative Rules and Regulations: Inedible kernel. Pursuant to § 981.8, the definition of inedible kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as...

  11. Model selection in kernel ridge regression

    DEFF Research Database (Denmark)

    Exterkate, Peter

    2013-01-01

    Kernel ridge regression is a technique to perform ridge regression with a potentially infinite number of nonlinear transformations of the independent variables as regressors. This method is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts. The influence of the choice of kernel and the setting of tuning parameters on forecast accuracy is investigated. Several popular kernels are reviewed, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. The latter two kernels are interpreted in terms of their smoothing properties, and the tuning parameters associated to all these kernels are related to smoothness measures of the prediction function and to the signal-to-noise ratio. Based on these interpretations, guidelines are provided for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study...

  12. LZW-Kernel: fast kernel utilizing variable length code blocks from LZW compressors for protein sequence classification.

    Science.gov (United States)

    Filatov, Gleb; Bauwens, Bruno; Kertész-Farkas, Attila

    2018-05-07

    Bioinformatics studies often rely on similarity measures between sequence pairs, which often pose a bottleneck in large-scale sequence analysis. Here, we present a new convolutional kernel function for protein sequences called the LZW-Kernel. It is based on code words identified with the Lempel-Ziv-Welch (LZW) universal text compressor. The LZW-Kernel is an alignment-free method, it is always symmetric and positive, it always provides 1.0 for self-similarity, and it can directly be used with Support Vector Machines (SVMs) in classification problems, contrary to normalized compression distance (NCD), which often violates the distance metric properties in practice and requires further techniques to be used with SVMs. The LZW-Kernel is a one-pass algorithm, which makes it particularly suitable for big data applications. Our experimental studies on remote protein homology detection and protein classification tasks reveal that the LZW-Kernel closely approaches the performance of the Local Alignment Kernel (LAK) and the SVM-pairwise method combined with Smith-Waterman (SW) scoring at a fraction of the time. Moreover, the LZW-Kernel outperforms the SVM-pairwise method when combined with BLAST scores, which indicates that the LZW code words might be a better basis for similarity measures than local alignment approximations found with BLAST. In addition, the LZW-Kernel outperforms n-gram based mismatch kernels, hidden Markov model based SAM and Fisher kernel, and protein family based PSI-BLAST, among others. Further advantages include the LZW-Kernel's reliance on a simple idea, its ease of implementation, and its high speed, three times faster than BLAST and several orders of magnitude faster than SW or LAK in our tests. LZW-Kernel is implemented as standalone C code and is a free open-source program distributed under the GPLv3 license and can be downloaded from https://github.com/kfattila/LZW-Kernel. akerteszfarkas@hse.ru. Supplementary data are available at Bioinformatics Online.
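
    A minimal sketch of the underlying idea, not the authors' implementation: extract an LZW-style dictionary of code words from each sequence and score similarity by the overlap of the two code-word sets, normalised so that self-similarity is 1.0. The Jaccard overlap used here is an illustrative stand-in for the kernel defined in the paper.

        def lzw_codewords(seq):
            """Return the set of code words produced by a basic LZW pass over seq."""
            dictionary = set(seq)        # start from the single characters present
            words = set()
            w = ""
            for c in seq:
                wc = w + c
                if wc in dictionary:
                    w = wc
                else:
                    dictionary.add(wc)
                    words.add(wc)
                    w = c
            if w:
                words.add(w)
            return words

        def lzw_similarity(a, b):
            """Symmetric, equals 1.0 on identical inputs: Jaccard overlap of code words."""
            A, B = lzw_codewords(a), lzw_codewords(b)
            return len(A & B) / len(A | B) if A | B else 1.0

        print(lzw_similarity("MKVLAAGG", "MKVLAAGG"))   # 1.0
        print(lzw_similarity("MKVLAAGG", "MKWLSAGG"))   # smaller overlap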

  13. Viscosity kernel of molecular fluids

    DEFF Research Database (Denmark)

    Puscasu, Ruslan; Todd, Billy; Daivis, Peter

    2010-01-01

    ... temperature, and chain length dependencies of the reciprocal and real-space viscosity kernels are presented. We find that the density has a major effect on the shape of the kernel. The temperature range and chain lengths considered here have by contrast less impact on the overall normalized shape. Functional forms that fit the wave-vector-dependent kernel data over a large density and wave-vector range have also been tested. Finally, a structural normalization of the kernels in physical space is considered. Overall, the real-space viscosity kernel has a width of roughly 3–6 atomic diameters, which means...

  14. Kernel learning algorithms for face recognition

    CERN Document Server

    Li, Jun-Bao; Pan, Jeng-Shyang

    2013-01-01

    Kernel Learning Algorithms for Face Recognition covers the framework of kernel based face recognition. This book discusses advanced kernel learning algorithms and their application to face recognition. It also focuses on the theoretical derivation, the system framework and experiments involving kernel based face recognition. Included within are algorithms of kernel based face recognition, and also the feasibility of the kernel based face recognition method. This book provides researchers in the pattern recognition and machine learning area with advanced face recognition methods and its new

  15. Imaging through scattering media by Fourier filtering and single-pixel detection

    Science.gov (United States)

    Jauregui-Sánchez, Y.; Clemente, P.; Lancis, J.; Tajahuerce, E.

    2018-02-01

    We present a novel imaging system that combines the principles of Fourier spatial filtering and single-pixel imaging in order to recover images of an object hidden behind a turbid medium by transillumination. We compare the performance of our single-pixel imaging setup with that of a conventional system. We conclude that the introduction of Fourier gating improves the contrast of images in both cases. Furthermore, we show that the combination of single-pixel imaging and Fourier spatial filtering techniques is particularly well adapted to provide images of objects transmitted through scattering media.
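
    A minimal sketch of the single-pixel imaging principle the setup relies on: a sequence of structured illumination patterns is projected, one bucket measurement is recorded per pattern, and the image is reconstructed from the pattern/measurement pairs. Orthogonal Hadamard patterns are assumed here, and the Fourier gating stage of the paper is omitted.

        import numpy as np
        from scipy.linalg import hadamard

        n = 8                                        # image side; n*n must be a power of two here
        img = np.zeros((n, n)); img[2:6, 3:5] = 1.0  # simple test object
        x = img.ravel()

        H = hadamard(n * n).astype(float)            # each row is one +/-1 illumination pattern
        y = H @ x                                    # one single-pixel (bucket) value per pattern

        x_rec = (H.T @ y) / (n * n)                  # Hadamard matrices satisfy H^T H = N I
        print(np.allclose(x_rec.reshape(n, n), img)) # True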

  16. Partial Deconvolution with Inaccurate Blur Kernel.

    Science.gov (United States)

    Ren, Dongwei; Zuo, Wangmeng; Zhang, David; Xu, Jun; Zhang, Lei

    2017-10-17

    Most non-blind deconvolution methods are developed under the error-free kernel assumption, and are not robust to inaccurate blur kernel. Unfortunately, despite the great progress in blind deconvolution, estimation error remains inevitable during blur kernel estimation. Consequently, severe artifacts such as ringing effects and distortions are likely to be introduced in the non-blind deconvolution stage. In this paper, we tackle this issue by suggesting: (i) a partial map in the Fourier domain for modeling kernel estimation error, and (ii) a partial deconvolution model for robust deblurring with inaccurate blur kernel. The partial map is constructed by detecting the reliable Fourier entries of estimated blur kernel. And partial deconvolution is applied to wavelet-based and learning-based models to suppress the adverse effect of kernel estimation error. Furthermore, an E-M algorithm is developed for estimating the partial map and recovering the latent sharp image alternatively. Experimental results show that our partial deconvolution model is effective in relieving artifacts caused by inaccurate blur kernel, and can achieve favorable deblurring quality on synthetic and real blurry images.

  17. Software correction of scatter coincidence in positron CT

    International Nuclear Information System (INIS)

    Endo, M.; Iinuma, T.A.

    1984-01-01

    This paper describes a software correction of scatter coincidence in positron CT which is based on an estimation of scatter projections from true projections by an integral transform. Kernels for the integral transform are projected distributions of scatter coincidences for a line source at different positions in a water phantom and are calculated by Klein-Nishina's formula. True projections of any composite object can be determined from measured projections by iterative applications of the integral transform. The correction method was tested in computer simulations and phantom experiments with Positologica. The results showed that effects of scatter coincidence are not negligible in the quantitation of images, but the correction reduces them significantly. (orig.)
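
    A minimal sketch of the correction scheme described: the scatter contribution in a projection is estimated by convolving the current estimate of the true projection with a scatter kernel, and the true projection is recovered by iterating. The exponential kernel and its amplitude below are illustrative stand-ins for the Klein-Nishina-derived kernels of the paper.

        import numpy as np

        def correct_scatter(measured, kernel, n_iter=10):
            """Iteratively estimate true projections: true ~ measured - kernel * true."""
            true = measured.copy()
            for _ in range(n_iter):
                scatter = np.convolve(true, kernel, mode="same")
                true = np.clip(measured - scatter, 0.0, None)
            return true

        bins = np.arange(-20, 21)
        kernel = 0.05 * np.exp(-np.abs(bins) / 5.0)   # broad, low-amplitude scatter response
        true = np.zeros(101); true[45:56] = 100.0     # idealised true projection
        measured = true + np.convolve(true, kernel, mode="same")
        print(np.abs(correct_scatter(measured, kernel) - true).max())   # small residual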

  18. Kernel methods for deep learning

    OpenAIRE

    Cho, Youngmin

    2012-01-01

    We introduce a new family of positive-definite kernels that mimic the computation in large neural networks. We derive the different members of this family by considering neural networks with different activation functions. Using these kernels as building blocks, we also show how to construct other positive-definite kernels by operations such as composition, multiplication, and averaging. We explore the use of these kernels in standard models of supervised learning, such as support vector mach...
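
    A minimal sketch of one kernel of this type as it is usually written in the literature: the order-1 arc-cosine kernel, which mimics a single layer of rectified linear units, applied to a Gram matrix and composed with itself to mimic a deeper network. This is a hedged illustration, not the authors' code.

        import numpy as np

        def arccos_kernel_1(K):
            """Order-1 arc-cosine kernel applied to a Gram matrix K of inner products:
            k(x, y) = (1/pi) * |x| * |y| * (sin(theta) + (pi - theta) * cos(theta))."""
            norms = np.sqrt(np.diag(K))
            outer = np.outer(norms, norms)
            cos_t = np.clip(K / np.where(outer > 0, outer, 1.0), -1.0, 1.0)
            theta = np.arccos(cos_t)
            return (outer / np.pi) * (np.sin(theta) + (np.pi - theta) * cos_t)

        X = np.random.default_rng(5).normal(size=(50, 10))
        K0 = X @ X.T                      # linear Gram matrix (input layer)
        K1 = arccos_kernel_1(K0)          # mimics one hidden layer of ReLU units
        K2 = arccos_kernel_1(K1)          # composition mimics a deeper network
        print(K1.shape, K2.shape)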

  19. Optimizing memory-bound SYMV kernel on GPU hardware accelerators

    KAUST Repository

    Abdelfattah, Ahmad

    2013-01-01

    Hardware accelerators are becoming ubiquitous in high performance scientific computing. They are capable of delivering an unprecedented level of concurrent execution contexts. High-level programming language extensions (e.g., CUDA) and profiling tools (e.g., PAPI-CUDA, CUDA Profiler) are paramount to improve productivity, while effectively exploiting the underlying hardware. We present an optimized numerical kernel for computing the symmetric matrix-vector product on nVidia Fermi GPUs. Due to its inherent memory-bound nature, this kernel is very critical in the tridiagonalization of a symmetric dense matrix, which is a preprocessing step to calculate the eigenpairs. Using a novel design to address the irregular memory accesses by hiding latency and increasing bandwidth, our preliminary asymptotic results show 3.5x and 2.5x speedups over the similar CUBLAS 4.0 kernel, and 7-8% and 30% improvements over the Matrix Algebra on GPU and Multicore Architectures (MAGMA) library, in single and double precision arithmetic, respectively. © 2013 Springer-Verlag.

  20. 7 CFR 981.9 - Kernel weight.

    Science.gov (United States)

    2010-01-01

    7 CFR 981.9, Definitions: Kernel weight. Kernel weight means the weight of kernels, including...

  1. Thermal-neutron multiple scattering: critical double scattering

    International Nuclear Information System (INIS)

    Holm, W.A.

    1976-01-01

    A quantum mechanical formulation for multiple scattering of thermal neutrons from macroscopic targets is presented and applied to single and double scattering. Critical nuclear scattering from liquids and critical magnetic scattering from ferromagnets are treated in detail in the quasielastic approximation for target systems slightly above their critical points. Numerical estimates are made of the double scattering contribution to the critical magnetic cross section using relevant parameters from actual experiments performed on various ferromagnets. The effect is to alter the usual Lorentzian line shape dependence on neutron wave vector transfer. Comparison with corresponding deviations in line shape resulting from the use of Fisher's modified form of the Ornstein-Zernike spin correlations within the framework of single scattering theory leads to values for the critical exponent eta of the modified correlations which reproduce the effect of double scattering. In addition, it is shown that by restricting the range of applicability of the multiple scattering theory from the outset to critical scattering, Glauber's high energy approximation can be used to provide a much simpler and more powerful description of multiple scattering effects. When sufficiently close to the critical point, it provides a closed form expression for the differential cross section which includes all orders of scattering and has the same form as the single scattering cross section with a modified exponent for the wave vector transfer

  2. Veto-Consensus Multiple Kernel Learning

    NARCIS (Netherlands)

    Zhou, Y.; Hu, N.; Spanos, C.J.

    2016-01-01

    We propose Veto-Consensus Multiple Kernel Learning (VCMKL), a novel way of combining multiple kernels such that one class of samples is described by the logical intersection (consensus) of base kernelized decision rules, whereas the other classes by the union (veto) of their complements. The

  3. 7 CFR 51.2295 - Half kernel.

    Science.gov (United States)

    2010-01-01

    7 CFR 51.2295, United States Standards for Shelled English Walnuts (Juglans Regia), Definitions: Half kernel. Half kernel means the separated half of a kernel with not more than one-eighth broken off.

  4. An Approximate Approach to Automatic Kernel Selection.

    Science.gov (United States)

    Ding, Lizhong; Liao, Shizhong

    2016-02-02

    Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.

  5. Iterative software kernels

    Energy Technology Data Exchange (ETDEWEB)

    Duff, I.

    1994-12-31

    This workshop focuses on kernels for iterative software packages. Specifically, the three speakers discuss various aspects of sparse BLAS kernels. Their topics are: `Current status of user level sparse BLAS`; `Current status of the sparse BLAS toolkit`; and `Adding matrix-matrix and matrix-matrix-matrix multiply to the sparse BLAS toolkit`.

  6. Viscozyme L pretreatment on palm kernels improved the aroma of palm kernel oil after kernel roasting.

    Science.gov (United States)

    Zhang, Wencan; Leong, Siew Mun; Zhao, Feifei; Zhao, Fangju; Yang, Tiankui; Liu, Shaoquan

    2018-05-01

    With an interest to enhance the aroma of palm kernel oil (PKO), Viscozyme L, an enzyme complex containing a wide range of carbohydrases, was applied to alter the carbohydrates in palm kernels (PK) to modulate the formation of volatiles upon kernel roasting. After Viscozyme treatment, the content of simple sugars and free amino acids in PK increased by 4.4-fold and 4.5-fold, respectively. After kernel roasting and oil extraction, significantly more 2,5-dimethylfuran, 2-[(methylthio)methyl]-furan, 1-(2-furanyl)-ethanone, 1-(2-furyl)-2-propanone, 5-methyl-2-furancarboxaldehyde and 2-acetyl-5-methylfuran but less 2-furanmethanol and 2-furanmethanol acetate were found in treated PKO; the correlation between their formation and simple sugar profile was estimated by using partial least square regression (PLS1). Obvious differences in pyrroles and Strecker aldehydes were also found between the control and treated PKOs. Principal component analysis (PCA) clearly discriminated the treated PKOs from that of control PKOs on the basis of all volatile compounds. Such changes in volatiles translated into distinct sensory attributes, whereby treated PKO was more caramelic and burnt after aqueous extraction and more nutty, roasty, caramelic and smoky after solvent extraction. Copyright © 2018 Elsevier Ltd. All rights reserved.

  7. Effect of diffraction on stimulated Brillouin scattering from a single laser hot spot

    International Nuclear Information System (INIS)

    Eliseev, V.V.; Rozmus, W.; Tikhonchuk, V.T.; Capjack, C.E.

    1996-01-01

    A single laser hot spot in an underdense plasma is represented as a focused Gaussian laser beam. Stimulated Brillouin scattering (SBS) from such a Gaussian beam with small f/numbers 2-4 has been studied in a three-dimensional slab geometry. It is shown that the SBS reflectivity from a single laser hot spot is much lower than that predicted by a simple three wave coupling model because of the diffraction of the scattered light from the spatially localized ion acoustic wave. SBS gain per one Rayleigh length of the incident laser beam is proposed as a quantitative measure of this effect. Diffraction-limited SBS from a randomized laser beam is also discussed. copyright 1996 American Institute of Physics

  8. A kernel version of spatial factor analysis

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2009-01-01

    ... Schölkopf et al. introduce kernel PCA. Shawe-Taylor and Cristianini is an excellent reference for kernel methods in general. Bishop and Press et al. describe kernel methods among many other subjects. Nielsen and Canty use kernel PCA to detect change in univariate airborne digital camera images. The kernel version of PCA handles nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. In this paper we shall apply kernel versions of PCA, maximum autocorrelation factor (MAF) analysis...

  9. How to simplify transmission-based scatter correction for clinical application

    International Nuclear Information System (INIS)

    Baccarne, V.; Hutton, B.F.

    1998-01-01

    Full text: The performance of ordered subsets (OS) EM reconstruction including attenuation, scatter and spatial resolution correction is evaluated using cardiac Monte Carlo data. We demonstrate how simplifications in the scatter model allow SPECT data to be corrected for scatter, in terms of quantitation and image quality, in a reasonable time. Initial reconstruction of the 20% window is performed including attenuation correction (broad beam μ values), to estimate the activity quantitatively (accuracy 3%), but not spatially. A rough reconstruction with 2 iterations (subset size: 8) is sufficient for the subsequent scatter correction. Estimation of the primary photons is obtained by projecting the previous distribution including attenuation (narrow beam μ values). Estimation of the scatter is obtained by convolving the primary estimates with a depth-dependent scatter kernel and scaling the result by a factor calculated from the attenuation map. The correction can be accelerated by convolving several adjacent planes with the same kernel and using an average scaling factor. Simulation of the effects of the collimator during the scatter correction was demonstrated to be unnecessary. Final reconstruction is performed using 6 iterations of OSEM, including attenuation (narrow beam μ values) and spatial resolution correction. Scatter correction is implemented by incorporating the estimated scatter as a constant offset in the forward projection step. The total correction + reconstruction (64 proj. 40x128 pixel) takes 38 minutes on a Sun Sparc 20. Quantitatively, the accuracy is 7% in a reconstructed slice. The SNR inside the whole myocardium (defined from the original object) is equal to 2.1 and 2.3 in the corrected and the primary slices, respectively. The scatter correction preserves the myocardium-to-ventricle contrast (primary: 0.79, corrected: 0.82). These simplifications allow acceleration of the correction without influencing the quality of the result

  10. Optimizing memory-bound SYMV kernel on GPU hardware accelerators

    KAUST Repository

    Abdelfattah, Ahmad; Dongarra, Jack; Keyes, David E.; Ltaief, Hatem

    2013-01-01

    and increasing bandwidth, our preliminary asymptotic results show 3.5x and 2.5x fold speedups over the similar CUBLAS 4.0 kernel, and 7-8% and 30% fold improvement over the Matrix Algebra on GPU and Multicore Architectures (MAGMA) library in single and double

  11. Neuronal model with distributed delay: analysis and simulation study for gamma distribution memory kernel.

    Science.gov (United States)

    Karmeshu; Gupta, Varun; Kadambari, K V

    2011-06-01

    A single neuronal model incorporating distributed delay (memory) is proposed. The stochastic model has been formulated as a Stochastic Integro-Differential Equation (SIDE), which results in the underlying process being non-Markovian. A detailed analysis of the model when the distributed delay kernel has exponential form (weak delay) has been carried out. The selection of the exponential kernel has enabled the transformation of the non-Markovian model to a Markovian model in an extended state space. For the study of the First Passage Time (FPT) with the exponential delay kernel, the model has been transformed to a system of coupled Stochastic Differential Equations (SDEs) in a two-dimensional state space. Simulation studies of the SDEs provide insight into the effect of the weak delay kernel on the Inter-Spike Interval (ISI) distribution. A measure based on the Jensen-Shannon divergence is proposed which can be used to make a choice between two competing models, viz. the distributed delay model vis-à-vis the LIF model. An interesting feature of the model is that the behavior of the coefficient of variation CV_ISI(t) of the ISI distribution with respect to the memory kernel time constant parameter η reveals that the neuron can switch from a bursting state to a non-bursting state as the noise intensity parameter changes. The membrane potential exhibits a decaying auto-correlation structure with or without damped oscillatory behavior, depending on the choice of parameters. This behavior is in agreement with the empirically observed pattern of spike counts in a fixed time window. The power spectral density derived from the auto-correlation function is found to exhibit single and double peaks. The model is also examined for the case of strong delay, with the memory kernel having the form of a Gamma distribution. In contrast to the fast decay of damped oscillations of the ISI distribution for the model with the weak delay kernel, the decay of damped oscillations is found to be slower for the model with the strong delay kernel.
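
    A minimal sketch of the Markovian embedding the abstract describes: with an exponential memory kernel the delayed term becomes an auxiliary state variable, and the resulting two-dimensional SDE system is integrated by Euler-Maruyama. The drift form, the inhibitory feedback strength and all parameter values are illustrative assumptions, not those of the paper.

        import numpy as np

        # Extended (Markovian) system for an exponential kernel k(s) = exp(-s/eta)/eta:
        #   dV = (-V/tau + a*z + I) dt + sigma dW,     dz = (V - z)/eta dt
        tau, eta, a, I, sigma = 10.0, 5.0, -0.2, 1.0, 0.5
        dt, n_steps = 0.01, 50_000
        rng = np.random.default_rng(6)

        V, z = 0.0, 0.0
        trace = np.empty(n_steps)
        for i in range(n_steps):
            dW = rng.normal(0.0, np.sqrt(dt))
            V += (-V / tau + a * z + I) * dt + sigma * dW
            z += (V - z) / eta * dt
            trace[i] = V

        print(trace.mean(), trace.std())   # stationary statistics of the membrane potential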

  12. 7 CFR 51.1441 - Half-kernel.

    Science.gov (United States)

    2010-01-01

    7 CFR 51.1441, United States Standards for Grades of Shelled Pecans, Definitions: Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume missing...

  13. Composite population kernels in ytterbium-buffer collisions studied by means of laser-saturated absorption

    International Nuclear Information System (INIS)

    Zhu, X.

    1986-01-01

    We present a systematic study of composite population kernels for 174Yb collisions with He, Ar, and Xe buffer gases, using laser-saturation spectroscopy. 174Yb is chosen as the active species because of the simple structure of its 1S0-3P1 resonance transition (λ = 556 nm). Elastic collisions are modeled by means of a composite collision kernel, an expression of which is explicitly derived based on arguments of a hard-sphere potential and two-category collisions. The corresponding coupled population-rate equations are solved by iteration to obtain an expression for the saturated-absorption line shape. This expression is fit to the data to obtain information about the composite kernel, along with reasonable values for other parameters. The results confirm that a composite kernel is more general and realistic than a single-component kernel, and the generality in principle and the practical necessity of the former are discussed

  14. Application of geometric algebra to electromagnetic scattering the Clifford-Cauchy-Dirac technique

    CERN Document Server

    Seagar, Andrew

    2016-01-01

    This work presents the Clifford-Cauchy-Dirac (CCD) technique for solving problems involving the scattering of electromagnetic radiation from materials of all kinds. It allows anyone who is interested to master techniques that lead to simpler and more efficient solutions to problems of electromagnetic scattering than are currently in use. The technique is formulated in terms of the Cauchy kernel, single integrals, Clifford algebra and a whole-field approach. This is in contrast to many conventional techniques that are formulated in terms of Green's functions, double integrals, vector calculus and the combined field integral equation (CFIE). Whereas these conventional techniques lead to an implementation using the method of moments (MoM), the CCD technique is implemented as alternating projections onto convex sets in a Banach space. The ultimate outcome is an integral formulation that lends itself to a more direct and efficient solution than conventionally is the case, and applies without exception to all types...

  15. Recent advances and open questions in neutrino-induced quasi-elastic scattering and single photon production

    Energy Technology Data Exchange (ETDEWEB)

    Garvey, G.T., E-mail: garvey@lanl.gov [Los Alamos National Laboratory, P.O. Box 1663, Los Alamos, NM 87545 (United States); Harris, D.A., E-mail: dharris@fnal.gov [Fermi National Accelerator Laboratory, P.O. Box 500, Batavia, IL, 60510-5011 (United States); Tanaka, H.A., E-mail: tanaka@phas.ubc.ca [Institute of Particle Physics and Department of Physics and Astronomy, University of British Columbia, 6224 Agricultural Road, Vancouver, BC V6T 1Z1 (Canada); Tayloe, R., E-mail: rtayloe@indiana.edu [Department of Physics, Indiana University, 727 E. Third St., Bloomington, IN 47405-7105 (United States); Zeller, G.P., E-mail: gzeller@fnal.gov [Fermi National Accelerator Laboratory, P.O. Box 500, Batavia, IL, 60510-5011 (United States)

    2015-06-15

    The study of neutrino–nucleus interactions has recently seen rapid development with a new generation of accelerator-based neutrino experiments employing medium and heavy nuclear targets for the study of neutrino oscillations. A few unexpected results in the study of quasi-elastic scattering and single photon production have spurred a revisiting of the underlying nuclear physics and connections to electron–nucleus scattering. A thorough understanding and resolution of these issues is essential for future progress in the study of neutrino oscillations. A recent workshop hosted by the Institute of Nuclear Theory at the University of Washington (INT-13-54W) examined experimental and theoretical developments in neutrino–nucleus interactions and related measurements from electron and pion scattering. We summarize the discussions at the workshop pertaining to the aforementioned issues in quasi-elastic scattering and single photon production, particularly where there was consensus on the highest priority issues to be resolved and the path towards resolving them.

  16. Scaling limit of deeply virtual Compton scattering

    Energy Technology Data Exchange (ETDEWEB)

    A. Radyushkin

    2000-07-01

    The author outlines a perturbative QCD approach to the analysis of the deeply virtual Compton scattering process γ*p → γp′ in the limit of vanishing momentum transfer t = (p′ − p)². The DVCS amplitude in this limit exhibits a scaling behavior described by two-argument distributions F(x,y) which specify the fractions of the initial momentum p and the momentum transfer r ≡ p′ − p carried by the constituents of the nucleon. The kernel R(x,y;ξ,η) governing the evolution of the non-forward distributions F(x,y) has a remarkable property: it produces the GLAPD evolution kernel P(x/ξ) when integrated over y and reduces to the Brodsky-Lepage evolution kernel V(y,η) after the x-integration. This property is used to construct the solution of the one-loop evolution equation for the flavor non-singlet part of the non-forward quark distribution.

  17. Sizing of single evaporating droplet with Near-Forward Elastic Scattering Spectroscopy

    Science.gov (United States)

    Woźniak, M.; Jakubczyk, D.; Derkachov, G.; Archer, J.

    2017-11-01

    We have developed an optical setup and related numerical models to study evolution of single evaporating micro-droplets by analysis of their spectral properties. Our approach combines the advantages of the electrodynamic trapping with the broadband spectral analysis with the supercontinuum laser illumination. The elastically scattered light within the spectral range of 500-900 nm is observed by a spectrometer placed at the near-forward scattering angles between 4.3 ° and 16.2 ° and compared with the numerically generated lookup table of the broadband Mie scattering. Our solution has been successfully applied to infer the size evolution of the evaporating droplets of pure liquids (diethylene and ethylene glycol) and suspensions of nanoparticles (silica and gold nanoparticles in diethylene glycol), with maximal accuracy of ± 25 nm. The obtained results have been compared with the previously developed sizing techniques: (i) based on the analysis of the Mie scattering images - the Mie Scattering Lookup Table Method and (ii) the droplet weighting. Our approach provides possibility to handle levitating objects with much larger size range (radius from 0.5 μm to 30 μm) than with the use of optical tweezers (typically radius below 8 μm) and analyse them with much wider spectral range than with commonly used LED sources.

  18. Polarized Raman scattering study of PSN single crystals and epitaxial thin films

    Directory of Open Access Journals (Sweden)

    J. Pokorný

    2015-06-01

    Full Text Available This paper describes a detailed analysis of the dependence of Raman scattering intensity on the polarization of the incident and inelastically scattered light in PbSc0.5Nb0.5O3 (PSN) single crystals and epitaxially compressed thin films grown on (100)-oriented MgO substrates. It is found that there are significant differences between the properties of the crystals and films, and that these differences can be attributed to the anticipated structural differences between these two forms of the same material. In particular, the scattering characteristics of the oxygen octahedra breathing mode near 810 cm-1 indicate a ferroelectric state for the crystals and a relaxor state for the films, which is consistent with the dielectric behaviors of these materials.

  19. Extreme Scale FMM-Accelerated Boundary Integral Equation Solver for Wave Scattering

    KAUST Repository

    AbdulJabbar, Mustafa Abdulmajeed

    2018-03-27

    Algorithmic and architecture-oriented optimizations are essential for achieving performance worthy of anticipated energy-austere exascale systems. In this paper, we present an extreme scale FMM-accelerated boundary integral equation solver for wave scattering, which uses FMM as a matrix-vector multiplication inside the GMRES iterative method. Our FMM Helmholtz kernels treat nontrivial singular and near-field integration points. We implement highly optimized kernels for both shared and distributed memory, targeting emerging Intel extreme performance HPC architectures. We extract the potential thread- and data-level parallelism of the key Helmholtz kernels of FMM. Our application code is well optimized to exploit the AVX-512 SIMD units of Intel Skylake and Knights Landing architectures. We provide different performance models for tuning the task-based tree traversal implementation of FMM, and develop optimal architecture-specific and algorithm aware partitioning, load balancing, and communication reducing mechanisms to scale up to 6,144 compute nodes of a Cray XC40 with 196,608 hardware cores. With shared memory optimizations, we achieve roughly 77% of peak single precision floating point performance of a 56-core Skylake processor, and on average 60% of peak single precision floating point performance of a 72-core KNL. These numbers represent nearly 5.4x and 10x speedup on Skylake and KNL, respectively, compared to the baseline scalar code. With distributed memory optimizations, on the other hand, we report near-optimal efficiency in the weak scalability study with respect to both the logarithmic communication complexity as well as the theoretical scaling complexity of FMM. In addition, we exhibit up to 85% efficiency in strong scaling. We compute in excess of 2 billion DoF on the full-scale of the Cray XC40 supercomputer.

  20. A Fast Multiple-Kernel Method With Applications to Detect Gene-Environment Interaction.

    Science.gov (United States)

    Marceau, Rachel; Lu, Wenbin; Holloway, Shannon; Sale, Michèle M; Worrall, Bradford B; Williams, Stephen R; Hsu, Fang-Chi; Tzeng, Jung-Ying

    2015-09-01

    Kernel machine (KM) models are a powerful tool for exploring associations between sets of genetic variants and complex traits. Although most KM methods use a single kernel function to assess the marginal effect of a variable set, KM analyses involving multiple kernels have become increasingly popular. Multikernel analysis allows researchers to study more complex problems, such as assessing gene-gene or gene-environment interactions, incorporating variance-component based methods for population substructure into rare-variant association testing, and assessing the conditional effects of a variable set adjusting for other variable sets. The KM framework is robust, powerful, and provides efficient dimension reduction for multifactor analyses, but requires the estimation of high dimensional nuisance parameters. Traditional estimation techniques, including regularization and the "expectation-maximization (EM)" algorithm, have a large computational cost and are not scalable to large sample sizes needed for rare variant analysis. Therefore, under the context of gene-environment interaction, we propose a computationally efficient and statistically rigorous "fastKM" algorithm for multikernel analysis that is based on a low-rank approximation to the nuisance effect kernel matrices. Our algorithm is applicable to various trait types (e.g., continuous, binary, and survival traits) and can be implemented using any existing single-kernel analysis software. Through extensive simulation studies, we show that our algorithm has similar performance to an EM-based KM approach for quantitative traits while running much faster. We also apply our method to the Vitamin Intervention for Stroke Prevention (VISP) clinical trial, examining gene-by-vitamin effects on recurrent stroke risk and gene-by-age effects on change in homocysteine level. © 2015 WILEY PERIODICALS, INC.
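
    A minimal sketch of the low-rank idea behind the algorithm: the nuisance kernel matrix is replaced by a truncated eigendecomposition before it enters downstream computations. The RBF kernel, the chosen rank and the toy data are illustrative assumptions, not the kernels or traits of the VISP analysis.

        import numpy as np

        def low_rank_kernel(X, gamma=0.5, rank=20):
            """Rank-r approximation K ~ U diag(s) U^T of an RBF kernel matrix."""
            d2 = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2 * X @ X.T
            K = np.exp(-gamma * d2)
            vals, vecs = np.linalg.eigh(K)
            idx = np.argsort(vals)[::-1][:rank]
            return vecs[:, idx], np.maximum(vals[idx], 0.0), K

        X = np.random.default_rng(7).normal(size=(300, 5))
        U, s, K = low_rank_kernel(X)
        K_approx = (U * s) @ U.T
        print(np.linalg.norm(K - K_approx) / np.linalg.norm(K))   # relative approximation error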

  1. Local Observed-Score Kernel Equating

    Science.gov (United States)

    Wiberg, Marie; van der Linden, Wim J.; von Davier, Alina A.

    2014-01-01

    Three local observed-score kernel equating methods that integrate methods from the local equating and kernel equating frameworks are proposed. The new methods were compared with their earlier counterparts with respect to such measures as bias--as defined by Lord's criterion of equity--and percent relative error. The local kernel item response…

  2. Characterisation and final disposal behaviour of thoria-based fuel kernels in aqueous phases

    International Nuclear Information System (INIS)

    Titov, M.

    2005-08-01

    Two high-temperature reactors (AVR and THTR) operated in Germany have produced about 1 million spent fuel elements. The nuclear fuel in these reactors consists mainly of thorium-uranium mixed oxides, but pure uranium dioxide and carbide fuels were also tested. One of the possible solutions for utilising spent HTR fuel is direct disposal in deep geological formations. Under such circumstances, the properties of fuel kernels, and especially their leaching behaviour in aqueous phases, have to be investigated for safety assessments of the final repository. In the present work, unirradiated ThO2, (Th0.906,U0.094)O2, (Th0.834,U0.166)O2 and UO2 fuel kernels were investigated. The composition, crystal structure and surface of the kernels were investigated by traditional methods. Furthermore, a new method was developed for testing the mechanical properties of ceramic kernels. The method was successfully used for the examination of the mechanical properties of oxide kernels and for monitoring their evolution during contact with aqueous phases. The leaching behaviour of thoria-based oxide kernels and powders was investigated in repository-relevant salt solutions, as well as in artificial leachates. The influence of different experimental parameters on the kernel leaching stability was investigated. It was shown that thoria-based fuel kernels possess high chemical stability and are indifferent to the presence of oxidative and radiolytic species in solution. The dissolution rate of thoria-based materials is typically several orders of magnitude lower than that of conventional UO2 fuel kernels. The lifetime of a single intact (Th,U)O2 kernel under the aggressive conditions of a salt repository was estimated as about one hundred thousand years. The importance of grain boundary quality for the leaching stability was demonstrated. Numerical Monte Carlo simulations were performed in order to explain the results of leaching experiments. (orig.)

  3. Credit scoring analysis using kernel discriminant

    Science.gov (United States)

    Widiharih, T.; Mukid, M. A.; Mustafid

    2018-05-01

    A credit scoring model is an important tool for reducing the risk of wrong decisions when granting credit facilities to applicants. This paper investigates the performance of the kernel discriminant model in assessing customer credit risk. Kernel discriminant analysis is a non-parametric method, which means that it does not require any assumptions about the probability distribution of the input. The main ingredient is a kernel that allows an efficient computation of the Fisher discriminant. We use several kernels, such as the normal, Epanechnikov, biweight, and triweight kernels. The models' accuracies were compared using data from a financial institution in Indonesia. The results show that the kernel discriminant can be an alternative method for determining who is eligible for a credit loan. For the data we use, the normal kernel is the relevant choice for credit scoring with the kernel discriminant model. Sensitivity and specificity reach 0.5556 and 0.5488, respectively.
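
    A minimal sketch of a kernel-density discriminant of the kind described: class-conditional densities are estimated with a smoothing kernel (here the Epanechnikov kernel, one of those compared in the paper) and an applicant is assigned to the class with the larger estimated density times prior. The two-dimensional toy data and the bandwidth are illustrative assumptions.

        import numpy as np

        def epanechnikov(u):
            """Epanechnikov smoothing kernel."""
            return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u**2), 0.0)

        def class_density(x, sample, h=0.5):
            """Product-kernel density estimate of one class at point x."""
            u = (x[None, :] - sample) / h
            return np.mean(np.prod(epanechnikov(u), axis=1)) / h**sample.shape[1]

        rng = np.random.default_rng(8)
        good = rng.normal([0.0, 0.0], 0.7, size=(200, 2))   # e.g. repaid loans
        bad = rng.normal([1.5, 1.5], 0.7, size=(80, 2))     # e.g. defaulted loans

        applicant = np.array([1.2, 1.0])
        p_good = class_density(applicant, good) * (len(good) / 280)
        p_bad = class_density(applicant, bad) * (len(bad) / 280)
        print("eligible" if p_good > p_bad else "not eligible")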

  4. Influence of orientation averaging on the anisotropy of thermal neutrons scattering on water molecules

    International Nuclear Information System (INIS)

    Markovic, M. I.; Radunovic, J. B.

    1976-01-01

    Determination of the spatial distribution of neutron flux in water, the most frequently used moderator in thermal reactors, requires the microscopic scattering kernel's dependence on the cosine of the thermal-neutron scattering angle when solving the Boltzmann equation. Since the spatial orientation of water molecules influences this dependence, it is necessary to perform orientation averaging of the rotation-vibrational intermediate scattering function for water molecules. The calculations described in this paper and the obtained results showed that the methods of orientation averaging do not influence the anisotropy of thermal-neutron scattering on water molecules, but do influence the inelastic scattering

  5. Wave functions, evolution equations and evolution kernels form light-ray operators of QCD

    International Nuclear Information System (INIS)

    Mueller, D.; Robaschik, D.; Geyer, B.; Dittes, F.M.; Horejsi, J.

    1994-01-01

    The widely used nonperturbative wave functions and distribution functions of QCD are determined as matrix elements of light-ray operators. These operators appear as the large-momentum limit of non-local hadron operators or as summed-up local operators in light-cone expansions. Nonforward one-particle matrix elements of such operators lead to new distribution amplitudes describing both hadrons simultaneously. These distribution functions depend, besides other variables, on two scaling variables. They are applied for the description of exclusive virtual Compton scattering in the Bjorken region near the forward direction and of the two-meson production process. The evolution equations for these distribution amplitudes are derived on the basis of the renormalization group equation of the considered operators. It follows that the evolution kernels are determined by the anomalous dimensions of these operators. Relations between different evolution kernels (especially the Altarelli-Parisi and the Brodsky-Lepage kernels) are derived and explicitly checked against the existing two-loop calculations of QCD. The technical basis of these results are the support and analyticity properties of the anomalous dimensions of light-ray operators, obtained with the help of the α-representation of Green's functions. (orig.)

  6. Time-domain single-source integral equations for analyzing scattering from homogeneous penetrable objects

    KAUST Repository

    Valdé s, Felipe; Andriulli, Francesco P.; Bagci, Hakan; Michielssen, Eric

    2013-01-01

    Single-source time-domain electric-and magnetic-field integral equations for analyzing scattering from homogeneous penetrable objects are presented. Their temporal discretization is effected by using shifted piecewise polynomial temporal basis

  7. Kernel parameter dependence in spatial factor analysis

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2010-01-01

    ... kernel PCA. Shawe-Taylor and Cristianini [4] is an excellent reference for kernel methods in general. Bishop [5] and Press et al. [6] describe kernel methods among many other subjects. The kernel version of PCA handles nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. In this paper we shall apply a kernel version of maximum autocorrelation factor (MAF) [7, 8] analysis to irregularly sampled stream sediment geochemistry data from South Greenland and illustrate the dependence of the kernel width. The 2,097 samples each covering on average 5 km2 are analyzed chemically for the content of 41 elements.

  8. Photon beam convolution using polyenergetic energy deposition kernels

    International Nuclear Information System (INIS)

    Hoban, P.W.; Murray, D.C.; Round, W.H.

    1994-01-01

    In photon beam convolution calculations where polyenergetic energy deposition kernels (EDKs) are used, the primary photon energy spectrum should be correctly accounted for in Monte Carlo generation of EDKs. This requires the probability of interaction, determined by the linear attenuation coefficient, μ, to be taken into account when primary photon interactions are forced to occur at the EDK origin. The use of primary and scattered EDKs generated with a fixed photon spectrum can give rise to an error in the dose calculation due to neglecting the effects of beam hardening with depth. The proportion of primary photon energy that is transferred to secondary electrons increases with depth of interaction, due to the increase in the ratio μ_ab/μ as the beam hardens. Convolution depth-dose curves calculated using polyenergetic EDKs generated for the primary photon spectra which exist at depths of 0, 20 and 40 cm in water, show a fall-off which is too steep when compared with EGS4 Monte Carlo results. A beam hardening correction factor applied to primary and scattered 0 cm EDKs, based on the ratio of kerma to terma at each depth, gives primary, scattered and total dose in good agreement with Monte Carlo results. (Author)
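
    The depth-dependent correction described above can be illustrated with a small numeric sketch. The snippet below is not the authors' implementation: the two-bin spectrum and the attenuation (μ) and energy-absorption (μ_ab) coefficients are illustrative placeholders, and the correction factor is simply the kerma-to-terma ratio at depth normalized to its surface value.

```python
import numpy as np

# Hypothetical two-bin primary photon spectrum (MeV) and relative surface fluence.
energies = np.array([0.5, 2.0])      # MeV, illustrative
weights0 = np.array([0.6, 0.4])

# Illustrative water attenuation and energy-absorption coefficients (1/cm).
mu    = np.array([0.097, 0.049])
mu_ab = np.array([0.033, 0.026])

def hardening_factor(depth_cm):
    """Kerma-to-terma ratio at depth, normalized to its value at the surface."""
    w = weights0 * np.exp(-mu * depth_cm)   # low energies attenuate faster: beam hardens
    ratio = np.sum(w * energies * mu_ab) / np.sum(w * energies * mu)
    ratio0 = np.sum(weights0 * energies * mu_ab) / np.sum(weights0 * energies * mu)
    return ratio / ratio0

for d in (0.0, 20.0, 40.0):
    print(f"depth {d:4.1f} cm: correction factor ~ {hardening_factor(d):.3f}")
```

    With these placeholder numbers the factor grows with depth, mirroring the hardening effect the abstract describes.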

  9. Ultrasound scatter in heterogeneous 3D microstructures: Parameters affecting multiple scattering

    Science.gov (United States)

    Engle, B. J.; Roberts, R. A.; Grandin, R. J.

    2018-04-01

    This paper reports on a computational study of ultrasound propagation in heterogeneous metal microstructures. Random spatial fluctuations in elastic properties over a range of length scales relative to ultrasound wavelength can give rise to scatter-induced attenuation, backscatter noise, and phase front aberration. It is of interest to quantify the dependence of these phenomena on the microstructure parameters, for the purpose of quantifying deleterious consequences on flaw detectability, and for the purpose of material characterization. Valuable tools for estimation of microstructure parameters (e.g. grain size) through analysis of ultrasound backscatter have been developed based on approximate weak-scattering models. While useful, it is understood that these tools display inherent inaccuracy when multiple scattering phenomena significantly contribute to the measurement. It is the goal of this work to supplement weak scattering model predictions with corrections derived through application of an exact computational scattering model to explicitly prescribed microstructures. The scattering problem is formulated as a volume integral equation (VIE) displaying a convolutional Green-function-derived kernel. The VIE is solved iteratively employing FFT-based convolution. Realizations of random microstructures are specified on the micron scale using statistical property descriptions (e.g. grain size and orientation distributions), which are then spatially filtered to provide rigorously equivalent scattering media on a length scale relevant to ultrasound propagation. Scattering responses from ensembles of media representations are averaged to obtain mean and variance of quantities such as attenuation and backscatter noise levels, as a function of microstructure descriptors. The computational approach will be summarized, and examples of application will be presented.
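
    The FFT-accelerated iteration described above can be sketched generically. The toy below is not the authors' solver: it applies a fixed-point (Born-series-like) iteration to a 2-D equation of the form u = u_inc + G*(χu), with a placeholder smooth kernel standing in for the Green-function-derived kernel and an arbitrary contrast map; in practice a Krylov solver and the true singular kernel would be used.

```python
import numpy as np

n = 64
x = np.linspace(-1.0, 1.0, n)
dA = (x[1] - x[0]) ** 2                       # quadrature weight of one grid cell
X, Y = np.meshgrid(x, x, indexing="ij")

# Placeholder scattering contrast (heterogeneity) and incident field.
chi = 0.1 * np.exp(-(X**2 + Y**2) / 0.1)
u_inc = np.exp(1j * 2 * np.pi * X)            # plane wave, illustrative

# Placeholder smooth kernel standing in for the Green-function-derived kernel.
G = dA * np.exp(-(X**2 + Y**2) / 0.02)
G_hat = np.fft.fft2(np.fft.ifftshift(G))      # kernel spectrum, computed once

def apply_operator(u):
    """Evaluate the convolution G * (chi u) with FFTs."""
    return np.fft.ifft2(G_hat * np.fft.fft2(chi * u))

# Fixed-point (Born-series-like) iteration on u = u_inc + G*(chi u).
u = u_inc.copy()
for k in range(100):
    u_new = u_inc + apply_operator(u)
    if np.linalg.norm(u_new - u) < 1e-10 * np.linalg.norm(u_new):
        break
    u = u_new
print("converged after", k + 1, "iterations")
```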

  10. Multiple Kernel Learning with Data Augmentation

    Science.gov (United States)

    2016-11-22

    JMLR: Workshop and Conference Proceedings 63:49–64, 2016 (ACML 2016). Khanh Nguyen (nkhanh@deakin.edu.au), ...University, Australia. Editors: Robert J. Durrant and Kee-Eung Kim. Abstract: The motivations of the multiple kernel learning (MKL) approach are to increase... kernel expressiveness capacity and to avoid the expensive grid search over a wide spectrum of kernels. A large amount of work has been proposed to

  11. OS X and iOS Kernel Programming

    CERN Document Server

    Halvorsen, Ole Henry

    2011-01-01

    OS X and iOS Kernel Programming combines essential operating system and kernel architecture knowledge with a highly practical approach that will help you write effective kernel-level code. You'll learn fundamental concepts such as memory management and thread synchronization, as well as the I/O Kit framework. You'll also learn how to write your own kernel-level extensions, such as device drivers for USB and Thunderbolt devices, including networking, storage and audio drivers. OS X and iOS Kernel Programming provides an incisive and complete introduction to the XNU kernel, which runs iPhones, i

  12. Model selection for Gaussian kernel PCA denoising

    DEFF Research Database (Denmark)

    Jørgensen, Kasper Winther; Hansen, Lars Kai

    2012-01-01

    We propose kernel Parallel Analysis (kPA) for automatic kernel scale and model order selection in Gaussian kernel PCA. Parallel Analysis [1] is based on a permutation test for covariance and has previously been applied for model order selection in linear PCA, we here augment the procedure to also...... tune the Gaussian kernel scale of radial basis function based kernel PCA.We evaluate kPA for denoising of simulated data and the US Postal data set of handwritten digits. We find that kPA outperforms other heuristics to choose the model order and kernel scale in terms of signal-to-noise ratio (SNR...
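
    A minimal sketch of the permutation idea behind Parallel Analysis, applied here to Gaussian kernel PCA. This is an illustration rather than the kPA procedure of the paper (the joint tuning of the kernel scale is omitted and the data are synthetic): eigenvalues of the centered kernel matrix are compared against eigenvalues obtained after independently permuting each feature column, and components whose eigenvalues exceed the permutation quantile are retained.

```python
import numpy as np

def rbf_kernel(X, Y, sigma):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def centered_kernel_eigvals(X, sigma):
    n = X.shape[0]
    K = rbf_kernel(X, X, sigma)
    H = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    return np.sort(np.linalg.eigvalsh(H @ K @ H))[::-1]

def parallel_analysis_order(X, sigma, n_perm=20, quantile=0.95, seed=None):
    """Estimate model order: count eigenvalues above the permutation threshold."""
    rng = np.random.default_rng(seed)
    ev = centered_kernel_eigvals(X, sigma)
    perm_ev = []
    for _ in range(n_perm):
        Xp = np.column_stack([rng.permutation(col) for col in X.T])
        perm_ev.append(centered_kernel_eigvals(Xp, sigma))
    thresh = np.quantile(np.array(perm_ev), quantile, axis=0)
    return int(np.sum(ev > thresh))

# Toy data: 2 informative directions embedded in extra noise dimensions.
rng = np.random.default_rng(0)
Z = rng.normal(size=(200, 2))
X = np.hstack([Z @ rng.normal(size=(2, 5)), 0.1 * rng.normal(size=(200, 5))])
print("estimated number of components:", parallel_analysis_order(X, sigma=2.0, seed=1))
```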

  13. Absolute determination of zero-energy phase shifts for multiparticle single-channel scattering: Generalized Levinson theorem

    International Nuclear Information System (INIS)

    Rosenberg, L.; Spruch, L.

    1996-01-01

    Levinson's theorem relates the zero-energy phase shift δ for potential scattering in a given partial wave l, by a spherically symmetric potential that falls off sufficiently rapidly, to the number of bound states of that l supported by the potential. An extension of this theorem is presented that applies to single-channel scattering by a compound system initially in its ground state. As suggested by Swan [Proc. R. Soc. London Ser. A 228, 10 (1955)], the extended theorem differs from that derived for potential scattering; even in the absence of composite bound states δ may differ from zero as a consequence of the Pauli principle. The derivation given here is based on the introduction of a continuous auxiliary "length phase" η, defined modulo π for l=0 by expressing the scattering length as A = a cot η, where a is a characteristic length of the target. Application of the minimum principle for the scattering length determines the branch of the cotangent curve on which η lies and, by relating η to δ, an absolute determination of δ is made. The theorem is applicable, in principle, to single-channel scattering in any partial wave for e±-atom and nucleon-nucleus systems. In addition to a knowledge of the number of composite bound states, information (which can be rather incomplete) concerning the structure of the target ground-state wave function is required for an explicit, absolute determination of the phase shift δ. As for Levinson's original theorem for potential scattering, no additional information concerning the scattering wave function or scattering dynamics is required. copyright 1996 The American Physical Society

  14. Ultrafast convolution/superposition using tabulated and exponential kernels on GPU

    Energy Technology Data Exchange (ETDEWEB)

    Chen Quan; Chen Mingli; Lu Weiguo [TomoTherapy Inc., 1240 Deming Way, Madison, Wisconsin 53717 (United States)

    2011-03-15

    Purpose: Collapsed-cone convolution/superposition (CCCS) dose calculation is the workhorse for IMRT dose calculation. The authors present a novel algorithm for computing CCCS dose on the modern graphics processing unit (GPU). Methods: The GPU algorithm includes a novel TERMA calculation that has no write-conflicts and has linear computation complexity. The CCCS algorithm uses either tabulated or exponential cumulative-cumulative kernels (CCKs) as reported in the literature. The authors have demonstrated that the use of exponential kernels can reduce the computational complexity by an order of the dimension and achieve excellent accuracy. Special attention is paid to the unique architecture of the GPU, especially the memory access pattern, which increases performance by more than tenfold. Results: As a result, the tabulated kernel implementation on the GPU is two to three times faster than other GPU implementations reported in the literature. The implementation of CCCS showed significant speedup on the GPU over a single-core CPU. With tabulated CCKs, speedups as high as 70 are observed; with exponential CCKs, speedups as high as 90 are observed. Conclusions: Overall, the GPU algorithm using exponential CCKs is 1000-3000 times faster than a highly optimized single-threaded CPU implementation using tabulated CCKs, while the dose differences are within 0.5% and 0.5 mm. This ultrafast CCCS algorithm will allow many time-sensitive applications to use accurate dose calculation.

  15. Optimizing Multiple Kernel Learning for the Classification of UAV Data

    Directory of Open Access Journals (Sweden)

    Caroline M. Gevaert

    2016-12-01

    Full Text Available Unmanned Aerial Vehicles (UAVs) are capable of providing high-quality orthoimagery and 3D information in the form of point clouds at a relatively low cost. Their increasing popularity stresses the necessity of understanding which algorithms are especially suited for processing the data obtained from UAVs. The features that are extracted from the point cloud and imagery have different statistical characteristics and can be considered as heterogeneous, which motivates the use of Multiple Kernel Learning (MKL) for classification problems. In this paper, we illustrate the utility of applying MKL for the classification of heterogeneous features obtained from UAV data through a case study of an informal settlement in Kigali, Rwanda. Results indicate that MKL can achieve a classification accuracy of 90.6%, a 5.2% increase over a standard single-kernel Support Vector Machine (SVM). A comparison of seven MKL methods indicates that linearly-weighted kernel combinations based on simple heuristics are competitive with respect to computationally-complex, non-linear kernel combination methods. We further underline the importance of utilizing appropriate feature grouping strategies for MKL, which has not been directly addressed in the literature, and we propose a novel, automated feature grouping method that achieves a high classification accuracy for various MKL methods.
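
    The linearly-weighted kernel combinations mentioned above can be sketched with precomputed Gram matrices: one kernel per feature group, combined with fixed heuristic weights and passed to an SVM. The snippet below is only an illustration with synthetic stand-ins for the spectral and geometric feature groups; the weights, kernel parameters, and data are placeholders, not those used in the paper.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Toy stand-ins for two heterogeneous feature groups (e.g. spectral vs. geometric).
n = 300
y = rng.integers(0, 2, size=n)
F_spectral = rng.normal(size=(n, 4)) + y[:, None]
F_geometric = rng.normal(size=(n, 3)) + 0.5 * y[:, None]

idx_tr, idx_te = train_test_split(np.arange(n), test_size=0.3, random_state=0)

def group_kernel(F, rows, cols, gamma=0.5):
    return rbf_kernel(F[rows], F[cols], gamma=gamma)

# Heuristic, fixed weights for the linear combination of per-group kernels.
weights = [0.6, 0.4]
groups = [F_spectral, F_geometric]

K_train = sum(w * group_kernel(F, idx_tr, idx_tr) for w, F in zip(weights, groups))
K_test = sum(w * group_kernel(F, idx_te, idx_tr) for w, F in zip(weights, groups))

clf = SVC(kernel="precomputed", C=1.0).fit(K_train, y[idx_tr])
print("accuracy:", accuracy_score(y[idx_te], clf.predict(K_test)))
```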

  16. Paramecium: An Extensible Object-Based Kernel

    NARCIS (Netherlands)

    van Doorn, L.; Homburg, P.; Tanenbaum, A.S.

    1995-01-01

    In this paper we describe the design of an extensible kernel, called Paramecium. This kernel uses an object-based software architecture which together with instance naming, late binding and explicit overrides enables easy reconfiguration. Determining which components reside in the kernel protection

  17. Theory of reproducing kernels and applications

    CERN Document Server

    Saitoh, Saburou

    2016-01-01

    This book provides a large extension of the general theory of reproducing kernels published by N. Aronszajn in 1950, with many concrete applications. In Chapter 1, many concrete reproducing kernels are first introduced with detailed information. Chapter 2 presents a general and global theory of reproducing kernels with basic applications in a self-contained way. Many fundamental operations among reproducing kernel Hilbert spaces are dealt with. Chapter 2 is the heart of this book. Chapter 3 is devoted to the Tikhonov regularization using the theory of reproducing kernels with applications to numerical and practical solutions of bounded linear operator equations. In Chapter 4, the numerical real inversion formulas of the Laplace transform are presented by applying the Tikhonov regularization, where the reproducing kernels play a key role in the results. Chapter 5 deals with ordinary differential equations; Chapter 6 includes many concrete results for various fundamental partial differential equations. In Chapt...

  18. Kernels for structured data

    CERN Document Server

    Gärtner, Thomas

    2009-01-01

    This book provides a unique treatment of an important area of machine learning and answers the question of how kernel methods can be applied to structured data. Kernel methods are a class of state-of-the-art learning algorithms that exhibit excellent learning results in several application domains. Originally, kernel methods were developed with data in mind that can easily be embedded in a Euclidean vector space. Much real-world data does not have this property but is inherently structured. An example of such data, often consulted in the book, is the (2D) graph structure of molecules formed by

  19. SU-F-SPS-09: Parallel MC Kernel Calculations for VMAT Plan Improvement

    International Nuclear Information System (INIS)

    Chamberlain, S; French, S; Nazareth, D

    2016-01-01

    Purpose: Adding kernels (small perturbations in leaf positions) to the existing apertures of VMAT control points may improve plan quality. We investigate the calculation of kernel doses using a parallelized Monte Carlo (MC) method. Methods: A clinical prostate VMAT DICOM plan was exported from Eclipse. An arbitrary control point and leaf were chosen, and a modified MLC file was created, corresponding to the leaf position offset by 0.5 cm. The additional dose produced by this 0.5 cm × 0.5 cm kernel was calculated using the DOSXYZnrc component module of BEAMnrc. A range of particle history counts were run (varying from 3 × 10^6 to 3 × 10^7); each job was split among 1, 10, or 100 parallel processes. A particle count of 3 × 10^6 was established as the lower range because it provided the minimal accuracy level. Results: As expected, an increase in particle counts linearly increases run time. For the lowest particle count, the time varied from 30 hours for the single-processor run, to 0.30 hours for the 100-processor run. Conclusion: Parallel processing of MC calculations in the EGS framework significantly decreases time necessary for each kernel dose calculation. Particle counts lower than 1 × 10^6 have too large of an error to output accurate dose for a Monte Carlo kernel calculation. Future work will investigate increasing the number of parallel processes and optimizing run times for multiple kernel calculations.

  20. 7 CFR 981.401 - Adjusted kernel weight.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Adjusted kernel weight. 981.401 Section 981.401... Administrative Rules and Regulations § 981.401 Adjusted kernel weight. (a) Definition. Adjusted kernel weight... kernels in excess of five percent; less shells, if applicable; less processing loss of one percent for...

  1. Testing Infrastructure for Operating System Kernel Development

    DEFF Research Database (Denmark)

    Walter, Maxwell; Karlsson, Sven

    2014-01-01

    Testing is an important part of system development, and to test effectively we require knowledge of the internal state of the system under test. Testing an operating system kernel is a challenge as it is the operating system that typically provides access to this internal state information. Multi-core kernels pose an even greater challenge due to concurrency and their shared kernel state. In this paper, we present a testing framework that addresses these challenges by running the operating system in a virtual machine, and using virtual machine introspection to both communicate with the kernel...... and obtain information about the system. We have also developed an in-kernel testing API that we can use to develop a suite of unit tests in the kernel. We are using our framework for the development of our own multi-core research kernel....

  2. Evaluation of scatter correction using a single isotope for simultaneous emission and transmission data

    Energy Technology Data Exchange (ETDEWEB)

    Yang, J.; Kuikka, J.T.; Vanninen, E.; Laensimies, E. [Kuopio Univ. Hospital (Finland). Dept. of Clinical Physiology and Nuclear Medicine; Kauppinen, T.; Patomaeki, L. [Kuopio Univ. (Finland). Dept. of Applied Physics

    1999-05-01

    Photon scatter is one of the most important factors degrading the quantitative accuracy of SPECT images. Many scatter correction methods have been proposed; we have previously proposed a single-isotope method. Aim: We evaluate this scatter correction method, which improves image quality by acquiring emission and transmission data simultaneously in a single-isotope scan. Method: To evaluate the proposed scatter correction method, a contrast and linearity phantom was studied. Four female patients with fibromyalgia (FM) syndrome and four with chronic back pain (BP) were imaged. Grey-to-cerebellum (G/C) and grey-to-white matter (G/W) ratios were determined by one skilled operator for 12 regions of interest (ROIs) in each subject. Results: The linearity of the activity response was improved after the scatter correction (r=0.999). The y-intercept value of the regression line was 0.036 (p<0.0001) after scatter correction and the slope was 0.954. Pairwise correlation indicated agreement between non-scatter-corrected and scatter-corrected images. Reconstructed slices before and after scatter correction demonstrate a good correlation in the quantitative accuracy of radionuclide concentration. G/C values have significant correlation coefficients between original and corrected data. Conclusion: The transaxial images of the human brain studies show that scatter correction using a single isotope in simultaneous transmission and emission tomography provides good scatter compensation. The contrasts were increased in all 12 ROIs. The scatter compensation enhanced details of physiological lesions. (orig.) [German abstract:] Photon scatter is among the most important factors that reduce the quantitative accuracy of SPECT images. A whole series of scatter correction methods has been proposed. We recommended the single-isotope method. Aim: We investigated the scatter correction method for improving image quality through simultaneous acquisition of emission

  3. Pareto-path multitask multiple kernel learning.

    Science.gov (United States)

    Li, Cong; Georgiopoulos, Michael; Anagnostopoulos, Georgios C

    2015-01-01

    A traditional and intuitively appealing Multitask Multiple Kernel Learning (MT-MKL) method is to optimize the sum (thus, the average) of objective functions with (partially) shared kernel function, which allows information sharing among the tasks. We point out that the obtained solution corresponds to a single point on the Pareto Front (PF) of a multiobjective optimization problem, which considers the concurrent optimization of all task objectives involved in the Multitask Learning (MTL) problem. Motivated by this last observation and arguing that the former approach is heuristic, we propose a novel support vector machine MT-MKL framework that considers an implicitly defined set of conic combinations of task objectives. We show that solving our framework produces solutions along a path on the aforementioned PF and that it subsumes the optimization of the average of objective functions as a special case. Using the algorithms we derived, we demonstrate through a series of experimental results that the framework is capable of achieving a better classification performance, when compared with other similar MTL approaches.

  4. 7 CFR 51.1403 - Kernel color classification.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Kernel color classification. 51.1403 Section 51.1403... STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Kernel Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the color...

  5. Approximate N3LO Higgs-boson production cross section using physical-kernel constraints

    International Nuclear Information System (INIS)

    Florian, D. de; Moch, S.; Hamburg Univ.; Vogt, A.

    2014-08-01

    The single-logarithmic enhancement of the physical kernel for Higgs production by gluon-gluon fusion in the heavy top-quark limit is employed to derive the leading so far unknown contributions, ln^{5,4,3}(1-z), to the N^3LO coefficient function in the threshold expansion. Also using knowledge from Higgs-exchange DIS to estimate the remaining terms not vanishing for z = m_H^2/s → 1, these results are combined with the recently completed soft+virtual contributions to provide an uncertainty band for the complete N^3LO correction. For the 2008 MSTW parton distributions these N^3LO contributions increase the cross section at 14 TeV by (10±2)% and (3±2.5)% for the standard choices μ_R = m_H and μ_R = m_H/2 of the renormalization scale. The remaining uncertainty arising from the hard-scattering cross sections can be quantified as no more than 5%, which is smaller than that due to the strong coupling and the parton distributions.

  6. Compactness and robustness: Applications in the solution of integral equations for chemical kinetics and electromagnetic scattering

    Science.gov (United States)

    Zhou, Yajun

    This thesis employs the topological concept of compactness to deduce robust solutions to two integral equations arising from chemistry and physics: the inverse Laplace problem in chemical kinetics and the vector wave scattering problem in dielectric optics. The inverse Laplace problem occurs in the quantitative understanding of biological processes that exhibit complex kinetic behavior: different subpopulations of transition events from the "reactant" state to the "product" state follow distinct reaction rate constants, which results in a weighted superposition of exponential decay modes. Reconstruction of the rate constant distribution from kinetic data is often critical for mechanistic understandings of chemical reactions related to biological macromolecules. We devise a "phase function approach" to recover the probability distribution of rate constants from decay data in the time domain. The robustness (numerical stability) of this reconstruction algorithm builds upon the continuity of the transformations connecting the relevant function spaces that are compact metric spaces. The robust "phase function approach" not only is useful for the analysis of heterogeneous subpopulations of exponential decays within a single transition step, but also is generalizable to the kinetic analysis of complex chemical reactions that involve multiple intermediate steps. A quantitative characterization of the light scattering is central to many meteorological, optical, and medical applications. We give a rigorous treatment to electromagnetic scattering on arbitrarily shaped dielectric media via the Born equation: an integral equation with a strongly singular convolution kernel that corresponds to a non-compact Green operator. By constructing a quadratic polynomial of the Green operator that cancels out the kernel singularity and satisfies the compactness criterion, we reveal the universality of a real resonance mode in dielectric optics. Meanwhile, exploiting the properties of
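
    The "phase function approach" itself is not reproduced here, but the underlying inverse Laplace problem can be sketched in a few lines: recover a distribution of rate constants from multi-exponential decay data. The sketch uses a plain nonnegative least-squares fit on a fixed rate grid with Tikhonov-style damping; the grid, noise level, and regularization strength are arbitrary choices for illustration.

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic decay: two subpopulations with rate constants 1.0 and 10.0 (1/s).
t = np.linspace(0.01, 5.0, 200)
signal = 0.7 * np.exp(-1.0 * t) + 0.3 * np.exp(-10.0 * t)
signal += 0.005 * np.random.default_rng(0).normal(size=t.size)

# Fixed logarithmic grid of candidate rate constants.
rates = np.logspace(-1, 2, 60)
A = np.exp(-np.outer(t, rates))          # discretized Laplace-transform design matrix

# Nonnegative least squares with Tikhonov damping to stabilize the ill-posed inversion.
lam = 1e-2
A_aug = np.vstack([A, np.sqrt(lam) * np.eye(rates.size)])
b_aug = np.concatenate([signal, np.zeros(rates.size)])
weights, _ = nnls(A_aug, b_aug)

# Report the dominant recovered rate constants.
top = np.argsort(weights)[-3:][::-1]
for i in top:
    print(f"rate ~ {rates[i]:7.3f} 1/s, weight ~ {weights[i]:.3f}")
```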

  7. Accurate single-scattering simulation of ice cloud using the invariant-imbedding T-matrix method and the physical-geometric optics method

    Science.gov (United States)

    Sun, B.; Yang, P.; Kattawar, G. W.; Zhang, X.

    2017-12-01

    The ice cloud single-scattering properties can be accurately simulated using the invariant-imbedding T-matrix method (IITM) and the physical-geometric optics method (PGOM). The IITM has been parallelized using the Message Passing Interface (MPI) method to remove the memory limitation, so that the IITM can be used to obtain the single-scattering properties of ice clouds for sizes in the geometric optics regime. Furthermore, the results associated with random orientations can be obtained analytically once the T-matrix is given. The PGOM is also parallelized in conjunction with random orientations. The single-scattering properties of a hexagonal prism with height 400 (in units of λ/2π, where λ is the incident wavelength) and an aspect ratio of 1 (defined as the height divided by twice the bottom side length) are computed with the parallelized IITM and compared to the counterparts obtained with the parallelized PGOM. The two results are in close agreement. Furthermore, the integrated single-scattering properties, including the asymmetry factor, the extinction cross-section, and the scattering cross-section, are given over a complete size range. The present results show a smooth transition from the exact IITM solution to the approximate PGOM result. Because the IITM calculation has reached the geometric regime, the IITM and the PGOM can be efficiently employed to accurately compute the single-scattering properties of ice cloud over a wide spectral range.

  8. Measuring the complex field scattered by single submicron particles

    Energy Technology Data Exchange (ETDEWEB)

    Potenza, Marco A. C., E-mail: marco.potenza@unimi.it; Sanvito, Tiziano [Department of Physics, University of Milan, via Celoria, 16 – I-20133 Milan (Italy); CIMAINA, University of Milan, via Celoria, 16 – I-20133 Milan (Italy); EOS s.r.l., viale Ortles 22/4, I-20139 Milan (Italy); Pullia, Alberto [Department of Physics, University of Milan, via Celoria, 16 – I-20133 Milan (Italy)

    2015-11-15

    We describe a method for simultaneous measurements of the real and imaginary parts of the field scattered by single nanoparticles illuminated by a laser beam, exploiting a self-reference interferometric scheme relying on the fundamentals of the Optical Theorem. Results obtained with calibrated spheres of different materials are compared to the expected values obtained through a simplified analytical model without any free parameters, and the method is applied to a highly polydisperse water suspension of Poly(D,L-lactide-co-glycolide) nanoparticles. Advantages with respect to existing methods and possible applications are discussed.

  9. The definition of kernel Oz

    OpenAIRE

    Smolka, Gert

    1994-01-01

    Oz is a concurrent language providing for functional, object-oriented, and constraint programming. This paper defines Kernel Oz, a semantically complete sublanguage of Oz. It was an important design requirement that Oz be definable by reduction to a lean kernel language. The definition of Kernel Oz introduces three essential abstractions: the Oz universe, the Oz calculus, and the actor model. The Oz universe is a first-order structure defining the values and constraints Oz computes with. The ...

  10. Fabrication of Uranium Oxycarbide Kernels for HTR Fuel

    International Nuclear Information System (INIS)

    Barnes, Charles; Richardson, Clay; Nagley, Scott; Hunn, John; Shaber, Eric

    2010-01-01

    Babcock and Wilcox (B&W) has been producing high quality uranium oxycarbide (UCO) kernels for Advanced Gas Reactor (AGR) fuel tests at the Idaho National Laboratory. In 2005, 350-µm, 19.7% 235U-enriched UCO kernels were produced for the AGR-1 test fuel. Following coating of these kernels and forming the coated particles into compacts, this fuel was irradiated in the Advanced Test Reactor (ATR) from December 2006 until November 2009. B&W produced 425-µm, 14% enriched UCO kernels in 2008, and these kernels were used to produce fuel for the AGR-2 experiment that was inserted in ATR in 2010. B&W also produced 500-µm, 9.6% enriched UO2 kernels for the AGR-2 experiments. Kernels of the same size and enrichment as AGR-1 were also produced for the AGR-3/4 experiment. In addition to fabricating enriched UCO and UO2 kernels, B&W has produced more than 100 kg of natural uranium UCO kernels which are being used in coating development tests. Successive lots of kernels have demonstrated consistent high quality and also allowed for fabrication process improvements. Improvements in kernel forming were made subsequent to AGR-1 kernel production. Following fabrication of AGR-2 kernels, incremental increases in sintering furnace charge size have been demonstrated. Recently, small scale sintering tests using a small development furnace equipped with a residual gas analyzer (RGA) have increased understanding of how kernel sintering parameters affect sintered kernel properties. The steps taken to increase throughput and process knowledge have reduced kernel production costs. Studies have been performed of additional modifications toward the goal of increasing capacity of the current fabrication line to use for production of first core fuel for the Next Generation Nuclear Plant (NGNP) and providing a basis for the design of a full scale fuel fabrication facility.

  11. The effect of scattering on single photon transmission of optical angular momentum

    International Nuclear Information System (INIS)

    Andrews, D L

    2011-01-01

    Schemes for the communication and registration of optical angular momentum depend on the fidelity of transmission between optical system components. It is known that electron spin can be faithfully relayed between exciton states in quantum dots; it has also been shown by several theoretical and experimental studies that the use of beams conveying orbital angular momentum can significantly extend the density and efficiency of such information transfer. However, it remains unclear to what extent the operation of such a concept at the single photon level is practicable—especially where this involves optical propagation through a material system, in which forward scattering events can intervene. The possibility of transmitting and decoding angular momentum over nanoscale distances itself raises other important issues associated with near-field interrogation. This paper provides a framework to address these and related issues. A quantum electrodynamical representation is constructed and used to pursue the consequences of individual photons, from a Laguerre–Gaussian beam, undergoing single and multiple scattering events in the course of propagation. In this context, issues concerning orbital angular momentum conservation, and its possible compromise, are tackled by identifying the relevant components of the electromagnetic scattering and coupling tensors, using an irreducible Cartesian basis. The physical interpretation broadly supports the fidelity of quantum information transmission, but it also identifies potential limitations of principle

  12. The effect of scattering on single photon transmission of optical angular momentum

    Science.gov (United States)

    Andrews, D. L.

    2011-06-01

    Schemes for the communication and registration of optical angular momentum depend on the fidelity of transmission between optical system components. It is known that electron spin can be faithfully relayed between exciton states in quantum dots; it has also been shown by several theoretical and experimental studies that the use of beams conveying orbital angular momentum can significantly extend the density and efficiency of such information transfer. However, it remains unclear to what extent the operation of such a concept at the single photon level is practicable—especially where this involves optical propagation through a material system, in which forward scattering events can intervene. The possibility of transmitting and decoding angular momentum over nanoscale distances itself raises other important issues associated with near-field interrogation. This paper provides a framework to address these and related issues. A quantum electrodynamical representation is constructed and used to pursue the consequences of individual photons, from a Laguerre-Gaussian beam, undergoing single and multiple scattering events in the course of propagation. In this context, issues concerning orbital angular momentum conservation, and its possible compromise, are tackled by identifying the relevant components of the electromagnetic scattering and coupling tensors, using an irreducible Cartesian basis. The physical interpretation broadly supports the fidelity of quantum information transmission, but it also identifies potential limitations of principle.

  13. Option Valuation with Volatility Components, Fat Tails, and Nonlinear Pricing Kernels

    DEFF Research Database (Denmark)

    Babaoglu, Kadir Gokhan; Christoffersen, Peter; Heston, Steven

    We nest multiple volatility components, fat tails and a U-shaped pricing kernel in a single option model and compare their contribution to describing returns and option data. All three features lead to statistically significant model improvements. A second volatility factor is economically most i...

  14. Object classification and detection with context kernel descriptors

    DEFF Research Database (Denmark)

    Pan, Hong; Olsen, Søren Ingvor; Zhu, Yaping

    2014-01-01

    Context information is important in object representation. By embedding context cue of image attributes into kernel descriptors, we propose a set of novel kernel descriptors called Context Kernel Descriptors (CKD) for object classification and detection. The motivation of CKD is to use spatial...... consistency of image attributes or features defined within a neighboring region to improve the robustness of descriptor matching in kernel space. For feature selection, Kernel Entropy Component Analysis (KECA) is exploited to learn a subset of discriminative CKD. Different from Kernel Principal Component...

  15. Roy-Steiner equations for pion-nucleon scattering

    Science.gov (United States)

    Ditsche, C.; Hoferichter, M.; Kubis, B.; Meißner, U.-G.

    2012-06-01

    Starting from hyperbolic dispersion relations, we derive a closed system of Roy-Steiner equations for pion-nucleon scattering that respects analyticity, unitarity, and crossing symmetry. We work out analytically all kernel functions and unitarity relations required for the lowest partial waves. In order to suppress the dependence on the high energy regime we also consider once- and twice-subtracted versions of the equations, where we identify the subtraction constants with subthreshold parameters. Assuming Mandelstam analyticity we determine the maximal range of validity of these equations. As a first step towards the solution of the full system we cast the equations for the ππ → N̄N partial waves into the form of a Muskhelishvili-Omnès problem with finite matching point, which we solve numerically in the single-channel approximation. We investigate in detail the role of individual contributions to our solutions and discuss some consequences for the spectral functions of the nucleon electromagnetic form factors.

  16. Retrieving Single Scattering Albedos and Temperatures from CRISM Hyperspectral Data Using Neural Networks

    Science.gov (United States)

    He, L.; Arvidson, R. E.; O'Sullivan, J. A.

    2018-04-01

    We use a neural network (NN) approach to simultaneously retrieve surface single scattering albedos and temperature maps for CRISM data from 1.40 to 3.85 µm. It approximates the inverse of DISORT which simulates solar and emission radiative streams.
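
    Neither DISORT nor the CRISM pipeline fits in a snippet, but the general pattern the abstract describes, training a network on forward-model samples so that it maps observed radiance back to surface single scattering albedo and temperature, can be sketched as follows. The forward model here is a toy placeholder, not DISORT, and the network size and sampling ranges are arbitrary.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def toy_forward_model(albedo, temperature, n_bands=30):
    """Placeholder forward model: radiance spectrum from albedo and temperature."""
    wav = np.linspace(1.4, 3.85, n_bands)                     # microns
    solar = albedo[:, None] * np.exp(-0.2 * wav)[None, :]     # reflected component
    thermal = 1e-3 * (temperature[:, None] / 200.0) ** 4 * wav[None, :] ** 2
    return solar + thermal

# Sample the parameter space and run the (toy) forward model to build training data.
albedo = rng.uniform(0.05, 0.6, size=5000)
temperature = rng.uniform(150.0, 280.0, size=5000)
radiance = toy_forward_model(albedo, temperature)
targets = np.column_stack([albedo, temperature])

# Train a small network to approximate the inverse mapping (radiance -> parameters).
X_tr, X_te, y_tr, y_te = train_test_split(radiance, targets, random_state=0)
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
net.fit(X_tr, y_tr)
print("R^2 on held-out samples:", net.score(X_te, y_te))
```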

  17. Resonant stimulation of Raman scattering from single-crystal thiophene/phenylene co-oligomers

    International Nuclear Information System (INIS)

    Yanagi, Hisao; Marutani, Yusuke; Matsuoka, Naoki; Hiramatsu, Toru; Ishizumi, Atsushi; Sasaki, Fumio; Hotta, Shu

    2013-01-01

    Amplified Raman scattering was observed from single crystals of thiophene/phenylene co-oligomers (TPCOs). Under ns-pulsed excitation, the TPCO crystals exhibited amplified spontaneous emission (ASE) at resonant absorption wavelengths. With increasing excitation wavelength to the 0-0 absorption edge, the stimulated resonant Raman peaks appeared both in the 0-1 and 0-2 ASE band regions. When the excitation wavelength coincided with the 0-1 ASE band energy, the Raman peaks selectively appeared in the 0-2 ASE band. Such unusual enhancement of the 0-2 Raman scattering was ascribed to resonant stimulation via vibronic coupling with electronic transitions in the uniaxially oriented TPCO molecules

  18. Ranking Support Vector Machine with Kernel Approximation.

    Science.gov (United States)

    Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi

    2017-01-01

    Learning-to-rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared Hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method obtains a much faster training speed than kernel RankSVM and achieves comparable or better performance over state-of-the-art ranking algorithms.
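
    The ranking-specific objective is not reproduced here, but the core idea of the abstract, replacing the explicit kernel matrix with a low-dimensional feature map (Nyström or random Fourier features) followed by a linear solver, can be sketched with scikit-learn on a plain classification stand-in for the pairwise ranking loss; the data and hyperparameters are placeholders.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.kernel_approximation import Nystroem, RBFSampler
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=2000, n_features=30, random_state=0)

# Nystroem landmark approximation of an RBF kernel, then a linear (primal) solver.
nystroem_model = make_pipeline(
    Nystroem(kernel="rbf", gamma=0.1, n_components=100, random_state=0),
    LinearSVC(dual=False),
)

# Random Fourier features approximation of the same kernel.
rff_model = make_pipeline(
    RBFSampler(gamma=0.1, n_components=100, random_state=0),
    LinearSVC(dual=False),
)

for name, model in [("Nystroem", nystroem_model), ("RFF", rff_model)]:
    score = cross_val_score(model, X, y, cv=3).mean()
    print(f"{name:8s} approximate-kernel model, CV accuracy ~ {score:.3f}")
```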

  19. Ranking Support Vector Machine with Kernel Approximation

    Directory of Open Access Journals (Sweden)

    Kai Chen

    2017-01-01

    Full Text Available Learning-to-rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared Hinge-loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method obtains a much faster training speed than kernel RankSVM and achieves comparable or better performance over state-of-the-art ranking algorithms.

  20. Rare variant testing across methods and thresholds using the multi-kernel sequence kernel association test (MK-SKAT).

    Science.gov (United States)

    Urrutia, Eugene; Lee, Seunggeun; Maity, Arnab; Zhao, Ni; Shen, Judong; Li, Yun; Wu, Michael C

    Analysis of rare genetic variants has focused on region-based analysis wherein a subset of the variants within a genomic region is tested for association with a complex trait. Two important practical challenges have emerged. First, it is difficult to choose which test to use. Second, it is unclear which group of variants within a region should be tested. Both depend on the unknown true state of nature. Therefore, we develop the Multi-Kernel SKAT (MK-SKAT) which tests across a range of rare variant tests and groupings. Specifically, we demonstrate that several popular rare variant tests are special cases of the sequence kernel association test which compares pair-wise similarity in trait value to similarity in the rare variant genotypes between subjects as measured through a kernel function. Choosing a particular test is equivalent to choosing a kernel. Similarly, choosing which group of variants to test also reduces to choosing a kernel. Thus, MK-SKAT uses perturbation to test across a range of kernels. Simulations and real data analyses show that our framework controls type I error while maintaining high power across settings: MK-SKAT loses power when compared to the kernel for a particular scenario but has much greater power than poor choices.
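
    A minimal sketch, under assumptions, of the single-kernel building block that MK-SKAT aggregates over: a SKAT-type score statistic that compares trait residuals through a kernel on the rare-variant genotypes, here a weighted linear kernel with Beta(1,25)-style weights. The genotypes and trait are simulated, and the mixture-of-chi-square p-value computation used in practice is omitted.

```python
import numpy as np
from scipy.stats import beta as beta_dist

rng = np.random.default_rng(0)

n, p = 500, 20                            # subjects, rare variants in the region
maf = rng.uniform(0.005, 0.05, size=p)    # minor allele frequencies
G = rng.binomial(2, maf, size=(n, p)).astype(float)

# Continuous trait influenced by a few of the variants.
effect = np.zeros(p)
effect[:3] = 0.8
y = G @ effect + rng.normal(size=n)

# Weighted linear kernel: K = G W W G', with Beta(1,25)-style variant weights.
w = beta_dist.pdf(maf, 1, 25)
K = (G * w) @ (G * w).T

# Score-type statistic: residuals from the null model (intercept only) through K.
resid = y - y.mean()
Q = resid @ K @ resid
print("SKAT-type statistic Q =", round(float(Q), 2))
# Significance would be assessed against a mixture-of-chi-square null distribution,
# which this sketch does not implement.
```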

  1. Reproducing kernel method with Taylor expansion for linear Volterra integro-differential equations

    Directory of Open Access Journals (Sweden)

    Azizallah Alvandi

    2017-06-01

    Full Text Available This research aims to present a new, single algorithm for linear integro-differential equations (LIDEs). To apply the reproducing kernel Hilbert space method, an equivalent transformation is made by using Taylor series for solving LIDEs. The analytical solution is shown in series form in the reproducing kernel space, and the approximate solution $ u_{N} $ is constructed by truncating the series to $ N $ terms. It is easy to prove the convergence of $ u_{N} $ to the analytical solution. The numerical solutions obtained with the proposed method indicate that this approach can be implemented easily and shows attractive features.

  2. A 3% Measurement of the Beam Normal Single Spin Asymmetry in Forward Angle Elastic Electron-Proton Scattering using the Qweak Setup

    Energy Technology Data Exchange (ETDEWEB)

    Waidyawansa, Dinayadura Buddhini [Ohio Univ., Athens, OH (United States)

    2013-08-01

    The beam normal single spin asymmetry generated in the scattering of transversely polarized electrons from unpolarized nucleons is an observable of the imaginary part of the two-photon exchange process. Moreover, it is a potential source of false asymmetry in parity-violating electron scattering experiments. The Q_weak experiment uses parity-violating electron scattering to make a direct measurement of the weak charge of the proton. The targeted 4% measurement of the weak charge of the proton probes for parity-violating new physics beyond the Standard Model. The beam normal single spin asymmetry at Q_weak kinematics is at least three orders of magnitude larger than the 5 ppb precision of the parity-violating asymmetry. To better understand this parity-conserving background, the Q_weak Collaboration has performed elastic scattering measurements with a fully transversely polarized electron beam on the proton and aluminum. This dissertation presents the analysis of the 3% measurement (1.3% statistical and 2.6% systematic) of the beam normal single spin asymmetry in electron-proton scattering at a Q² of 0.025 (GeV/c)². It is the most precise measurement of the beam normal single spin asymmetry available at the time. A measurement of this precision helps to improve the theoretical models of the beam normal single spin asymmetry and thereby our understanding of the doubly virtual Compton scattering process.

  3. Wigner functions defined with Laplace transform kernels.

    Science.gov (United States)

    Oh, Se Baek; Petruccelli, Jonathan C; Tian, Lei; Barbastathis, George

    2011-10-24

    We propose a new Wigner-type phase-space function using Laplace transform kernels: the Laplace kernel Wigner function. Whereas momentum variables are real in the traditional Wigner function, the Laplace kernel Wigner function may have complex momentum variables. Due to the properties of the Laplace transform, a broader range of signals can be represented in complex phase space. We show that the Laplace kernel Wigner function exhibits properties in its marginals similar to those of the traditional Wigner function. As an example, we use the Laplace kernel Wigner function to analyze evanescent waves supported by surface plasmon polaritons. © 2011 Optical Society of America

  4. Metabolic network prediction through pairwise rational kernels.

    Science.gov (United States)

    Roche-Lima, Abiel; Domaratzki, Michael; Fristensky, Brian

    2014-09-26

    Metabolic networks are represented by the set of metabolic pathways. Metabolic pathways are a series of biochemical reactions, in which the product (output) from one reaction serves as the substrate (input) to another reaction. Many pathways remain incompletely characterized. One of the major challenges of computational biology is to obtain better models of metabolic pathways. Existing models are dependent on the annotation of the genes. This propagates error accumulation when the pathways are predicted by incorrectly annotated genes. Pairwise classification methods are supervised learning methods used to classify new pair of entities. Some of these classification methods, e.g., Pairwise Support Vector Machines (SVMs), use pairwise kernels. Pairwise kernels describe similarity measures between two pairs of entities. Using pairwise kernels to handle sequence data requires long processing times and large storage. Rational kernels are kernels based on weighted finite-state transducers that represent similarity measures between sequences or automata. They have been effectively used in problems that handle large amount of sequence information such as protein essentiality, natural language processing and machine translations. We create a new family of pairwise kernels using weighted finite-state transducers (called Pairwise Rational Kernel (PRK)) to predict metabolic pathways from a variety of biological data. PRKs take advantage of the simpler representations and faster algorithms of transducers. Because raw sequence data can be used, the predictor model avoids the errors introduced by incorrect gene annotations. We then developed several experiments with PRKs and Pairwise SVM to validate our methods using the metabolic network of Saccharomyces cerevisiae. As a result, when PRKs are used, our method executes faster in comparison with other pairwise kernels. Also, when we use PRKs combined with other simple kernels that include evolutionary information, the accuracy

  5. UAV remote sensing atmospheric degradation image restoration based on multiple scattering APSF estimation

    Science.gov (United States)

    Qiu, Xiang; Dai, Ming; Yin, Chuan-li

    2017-09-01

    Unmanned aerial vehicle (UAV) remote imaging is affected by bad weather, and the obtained images suffer from low contrast, complex texture and blurring. In this paper, we propose a blind deconvolution model based on multiple-scattering atmosphere point spread function (APSF) estimation to recover the remote sensing image. According to Narasimhan's analytical theory, a new multiple-scattering restoration model is established based on the improved dichromatic model. Then, using the L0-norm sparse priors of the gradient and dark channel to estimate the APSF blur kernel, the fast Fourier transform is used to recover the original clear image by Wiener filtering. Compared with other state-of-the-art methods, the proposed method can correctly estimate the blur kernel, effectively remove atmospheric degradation phenomena, preserve image detail information and increase the quality evaluation indexes.
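
    The APSF estimation step (L0-sparse priors on the gradient and dark channel) is not reproduced here, but the restoration step mentioned at the end, Wiener filtering with the estimated blur kernel via the FFT, can be sketched as follows; the Gaussian kernel and the noise-to-signal ratio are placeholders standing in for an estimated APSF.

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, noise_to_signal=1e-2):
    """Recover an image from a known/estimated blur kernel with a Wiener filter."""
    # Pad the kernel to image size and centre it at the origin for the FFT.
    k = np.zeros_like(blurred)
    kh, kw = kernel.shape
    k[:kh, :kw] = kernel
    k = np.roll(k, (-(kh // 2), -(kw // 2)), axis=(0, 1))

    K = np.fft.fft2(k)
    B = np.fft.fft2(blurred)
    W = np.conj(K) / (np.abs(K) ** 2 + noise_to_signal)   # Wiener filter
    return np.real(np.fft.ifft2(W * B))

# Toy demonstration: blur a synthetic image with a Gaussian stand-in for the APSF.
rng = np.random.default_rng(0)
img = np.zeros((128, 128))
img[40:90, 40:90] = 1.0
yy, xx = np.mgrid[-7:8, -7:8]
psf = np.exp(-(xx**2 + yy**2) / (2 * 3.0**2))
psf /= psf.sum()

k_full = np.zeros_like(img)
k_full[:15, :15] = psf
k_full = np.roll(k_full, (-7, -7), axis=(0, 1))
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(k_full)))
blurred += 0.01 * rng.normal(size=img.shape)

restored = wiener_deconvolve(blurred, psf)
print("mean restoration error:", np.abs(restored - img).mean().round(4))
```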

  6. Genomic-Enabled Prediction in Maize Using Kernel Models with Genotype × Environment Interaction.

    Science.gov (United States)

    Bandeira E Sousa, Massaine; Cuevas, Jaime; de Oliveira Couto, Evellyn Giselly; Pérez-Rodríguez, Paulino; Jarquín, Diego; Fritsche-Neto, Roberto; Burgueño, Juan; Crossa, Jose

    2017-06-07

    Multi-environment trials are routinely conducted in plant breeding to select candidates for the next selection cycle. In this study, we compare the prediction accuracy of four developed genomic-enabled prediction models: (1) single-environment, main genotypic effect model (SM); (2) multi-environment, main genotypic effects model (MM); (3) multi-environment, single variance G×E deviation model (MDs); and (4) multi-environment, environment-specific variance G×E deviation model (MDe). Each of these four models was fitted using two kernel methods: a linear kernel Genomic Best Linear Unbiased Predictor, GBLUP (GB), and a nonlinear Gaussian kernel (GK). The eight model-method combinations were applied to two extensive Brazilian maize data sets (HEL and USP data sets), having different numbers of maize hybrids evaluated in different environments for grain yield (GY), plant height (PH), and ear height (EH). Results show that the MDe and the MDs models fitted with the Gaussian kernel (MDe-GK, and MDs-GK) had the highest prediction accuracy. For GY in the HEL data set, the increase in prediction accuracy of SM-GK over SM-GB ranged from 9 to 32%. For the MM, MDs, and MDe models, the increase in prediction accuracy of GK over GB ranged from 9 to 49%. For GY in the USP data set, the increase in prediction accuracy of SM-GK over SM-GB ranged from 0 to 7%. For the MM, MDs, and MDe models, the increase in prediction accuracy of GK over GB ranged from 34 to 70%. For traits PH and EH, gains in prediction accuracy of models with GK compared to models with GB were smaller than those achieved in GY. Also, these gains in prediction accuracy decreased when a more difficult prediction problem was studied. Copyright © 2017 Bandeira e Sousa et al.
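
    A minimal sketch, under assumptions, of the two kernels the study compares: the linear GBLUP genomic relationship kernel and a Gaussian kernel built from genomic distances. The marker matrix below is simulated, the VanRaden-style scaling and the bandwidth/median-distance normalization are common conventions rather than the exact formulas of the paper, and the environment and G×E terms of the full models are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n_lines, n_markers = 100, 500
M = rng.integers(0, 3, size=(n_lines, n_markers)).astype(float)   # 0/1/2 genotype codes

# GBLUP (linear) kernel: centred marker matrix with VanRaden-style scaling.
p = M.mean(axis=0) / 2.0
Z = M - 2.0 * p
GB = (Z @ Z.T) / (2.0 * np.sum(p * (1.0 - p)))

# Gaussian kernel: squared Euclidean genomic distances with a bandwidth parameter h.
sq = np.sum(Z**2, axis=1)
d2 = sq[:, None] + sq[None, :] - 2.0 * Z @ Z.T
h = 1.0                                                            # bandwidth (tuned in practice)
GK = np.exp(-h * d2 / np.median(d2[np.triu_indices(n_lines, k=1)]))

print("GBLUP kernel diagonal mean:", GB.diagonal().mean().round(3))
print("Gaussian kernel off-diagonal mean:", GK[np.triu_indices(n_lines, 1)].mean().round(3))
```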

  7. Genomic-Enabled Prediction in Maize Using Kernel Models with Genotype × Environment Interaction

    Directory of Open Access Journals (Sweden)

    Massaine Bandeira e Sousa

    2017-06-01

    Full Text Available Multi-environment trials are routinely conducted in plant breeding to select candidates for the next selection cycle. In this study, we compare the prediction accuracy of four developed genomic-enabled prediction models: (1) single-environment, main genotypic effect model (SM); (2) multi-environment, main genotypic effects model (MM); (3) multi-environment, single variance G×E deviation model (MDs); and (4) multi-environment, environment-specific variance G×E deviation model (MDe). Each of these four models was fitted using two kernel methods: a linear kernel Genomic Best Linear Unbiased Predictor, GBLUP (GB), and a nonlinear Gaussian kernel (GK). The eight model-method combinations were applied to two extensive Brazilian maize data sets (HEL and USP data sets), having different numbers of maize hybrids evaluated in different environments for grain yield (GY), plant height (PH), and ear height (EH). Results show that the MDe and the MDs models fitted with the Gaussian kernel (MDe-GK and MDs-GK) had the highest prediction accuracy. For GY in the HEL data set, the increase in prediction accuracy of SM-GK over SM-GB ranged from 9 to 32%. For the MM, MDs, and MDe models, the increase in prediction accuracy of GK over GB ranged from 9 to 49%. For GY in the USP data set, the increase in prediction accuracy of SM-GK over SM-GB ranged from 0 to 7%. For the MM, MDs, and MDe models, the increase in prediction accuracy of GK over GB ranged from 34 to 70%. For traits PH and EH, gains in prediction accuracy of models with GK compared to models with GB were smaller than those achieved in GY. Also, these gains in prediction accuracy decreased when a more difficult prediction problem was studied.

  8. Multi-parameter Analysis and Inversion for Anisotropic Media Using the Scattering Integral Method

    KAUST Repository

    Djebbi, Ramzi

    2017-01-01

    the model. I study the prospect of applying a scattering integral approach for multi-parameter inversion for a transversely isotropic model with a vertical axis of symmetry. I mainly analyze the sensitivity kernels to understand the sensitivity of seismic

  9. Influence Function and Robust Variant of Kernel Canonical Correlation Analysis

    OpenAIRE

    Alam, Md. Ashad; Fukumizu, Kenji; Wang, Yu-Ping

    2017-01-01

    Many unsupervised kernel methods rely on the estimation of the kernel covariance operator (kernel CO) or kernel cross-covariance operator (kernel CCO). Both kernel CO and kernel CCO are sensitive to contaminated data, even when bounded positive definite kernels are used. To the best of our knowledge, there are few well-founded robust kernel methods for statistical unsupervised learning. In addition, while the influence function (IF) of an estimator can characterize its robustness, asymptotic ...

  10. The Linux kernel as flexible product-line architecture

    NARCIS (Netherlands)

    M. de Jonge (Merijn)

    2002-01-01

    The Linux kernel source tree is huge (> 125 MB) and inflexible (because it is difficult to add new kernel components). We propose to make this architecture more flexible by assembling kernel source trees dynamically from individual kernel components. Users then, can select what

  11. Polarized Raman scattering of single ZnO nanorod

    International Nuclear Information System (INIS)

    Yu, J. L.; Lai, Y. F.; Wang, Y. Z.; Cheng, S. Y.; Chen, Y. H.

    2014-01-01

    Polarized Raman scattering measurement on a single wurtzite c-plane (001) ZnO nanorod grown by the hydrothermal method has been performed at room temperature. The polarization dependence of the intensity of the Raman scattering for the phonon modes A1(TO), E1(TO), and E2(high) in the ZnO nanorod is obtained. Deviations of the polarization-dependent Raman spectroscopy from the prediction of the Raman selection rules are observed, which can be attributed to structure defects in the ZnO nanorod, as confirmed by comparison of the transmission electron microscopy, photoluminescence spectra, as well as the polarization-dependent Raman signal of the annealed and unannealed ZnO nanorods. The Raman tensor elements of the A1(TO) and E1(TO) phonon modes normalized to that of the E2(high) phonon mode are |a/d| = 0.32±0.01, |b/d| = 0.49±0.02, and |c/d| = 0.23±0.01 for the unannealed ZnO nanorod, and |a/d| = 0.33±0.01, |b/d| = 0.45±0.01, and |c/d| = 0.20±0.01 for the annealed ZnO nanorod, which shows strong anisotropy compared to that of a bulk ZnO epilayer

  12. Analytical equations for CT dose profiles derived using a scatter kernel of Monte Carlo parentage with broad applicability to CT dosimetry problems

    International Nuclear Information System (INIS)

    Dixon, Robert L.; Boone, John M.

    2011-01-01

    Purpose: Knowledge of the complete axial dose profile f(z), including its long scatter tails, provides the most complete (and flexible) description of the accumulated dose in CT scanning. The CTDI paradigm (including CTDI_vol) requires shift-invariance along z (identical dose profiles spaced at equal intervals), and is therefore inapplicable to many of the new and complex shift-variant scan protocols, e.g., high dose perfusion studies using variable (or zero) pitch. In this work, a convolution-based beam model developed by Dixon et al. [Med. Phys. 32, 3712-3728 (2005)] updated with a scatter LSF kernel (or DSF) derived from a Monte Carlo simulation by Boone [Med. Phys. 36, 4547-4554 (2009)] is used to create an analytical equation for the axial dose profile f(z) in a cylindrical phantom. Using f(z), equations are derived which provide the analytical description of conventional (axial and helical) dose, demonstrating its physical underpinnings; and likewise for the peak axial dose f(0) appropriate to stationary phantom cone beam CT (SCBCT). The methodology can also be applied to dose calculations in shift-variant scan protocols. This paper is an extension of our recent work Dixon and Boone [Med. Phys. 37, 2703-2718 (2010)], which dealt only with the properties of the peak dose f(0), its relationship to CTDI, and its appropriateness to SCBCT. Methods: The experimental beam profile data f(z) of Mori et al. [Med. Phys. 32, 1061-1069 (2005)] from a 256 channel prototype cone beam scanner for beam widths (apertures) ranging from a = 28 to 138 mm are used to corroborate the theoretical axial profiles in a 32 cm PMMA body phantom. Results: The theoretical functions f(z) closely matched the central axis experimental profile data [11] for all apertures (a = 28-138 mm). Integration of f(z) likewise yields analytical equations for all the (CTDI-based) dosimetric quantities of conventional CT (including CTDI_L itself) in addition to the peak dose f(0) relevant to SCBCT
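
    A rough sketch of the convolution structure described above, under assumptions: the axial dose profile f(z) is modeled as a primary rect profile of width a plus a scatter component obtained by convolving that rect with a normalized line-spread kernel, and the accumulated dose for a series of contiguous scans follows by summing shifted copies of f(z). The double-exponential kernel shape and the scatter fraction below are placeholders, not the Monte Carlo-derived kernel of the paper.

```python
import numpy as np

z = np.linspace(-300.0, 300.0, 4001)          # mm along the rotation axis
dz = z[1] - z[0]

def axial_profile(aperture_mm, scatter_fraction=0.45, tail_mm=60.0):
    """f(z): primary rect of width a plus its convolution with a scatter LSF kernel."""
    primary = (np.abs(z) <= aperture_mm / 2.0).astype(float)
    lsf = np.exp(-np.abs(z) / tail_mm)
    lsf /= lsf.sum() * dz                      # normalize the scatter kernel
    scatter = scatter_fraction * np.convolve(primary, lsf, mode="same") * dz
    return (1.0 - scatter_fraction) * primary + scatter

# Accumulated dose for N contiguous axial scans at table increment b = aperture a.
a, N = 40.0, 11
f = axial_profile(a)
total = sum(np.interp(z - k * a, z, f) for k in range(-(N // 2), N // 2 + 1))

# A CTDI-like average over the central 100 mm approaches the equilibrium value.
central = np.abs(z) <= 50.0
print("peak accumulated dose:", total.max().round(3))
print("mean over central 100 mm:", total[central].mean().round(3))
```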

  13. Comparison of scatter doses from a multislice and a single slice CT scanner

    International Nuclear Information System (INIS)

    Burrage, J. W.; Causer, D. A.

    2006-01-01

    During shielding calculations for a new multislice CT (MSCT) scanner it was found that the manufacturer's data indicated significantly higher external scatter doses than would be generated for a single slice CT (SSCT). Even allowing for increased beam width, the manufacturer's data indicated that the scatter dose per scan was higher by a factor of about 3 to 4. The magnitude of the discrepancy was contrary to expectations and also contrary to a statement by the UK ImPACT group, which indicated that when beam width is taken into account, the scatter doses should be similar. The matter was investigated by comparing scatter doses from an SSCT and an MSCT. Scatter measurements were performed at three points using a standard perspex CTDI phantom, and CT dose indices were also measured to compare scanner output. MSCT measurements were performed with a 40 mm wide beam, SSCT measurements with a 10 mm wide beam. A film badge survey was also performed after the installation of the MSCT scanner to assess the adequacy of lead shielding in the room. It was found that the scatter doses from the MSCT were lower than indicated by the manufacturer's data. MSCT scatter doses were approximately 4 times higher than those from the SSCT, consistent with expectations due to beam width differences. The CT dose indices were similar, and the film badge survey indicated that the existing shielding, which had been adequate for the SSCT, was also adequate for the MSCT

  14. Exploiting graph kernels for high performance biomedical relation extraction.

    Science.gov (United States)

    Panyam, Nagesh C; Verspoor, Karin; Cohn, Trevor; Ramamohanarao, Kotagiri

    2018-01-30

    Relation extraction from biomedical publications is an important task in the area of semantic mining of text. Kernel methods for supervised relation extraction are often preferred over manual feature engineering methods, when classifying highly ordered structures such as trees and graphs obtained from syntactic parsing of a sentence. Tree kernels such as the Subset Tree Kernel and Partial Tree Kernel have been shown to be effective for classifying constituency parse trees and basic dependency parse graphs of a sentence. Graph kernels such as the All Path Graph kernel (APG) and Approximate Subgraph Matching (ASM) kernel have been shown to be suitable for classifying general graphs with cycles, such as the enhanced dependency parse graph of a sentence. In this work, we present a high performance Chemical-Induced Disease (CID) relation extraction system. We present a comparative study of kernel methods for the CID task and also extend our study to the Protein-Protein Interaction (PPI) extraction task, an important biomedical relation extraction task. We discuss novel modifications to the ASM kernel to boost its performance and a method to apply graph kernels for extracting relations expressed in multiple sentences. Our system for CID relation extraction attains an F-score of 60%, without using external knowledge sources or task specific heuristic or rules. In comparison, the state of the art Chemical-Disease Relation Extraction system achieves an F-score of 56% using an ensemble of multiple machine learning methods, which is then boosted to 61% with a rule based system employing task specific post processing rules. For the CID task, graph kernels outperform tree kernels substantially, and the best performance is obtained with APG kernel that attains an F-score of 60%, followed by the ASM kernel at 57%. The performance difference between the ASM and APG kernels for CID sentence level relation extraction is not significant. In our evaluation of ASM for the PPI task, ASM

  15. Learning a peptide-protein binding affinity predictor with kernel ridge regression

    Science.gov (United States)

    2013-01-01

    Background The cellular function of a vast majority of proteins is performed through physical interactions with other biomolecules, which, most of the time, are other proteins. Peptides represent templates of choice for mimicking a secondary structure in order to modulate protein-protein interaction. They are thus an interesting class of therapeutics since they also display strong activity, high selectivity, low toxicity and few drug-drug interactions. Furthermore, predicting peptides that would bind to a specific MHC allele would be of tremendous benefit to improve vaccine based therapy and possibly generate antibodies with greater affinity. Modern computational methods have the potential to accelerate and lower the cost of drug and vaccine discovery by selecting potential compounds for testing in silico prior to biological validation. Results We propose a specialized string kernel for small bio-molecules, peptides and pseudo-sequences of binding interfaces. The kernel incorporates physico-chemical properties of amino acids and elegantly generalizes eight kernels, comprised of the Oligo, the Weighted Degree, the Blended Spectrum, and the Radial Basis Function. We provide a low complexity dynamic programming algorithm for the exact computation of the kernel and a linear time algorithm for its approximation. Combined with kernel ridge regression and SupCK, a novel binding pocket kernel, the proposed kernel yields biologically relevant and good prediction accuracy on the PepX database. For the first time, a machine learning predictor is capable of predicting the binding affinity of any peptide to any protein with reasonable accuracy. The method was also applied to both single-target and pan-specific Major Histocompatibility Complex class II benchmark datasets and three Quantitative Structure Affinity Model benchmark datasets. Conclusion On all benchmarks, our method significantly (p-value ≤ 0.057) outperforms the current state-of-the-art methods at predicting
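
    The regression step can be sketched independently of the specialized string kernel. The snippet below (illustrative only) uses scikit-learn's KernelRidge with a precomputed Gram matrix; the generic RBF kernel on random vectors merely stands in for the peptide string kernel and binding-pocket kernel described above, and the data are synthetic placeholders.

        import numpy as np
        from sklearn.kernel_ridge import KernelRidge

        rng = np.random.default_rng(0)
        # Placeholder features; in the paper the Gram matrix would come from a
        # string kernel over peptide sequences, not from random vectors.
        X_train = rng.normal(size=(80, 20))
        y_train = rng.normal(size=80)          # binding affinities (illustrative)
        X_test = rng.normal(size=(10, 20))

        def rbf_gram(A, B, gamma=0.1):
            # Generic RBF Gram matrix standing in for the specialized kernel.
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-gamma * d2)

        model = KernelRidge(alpha=1.0, kernel="precomputed")
        model.fit(rbf_gram(X_train, X_train), y_train)
        pred = model.predict(rbf_gram(X_test, X_train))
        print(pred[:3])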

  16. DISCUS, Neutron Single to Double Scattering Ratio in Inelastic Scattering Experiment by Monte-Carlo

    International Nuclear Information System (INIS)

    Johnson, M.W.

    1993-01-01

    1 - Description of problem or function: DISCUS calculates the ratio of once-scattered to twice-scattered neutrons detected in an inelastic neutron scattering experiment. DISCUS also calculates the flux of once-scattered neutrons that would have been observed if there were no absorption in the sample and if, once scattered, the neutron would emerge without further re-scattering or absorption. Three types of sample geometry are used: an infinite flat plate, a finite flat plate or a finite length cylinder. (The infinite flat plate is included for comparison with other multiple scattering programs.) The program may be used for any sample for which the scattering law is of the form S(/Q/, omega). 2 - Method of solution: Monte Carlo with importance sampling is used. Neutrons are 'forced' both into useful angular trajectories, and useful energy bins. Biasing of the collision point according to the point of entry of the neutron into the sample is also utilised. The first and second order scattered neutron fluxes are calculated in independent histories. For twice-scattered neutron histories a square distribution in Q-omega space is used to sample the neutron coming from the first scattering event, whilst biasing is used for the second scattering event. (A square distribution is used so as to obtain reasonable inelastic-inelastic statistics.) 3 - Restrictions on the complexity of the problem: Unlimited number of detectors. Max. size of (Q, omega) matrix is 39*149. Max. number of points in momentum space for the scattering cross section is 199

  17. a Comparison Study of Different Kernel Functions for Svm-Based Classification of Multi-Temporal Polarimetry SAR Data

    Science.gov (United States)

    Yekkehkhany, B.; Safari, A.; Homayouni, S.; Hasanlou, M.

    2014-10-01

    In this paper, a framework is developed based on Support Vector Machines (SVM) for crop classification using polarimetric features extracted from multi-temporal Synthetic Aperture Radar (SAR) imagery. The multi-temporal integration of data not only improves the overall retrieval accuracy but also provides more reliable estimates with respect to single-date data. Several kernel functions are employed and compared in this study for mapping the input space to a higher-dimensional Hilbert space. These kernel functions include linear, polynomial and Radial Basis Function (RBF) kernels. The method is applied to several UAVSAR L-band SAR images acquired over an agricultural area near Winnipeg, Manitoba, Canada. In this research, the temporal alpha features of the H/A/α decomposition method are used in classification. The experimental tests show that an SVM classifier with the RBF kernel for three dates of data increases the Overall Accuracy (OA) by up to 3% in comparison to using a linear kernel function, and by up to 1% in comparison to a 3rd-degree polynomial kernel function.
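
    The kernel comparison itself is straightforward to reproduce on synthetic data. The sketch below (not the UAVSAR data or the H/A/alpha features) cross-validates an SVM with the three kernel families named above.

        import numpy as np
        from sklearn.datasets import make_classification
        from sklearn.model_selection import cross_val_score
        from sklearn.svm import SVC

        # Synthetic stand-in for multi-temporal polarimetric features.
        X, y = make_classification(n_samples=300, n_features=9, n_informative=6,
                                   n_classes=3, n_clusters_per_class=1, random_state=1)

        for name, clf in [("linear", SVC(kernel="linear")),
                          ("poly-3", SVC(kernel="poly", degree=3)),
                          ("RBF",    SVC(kernel="rbf", gamma="scale"))]:
            acc = cross_val_score(clf, X, y, cv=5).mean()
            print(f"{name:7s} overall accuracy ~ {acc:.3f}")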

  18. GRIM : Leveraging GPUs for Kernel integrity monitoring

    NARCIS (Netherlands)

    Koromilas, Lazaros; Vasiliadis, Giorgos; Athanasopoulos, Ilias; Ioannidis, Sotiris

    2016-01-01

    Kernel rootkits can exploit an operating system and enable future accessibility and control, despite all recent advances in software protection. A promising defense mechanism against rootkits is Kernel Integrity Monitor (KIM) systems, which inspect the kernel text and data to discover any malicious

  19. 7 CFR 51.2296 - Three-fourths half kernel.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Three-fourths half kernel. 51.2296 Section 51.2296 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards...-fourths half kernel. Three-fourths half kernel means a portion of a half of a kernel which has more than...

  20. Examining Potential Boundary Bias Effects in Kernel Smoothing on Equating: An Introduction for the Adaptive and Epanechnikov Kernels.

    Science.gov (United States)

    Cid, Jaime A; von Davier, Alina A

    2015-05-01

    Test equating is a method of making the test scores from different test forms of the same assessment comparable. In the equating process, an important step involves continuizing the discrete score distributions. In traditional observed-score equating, this step is achieved using linear interpolation (or an unscaled uniform kernel). In the kernel equating (KE) process, this continuization process involves Gaussian kernel smoothing. It has been suggested that the choice of bandwidth in kernel smoothing controls the trade-off between variance and bias. In the literature on estimating density functions using kernels, it has also been suggested that the weight of the kernel depends on the sample size, and therefore, the resulting continuous distribution exhibits bias at the endpoints, where the samples are usually smaller. The purpose of this article is (a) to explore the potential effects of atypical scores (spikes) at the extreme ends (high and low) on the KE method in distributions with different degrees of asymmetry using the randomly equivalent groups equating design (Study I), and (b) to introduce the Epanechnikov and adaptive kernels as potential alternative approaches to reducing boundary bias in smoothing (Study II). The beta-binomial model is used to simulate observed scores reflecting a range of different skewed shapes.
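
    The continuization step described above can be sketched as follows (illustrative only: the score distribution is simulated rather than beta-binomial, and the bandwidth is a fixed placeholder rather than the penalty-selected value used in kernel equating).

        import numpy as np

        rng = np.random.default_rng(2)
        scores = np.arange(0, 41)                    # possible discrete test scores
        probs = rng.dirichlet(np.ones(scores.size))  # illustrative score probabilities
        x = np.linspace(-2.0, 42.0, 881)
        h = 1.0                                      # bandwidth (placeholder)

        def gaussian(u):
            return np.exp(-0.5 * u ** 2) / np.sqrt(2.0 * np.pi)

        def epanechnikov(u):
            return np.where(np.abs(u) <= 1.0, 0.75 * (1.0 - u ** 2), 0.0)

        def continuize(kernel):
            # Continuized density: a mixture of kernels centred at each discrete score.
            u = (x[:, None] - scores[None, :]) / h
            return (probs[None, :] * kernel(u)).sum(axis=1) / h

        dx = x[1] - x[0]
        for name, kern in (("Gaussian", gaussian), ("Epanechnikov", epanechnikov)):
            f = continuize(kern)
            print(name, "total mass ~", round(float((f * dx).sum()), 4))

    The compactly supported Epanechnikov kernel places no mass far beyond the score range, which is one motivation for considering it near the boundaries.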

  1. Adaptive Kernel in Meshsize Boosting Algorithm in KDE ...

    African Journals Online (AJOL)

    This paper proposes the use of adaptive kernel in a meshsize boosting algorithm in kernel density estimation. The algorithm is a bias reduction scheme like other existing schemes but uses adaptive kernel instead of the regular fixed kernels. An empirical study for this scheme is conducted and the findings are comparatively ...

  2. Modeling single-scattering properties of small cirrus particles by use of a size-shape distribution of ice spheroids and cylinders

    International Nuclear Information System (INIS)

    Liu Li; Mishchenko, Michael I.; Cairns, Brian; Carlson, Barbara E.; Travis, Larry D.

    2006-01-01

    In this study, we model single-scattering properties of small cirrus crystals using mixtures of polydisperse, randomly oriented spheroids and cylinders with varying aspect ratios and with a refractive index representative of water ice at a wavelength of 1.88 μm. The Stokes scattering matrix elements averaged over wide shape distributions of spheroids and cylinders are compared with those computed for polydisperse surface-equivalent spheres. The shape-averaged phase function for a mixture of oblate and prolate spheroids is smooth, featureless, and nearly flat at side-scattering angles and closely resembles those typically measured for cirrus. Compared with the ensemble-averaged phase function for spheroids, that for a shape distribution of cylinders shows a relatively deeper minimum at side-scattering angles. This may indicate that light scattering from realistic cirrus crystals can be better represented by a shape mixture of ice spheroids. Interestingly, the single-scattering properties of shape-averaged oblate and prolate cylinders are very similar to those of compact cylinders with a diameter-to-length ratio of unity. The differences in the optical cross sections, single-scattering albedo, and asymmetry parameter between the spherical and the nonspherical particles studied appear to be relatively small. This may suggest that for a given optical thickness, the influence of particle shape on the radiative forcing caused by a cloud composed of small ice crystals can be negligible

  3. Single-electron capture for 2-8 keV incident energy and direct scattering at 6 keV in He2+-He collisions

    International Nuclear Information System (INIS)

    Bordenave-Montesquieu, D.; Dagnac, R.

    1992-01-01

    We studied the single-electron capture as well as the direct processes occurring when a He2+ ion is scattered by a He target. Doubly differential cross sections were measured for single-electron capture with a collision energy ranging from 2 to 8 keV and a scattering angle varying from 10' to 3°30' (laboratory frame). Single-electron capture into excited states of He+ was found to be the dominant process, confirming a previous experimental study. Elastic scattering and ionization differential cross sections were measured for E = 6 keV. (Author)

  4. A kernel version of multivariate alteration detection

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Vestergaard, Jacob Schack

    2013-01-01

    Based on the established methods kernel canonical correlation analysis and multivariate alteration detection we introduce a kernel version of multivariate alteration detection. A case study with SPOT HRV data shows that the kMAD variates focus on extreme change observations.

  5. Quasi-four-particle first-order Faddeev-Watson-Lovelace terms in proton-helium scattering

    Science.gov (United States)

    Safarzade, Zohre; Akbarabadi, Farideh Shojaei; Fathi, Reza; Brunger, Michael J.; Bolorizadeh, Mohammad A.

    2017-06-01

    The Faddeev-Watson-Lovelace equations, which are typically used for solving three-particle scattering problems, are based on the assumption that the target has one active electron while the other electrons remain passive during the collision process. Thus, in the case of protons scattering from helium or helium-like targets, in which there are two bound-state electrons, the passive electron has a static role in the collision channel to be studied. In this work, we intend to assign a dynamic role to all the target electrons, as they are physically active in the collision. Including an active role for the second electron in proton-helium-like collisions requires a new form of the Faddeev-Watson-Lovelace integral equations, in which there is no disconnected kernel. We consider the operators and the wave functions associated with the electrons to obey the Pauli exclusion principle, as the electrons are indistinguishable. In addition, a quasi-three-particle collision is assumed in the initial channel, where the electronic cloud is represented as a single entity in the collision.

  6. Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.

    Science.gov (United States)

    Kwak, Nojun

    2016-05-20

    Recently, the nonlinear projection trick (NPT) was introduced, enabling direct computation of the coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so-called kernel trick. However, NPT is inherently difficult to implement incrementally because an ever-increasing kernel matrix must be handled as additional training samples are introduced. In this paper, an incremental version of the NPT (INPT) is proposed based on the observation that the centering step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can directly be used in any incremental methods to implement a kernel version of the incremental methods. The effectiveness of the INPT is shown by applying it to implement incremental versions of kernel methods such as kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis, which are utilized for problems of kernel matrix reconstruction, letter classification, and face image retrieval, respectively.
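
    The batch form of the projection trick is easy to state in code: explicit feature-space coordinates are read off from an eigendecomposition of the kernel matrix. The sketch below shows only this batch step (without centering, in line with the observation above); the incremental update that defines INPT is not reproduced.

        import numpy as np
        from sklearn.metrics.pairwise import rbf_kernel

        rng = np.random.default_rng(3)
        X = rng.normal(size=(50, 5))

        K = rbf_kernel(X, gamma=0.2)            # kernel (Gram) matrix
        w, V = np.linalg.eigh(K)                # K = V diag(w) V^T
        keep = w > 1e-10
        Phi = V[:, keep] * np.sqrt(w[keep])     # explicit coordinates: Phi @ Phi.T == K

        print(np.allclose(Phi @ Phi.T, K, atol=1e-8))

    Any standard linear incremental algorithm can then be run directly on the rows of Phi, which is the point of the trick.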

  7. Uranium kernel formation via internal gelation

    International Nuclear Information System (INIS)

    Hunt, R.D.; Collins, J.L.

    2004-01-01

    In the 1970s and 1980s, the U.S. Department of Energy (DOE) conducted numerous studies on the fabrication of nuclear fuel particles using the internal gelation process. These amorphous kernels were prone to flaking or breaking when gases tried to escape from the kernels during calcination and sintering. These earlier kernels would not meet today's proposed specifications for reactor fuel. In the interim, the internal gelation process has been used to create hydrous metal oxide microspheres for the treatment of nuclear waste. With the renewed interest in advanced nuclear fuel by the DOE, the lessons learned from the nuclear waste studies were recently applied to the fabrication of uranium kernels, which will become tristructural-isotropic (TRISO) fuel particles. These process improvements included equipment modifications, small changes to the feed formulations, and a new temperature profile for the calcination and sintering. The modifications to the laboratory-scale equipment and its operation, as well as small changes to the feed composition, increased the product yield from 60% to 80%-99%. The new kernels were substantially less glassy, and no evidence of flaking was found. Finally, key process parameters were identified, and their effects on the uranium microspheres and kernels are discussed. (orig.)

  8. Bayesian Genomic Prediction with Genotype × Environment Interaction Kernel Models

    Science.gov (United States)

    Cuevas, Jaime; Crossa, José; Montesinos-López, Osval A.; Burgueño, Juan; Pérez-Rodríguez, Paulino; de los Campos, Gustavo

    2016-01-01

    The phenomenon of genotype × environment (G × E) interaction in plant breeding decreases selection accuracy, thereby negatively affecting genetic gains. Several genomic prediction models incorporating G × E have been recently developed and used in genomic selection of plant breeding programs. Genomic prediction models for assessing multi-environment G × E interaction are extensions of a single-environment model, and have advantages and limitations. In this study, we propose two multi-environment Bayesian genomic models: the first model considers genetic effects (u) that can be assessed by the Kronecker product of variance–covariance matrices of genetic correlations between environments and genomic kernels through markers under two linear kernel methods, linear (genomic best linear unbiased predictors, GBLUP) and Gaussian (Gaussian kernel, GK). The other model has the same genetic component as the first model (u) plus an extra component, f, that captures random effects between environments that were not captured by the random effects u. We used five CIMMYT data sets (one maize and four wheat) that were previously used in different studies. Results show that models with G × E always have higher prediction ability than single-environment models, and the higher prediction ability of multi-environment models with u and f over the multi-environment model with only u occurred 85% of the time with GBLUP and 45% of the time with GK across the five data sets. The latter result indicated that including the random effect f is still beneficial for increasing prediction ability after adjusting by the random effect u. PMID:27793970
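
    The two single-environment kernels named above can be built from a marker matrix in a few lines. The sketch below is illustrative only: markers are simulated, the Gaussian bandwidth is a placeholder, and the multi-environment Kronecker structure and Bayesian fitting of the paper are not reproduced.

        import numpy as np

        rng = np.random.default_rng(4)
        M = rng.integers(0, 3, size=(100, 500)).astype(float)  # 0/1/2 marker matrix (simulated)

        # GBLUP (linear) kernel: G = Z Z' / p with column-centred markers.
        Z = M - M.mean(axis=0)
        G = Z @ Z.T / Z.shape[1]

        # Gaussian kernel: K_ij = exp(-h * d_ij^2 / median(d^2)); h = 1 is a placeholder bandwidth.
        sq = (M ** 2).sum(axis=1)
        d2 = sq[:, None] + sq[None, :] - 2.0 * M @ M.T
        K = np.exp(-1.0 * d2 / np.median(d2))

        print(G.shape, K.shape)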

  9. Bayesian Genomic Prediction with Genotype × Environment Interaction Kernel Models

    Directory of Open Access Journals (Sweden)

    Jaime Cuevas

    2017-01-01

    Full Text Available The phenomenon of genotype × environment (G × E) interaction in plant breeding decreases selection accuracy, thereby negatively affecting genetic gains. Several genomic prediction models incorporating G × E have been recently developed and used in genomic selection of plant breeding programs. Genomic prediction models for assessing multi-environment G × E interaction are extensions of a single-environment model, and have advantages and limitations. In this study, we propose two multi-environment Bayesian genomic models: the first model considers genetic effects (u) that can be assessed by the Kronecker product of variance–covariance matrices of genetic correlations between environments and genomic kernels through markers under two linear kernel methods, linear (genomic best linear unbiased predictors, GBLUP) and Gaussian (Gaussian kernel, GK). The other model has the same genetic component as the first model (u) plus an extra component, f, that captures random effects between environments that were not captured by the random effects u. We used five CIMMYT data sets (one maize and four wheat) that were previously used in different studies. Results show that models with G × E always have higher prediction ability than single-environment models, and the higher prediction ability of multi-environment models with u and f over the multi-environment model with only u occurred 85% of the time with GBLUP and 45% of the time with GK across the five data sets. The latter result indicated that including the random effect f is still beneficial for increasing prediction ability after adjusting by the random effect u.

  10. Kernel learning at the first level of inference.

    Science.gov (United States)

    Cawley, Gavin C; Talbot, Nicola L C

    2014-05-01

    Kernel learning methods, whether Bayesian or frequentist, typically involve multiple levels of inference, with the coefficients of the kernel expansion being determined at the first level and the kernel and regularisation parameters carefully tuned at the second level, a process known as model selection. Model selection for kernel machines is commonly performed via optimisation of a suitable model selection criterion, often based on cross-validation or theoretical performance bounds. However, if there are a large number of kernel parameters, as for instance in the case of automatic relevance determination (ARD), there is a substantial risk of over-fitting the model selection criterion, resulting in poor generalisation performance. In this paper we investigate the possibility of learning the kernel, for the Least-Squares Support Vector Machine (LS-SVM) classifier, at the first level of inference, i.e. parameter optimisation. The kernel parameters and the coefficients of the kernel expansion are jointly optimised at the first level of inference, minimising a training criterion with an additional regularisation term acting on the kernel parameters. The key advantage of this approach is that the values of only two regularisation parameters need be determined in model selection, substantially alleviating the problem of over-fitting the model selection criterion. The benefits of this approach are demonstrated using a suite of synthetic and real-world binary classification benchmark problems, where kernel learning at the first level of inference is shown to be statistically superior to the conventional approach, improves on our previous work (Cawley and Talbot, 2007) and is competitive with Multiple Kernel Learning approaches, but with reduced computational expense. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. Global Polynomial Kernel Hazard Estimation

    DEFF Research Database (Denmark)

    Hiabu, Munir; Miranda, Maria Dolores Martínez; Nielsen, Jens Perch

    2015-01-01

    This paper introduces a new bias reducing method for kernel hazard estimation. The method is called global polynomial adjustment (GPA). It is a global correction which is applicable to any kernel hazard estimator. The estimator works well from a theoretical point of view as it asymptotically redu...

  12. Quantum tomography, phase-space observables and generalized Markov kernels

    International Nuclear Information System (INIS)

    Pellonpää, Juha-Pekka

    2009-01-01

    We construct a generalized Markov kernel which transforms the observable associated with the homodyne tomography into a covariant phase-space observable with a regular kernel state. Illustrative examples are given in the cases of a 'Schroedinger cat' kernel state and the Cahill-Glauber s-parametrized distributions. Also we consider an example of a kernel state when the generalized Markov kernel cannot be constructed.

  13. Robust organelle size extractions from elastic scattering measurements of single cells (Conference Presentation)

    Science.gov (United States)

    Cannaday, Ashley E.; Draham, Robert; Berger, Andrew J.

    2016-04-01

    The goal of this project is to estimate non-nuclear organelle size distributions in single cells by measuring angular scattering patterns and fitting them with Mie theory. Simulations have indicated that the large relative size distribution of organelles (mean:width≈2) leads to unstable Mie fits unless scattering is collected at polar angles less than 20 degrees. Our optical system has therefore been modified to collect angles down to 10 degrees. Initial validations will be performed on polystyrene bead populations whose size distributions resemble those of cell organelles. Unlike with the narrow bead distributions that are often used for calibration, we expect to see an order-of-magnitude improvement in the stability of the size estimates as the minimum angle decreases from 20 to 10 degrees. Scattering patterns will then be acquired and analyzed from single cells (EMT6 mouse cancer cells), both fixed and live, at multiple time points. Fixed cells, with no changes in organelle sizes over time, will be measured to determine the fluctuation level in estimated size distribution due to measurement imperfections alone. Subsequent measurements on live cells will determine whether there is a higher level of fluctuation that could be attributed to dynamic changes in organelle size. Studies on unperturbed cells are precursors to ones in which the effects of exogenous agents are monitored over time.

  14. Predicting complex traits using a diffusion kernel on genetic markers with an application to dairy cattle and wheat data

    Science.gov (United States)

    2013-01-01

    Background Arguably, genotypes and phenotypes may be linked in functional forms that are not well addressed by the linear additive models that are standard in quantitative genetics. Therefore, developing statistical learning models for predicting phenotypic values from all available molecular information that are capable of capturing complex genetic network architectures is of great importance. Bayesian kernel ridge regression is a non-parametric prediction model proposed for this purpose. Its essence is to create a spatial distance-based relationship matrix called a kernel. Although the set of all single nucleotide polymorphism genotype configurations on which a model is built is finite, past research has mainly used a Gaussian kernel. Results We sought to investigate the performance of a diffusion kernel, which was specifically developed to model discrete marker inputs, using Holstein cattle and wheat data. This kernel can be viewed as a discretization of the Gaussian kernel. The predictive ability of the diffusion kernel was similar to that of non-spatial distance-based additive genomic relationship kernels in the Holstein data, but outperformed the latter in the wheat data. However, the difference in performance between the diffusion and Gaussian kernels was negligible. Conclusions It is concluded that the ability of a diffusion kernel to capture the total genetic variance is not better than that of a Gaussian kernel, at least for these data. Although the diffusion kernel as a choice of basis function may have potential for use in whole-genome prediction, our results imply that embedding genetic markers into a non-Euclidean metric space has very small impact on prediction. Our results suggest that use of the black box Gaussian kernel is justified, given its connection to the diffusion kernel and its similar predictive performance. PMID:23763755
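
    A generic graph diffusion kernel, of the kind referred to above, is obtained as the matrix exponential of a negated graph Laplacian. The sketch below uses a toy adjacency structure, not genotype data, and a placeholder diffusion rate.

        import numpy as np
        from scipy.linalg import expm

        # Toy adjacency among six discrete states (illustrative only).
        A = np.array([[0, 1, 1, 0, 0, 0],
                      [1, 0, 1, 1, 0, 0],
                      [1, 1, 0, 1, 1, 0],
                      [0, 1, 1, 0, 1, 1],
                      [0, 0, 1, 1, 0, 1],
                      [0, 0, 0, 1, 1, 0]], dtype=float)
        L = np.diag(A.sum(axis=1)) - A      # graph Laplacian
        beta = 0.5                          # diffusion rate (placeholder)
        K = expm(-beta * L)                 # diffusion kernel: symmetric, positive definite

        print(np.allclose(K, K.T), np.linalg.eigvalsh(K).min() > 0.0)

    As beta shrinks the kernel approaches the identity, and for suitably embedded discrete inputs it behaves like a Gaussian kernel, which is the sense in which it discretizes the latter.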

  15. A Tensor-Product-Kernel Framework for Multiscale Neural Activity Decoding and Control

    Science.gov (United States)

    Li, Lin; Brockmeier, Austin J.; Choi, John S.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.

    2014-01-01

    Brain machine interfaces (BMIs) have attracted intense attention as a promising technology for directly interfacing computers or prostheses with the brain's motor and sensory areas, thereby bypassing the body. The availability of multiscale neural recordings including spike trains and local field potentials (LFPs) brings potential opportunities to enhance computational modeling by enriching the characterization of the neural system state. However, heterogeneity on data type (spike timing versus continuous amplitude signals) and spatiotemporal scale complicates the model integration of multiscale neural activity. In this paper, we propose a tensor-product-kernel-based framework to integrate the multiscale activity and exploit the complementary information available in multiscale neural activity. This provides a common mathematical framework for incorporating signals from different domains. The approach is applied to the problem of neural decoding and control. For neural decoding, the framework is able to identify the nonlinear functional relationship between the multiscale neural responses and the stimuli using general purpose kernel adaptive filtering. In a sensory stimulation experiment, the tensor-product-kernel decoder outperforms decoders that use only a single neural data type. In addition, an adaptive inverse controller for delivering electrical microstimulation patterns that utilizes the tensor-product kernel achieves promising results in emulating the responses to natural stimulation. PMID:24829569
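
    The tensor-product construction itself amounts to multiplying a valid kernel on each signal domain. The sketch below (synthetic stand-in features, generic RBF kernels in place of the spike-train and LFP kernels used in the paper) illustrates that the elementwise product of two Gram matrices is again a valid kernel matrix.

        import numpy as np
        from sklearn.metrics.pairwise import rbf_kernel

        rng = np.random.default_rng(5)
        spikes = rng.normal(size=(40, 12))   # stand-in features for the spike-train domain
        lfps = rng.normal(size=(40, 30))     # stand-in features for the LFP domain

        K_spike = rbf_kernel(spikes, gamma=0.1)
        K_lfp = rbf_kernel(lfps, gamma=0.05)
        K_joint = K_spike * K_lfp            # tensor-product kernel (Schur product)

        # The Schur product of positive-semidefinite matrices is positive semidefinite,
        # so K_joint can be fed to any kernel adaptive filter or regressor.
        print(np.linalg.eigvalsh(K_joint).min() >= -1e-10)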

  16. Scattering of atomic and molecular ions from single crystal surfaces of Cu, Ag and Fe

    International Nuclear Information System (INIS)

    Zoest, J.M. van.

    1986-01-01

    This thesis deals with the analysis of crystal surfaces of Cu, Ag and Fe with Low Energy Ion Scattering Spectroscopy (LEIS). Different atomic and molecular ions with fixed energies below 7 keV are scattered by a metal single crystal (with adsorbates). The energy and direction of the scattered particles are analysed for different selected charge states. In that way information can be obtained concerning the composition and atomic and electronic structure of the single crystal surface. Energy spectra contain information on the composition of the surface, while structural atomic information is obtained by direction measurements (photograms). In Ch.1 a description is given of the experimental equipment, and in Ch.2 a characterization of the LEIS method. Ch.3 deals with the neutralization of keV ions in surface scattering. Two different ways of data interpretation are presented. First a model is treated in which the observed directional dependence of the neutralization action of the first atom layer of the surface is represented by a laterally varying thickness of the neutralizing layer. Secondly it is shown that the data can be reproduced by a more realistic, physical model based on atomic transition matrix elements. In Ch.4 the low energy hydrogen scattering is described. The study of the dissociation of H2+ at an Ag surface resulted in a model based on electronic dissociation, initialized by electron capture into a repulsive (molecular) state. Finally, in Ch.5 the method is applied to the investigation of the surface structure of oxidized Fe. (Auth.)

  17. Relationship between attenuation coefficients and dose-spread kernels

    International Nuclear Information System (INIS)

    Boyer, A.L.

    1988-01-01

    Dose-spread kernels can be used to calculate the dose distribution in a photon beam by convolving the kernel with the primary fluence distribution. The theoretical relationships between various types and components of dose-spread kernels relative to photon attenuation coefficients are explored. These relations can be valuable as checks on the conservation of energy by dose-spread kernels calculated by analytic or Monte Carlo methods
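
    The convolution relationship stated above is easy to sketch numerically. The example below is illustrative only: the dose-spread kernel is an arbitrary normalized exponential, not a Monte Carlo kernel, and the check simply confirms that a kernel normalized to unity conserves the energy released by the primary fluence.

        import numpy as np
        from scipy.signal import fftconvolve

        # 2-D primary fluence: a uniform square field (illustrative).
        fluence = np.zeros((101, 101))
        fluence[45:56, 45:56] = 1.0

        # Placeholder dose-spread kernel: isotropic exponential fall-off, normalized to 1.
        y, x = np.mgrid[-20:21, -20:21]
        r = np.hypot(x, y)
        kernel = np.exp(-r / 3.0)
        kernel /= kernel.sum()               # energy conservation in the kernel

        dose = fftconvolve(fluence, kernel, mode="same")
        print("energy conserved:", np.isclose(fluence.sum(), dose.sum()))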

  18. Mixture Density Mercer Kernels: A Method to Learn Kernels

    Data.gov (United States)

    National Aeronautics and Space Administration — This paper presents a method of generating Mercer Kernels from an ensemble of probabilistic mixture models, where each mixture model is generated from a Bayesian...

  19. Decoupling single nanowire mobilities limited by surface scattering and bulk impurity scattering

    International Nuclear Information System (INIS)

    Khanal, D. R.; Levander, A. X.; Wu, J.; Yu, K. M.; Liliental-Weber, Z.; Walukiewicz, W.; Grandal, J.; Sanchez-Garcia, M. A.; Calleja, E.

    2011-01-01

    We demonstrate the isolation of two free carrier scattering mechanisms as a function of radial band bending in InN nanowires via universal mobility analysis, where effective carrier mobility is measured as a function of effective electric field in a nanowire field-effect transistor. Our results show that Coulomb scattering limits effective mobility at most effective fields, while surface roughness scattering only limits mobility under very high internal electric fields. High-energy α particle irradiation is used to vary the ionized donor concentration, and the observed decrease in mobility and increase in donor concentration are compared to Hall effect results of high-quality InN thin films. Our results show that for nanowires with relatively high doping and large diameters, controlling Coulomb scattering from ionized dopants should be given precedence over surface engineering when seeking to maximize nanowire mobility.

  20. Integral equations with contrasting kernels

    Directory of Open Access Journals (Sweden)

    Theodore Burton

    2008-01-01

    Full Text Available In this paper we study integral equations of the form $x(t)=a(t)-\int^t_0 C(t,s)x(s)\,ds$ with sharply contrasting kernels typified by $C^*(t,s)=\ln(e+(t-s))$ and $D^*(t,s)=[1+(t-s)]^{-1}$. The kernel assigns a weight to $x(s)$ and these kernels have exactly opposite effects of weighting. Each type is well represented in the literature. Our first project is to show that for $a\in L^2[0,\infty)$, solutions are largely indistinguishable regardless of which kernel is used. This is a surprise and it leads us to study the essential differences. In fact, those differences become large as the magnitude of $a(t)$ increases. The form of the kernel alone projects necessary conditions concerning the magnitude of $a(t)$ which could result in bounded solutions. Thus, the next project is to determine how close we can come to proving that the necessary conditions are also sufficient. The third project is to show that solutions will be bounded for given conditions on $C$ regardless of whether $a$ is chosen large or small; this is important in real-world problems since we would like to have $a(t)$ as the sum of a bounded, but badly behaved function, and a large well-behaved function.
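
    A simple collocation scheme makes the contrast between the two kernels concrete. The sketch below solves the equation forward in time with trapezoidal weights; the forcing term a(t) is a placeholder choice, not one analysed in the paper.

        import numpy as np

        def solve_volterra(kernel, a, t):
            # Forward trapezoidal scheme for x(t) = a(t) - int_0^t C(t,s) x(s) ds.
            h = t[1] - t[0]
            x = np.zeros_like(t)
            x[0] = a[0]
            for i in range(1, len(t)):
                w = np.full(i + 1, h)
                w[0] = w[-1] = h / 2.0                       # trapezoid weights
                known = np.sum(w[:-1] * kernel(t[i], t[:i]) * x[:i])
                x[i] = (a[i] - known) / (1.0 + w[-1] * kernel(t[i], t[i]))
            return x

        C_star = lambda t, s: np.log(np.e + (t - s))         # C*(t,s) = ln(e + (t - s))
        D_star = lambda t, s: 1.0 / (1.0 + (t - s))          # D*(t,s) = [1 + (t - s)]^(-1)

        t = np.linspace(0.0, 10.0, 501)
        a = np.sin(t)                                        # placeholder forcing in L^2
        xC = solve_volterra(C_star, a, t)
        xD = solve_volterra(D_star, a, t)
        print("max |x_C - x_D| =", float(np.abs(xC - xD).max()))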

  1. Kernel methods in orthogonalization of multi- and hypervariate data

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2009-01-01

    A kernel version of maximum autocorrelation factor (MAF) analysis is described very briefly and applied to change detection in remotely sensed hyperspectral image (HyMap) data. The kernel version is based on a dual formulation, also termed Q-mode analysis, in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products are replaced by inner products between nonlinear mappings of the original data into a higher dimensional feature space. Via kernel substitution, also known as the kernel trick, these inner products between the mappings are in turn replaced by a kernel function, and all quantities needed in the analysis are expressed in terms of this kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel PCA and MAF analysis handle nonlinearities by implicitly transforming data into a high (even infinite) dimensional feature space.

  2. Kernel based subspace projection of near infrared hyperspectral images of maize kernels

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Arngren, Morten; Hansen, Per Waaben

    2009-01-01

    In this paper we present an exploratory analysis of hyperspectral 900-1700 nm images of maize kernels. The imaging device is a line scanning hyperspectral camera using a broadband NIR illumination. In order to explore the hyperspectral data we compare a series of subspace projection methods including principal component analysis and maximum autocorrelation factor analysis. The latter utilizes the fact that interesting phenomena in images exhibit spatial autocorrelation. However, linear projections often fail to grasp the underlying variability of the data. Therefore we propose to use kernel versions of these projections; the kernel maximum autocorrelation factor transform outperforms the linear methods as well as kernel principal components in producing interesting projections of the data.

  3. Sparse Event Modeling with Hierarchical Bayesian Kernel Methods

    Science.gov (United States)

    2016-01-05

    The research objective of this proposal was to develop a predictive Bayesian kernel approach to model count data based on...several predictive variables. Such an approach, which we refer to as the Poisson Bayesian kernel model, is able to model the rate of occurrence of... kernel methods made use of: (i) the Bayesian property of improving predictive accuracy as data are dynamically obtained, and (ii) the kernel function

  4. Quantum scattering at low energies

    DEFF Research Database (Denmark)

    Derezinski, Jan; Skibsted, Erik

    For a class of negative slowly decaying potentials, including with , we study the quantum mechanical scattering theory in the low-energy regime. Using modifiers of the Isozaki--Kitada type we show that scattering theory is well behaved on the {\\it whole} continuous spectrum of the Hamiltonian......, including the energy . We show that the --matrices are well-defined and strongly continuous down to the zero energy threshold. Similarly, we prove that the wave matrices and generalized eigenfunctions are norm continuous down to the zero energy if we use appropriate weighted spaces. These results are used...... from positive energies to the limiting energy . This change corresponds to the behaviour of the classical orbits. Under stronger conditions we extract the leading term of the asymptotics of the kernel of at its singularities; this leading term defines a Fourier integral operator in the sense...

  5. Recent Advances and Open Questions in Neutrino-induced Quasi-elastic Scattering and Single Photon Production

    Energy Technology Data Exchange (ETDEWEB)

    Garvey, G. T. [Los Alamos]; Harris, D. A. [Fermilab]; Tanaka, H. A. [British Columbia U.]; Tayloe, R. [Indiana U.]; Zeller, G. P. [Fermilab]

    2015-06-15

    The study of neutrino–nucleus interactions has recently seen rapid development with a new generation of accelerator-based neutrino experiments employing medium and heavy nuclear targets for the study of neutrino oscillations. A few unexpected results in the study of quasi-elastic scattering and single photon production have spurred a revisiting of the underlying nuclear physics and connections to electron–nucleus scattering. A thorough understanding and resolution of these issues is essential for future progress in the study of neutrino oscillations.

  6. The Classification of Diabetes Mellitus Using Kernel k-means

    Science.gov (United States)

    Alamsyah, M.; Nafisah, Z.; Prayitno, E.; Afida, A. M.; Imah, E. M.

    2018-01-01

    Diabetes Mellitus is a metabolic disorder characterized by chronic high blood glucose (hyperglycemia). Automatic detection of diabetes mellitus is still challenging. This study detected diabetes mellitus using the kernel k-means algorithm. Kernel k-means is an algorithm developed from the k-means algorithm. Kernel k-means uses kernel learning, which enables it to handle data that are not linearly separable, unlike the standard k-means. The performance of kernel k-means in detecting diabetes mellitus is also compared with the SOM algorithm. The experimental results show that kernel k-means performs well and considerably better than SOM.
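
    The assignment step of kernel k-means uses only the Gram matrix, since squared feature-space distances to a cluster mean can be expanded in kernel evaluations. The sketch below is illustrative (synthetic two-cluster data, RBF kernel); it is not the diabetes data set or the exact configuration of the study.

        import numpy as np
        from sklearn.metrics.pairwise import rbf_kernel

        def kernel_kmeans(K, k, n_iter=50, seed=0):
            # Lloyd-style kernel k-means driven entirely by the Gram matrix K.
            rng = np.random.default_rng(seed)
            labels = rng.integers(0, k, size=K.shape[0])
            for _ in range(n_iter):
                dist = np.zeros((K.shape[0], k))
                for c in range(k):
                    idx = np.where(labels == c)[0]
                    if idx.size == 0:
                        dist[:, c] = np.inf
                        continue
                    # ||phi(x_i) - mu_c||^2 up to the constant K_ii term.
                    dist[:, c] = -2.0 * K[:, idx].mean(axis=1) + K[np.ix_(idx, idx)].mean()
                new = dist.argmin(axis=1)
                if np.array_equal(new, labels):
                    break
                labels = new
            return labels

        rng = np.random.default_rng(1)
        X = np.vstack([rng.normal(0.0, 1.0, (50, 4)),
                       rng.normal(3.0, 1.0, (50, 4))])       # synthetic stand-in data
        labels = kernel_kmeans(rbf_kernel(X, gamma=0.5), k=2)
        print(np.bincount(labels))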

  7. Evaluating the Application of Tissue-Specific Dose Kernels Instead of Water Dose Kernels in Internal Dosimetry : A Monte Carlo Study

    NARCIS (Netherlands)

    Moghadam, Maryam Khazaee; Asl, Alireza Kamali; Geramifar, Parham; Zaidi, Habib

    2016-01-01

    Purpose: The aim of this work is to evaluate the application of tissue-specific dose kernels instead of water dose kernels to improve the accuracy of patient-specific dosimetry by taking tissue heterogeneities into consideration. Materials and Methods: Tissue-specific dose point kernels (DPKs) and

  8. Parsimonious Wavelet Kernel Extreme Learning Machine

    Directory of Open Access Journals (Sweden)

    Wang Qin

    2015-11-01

    Full Text Available In this study, a parsimonious scheme for a wavelet kernel extreme learning machine (named PWKELM) was introduced by combining wavelet theory and a parsimonious algorithm into the kernel extreme learning machine (KELM). In the wavelet analysis, bases localized in time and frequency were used to represent various signals effectively. The wavelet kernel extreme learning machine (WELM) maximized its capability to capture the essential features in “frequency-rich” signals. The proposed parsimonious algorithm also incorporated significant wavelet kernel functions via iteration by virtue of the Householder matrix, thus producing a sparse solution that eased the computational burden and improved numerical stability. The experimental results achieved from the synthetic dataset and a gas furnace instance demonstrated that the proposed PWKELM is efficient and feasible in terms of improving generalization accuracy and real-time performance.
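
    A minimal sketch of a kernelized extreme learning machine with a wavelet-type kernel is given below. The Morlet-style kernel form and its parameters are common choices from the WKELM literature and are assumptions here, as is the regularization constant; the parsimonious Householder-based selection step of PWKELM is not reproduced.

        import numpy as np

        def wavelet_kernel(X, Y, a=1.0):
            # Morlet-type wavelet kernel (parameters are illustrative placeholders).
            diff = X[:, None, :] - Y[None, :, :]
            return np.prod(np.cos(1.75 * diff / a) * np.exp(-diff ** 2 / (2.0 * a ** 2)), axis=2)

        rng = np.random.default_rng(8)
        X_train, X_test = rng.normal(size=(100, 3)), rng.normal(size=(20, 3))
        y_train = np.sin(X_train).sum(axis=1)                 # synthetic regression target

        C = 10.0                                              # regularization (placeholder)
        K = wavelet_kernel(X_train, X_train)
        beta = np.linalg.solve(K + np.eye(len(K)) / C, y_train)   # KELM output weights
        y_pred = wavelet_kernel(X_test, X_train) @ beta
        print(y_pred[:3])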

  9. The small-angle neutron scattering facility at Pelindaba

    International Nuclear Information System (INIS)

    Hofmeyr, C.; Mayer, R.M.; Tillwick, D.L.; Starkey, J.R.

    1978-05-01

    The small-angle neutron scattering facility at the SAFARI-1 reactor is described in detail, with reference to theoretical and practical design considerations. Inexpensive copper microwave guides used as a guide-pipe for slow neutrons provided the basis for a useful though comparatively simple facility. The neutron-spectrum characteristics of the final facility in different configurations of the guide-pipe (both S-curved and single-curved) agree well with expected values based on results obtained with a test facility. The design, construction, installation and alignment of the various components of the facility are outlined, as well as intensity optimisation. A general description is given of the experimental procedures and data-acquisition electronics for the four-position sample holder and counter array of up to 18 3He detectors and a beam monitor.

  10. Difference between standard and quasi-conformal BFKL kernels

    International Nuclear Information System (INIS)

    Fadin, V.S.; Fiore, R.; Papa, A.

    2012-01-01

    As it was recently shown, the colour singlet BFKL kernel, taken in Möbius representation in the space of impact parameters, can be written in quasi-conformal shape, which is unbelievably simple compared with the conventional form of the BFKL kernel in momentum space. It was also proved that the total kernel is completely defined by its Möbius representation. In this paper we calculated the difference between standard and quasi-conformal BFKL kernels in momentum space and discovered that it is rather simple. Therefore we come to the conclusion that the simplicity of the quasi-conformal kernel is caused mainly by using the impact parameter space.

  11. Quasiresonant scattering

    International Nuclear Information System (INIS)

    Hategan, Cornel; Comisel, Horia; Ionescu, Remus A.

    2004-01-01

    Quasiresonant scattering consists of a single-channel resonance coupled by direct interaction transitions to some competing reaction channels. A description of quasiresonant scattering, in terms of generalized reduced K-, R- and S-matrices, is developed in this work. The quasiresonance's decay width is, due to channel coupling, smaller than the width of the ancestral single-channel resonance (the resonance's direct compression). (author)

  12. A Calderón-preconditioned single source combined field integral equation for analyzing scattering from homogeneous penetrable objects

    KAUST Repository

    Valdés, Felipe; Andriulli, Francesco P.; Bagci, Hakan; Michielssen, Eric

    2011-01-01

    A new regularized single source equation for analyzing scattering from homogeneous penetrable objects is presented. The proposed equation is a linear combination of a Calderón-preconditioned single source electric field integral equation and a

  13. Kernel Tuning and Nonuniform Influence on Optical and Electrochemical Gaps of Bimetal Nanoclusters.

    Science.gov (United States)

    He, Lizhong; Yuan, Jinyun; Xia, Nan; Liao, Lingwen; Liu, Xu; Gan, Zibao; Wang, Chengming; Yang, Jinlong; Wu, Zhikun

    2018-03-14

    Fine tuning nanoparticles with atomic precision is exciting and challenging and is critical for tuning the properties, understanding the structure-property correlation and determining the practical applications of nanoparticles. Some ultrasmall thiolated metal nanoparticles (metal nanoclusters) have been shown to be precisely doped, and even the protecting staple metal atoms could be precisely reduced. However, the precise addition or reduction of a kernel atom while the other metal atoms in the nanocluster remain the same has not been achieved until now, to the best of our knowledge. Here, by carefully selecting a protecting ligand with adequate steric hindrance, we synthesized a novel nanocluster in which the kernel can be regarded as that formed by the addition of two silver atoms to both ends of the Pt@Ag12 icosahedral kernel of the Ag24Pt(SR)18 (SR: thiolate) nanocluster, as revealed by single crystal X-ray crystallography. Interestingly, compared with the previously reported Ag24Pt(SR)18 nanocluster, the as-obtained novel bimetal nanocluster exhibits a similar absorption but a different electrochemical gap. One possible explanation for this result is that the kernel tuning does not essentially change the electronic structure, but obviously influences the charge on the Pt@Ag12 kernel, as demonstrated by natural population analysis, thus possibly resulting in the large electrochemical gap difference between the two nanoclusters. This work not only provides a novel strategy to tune metal nanoclusters but also reveals that a kernel change does not necessarily alter the optical and electrochemical gaps in a uniform manner, which has important implications for the structure-property correlation of nanoparticles.

  14. Studies of radiation distortions in CdS single crystals using a proton back-scattering method

    International Nuclear Information System (INIS)

    Grigor'ev, A.N.; Dikij, N.P.; Matyash, P.P.; Nikolajchuk, L.I.; Pivovar, L.I.

    1974-01-01

    The radiation defects in semiconducting CdS single crystals induced during doping with 140 keV Na ions (10^15 - 2x10^16 ions/cm^2) were studied using the orientation dependence of 700 keV proton backscattering. The absence of discrete peaks in the scattered proton energy spectra indicates a small contribution of direct scattering at large angles. The defects formed during doping increase the fraction of dechanneled particles, which are then scattered at large angles. No amorphization of CdS was observed at the high Na ion dose of 2x10^16 ions/cm^2.

  15. A laser optical method for detecting corn kernel defects

    Energy Technology Data Exchange (ETDEWEB)

    Gunasekaran, S.; Paulsen, M. R.; Shove, G. C.

    1984-01-01

    An opto-electronic instrument was developed to examine individual corn kernels and detect various kernel defects according to reflectance differences. A low power helium-neon (He-Ne) laser (632.8 nm, red light) was used as the light source in the instrument. Reflectance from good and defective parts of corn kernel surfaces differed by approximately 40%. Broken, chipped, and starch-cracked kernels were detected with nearly 100% accuracy; while surface-split kernels were detected with about 80% accuracy. (author)

  16. Kernel maximum autocorrelation factor and minimum noise fraction transformations

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2010-01-01

    in hyperspectral HyMap scanner data covering a small agricultural area, and 3) maize kernel inspection. In the cases shown, the kernel MAF/MNF transformation performs better than its linear counterpart as well as linear and kernel PCA. The leading kernel MAF/MNF variates seem to possess the ability to adapt...

  17. Identification of Fusarium damaged wheat kernels using image analysis

    Directory of Open Access Journals (Sweden)

    Ondřej Jirsa

    2011-01-01

    Full Text Available Visual evaluation of kernels damaged by Fusarium spp. pathogens is labour-intensive and, owing to its subjective nature, can lead to inconsistencies. Digital imaging technology combined with appropriate statistical methods can provide a much faster and more accurate evaluation of the proportion of visually scabby kernels. The aim of the present study was to develop a discrimination model to identify wheat kernels infected by Fusarium spp. using digital image analysis and statistical methods. Winter wheat kernels from field experiments were evaluated visually as healthy or damaged. Deoxynivalenol (DON) content was determined in individual kernels using an ELISA method. Images of individual kernels were produced using a digital camera on a dark background. Colour and shape descriptors were obtained by image analysis from the area representing the kernel. Healthy and damaged kernels differed significantly in DON content and kernel weight. Various combinations of individual shape and colour descriptors were examined during the development of the model using linear discriminant analysis. In addition to the basic descriptors of the RGB colour model (red, green, blue), very good classification was also obtained using hue from the HSL colour model (hue, saturation, luminance). The accuracy of classification using the developed discrimination model based on RGBH descriptors was 85%. The shape descriptors themselves were not specific enough to distinguish individual kernels.
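
    The classification step described above can be sketched with linear discriminant analysis on a small colour-descriptor table. The feature values below are random stand-ins, not measurements from the study; only the workflow (RGBH descriptors in, cross-validated LDA accuracy out) is illustrated.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(7)
        # Stand-in RGBH descriptors (red, green, blue, hue) for healthy vs damaged kernels.
        healthy = rng.normal([140.0, 120.0, 80.0, 0.10], [10.0, 10.0, 10.0, 0.02], size=(100, 4))
        damaged = rng.normal([170.0, 150.0, 110.0, 0.14], [10.0, 10.0, 10.0, 0.02], size=(100, 4))
        X = np.vstack([healthy, damaged])
        y = np.array([0] * 100 + [1] * 100)

        lda = LinearDiscriminantAnalysis()
        print("cross-validated accuracy ~", cross_val_score(lda, X, y, cv=5).mean())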

  18. Bayesian Genomic Prediction with Genotype × Environment Interaction Kernel Models.

    Science.gov (United States)

    Cuevas, Jaime; Crossa, José; Montesinos-López, Osval A; Burgueño, Juan; Pérez-Rodríguez, Paulino; de Los Campos, Gustavo

    2017-01-05

    The phenomenon of genotype × environment (G × E) interaction in plant breeding decreases selection accuracy, thereby negatively affecting genetic gains. Several genomic prediction models incorporating G × E have been recently developed and used in genomic selection of plant breeding programs. Genomic prediction models for assessing multi-environment G × E interaction are extensions of a single-environment model, and have advantages and limitations. In this study, we propose two multi-environment Bayesian genomic models: the first model considers genetic effects u that can be assessed by the Kronecker product of variance-covariance matrices of genetic correlations between environments and genomic kernels through markers under two linear kernel methods, linear (genomic best linear unbiased predictors, GBLUP) and Gaussian (Gaussian kernel, GK). The other model has the same genetic component as the first model u plus an extra component, f, that captures random effects between environments that were not captured by the random effects u. We used five CIMMYT data sets (one maize and four wheat) that were previously used in different studies. Results show that models with G × E always have higher prediction ability than single-environment models, and the higher prediction ability of multi-environment models with u and f over the multi-environment model with only u occurred 85% of the time with GBLUP and 45% of the time with GK across the five data sets. The latter result indicated that including the random effect f is still beneficial for increasing prediction ability after adjusting by the random effect u. Copyright © 2017 Cuevas et al.

  19. Unified heat kernel regression for diffusion, kernel smoothing and wavelets on manifolds and its application to mandible growth modeling in CT images.

    Science.gov (United States)

    Chung, Moo K; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K

    2015-05-01

    We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel method is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, the method is applied to characterize the localized growth pattern of mandible surfaces obtained in CT images between ages 0 and 20 by regressing the length of displacement vectors with respect to a surface template. Copyright © 2015 Elsevier B.V. All rights reserved.
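
    A one-dimensional analogue makes the weighted eigenfunction expansion concrete. The sketch below replaces the Laplace-Beltrami eigenfunctions of a mandible surface with cosine eigenfunctions of the Laplacian on an interval (an assumption for illustration); each expansion coefficient is weighted by exp(-lambda_k t), which is the heat-kernel weighting described above.

        import numpy as np

        n, n_basis, t = 400, 60, 0.02
        x = np.linspace(0.0, np.pi, n)
        rng = np.random.default_rng(6)
        signal = np.sign(np.sin(3.0 * x)) + 0.3 * rng.normal(size=n)   # noisy "surface" data

        # Neumann Laplacian eigenfunctions on [0, pi]: psi_k(x) = cos(k x), lambda_k = k^2.
        k = np.arange(n_basis)
        psi = np.cos(np.outer(x, k))                     # (n, n_basis)
        norm = np.full(n_basis, np.pi / 2.0)
        norm[0] = np.pi
        coeff = (psi.T @ signal) * (x[1] - x[0]) / norm  # expansion coefficients <f, psi_k>

        smoothed = psi @ (np.exp(-k ** 2 * t) * coeff)   # heat-kernel weighted reconstruction
        print(smoothed[:5])

    Larger t acts like a wider smoothing bandwidth, mirroring the equivalence between heat diffusion and kernel smoothing noted in the abstract.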

  20. Digital signal processing with kernel methods

    CERN Document Server

    Rojo-Alvarez, José Luis; Muñoz-Marí, Jordi; Camps-Valls, Gustavo

    2018-01-01

    A realistic and comprehensive review of joint approaches to machine learning and signal processing algorithms, with application to communications, multimedia, and biomedical engineering systems Digital Signal Processing with Kernel Methods reviews the milestones in the mixing of classical digital signal processing models and advanced kernel machines statistical learning tools. It explains the fundamental concepts from both fields of machine learning and signal processing so that readers can quickly get up to speed in order to begin developing the concepts and application software in their own research. Digital Signal Processing with Kernel Methods provides a comprehensive overview of kernel methods in signal processing, without restriction to any application field. It also offers example applications and detailed benchmarking experiments with real and synthetic datasets throughout. Readers can find further worked examples with Matlab source code on a website developed by the authors. * Presents the necess...

  1. Higher-Order Hybrid Gaussian Kernel in Meshsize Boosting Algorithm

    African Journals Online (AJOL)

    In this paper, we shall use a higher-order hybrid Gaussian kernel in a meshsize boosting algorithm for kernel density estimation. Bias reduction is guaranteed in this scheme as in other existing schemes, but it uses the higher-order hybrid Gaussian kernel instead of the regular fixed kernels. A numerical verification of this scheme ...

  2. Adaptive Kernel In The Bootstrap Boosting Algorithm In KDE ...

    African Journals Online (AJOL)

    This paper proposes the use of an adaptive kernel in a bootstrap boosting algorithm for kernel density estimation. The algorithm is a bias reduction scheme like other existing schemes but uses an adaptive kernel instead of the regular fixed kernels. An empirical study for this scheme is conducted and the findings are comparatively ...
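
    Neither of these two abstracts gives implementation detail, but the core idea of an adaptive (variable-bandwidth) kernel in KDE can be sketched as follows: a fixed-bandwidth pilot estimate sets per-point bandwidths that widen where the density is low. The pilot rule, sensitivity exponent and Gaussian kernel below are illustrative assumptions, not the boosting schemes proposed in these papers.

        import numpy as np

        def adaptive_kde(x, data, h0=0.5, alpha=0.5):
            """Abramson-style adaptive Gaussian KDE: h_i = h0 * (pilot(x_i)/g)^(-alpha)."""
            gauss = lambda u: np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)

            # Fixed-bandwidth pilot density at the data points.
            pilot = np.array([gauss((xi - data) / h0).mean() / h0 for xi in data])
            g = np.exp(np.log(pilot).mean())          # geometric mean of the pilot values
            h_i = h0 * (pilot / g) ** (-alpha)        # per-point bandwidths

            # Adaptive estimate at the evaluation points x.
            return np.array([(gauss((xi - data) / h_i) / h_i).mean() for xi in np.atleast_1d(x)])

        rng = np.random.default_rng(2)
        data = np.concatenate([rng.normal(0, 1, 200), rng.normal(5, 0.3, 100)])
        grid = np.linspace(-4, 7, 400)
        density = adaptive_kde(grid, data)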

  3. Windows Vista Kernel-Mode: Functions, Security Enhancements and Flaws

    Directory of Open Access Journals (Sweden)

    Mohammed D. ABDULMALIK

    2008-06-01

    Full Text Available Microsoft has made substantial enhancements to the kernel of the Microsoft Windows Vista operating system. Kernel improvements are significant because the kernel provides low-level operating system functions, including thread scheduling, interrupt and exception dispatching, multiprocessor synchronization, and a set of routines and basic objects. This paper describes some of the kernel security enhancements in the 64-bit edition of Windows Vista. We also point out some areas of weakness (flaws) that can be exploited by attackers, leading to compromise of the kernel.

  4. IDENTIFICATION AND MAPPING OF A GENE FOR RICE SLENDER KERNEL USING Oryza glumaepatula INTROGRESSION LINES

    Directory of Open Access Journals (Sweden)

    Sobrizal Sobrizal

    2016-10-01

    Full Text Available World demand for superior rice grain quality tends to increase. One of the criteria of the appearance quality of rice grain is grain shape. Rice consumers exhibit wide preferences for grain shape, but most Indonesian rice consumers prefer long and slender grain. The objectives of this study were to identify and map a gene for the rice slender kernel trait using Oryza glumaepatula introgression lines with an O. sativa cv. Taichung 65 genetic background. A segregation analysis of a BC4F2 population derived from backcrosses of the donor parent O. glumaepatula into the recurrent parent Taichung 65 showed that the slender kernel was controlled by a single recessive gene. This newly identified gene was designated sk1 (slender kernel 1). Moreover, based on the RFLP analyses using 14 RFLP markers located on chromosomes 2, 8, 9, and 10, in which the O. glumaepatula chromosomal segments were retained in the BC4F2 population, sk1 was located between RFLP markers C679 and C560 on the long arm of chromosome 2, with map distances of 2.8 and 1.5 cM, respectively. The wild rice O. glumaepatula carried a recessive allele for slender kernel. This allele may be useful in breeding rice with slender kernel types. In addition, the development of the plant materials and the RFLP map associated with slender kernel in this study constitutes preliminary work in the effort to isolate this important grain shape gene.

  5. Thermal diffuse scattering in time-of-flight neutron diffraction studied on SBN single crystals

    International Nuclear Information System (INIS)

    Prokert, F.; Savenko, B.N.; Balagurov, A.M.

    1994-01-01

    At the time-of-flight (TOF) diffractometer DN-2, installed at the pulsed reactor IBR-2 in Dubna, SrxBa1-xNb2O6 mixed single crystals (SBN-x) of different compositions (0.50 < x < 0.75) were investigated between 15 and 773 K. The diffraction patterns were found to be strongly influenced by thermal diffuse scattering (TDS). The appearance of the TDS from the long-wavelength acoustic modes of vibration in single crystals is characterized by the ratio of the velocity of sound to the velocity of the neutrons. Due to the nature of the TOF Laue diffraction technique used on DN-2, the TDS around Bragg peaks has a rather complex profile. An understanding of the TDS close to Bragg peaks is essential in allowing the extraction of the diffuse scattering occurring at the diffuse ferroelectric phase transition in SBN crystals. 11 refs.; 9 figs.; 1 tab. (author)

  6. Single-electron capture for 2-8 keV incident energy and direct scattering at 6 keV in He²⁺-He collisions

    Energy Technology Data Exchange (ETDEWEB)

    Bordenave-Montesquieu, D.; Dagnac, R. (Toulouse-3 Univ., 31 (France). Centre de Physique Atomique)

    1992-06-14

    We studied the single-electron capture as well as the direct processes occurring when a He²⁺ ion is scattered by a He target. Doubly differential cross sections were measured for single-electron capture with a collision energy ranging from 2 to 8 keV and a scattering angle varying from 10' to 3°30' (laboratory frame). Single-electron capture into excited states of He⁺ was found to be the dominant process, confirming a previous experimental study. Elastic scattering and ionization differential cross sections were measured for E = 6 keV. (Author).

  7. Q-branch Raman scattering and modern kinetic theory

    Energy Technology Data Exchange (ETDEWEB)

    Monchick, L. [The Johns Hopkins Univ., Laurel, MD (United States)

    1993-12-01

    The program is an extension of previous APL work whose general aim was to calculate line shapes of nearly resonant isolated line transitions with solutions of a popular quantum kinetic equation, the Waldmann-Snider equation, using well-known advanced solution techniques developed for the classical Boltzmann equation. The advanced techniques explored have been a BGK-type approximation, which is termed the Generalized Hess Method (GHM), and conversion of the collision operator to a block-diagonal matrix of symmetric collision kernels which can then be approximated by discrete ordinate methods. The latter method, which is termed the Collision Kernel method (CC), is capable of the highest accuracy and has been used quite successfully for Q-branch Raman scattering. The GHM method, not quite as accurate, is applicable over a wider range of pressures and has proven quite useful.

  8. An analysis of 1-D smoothed particle hydrodynamics kernels

    International Nuclear Information System (INIS)

    Fulk, D.A.; Quinn, D.W.

    1996-01-01

    In this paper, the smoothed particle hydrodynamics (SPH) kernel is analyzed, resulting in measures of merit for one-dimensional SPH. Various methods of obtaining an objective measure of the quality and accuracy of the SPH kernel are addressed. Since the kernel is the key element in the SPH methodology, this should be of primary concern to any user of SPH. The results of this work are two measures of merit, one for smooth data and one near shocks. The measure of merit for smooth data is shown to be quite accurate and a useful delineator of better and poorer kernels. The measure of merit for non-smooth data is not quite as accurate, but results indicate the kernel is much less important for these types of problems. In addition to the theory, 20 kernels are analyzed using the measure of merit demonstrating the general usefulness of the measure of merit and the individual kernels. In general, it was decided that bell-shaped kernels perform better than other shapes. 12 refs., 16 figs., 7 tabs

  9. Putting Priors in Mixture Density Mercer Kernels

    Science.gov (United States)

    Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd

    2004-01-01

    This paper presents a new methodology for automatic knowledge-driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn the kernel function directly from data, rather than using predefined kernels. These data-adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing for physical information to be encoded in the model. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS). The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code. The results show that the Mixture Density Mercer Kernel described here outperforms tree-based classification in distinguishing high-redshift galaxies from low-redshift galaxies by approximately 16% on test data, bagged trees by approximately 7%, and bagged trees built on a much larger sample of data by approximately 2%.
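
    One way to read "learning the kernel from data via mixture densities" is to fit one or more mixture models and define the kernel between two points as the inner product of their posterior component-membership vectors, averaged over the ensemble; each term is a Gram matrix, so the result is symmetric and positive semi-definite. The sketch below is such a reading using scikit-learn's GaussianMixture, not the AUTOBAYES-generated code of the paper.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        def mixture_density_kernel(X, n_components=(2, 3, 4), seed=0):
            """Data-adaptive Mercer kernel K = (1/M) * sum_m P_m P_m^T, where P_m holds posterior
            component-membership probabilities under mixture model m (symmetric PSD by construction)."""
            K = np.zeros((X.shape[0], X.shape[0]))
            for m, k in enumerate(n_components):
                gm = GaussianMixture(n_components=k, random_state=seed + m).fit(X)
                P = gm.predict_proba(X)               # (n_samples, k) posteriors
                K += P @ P.T
            return K / len(n_components)

        rng = np.random.default_rng(3)
        X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
        K = mixture_density_kernel(X)                 # 100 x 100 kernel matrix learned from the data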

  10. Implementation of stimulated Raman scattering microscopy for single cell analysis

    Science.gov (United States)

    D'Arco, Annalisa; Ferrara, Maria Antonietta; Indolfi, Maurizio; Tufano, Vitaliano; Sirleto, Luigi

    2017-05-01

    In this work, we present the successful realization of a nonlinear microscope based on stimulated Raman scattering, which is not commercially available. It was obtained by integrating a femtosecond SRS spectroscopic setup with an inverted research microscope equipped with a scanning unit. Taking advantage of the strong vibrational contrast of SRS, it provides label-free imaging for single-cell analysis. Validation tests on images of polystyrene beads are reported to demonstrate the feasibility of the approach. In order to test the microscope on biological structures, we report and discuss label-free images of lipid droplets inside fixed adipocyte cells.

  11. Consistent Valuation across Curves Using Pricing Kernels

    Directory of Open Access Journals (Sweden)

    Andrea Macrina

    2018-03-01

    Full Text Available The general problem of asset pricing when the discount rate differs from the rate at which an asset’s cash flows accrue is considered. A pricing kernel framework is used to model an economy that is segmented into distinct markets, each identified by a yield curve having its own market, credit and liquidity risk characteristics. The proposed framework precludes arbitrage within each market, while the definition of a curve-conversion factor process links all markets in a consistent arbitrage-free manner. A pricing formula is then derived, referred to as the across-curve pricing formula, which enables consistent valuation and hedging of financial instruments across curves (and markets. As a natural application, a consistent multi-curve framework is formulated for emerging and developed inter-bank swap markets, which highlights an important dual feature of the curve-conversion factor process. Given this multi-curve framework, existing multi-curve approaches based on HJM and rational pricing kernel models are recovered, reviewed and generalised and single-curve models extended. In another application, inflation-linked, currency-based and fixed-income hybrid securities are shown to be consistently valued using the across-curve valuation method.

  12. NLO corrections to the Kernel of the BKP-equations

    Energy Technology Data Exchange (ETDEWEB)

    Bartels, J. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Fadin, V.S. [Budker Institute of Nuclear Physics, Novosibirsk (Russian Federation); Novosibirskij Gosudarstvennyj Univ., Novosibirsk (Russian Federation); Lipatov, L.N. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Petersburg Nuclear Physics Institute, Gatchina, St. Petersburg (Russian Federation); Vacca, G.P. [INFN, Sezione di Bologna (Italy)

    2012-10-02

    We present results for the NLO kernel of the BKP equations for composite states of three reggeized gluons in the Odderon channel, both in QCD and in N=4 SYM. The NLO kernel consists of the NLO BFKL kernel in the color octet representation and the connected 3→3 kernel, computed in the tree approximation.

  13. A Fast and Simple Graph Kernel for RDF

    NARCIS (Netherlands)

    de Vries, G.K.D.; de Rooij, S.

    2013-01-01

    In this paper we study a graph kernel for RDF based on constructing a tree for each instance and counting the number of paths in that tree. In our experiments this kernel shows comparable classification performance to the previously introduced intersection subtree kernel, but is significantly faster

  14. Kernel based eigenvalue-decomposition methods for analysing ham

    DEFF Research Database (Denmark)

    Christiansen, Asger Nyman; Nielsen, Allan Aasbjerg; Møller, Flemming

    2010-01-01

    methods, such as PCA, MAF or MNF. We therefore investigated the applicability of kernel based versions of these transformations. This meant implementing the kernel based methods and developing new theory, since kernel based MAF and MNF are not described in the literature yet. The traditional methods only...... have two factors that are useful for segmentation and none of them can be used to segment the two types of meat. The kernel based methods have a lot of useful factors and they are able to capture the subtle differences in the images. This is illustrated in Figure 1. You can see a comparison of the most...... useful factor of PCA and kernel based PCA respectively in Figure 2. The factor of the kernel based PCA turned out to be able to segment the two types of meat and in general that factor is much more distinct, compared to the traditional factor. After the orthogonal transformation a simple thresholding...

  15. Kernel principal component analysis for change detection

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Morton, J.C.

    2008-01-01

    region acquired at two different time points. If change over time does not dominate the scene, the projection of the original two bands onto the second eigenvector will show change over time. In this paper a kernel version of PCA is used to carry out the analysis. Unlike ordinary PCA, kernel PCA...... with a Gaussian kernel successfully finds the change observations in a case where nonlinearities are introduced artificially....
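
    A minimal sketch of the workflow described in the abstract: stack the two acquisitions of the same band as a two-column data set, run Gaussian-kernel PCA, and inspect the later projections for change. The band values, the artificially introduced change and the kernel width are illustrative assumptions.

        import numpy as np
        from sklearn.decomposition import KernelPCA

        rng = np.random.default_rng(4)
        n_pixels = 2000
        t1 = rng.normal(0.0, 1.0, n_pixels)                 # band at time 1
        t2 = 0.95 * t1 + 0.1 * rng.normal(size=n_pixels)    # mostly unchanged at time 2
        t2[:100] += 3.0                                      # a small number of change pixels

        X = np.column_stack([t1, t2])
        kpca = KernelPCA(n_components=2, kernel="rbf", gamma=0.5)
        scores = kpca.fit_transform(X)

        # When change does not dominate the scene, the leading component tracks the
        # no-change signal and change pixels stand out in the later component(s).
        change_score = np.abs(scores[:, 1])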

  16. Possibility of 1-nm level localization of a single molecule with gap-mode surface-enhanced Raman scattering

    International Nuclear Information System (INIS)

    Choi, Han Kyu; Kim, Zee Hwan

    2015-01-01

    The electromagnetic (EM) enhancement mechanism of surface-enhanced Raman scattering (SERS) has been well established through 30 years of extensive investigation: molecules adsorbed on resonantly driven silver or gold nanoparticles (NPs) experience strongly enhanced field and thus show enhanced Raman scattering. Even stronger SERS enhancement is possible with a gap structure in which two or more NPs form assemblies with gap sizes of 1 nm or less. We have theoretically shown that the measurement of SERS angular distribution can reveal the position of a single molecule near the gap with 1-nm accuracy, even though the spatial extent of the enhanced field is ~10 nm. Real implementation of such experiment requires extremely well-defined (preferably a single crystal) dimeric junctions. Nevertheless, the experiment will provide spatial as well as frequency domain information on single-molecule dynamics at metallic surfaces

  17. Minimally coupled N-particle scattering integral equations

    International Nuclear Information System (INIS)

    Kowalski, K.L.

    1977-01-01

    A concise formalism is developed which permits the efficient representation and generalization of several known techniques for deriving connected-kernel N-particle scattering integral equations. The methods of Kouri, Levin, and Tobocman and Bencze and Redish which lead to minimally coupled integral equations are of special interest. The introduction of channel coupling arrays is characterized in a general manner and the common base of this technique and that of the so-called channel coupling scheme is clarified. It is found that in the Bencze-Redish formalism a particular coupling array has a crucial function but one different from that of the arrays employed by Kouri, Levin, and Tobocman. The apparent dependence of the proof of the minimality of the Bencze-Redish integral equations upon the form of the inhomogeneous term in these equations is eliminated. This is achieved by an investigation of the full (nonminimal) Bencze-Redish kernel. It is shown that the second power of this operator is connected, a result which is needed for the full applicability of the Bencze-Redish formalism. This is used to establish the relationship between the existence of solutions to the homogeneous form of the minimal equations and eigenvalues of the full Bencze-Redish kernel

  18. Enhanced gluten properties in soft kernel durum wheat

    Science.gov (United States)

    Soft kernel durum wheat is a relatively recent development (Morris et al. 2011 Crop Sci. 51:114). The soft kernel trait exerts profound effects on kernel texture, flour milling including break flour yield, milling energy, and starch damage, and dough water absorption (DWA). With the caveat of reduce...

  19. 7 CFR 981.61 - Redetermination of kernel weight.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Redetermination of kernel weight. 981.61 Section 981... GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.61 Redetermination of kernel weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds...

  20. Eucalyptus-Palm Kernel Oil Blends: A Complete Elimination of Diesel in a 4-Stroke VCR Diesel Engine

    Directory of Open Access Journals (Sweden)

    Srinivas Kommana

    2015-01-01

    Full Text Available Fuels derived from biomass are often preferred as alternative fuels for IC engines, as they are abundantly available and renewable in nature. The objective of the study is to identify the parameters that influence gross indicated fuel conversion efficiency and how they are affected by the use of biodiesel relative to petroleum diesel. Important physicochemical properties of the palm kernel oil and eucalyptus blend were experimentally evaluated and found to be within acceptable limits of the relevant standards. As most vegetable oils are edible, growing interest in nonedible and waste fats as alternatives to petrodiesel has emerged. In the present study, diesel fuel is completely replaced by biofuels, namely methyl ester of palm kernel oil and eucalyptus oil in various blends. Different blends of palm kernel oil and eucalyptus oil are prepared on a volume basis and used as operating fuel in a single-cylinder 4-stroke variable compression ratio diesel engine. Performance and emission characteristics of these blends are studied by varying the compression ratio. In the present experiment, the methyl ester extracted from palm kernel oil is considered the ignition improver and eucalyptus oil is considered the fuel. The blends taken are PKE05 (palm kernel oil 95 + eucalyptus 05), PKE10 (palm kernel oil 90 + eucalyptus 10), and PKE15 (palm kernel oil 85 + eucalyptus 15). The results obtained by operating with these fuels are compared with the results for pure diesel; finally, the most preferable combination and the preferred compression ratio are identified.

  1. Adaptive metric kernel regression

    DEFF Research Database (Denmark)

    Goutte, Cyril; Larsen, Jan

    2000-01-01

    Kernel smoothing is a widely used non-parametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this contribution, we propose an algorithm that adapts the input metric used in multivariate...... regression by minimising a cross-validation estimate of the generalisation error. This allows one to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms...
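
    A minimal sketch of the underlying estimator: Nadaraya-Watson regression with a diagonal input metric, so that a small weight on a dimension effectively removes it. In the paper the metric is learned by minimising a cross-validation estimate of the generalisation error; below the weights are simply fixed by hand for illustration.

        import numpy as np

        def nw_predict(x_query, X, y, metric_weights, h=1.0):
            """Nadaraya-Watson regression with a diagonal metric:
            squared distance = sum_j w_j * (x_j - x'_j)^2, so small w_j down-weights dimension j."""
            d2 = ((x_query[None, :] - X) ** 2 * metric_weights).sum(axis=1)
            w = np.exp(-0.5 * d2 / h ** 2)
            return (w @ y) / w.sum()

        rng = np.random.default_rng(5)
        X = rng.uniform(-2, 2, size=(200, 3))
        y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)    # only the first input is relevant

        weights = np.array([1.0, 0.01, 0.01])               # would be tuned by cross-validation
        y_hat = nw_predict(np.array([0.5, 0.0, 0.0]), X, y, weights)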

  2. Consistent Estimation of Pricing Kernels from Noisy Price Data

    OpenAIRE

    Vladislav Kargin

    2003-01-01

    If pricing kernels are assumed non-negative then the inverse problem of finding the pricing kernel is well-posed. The constrained least squares method provides a consistent estimate of the pricing kernel. When the data are limited, a new method is suggested: relaxed maximization of the relative entropy. This estimator is also consistent. Keywords: epsilon-entropy, non-parametric estimation, pricing kernel, inverse problems.

  3. Stable Kernel Representations as Nonlinear Left Coprime Factorizations

    NARCIS (Netherlands)

    Paice, A.D.B.; Schaft, A.J. van der

    1994-01-01

    A representation of nonlinear systems based on the idea of representing the input-output pairs of the system as elements of the kernel of a stable operator has been recently introduced. This has been denoted the kernel representation of the system. In this paper it is demonstrated that the kernel

  4. 7 CFR 981.60 - Determination of kernel weight.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Determination of kernel weight. 981.60 Section 981.60... Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which settlement...

  5. End-use quality of soft kernel durum wheat

    Science.gov (United States)

    Kernel texture is a major determinant of end-use quality of wheat. Durum wheat has very hard kernels. We developed soft kernel durum wheat via Ph1b-mediated homoeologous recombination. The Hardness locus was transferred from Chinese Spring to Svevo durum wheat via back-crossing. ‘Soft Svevo’ had SKC...

  6. Per-Sample Multiple Kernel Approach for Visual Concept Learning

    Directory of Open Access Journals (Sweden)

    Ling-Yu Duan

    2010-01-01

    Full Text Available Learning visual concepts from images is an important yet challenging problem in computer vision and multimedia research areas. Multiple kernel learning (MKL methods have shown great advantages in visual concept learning. As a visual concept often exhibits great appearance variance, a canonical MKL approach may not generate satisfactory results when a uniform kernel combination is applied over the input space. In this paper, we propose a per-sample multiple kernel learning (PS-MKL approach to take into account intraclass diversity for improving discrimination. PS-MKL determines sample-wise kernel weights according to kernel functions and training samples. Kernel weights as well as kernel-based classifiers are jointly learned. For efficient learning, PS-MKL employs a sample selection strategy. Extensive experiments are carried out over three benchmarking datasets of different characteristics including Caltech101, WikipediaMM, and Pascal VOC'07. PS-MKL has achieved encouraging performance, comparable to the state of the art, which has outperformed a canonical MKL.

  7. Per-Sample Multiple Kernel Approach for Visual Concept Learning

    Directory of Open Access Journals (Sweden)

    Tian Yonghong

    2010-01-01

    Full Text Available Abstract Learning visual concepts from images is an important yet challenging problem in computer vision and multimedia research areas. Multiple kernel learning (MKL methods have shown great advantages in visual concept learning. As a visual concept often exhibits great appearance variance, a canonical MKL approach may not generate satisfactory results when a uniform kernel combination is applied over the input space. In this paper, we propose a per-sample multiple kernel learning (PS-MKL approach to take into account intraclass diversity for improving discrimination. PS-MKL determines sample-wise kernel weights according to kernel functions and training samples. Kernel weights as well as kernel-based classifiers are jointly learned. For efficient learning, PS-MKL employs a sample selection strategy. Extensive experiments are carried out over three benchmarking datasets of different characteristics including Caltech101, WikipediaMM, and Pascal VOC'07. PS-MKL has achieved encouraging performance, comparable to the state of the art, which has outperformed a canonical MKL.
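
    Both records describe the same PS-MKL idea: the kernel combination weights are indexed by training sample as well as by base kernel. The sketch below only evaluates a decision function of that form for given dual coefficients and weights; the joint learning of the coefficients, the per-sample weights and the sample selection strategy described in the paper is not shown, and all values are hypothetical.

        import numpy as np

        def ps_mkl_decision(x, X_train, alphas, d, kernel_fns, b=0.0):
            """Evaluate f(x) = sum_i alpha_i * sum_m d[i, m] * K_m(x_i, x) + b,
            a multiple-kernel decision function with per-sample kernel weights d[i, m]."""
            f = b
            for i, x_i in enumerate(X_train):
                k_vals = np.array([k(x_i, x) for k in kernel_fns])   # base kernel evaluations
                f += alphas[i] * (d[i] @ k_vals)                     # sample-wise weighted combination
            return f

        rbf = lambda a, c, g=0.5: np.exp(-g * np.sum((a - c) ** 2))
        lin = lambda a, c: float(a @ c)

        rng = np.random.default_rng(6)
        X_train = rng.normal(size=(10, 4))
        alphas = rng.normal(size=10)                  # hypothetical dual coefficients
        d = rng.dirichlet(np.ones(2), size=10)        # per-sample weights over the two base kernels
        score = ps_mkl_decision(rng.normal(size=4), X_train, alphas, d, [rbf, lin])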

  8. ADSORPTION OF COPPER FROM AQUEOUS SOLUTION BY ELAIS GUINEENSIS KERNEL ACTIVATED CARBON

    Directory of Open Access Journals (Sweden)

    NAJUA DELAILA TUMIN

    2008-08-01

    Full Text Available In this study, a series of batch laboratory experiments were conducted in order to investigate the feasibility of Elais Guineensis kernel, also known as palm kernel shell (PKS)-based activated carbon, for the removal of copper from aqueous solution by the adsorption process. The investigation was carried out by studying the influence of initial solution pH, adsorbent dosage and initial concentration of copper. The particle size of PKS used was categorized as PKS-M. All batch experiments were carried out at a constant temperature of 30°C (±2°C) using a mechanical shaker operated at 100 rpm. The single-component equilibrium data were analyzed using the Langmuir, Freundlich, Redlich-Peterson, Temkin and Toth adsorption isotherms.
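
    The abstract lists several isotherm models fitted to the single-component equilibrium data; a minimal sketch of fitting two of them (Langmuir and Freundlich) with scipy is shown below. The equilibrium concentrations and uptakes are made up for illustration and do not reproduce the paper's measurements.

        import numpy as np
        from scipy.optimize import curve_fit

        def langmuir(Ce, qmax, KL):
            """Langmuir isotherm: qe = qmax * KL * Ce / (1 + KL * Ce)."""
            return qmax * KL * Ce / (1.0 + KL * Ce)

        def freundlich(Ce, KF, n):
            """Freundlich isotherm: qe = KF * Ce^(1/n)."""
            return KF * Ce ** (1.0 / n)

        # Hypothetical equilibrium data: liquid-phase concentration (mg/L) vs uptake (mg/g).
        Ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0, 80.0])
        qe = np.array([3.1, 6.0, 9.2, 12.5, 15.0, 16.4])

        p_lang, _ = curve_fit(langmuir, Ce, qe, p0=[20.0, 0.05])
        p_freu, _ = curve_fit(freundlich, Ce, qe, p0=[2.0, 2.0])
        print("Langmuir qmax, KL:", p_lang)
        print("Freundlich KF, n:", p_freu)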

  9. Discrete non-parametric kernel estimation for global sensitivity analysis

    International Nuclear Information System (INIS)

    Senga Kiessé, Tristan; Ventura, Anne

    2016-01-01

    This work investigates the discrete kernel approach for evaluating the contribution of the variance of discrete input variables to the variance of model output, via analysis of variance (ANOVA) decomposition. Until recently only the continuous kernel approach has been applied as a metamodeling approach within the sensitivity analysis framework, for both discrete and continuous input variables. Now the discrete kernel estimation is known to be suitable for smoothing discrete functions. We present a discrete non-parametric kernel estimator of the ANOVA decomposition of a given model. An estimator of sensitivity indices is also presented with its asymptotic convergence rate. Some simulations on a test function analysis and a real case study from agriculture have shown that the discrete kernel approach outperforms the continuous kernel one for evaluating the contribution of moderate or most influential discrete parameters to the model output. - Highlights: • We study a discrete kernel estimation for sensitivity analysis of a model. • A discrete kernel estimator of ANOVA decomposition of the model is presented. • Sensitivity indices are calculated for discrete input parameters. • An estimator of sensitivity indices is also presented with its convergence rate. • An application is realized for improving the reliability of environmental models.

  10. Bound states and scattering in four-body systems

    International Nuclear Information System (INIS)

    Narodetsky, I.M.

    1979-01-01

    The purpose of this review is to provide a clear and elementary introduction to the integral equation method and to demonstrate explicitly its usefulness for physical applications. The existing results concerning the application of the integral equation technique to four-nucleon bound states and scattering are reviewed. The treatment is based on the quasiparticle approach, which permits a simple interpretation of the equations in terms of quasiparticle scattering. The mathematical basis for the quasiparticle approach is the Hilbert-Schmidt theorem of Fredholm integral equation theory. This paper contains a detailed discussion of the Hilbert-Schmidt expansion as applied to the 2-particle amplitudes and to the 3 + 1 and 2 + 2 amplitudes which are the kernels of the four-body equations. The review essentially contains the discussion of the four-body quasiparticle equations and results obtained for bound states and scattering.

  11. A successful experimental observation of double-photon Compton scattering of γ rays using a single γ detector

    International Nuclear Information System (INIS)

    Saddi, M.B.; Sandhu, B.S.; Singh, B.

    2006-01-01

    The phenomenon of double-photon Compton scattering has been successfully observed using a single γ detector, a technique avoiding the use of the complicated slow-fast coincidence set-up used until now for observing this higher-order process. Here, doubly differential collision cross-sections, integrated over the directions of one of the two final photons with the direction of the other kept fixed, are measured experimentally for 0.662 MeV incident γ photons. The energy spectra of the detected photons are observed as a long tail to the single-photon Compton line on the lower side of the full-energy peak in the recorded scattered energy spectrum. The present results are in agreement with the theory of this process.

  12. Deep Restricted Kernel Machines Using Conjugate Feature Duality.

    Science.gov (United States)

    Suykens, Johan A K

    2017-08-01

    The aim of this letter is to propose a theory of deep restricted kernel machines offering new foundations for deep learning with kernel machines. From the viewpoint of deep learning, it is partially related to restricted Boltzmann machines, which are characterized by visible and hidden units in a bipartite graph without hidden-to-hidden connections, and to deep learning extensions such as deep belief networks and deep Boltzmann machines. From the viewpoint of kernel machines, it includes least squares support vector machines for classification and regression, kernel principal component analysis (PCA), matrix singular value decomposition, and Parzen-type models. A key element is to first characterize these kernel machines in terms of so-called conjugate feature duality, yielding a representation with visible and hidden units. It is shown how this is related to the energy form in restricted Boltzmann machines, with continuous variables in a nonprobabilistic setting. In this new framework of so-called restricted kernel machine (RKM) representations, the dual variables correspond to hidden features. Deep RKMs are obtained by coupling the RKMs. The method is illustrated for a deep RKM consisting of three levels, with a least squares support vector machine regression level and two kernel PCA levels. In its primal form, deep feedforward neural networks can also be trained within this framework.

  13. Improved modeling of clinical data with kernel methods.

    Science.gov (United States)

    Daemen, Anneleen; Timmerman, Dirk; Van den Bosch, Thierry; Bottomley, Cecilia; Kirk, Emma; Van Holsbeke, Caroline; Valentin, Lil; Bourne, Tom; De Moor, Bart

    2012-02-01

    Despite the rise of high-throughput technologies, clinical data such as age, gender and medical history guide clinical management for most diseases and examinations. To improve clinical management, available patient information should be fully exploited. This requires appropriate modeling of relevant parameters. When kernel methods are used, traditional kernel functions such as the linear kernel are often applied to the set of clinical parameters. These kernel functions, however, have their disadvantages due to the specific characteristics of clinical data, which are a mix of variable types, each with its own range. We propose a new kernel function specifically adapted to the characteristics of clinical data. The clinical kernel function provides a better representation of patients' similarity by equalizing the influence of all variables and taking into account the range r of the variables. Moreover, it is robust with respect to changes in r. Incorporated in a least squares support vector machine, the new kernel function results in significantly improved diagnosis, prognosis and prediction of therapy response. This is illustrated on four clinical data sets within gynecology, with an average increase in test area under the ROC curve (AUC) of 0.023, 0.021, 0.122 and 0.019, respectively. Moreover, when combining clinical parameters and expression data in three case studies on breast cancer, results improved overall with use of the new kernel function and when considering both data types in a weighted fashion, with a larger weight assigned to the clinical parameters. The increase in AUC with respect to a standard kernel function and/or unweighted data combination was at most 0.127, 0.042 and 0.118 for the three case studies. For clinical data consisting of variables of different types, the proposed kernel function, which takes into account the type and range of each variable, has been shown to be a better alternative for linear and non-linear classification problems.
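
    The construction sketched below follows a commonly cited form of such a clinical kernel: each continuous or ordinal variable i contributes (r_i - |x_i - z_i|)/r_i, a nominal variable contributes 1 if the values agree and 0 otherwise, and the kernel is the average over variables, so every variable has the same maximal influence regardless of its scale. Treat this as an assumed reading of the construction; the variable types and ranges are illustrative.

        import numpy as np

        def clinical_kernel(x, z, ranges, nominal):
            """Clinical kernel between two patients x and z (average of per-variable similarities)."""
            contrib = np.where(
                nominal,
                (x == z).astype(float),                 # nominal: exact match or not
                (ranges - np.abs(x - z)) / ranges,      # continuous/ordinal: range-normalized closeness
            )
            return contrib.mean()

        # Toy patients: [age, some score in 0..10, categorical code]
        ranges = np.array([80.0, 10.0, 1.0])            # ranges of the continuous variables
        nominal = np.array([False, False, True])
        x = np.array([45.0, 3.2, 1.0])
        z = np.array([52.0, 7.8, 1.0])
        print(clinical_kernel(x, z, ranges, nominal))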

  14. QTL Mapping of Kernel Number-Related Traits and Validation of One Major QTL for Ear Length in Maize.

    Science.gov (United States)

    Huo, Dongao; Ning, Qiang; Shen, Xiaomeng; Liu, Lei; Zhang, Zuxin

    2016-01-01

    The kernel number is a grain yield component and an important maize breeding goal. Ear length, kernel number per row and ear row number are highly correlated with the kernel number per ear, which eventually determines the ear weight and grain yield. In this study, two sets of F2:3 families developed from two bi-parental crosses sharing one inbred line were used to identify quantitative trait loci (QTL) for four kernel number-related traits: ear length, kernel number per row, ear row number and ear weight. A total of 39 QTLs for the four traits were identified in the two populations. The phenotypic variance explained by a single QTL ranged from 0.4% to 29.5%. Additionally, 14 overlapping QTLs formed 5 QTL clusters on chromosomes 1, 4, 5, 7, and 10. Intriguingly, six QTLs for ear length and kernel number per row overlapped in a region on chromosome 1. This region was designated qEL1.10 and was validated as being simultaneously responsible for ear length, kernel number per row and ear weight in a near isogenic line-derived population, suggesting that qEL1.10 was a pleiotropic QTL with large effects. Furthermore, the performance of hybrids generated by crossing 6 elite inbred lines with two near isogenic lines at qEL1.10 showed the breeding value of qEL1.10 for the improvement of the kernel number and grain yield of maize hybrids. This study provides a basis for further fine mapping, molecular marker-aided breeding and functional studies of kernel number-related traits in maize.

  15. Studies of isotopic defined hydrogen beams scattering from Pd single-crystal surfaces

    International Nuclear Information System (INIS)

    Varlam, Mihai; Steflea, Dumitru

    2001-01-01

    An experimental investigation of the interaction of hydrogen isotopes with a Pd single-crystal surface has been carried out using the molecular beam technique. The energy dependence of the sticking probability and its relation to the trapping probability into the precursor state is studied by integrating the scattered angular distribution of isotopically defined hydrogen beams from the Pd (111) surface in the 40-400 K surface temperature range. The dependence has been evaluated by defining hydrogen molecular beams with different isotopic concentrations - from the natural one to a 5% D/(D+H) ratio - and for different incident energies. The beam was directed onto a single-crystal Pd (111) surface. In the paper, we report the experimental results and some considerations related to them. (authors)

  16. Studies of isotopic defined hydrogen beams scattering from Pd single-crystal surfaces

    International Nuclear Information System (INIS)

    Varlam, Mihai; Steflea, Dumitru

    1999-01-01

    An experimental investigation of the interaction of hydrogen isotopes with Pd single-crystal surfaces has been carried out using the molecular beam technique. The energy dependence of the sticking probability and its relation to the trapping probability into the precursor state is studied by integrating the scattered angular distribution of isotopically defined hydrogen beams from Pd (111) surfaces in the 40-400 K surface temperature range. The dependence has been evaluated by defining hydrogen molecular beams with different isotopic concentrations - from the natural one up to 5% D/(D+H) - and different incident energies, directed onto a single-crystal Pd (111) surface. In the paper, we report the experimental results and some considerations related to them. (authors)

  17. Linear and kernel methods for multi- and hypervariate change detection

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Canty, Morton J.

    2010-01-01

    . Principal component analysis (PCA) as well as maximum autocorrelation factor (MAF) and minimum noise fraction (MNF) analyses of IR-MAD images, both linear and kernel-based (which are nonlinear), may further enhance change signals relative to no-change background. The kernel versions are based on a dual...... formulation, also termed Q-mode analysis, in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products of the original data are replaced by inner products between nonlinear mappings into higher dimensional feature space. Via kernel substitution......, also known as the kernel trick, these inner products between the mappings are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of the kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel principal component...

  18. Kernel based orthogonalization for change detection in hyperspectral images

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    function and all quantities needed in the analysis are expressed in terms of this kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel PCA and MNF analyses handle nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via...... analysis all 126 spectral bands of the HyMap are included. Changes on the ground are most likely due to harvest having taken place between the two acquisitions and solar effects (both solar elevation and azimuth have changed). Both types of kernel analysis emphasize change and unlike kernel PCA, kernel MNF...

  19. Semi-Supervised Kernel PCA

    DEFF Research Database (Denmark)

    Walder, Christian; Henao, Ricardo; Mørup, Morten

    We present three generalisations of Kernel Principal Components Analysis (KPCA) which incorporate knowledge of the class labels of a subset of the data points. The first, MV-KPCA, penalises within class variances similar to Fisher discriminant analysis. The second, LSKPCA, is a hybrid of least squares regression and kernel PCA. The final LR-KPCA is an iteratively reweighted version of the previous which achieves a sigmoid loss function on the labeled points. We provide a theoretical risk bound as well as illustrative experiments on real and toy data sets....

  20. Adaptive Metric Kernel Regression

    DEFF Research Database (Denmark)

    Goutte, Cyril; Larsen, Jan

    1998-01-01

    Kernel smoothing is a widely used nonparametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this paper, we propose an algorithm that adapts the input metric used in multivariate regression...... by minimising a cross-validation estimate of the generalisation error. This allows one to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms the standard...

  1. 21 CFR 176.350 - Tamarind seed kernel powder.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 3 2010-04-01 2009-04-01 true Tamarind seed kernel powder. 176.350 Section 176... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in producing...

  2. Systematic approach in optimizing numerical memory-bound kernels on GPU

    KAUST Repository

    Abdelfattah, Ahmad

    2013-01-01

    The use of GPUs has been very beneficial in accelerating dense linear algebra (DLA) computational kernels. Many high performance numerical libraries like CUBLAS, MAGMA, and CULA provide BLAS and LAPACK implementations on GPUs as well as hybrid computations involving both CPUs and GPUs. GPUs usually score better performance than CPUs for compute-bound operations, especially those characterized by a regular data access pattern. This paper highlights a systematic approach for efficiently implementing memory-bound DLA kernels on GPUs, by taking advantage of the underlying device's architecture (e.g., high throughput). This methodology proved to outperform existing state-of-the-art GPU implementations for the symmetric matrix-vector multiplication (SYMV), characterized by an irregular data access pattern, in a recent work (Abdelfattah et al., VECPAR 2012). We propose to extend this methodology to the general matrix-vector multiplication (GEMV) kernel. The performance results show that our GEMV implementation achieves better performance for relatively small to medium matrix sizes, making it very influential in calculating the Hessenberg and bidiagonal reductions of general matrices (radar applications), which are the first step toward computing eigenvalues and singular values, respectively. Considering small and medium size matrices (≤4500), our GEMV kernel achieves an average 60% improvement in single precision (SP) and an average 25% in double precision (DP) over existing open-source and commercial software solutions. These results improve reduction algorithms for both small and large matrices. The improved GEMV performance engenders an average 30% (SP) and 15% (DP) improvement in Hessenberg reduction and up to 25% (SP) and 14% (DP) improvement for the bidiagonal reduction over the implementation provided by CUBLAS 5.0. © 2013 Springer-Verlag.

  3. Dense Medium Machine Processing Method for Palm Kernel/ Shell ...

    African Journals Online (AJOL)

    ADOWIE PERE

    Cracked palm kernel is a mixture of kernels, broken shells, dusts and other impurities. In ... machine processing method using dense medium, a separator, a shell collector and a kernel .... efficiency, ease of maintenance and uniformity of.

  4. New evaluation of thermal neutron scattering libraries for light and heavy water

    Directory of Open Access Journals (Sweden)

    Marquez Damian Jose Ignacio

    2017-01-01

    Full Text Available In order to improve the design and safety of thermal nuclear reactors and for verification of criticality safety conditions on systems with significant amount of fissile materials and water, it is necessary to perform high-precision neutron transport calculations and estimate uncertainties of the results. These calculations are based on neutron interaction data distributed in evaluated nuclear data libraries. To improve the evaluations of thermal scattering sub-libraries, we developed a set of thermal neutron scattering cross sections (scattering kernels) for hydrogen bound in light water, and deuterium and oxygen bound in heavy water, in the ENDF-6 format from room temperature up to the critical temperatures of molecular liquids. The new evaluations were generated and processable with NJOY99 and also with NJOY-2012 with minor modifications (updates), and with the new version of NJOY-2016. The new TSL libraries are based on molecular dynamics simulations with GROMACS and recent experimental data, and result in an improvement of the calculation of single neutron scattering quantities. In this work, we discuss the importance of taking into account self-diffusion in liquids to accurately describe the neutron scattering at low neutron energies (quasi-elastic peak problem). To improve modeling of heavy water, it is important to take into account temperature-dependent static structure factors and apply Sköld approximation to the coherent inelastic components of the scattering matrix. The usage of the new set of scattering matrices and cross-sections improves the calculation of thermal critical systems moderated and/or reflected with light/heavy water obtained from the International Criticality Safety Benchmark Evaluation Project (ICSBEP) handbook. For example, the use of the new thermal scattering library for heavy water, combined with the ROSFOND-2010 evaluation of the cross sections for deuterium, results in an improvement of the C/E ratio in 48 out of

  5. New evaluation of thermal neutron scattering libraries for light and heavy water

    Science.gov (United States)

    Marquez Damian, Jose Ignacio; Granada, Jose Rolando; Cantargi, Florencia; Roubtsov, Danila

    2017-09-01

    In order to improve the design and safety of thermal nuclear reactors and for verification of criticality safety conditions on systems with significant amount of fissile materials and water, it is necessary to perform high-precision neutron transport calculations and estimate uncertainties of the results. These calculations are based on neutron interaction data distributed in evaluated nuclear data libraries. To improve the evaluations of thermal scattering sub-libraries, we developed a set of thermal neutron scattering cross sections (scattering kernels) for hydrogen bound in light water, and deuterium and oxygen bound in heavy water, in the ENDF-6 format from room temperature up to the critical temperatures of molecular liquids. The new evaluations were generated and processable with NJOY99 and also with NJOY-2012 with minor modifications (updates), and with the new version of NJOY-2016. The new TSL libraries are based on molecular dynamics simulations with GROMACS and recent experimental data, and result in an improvement of the calculation of single neutron scattering quantities. In this work, we discuss the importance of taking into account self-diffusion in liquids to accurately describe the neutron scattering at low neutron energies (quasi-elastic peak problem). To improve modeling of heavy water, it is important to take into account temperature-dependent static structure factors and apply Sköld approximation to the coherent inelastic components of the scattering matrix. The usage of the new set of scattering matrices and cross-sections improves the calculation of thermal critical systems moderated and/or reflected with light/heavy water obtained from the International Criticality Safety Benchmark Evaluation Project (ICSBEP) handbook. For example, the use of the new thermal scattering library for heavy water, combined with the ROSFOND-2010 evaluation of the cross sections for deuterium, results in an improvement of the C/E ratio in 48 out of 65

  6. Multivariate and semiparametric kernel regression

    OpenAIRE

    Härdle, Wolfgang; Müller, Marlene

    1997-01-01

    The paper gives an introduction to theory and application of multivariate and semiparametric kernel smoothing. Multivariate nonparametric density estimation is an often used pilot tool for examining the structure of data. Regression smoothing helps in investigating the association between covariates and responses. We concentrate on kernel smoothing using local polynomial fitting which includes the Nadaraya-Watson estimator. Some theory on the asymptotic behavior and bandwidth selection is pro...

  7. Notes on the gamma kernel

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole E.

    The density function of the gamma distribution is used as the shift kernel in Brownian semistationary processes modelling the timewise behaviour of the velocity in turbulent regimes. This report presents exact and asymptotic properties of the second order structure function under such a model......, and relates these to results of von Kármán and Howarth. But first it is shown that the gamma kernel is interpretable as a Green's function....

  8. High-speed single-shot optical focusing through dynamic scattering media with full-phase wavefront shaping

    Science.gov (United States)

    Hemphill, Ashton S.; Shen, Yuecheng; Liu, Yan; Wang, Lihong V.

    2017-11-01

    In biological applications, optical focusing is limited by the diffusion of light, which prevents focusing at depths greater than ˜1 mm in soft tissue. Wavefront shaping extends the depth by compensating for phase distortions induced by scattering and thus allows for focusing light through biological tissue beyond the optical diffusion limit by using constructive interference. However, due to physiological motion, light scattering in tissue is deterministic only within a brief speckle correlation time. In in vivo tissue, this speckle correlation time is on the order of milliseconds, and so the wavefront must be optimized within this brief period. The speed of digital wavefront shaping has typically been limited by the relatively long time required to measure and display the optimal phase pattern. This limitation stems from the low speeds of cameras, data transfer and processing, and spatial light modulators. While binary-phase modulation requiring only two images for the phase measurement has recently been reported, most techniques require at least three frames for the full-phase measurement. Here, we present a full-phase digital optical phase conjugation method based on off-axis holography for single-shot optical focusing through scattering media. By using off-axis holography in conjunction with graphics processing unit based processing, we take advantage of the single-shot full-phase measurement while using parallel computation to quickly reconstruct the phase map. With this system, we can focus light through scattering media with a system latency of approximately 9 ms, on the order of the in vivo speckle correlation time.
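
    A minimal numpy sketch of the single-shot, off-axis part of the pipeline: the interferogram's Fourier spectrum contains a sideband that carries the full complex field, so cropping the sideband, re-centering it and inverse-transforming yields amplitude and phase from one camera frame. The carrier frequency, filter size and synthetic hologram below are illustrative assumptions; the GPU-based reconstruction and the playback onto the spatial light modulator described in the abstract are not reproduced.

        import numpy as np

        def offaxis_phase(hologram, carrier=(40, 40), half_width=20):
            """Recover the object phase from one off-axis hologram: FFT, crop the +1-order
            sideband around the carrier, re-center it, inverse FFT, and take the angle."""
            F = np.fft.fftshift(np.fft.fft2(hologram))
            cy, cx = np.array(hologram.shape) // 2
            ky, kx = cy + carrier[0], cx + carrier[1]
            crop = F[ky - half_width:ky + half_width, kx - half_width:kx + half_width]
            field = np.fft.ifft2(np.fft.ifftshift(crop))   # demodulated complex field (reduced resolution)
            return np.angle(field)

        # Synthetic test: smooth phase object plus a tilted reference beam.
        N = 256
        y, x = np.mgrid[0:N, 0:N]
        phi_obj = 2.0 * np.exp(-((x - N / 2) ** 2 + (y - N / 2) ** 2) / (2 * 30.0 ** 2))
        carrier = (40, 40)
        ref = np.exp(1j * 2 * np.pi * (carrier[0] * y + carrier[1] * x) / N)
        holo = np.abs(np.exp(1j * phi_obj) + ref) ** 2

        phase_est = offaxis_phase(holo, carrier=carrier, half_width=20)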

  9. Fully-Automated High-Throughput NMR System for Screening of Haploid Kernels of Maize (Corn) by Measurement of Oil Content.

    Directory of Open Access Journals (Sweden)

    Hongzhi Wang

    Full Text Available One of the modern crop breeding techniques uses doubled haploid plants that contain an identical pair of chromosomes in order to accelerate the breeding process. Rapid haploid identification method is critical for large-scale selections of double haploids. The conventional methods based on the color of the endosperm and embryo seeds are slow, manual and prone to error. On the other hand, there exists a significant difference between diploid and haploid seeds generated by high oil inducer, which makes it possible to use oil content to identify the haploid. This paper describes a fully-automated high-throughput NMR screening system for maize haploid kernel identification. The system is comprised of a sampler unit to select a single kernel to feed for measurement of NMR and weight, and a kernel sorter to distribute the kernel according to the measurement result. Tests of the system show a consistent accuracy of 94% with an average screening time of 4 seconds per kernel. Field test result is described and the directions for future improvement are discussed.

  10. Fully-Automated High-Throughput NMR System for Screening of Haploid Kernels of Maize (Corn) by Measurement of Oil Content

    Science.gov (United States)

    Xu, Xiaoping; Huang, Qingming; Chen, Shanshan; Yang, Peiqiang; Chen, Shaojiang; Song, Yiqiao

    2016-01-01

    One of the modern crop breeding techniques uses doubled haploid plants that contain an identical pair of chromosomes in order to accelerate the breeding process. Rapid haploid identification method is critical for large-scale selections of double haploids. The conventional methods based on the color of the endosperm and embryo seeds are slow, manual and prone to error. On the other hand, there exists a significant difference between diploid and haploid seeds generated by high oil inducer, which makes it possible to use oil content to identify the haploid. This paper describes a fully-automated high-throughput NMR screening system for maize haploid kernel identification. The system is comprised of a sampler unit to select a single kernel to feed for measurement of NMR and weight, and a kernel sorter to distribute the kernel according to the measurement result. Tests of the system show a consistent accuracy of 94% with an average screening time of 4 seconds per kernel. Field test result is described and the directions for future improvement are discussed. PMID:27454427

  11. Search for the Single Production of Doubly-Charged Higgs Bosons and Constraints on their Couplings from Bhabha Scattering

    CERN Document Server

    Abbiendi, G; Akesson, P.F.; Alexander, G.; Allison, John; Amaral, P.; Anagnostou, G.; Anderson, K.J.; Arcelli, S.; Asai, S.; Axen, D.; Azuelos, G.; Bailey, I.; Barberio, E.; Barlow, R.J.; Batley, R.J.; Bechtle, P.; Behnke, T.; Bell, Kenneth Watson; Bell, P.J.; Bella, G.; Bellerive, A.; Benelli, G.; Bethke, S.; Biebel, O.; Boeriu, O.; Bock, P.; Boutemeur, M.; Braibant, S.; Brigliadori, L.; Brown, Robert M.; Buesser, K.; Burckhart, H.J.; Campana, S.; Carnegie, R.K.; Caron, B.; Carter, A.A.; Carter, J.R.; Chang, C.Y.; Charlton, David G.; Csilling, A.; Cuffiani, M.; Dado, S.; De Roeck, A.; De Wolf, E.A.; Desch, K.; Dienes, B.; Donkers, M.; Dubbert, J.; Duchovni, E.; Duckeck, G.; Duerdoth, I.P.; Etzion, E.; Fabbri, F.; Feld, L.; Ferrari, P.; Fiedler, F.; Fleck, I.; Ford, M.; Frey, A.; Furtjes, A.; Gagnon, P.; Gary, John William; Gaycken, G.; Geich-Gimbel, C.; Giacomelli, G.; Giacomelli, P.; Giunta, Marina; Goldberg, J.; Groll, M.; Gross, E.; Grunhaus, J.; Gruwe, M.; Gunther, P.O.; Gupta, A.; Hajdu, C.; Hamann, M.; Hanson, G.G.; Harder, K.; Harel, A.; Harin-Dirac, M.; Hauschild, M.; Hawkes, C.M.; Hawkings, R.; Hemingway, R.J.; Hensel, C.; Herten, G.; Heuer, R.D.; Hill, J.C.; Hoffman, Kara Dion; Horvath, D.; Igo-Kemenes, P.; Ishii, K.; Jeremie, H.; Jovanovic, P.; Junk, T.R.; Kanaya, N.; Kanzaki, J.; Karapetian, G.; Karlen, D.; Kawagoe, K.; Kawamoto, T.; Keeler, R.K.; Kellogg, R.G.; Kennedy, B.W.; Kim, D.H.; Klein, K.; Klier, A.; Kluth, S.; Kobayashi, T.; Kobel, M.; Komamiya, S.; Kormos, Laura L.; Kramer, T.; Krieger, P.; von Krogh, J.; Kruger, K.; Kuhl, T.; Kupper, M.; Lafferty, G.D.; Landsman, H.; Lanske, D.; Layter, J.G.; Leins, A.; Lellouch, D.; Lettso, J.; Levinson, L.; Lillich, J.; Lloyd, S.L.; Loebinger, F.K.; Lu, J.; Ludwig, J.; Macpherson, A.; Mader, W.; Marcellini, S.; Martin, A.J.; Masetti, G.; Mashimo, T.; Mattig, Peter; McDonald, W.J.; McKenna, J.; McMahon, T.J.; McPherson, R.A.; Meijers, F.; Menges, W.; Merritt, F.S.; Mes, H.; Michelini, A.; Mihara, S.; Mikenberg, G.; Miller, D.J.; Moed, S.; Mohr, W.; Mori, T.; Mutter, A.; Nagai, K.; Nakamura, I.; Nanjo, H.; Neal, H.A.; Nisius, R.; O'Neale, S.W.; Oh, A.; Okpara, A.; Oreglia, M.J.; Orito, S.; Pahl, C.; Pasztor, G.; Pater, J.R.; Patrick, G.N.; Pilcher, J.E.; Pinfold, J.; Plane, David E.; Poli, B.; Polok, J.; Pooth, O.; Przybycien, M.; Quadt, A.; Rabbertz, K.; Rembser, C.; Renkel, P.; Roney, J.M.; Rosati, S.; Rozen, Y.; Runge, K.; Sachs, K.; Saeki, T.; Sarkisyan, E.K.G.; Schaile, A.D.; Schaile, O.; Scharff-Hansen, P.; Schieck, J.; Schoerner-Sadenius, Thomas; Schroder, Matthias; Schumacher, M.; Schwick, C.; Scott, W.G.; Seuster, R.; Shears, T.G.; Shen, B.C.; Sherwood, P.; Siroli, G.; Skuja, A.; Smith, A.M.; Sobie, R.; Soldner-Rembold, S.; Spano, F.; Stahl, A.; Stephens, K.; Strom, David M.; Strohmer, R.; Tarem, S.; Tasevsky, M.; Taylor, R.J.; Teuscher, R.; Thomson, M.A.; Torrence, E.; Toya, D.; Tran, P.; Trigger, I.; Trocsanyi, Z.; Tsur, E.; Turner-Watson, M.F.; Ueda, I.; Ujvari, B.; Vollmer, C.F.; Vannerem, P.; Vertesi, R.; Verzocchi, M.; Voss, H.; Vossebeld, J.; Waller, D.; Ward, C.P.; Ward, D.R.; Watkins, P.M.; Watson, A.T.; Watson, N.K.; Wells, P.S.; Wengler, T.; Wermes, N.; Wetterling, G.W.; Wilson, D.; Wilson, J.A.; Wolf, G.; Wyatt, T.R.; Yamashita, S.; Zer-Zion, D.; Zivkovic, Lidija

    2003-01-01

    A search for single production of doubly-charged Higgs bosons has been performed using 600.7 pb^-1 of e+e- collision data with sqrt(s) = 189-209 GeV collected by the OPAL detector at LEP. No evidence for the existence of H++/-- is observed. Upper limits on the Yukawa coupling of the H++/-- to like-signed electron pairs are derived. Additionally, indirect constraints on the Yukawa coupling from Bhabha scattering, where the H++/-- would contribute via t-channel exchange, are derived for M(H++/--) < 2 TeV. These are the first results for both a single production search and constraints from Bhabha scattering reported from LEP.

  12. Convergence of barycentric coordinates to barycentric kernels

    KAUST Repository

    Kosinka, Jiří

    2016-02-12

    We investigate the close correspondence between barycentric coordinates and barycentric kernels from the point of view of the limit process when finer and finer polygons converge to a smooth convex domain. We show that any barycentric kernel is the limit of a set of barycentric coordinates and prove that the convergence rate is quadratic. Our convergence analysis extends naturally to barycentric interpolants and mappings induced by barycentric coordinates and kernels. We verify our theoretical convergence results numerically on several examples.

  13. Convergence of barycentric coordinates to barycentric kernels

    KAUST Repository

    Kosinka, Jiří ; Barton, Michael

    2016-01-01

    We investigate the close correspondence between barycentric coordinates and barycentric kernels from the point of view of the limit process when finer and finer polygons converge to a smooth convex domain. We show that any barycentric kernel is the limit of a set of barycentric coordinates and prove that the convergence rate is quadratic. Our convergence analysis extends naturally to barycentric interpolants and mappings induced by barycentric coordinates and kernels. We verify our theoretical convergence results numerically on several examples.
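
    To make the objects in the two records above concrete, here is a minimal, illustrative sketch (not taken from the cited work) that evaluates mean value coordinates, one standard family of barycentric coordinates, at a point inside a convex polygon; the pentagon, the evaluation point and the function names are arbitrary choices.

      import numpy as np

      def mean_value_coordinates(polygon, x):
          """Mean value coordinates of point x w.r.t. a convex polygon (n x 2 array)."""
          v = np.asarray(polygon, dtype=float) - np.asarray(x, dtype=float)
          r = np.linalg.norm(v, axis=1)                    # distances ||v_i - x||
          n = len(v)
          alpha = np.empty(n)                              # angle at x spanned by edge (v_i, v_{i+1})
          for i in range(n):
              j = (i + 1) % n
              cos_a = np.dot(v[i], v[j]) / (r[i] * r[j])
              alpha[i] = np.arccos(np.clip(cos_a, -1.0, 1.0))
          w = np.empty(n)
          for i in range(n):                               # Floater's mean value weights
              w[i] = (np.tan(alpha[i - 1] / 2.0) + np.tan(alpha[i] / 2.0)) / r[i]
          return w / w.sum()

      # Regular pentagon; the coordinates are non-negative, sum to one and reproduce the point.
      theta = 2 * np.pi * np.arange(5) / 5
      pentagon = np.c_[np.cos(theta), np.sin(theta)]
      lam = mean_value_coordinates(pentagon, (0.2, 0.1))
      print(lam.sum(), lam @ pentagon)                     # -> 1.0 and approximately (0.2, 0.1)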

  14. Hadamard Kernel SVM with applications for breast cancer outcome predictions.

    Science.gov (United States)

    Jiang, Hao; Ching, Wai-Ki; Cheung, Wai-Shun; Hou, Wenpin; Yin, Hong

    2017-12-21

    Breast cancer is one of the leading causes of death for women. It is of great necessity to develop effective methods for breast cancer detection and diagnosis. Recent studies have focused on gene-based signatures for outcome predictions. Kernel SVM, with its discriminative power in dealing with small-sample pattern recognition problems, has attracted a lot of attention. But how to select or construct an appropriate kernel for a specified problem still needs further investigation. Here we propose a novel kernel (Hadamard kernel) in conjunction with Support Vector Machines (SVMs) to address the problem of breast cancer outcome prediction using gene expression data. The Hadamard kernel outperforms the classical kernels and the correlation kernel in terms of Area Under the ROC Curve (AUC) values on a number of real-world data sets adopted to test the performance of different methods. Hadamard kernel SVM is effective for breast cancer predictions, in terms of both prognosis and diagnosis. It may benefit patients by guiding therapeutic options. Apart from that, it would be a valuable addition to the current SVM kernel families. We hope it will contribute to the wider biology and related communities.
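
    For readers who want to see the mechanics of plugging a non-standard kernel into an SVM, the following is a hedged sketch using scikit-learn's precomputed-kernel interface and AUC scoring; the kernel shown is an ordinary cosine-normalized linear kernel acting as a placeholder (not the Hadamard kernel of the record), and the data are synthetic.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.metrics import roc_auc_score
      from sklearn.model_selection import train_test_split

      def custom_kernel(A, B):
          """Placeholder kernel: cosine-normalized linear kernel between row vectors."""
          A = A / np.linalg.norm(A, axis=1, keepdims=True)
          B = B / np.linalg.norm(B, axis=1, keepdims=True)
          return A @ B.T

      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 500))                      # 200 samples, 500 "genes"
      y = (X[:, :10].sum(axis=1) > 0).astype(int)          # outcome driven by 10 genes
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

      clf = SVC(kernel="precomputed", C=1.0)
      clf.fit(custom_kernel(X_tr, X_tr), y_tr)             # Gram matrix on the training set
      scores = clf.decision_function(custom_kernel(X_te, X_tr))
      print("AUC:", roc_auc_score(y_te, scores))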

  15. Aflatoxin contamination of developing corn kernels.

    Science.gov (United States)

    Amer, M A

    2005-01-01

    Preharvest contamination of corn with aflatoxin is a serious problem. Some environmental and cultural factors responsible for infection and subsequent aflatoxin production were investigated in this study. Stage of growth and location of kernels on corn ears were found to be important factors in the process of kernel infection with A. flavus & A. parasiticus. The results showed a positive correlation between the stage of growth and kernel infection. Treatment of corn with aflatoxin reduced germination, protein and total nitrogen contents. Total and reducing soluble sugars increased in corn kernels in response to infection. Sucrose and protein content were reduced in the case of both pathogens. Shoot system length, seedling fresh weight and seedling dry weight were also affected. Both pathogens induced a reduction of starch content. Healthy corn seedlings treated with aflatoxin solution were badly affected: their leaves became yellow and then turned brown with further incubation. Moreover, their total chlorophyll and protein contents showed a pronounced decrease. On the other hand, total phenolic compounds increased. Histopathological studies indicated that A. flavus & A. parasiticus could colonize corn silks and invade developing kernels. Germination of A. flavus spores occurred and hyphae spread rapidly across the silk, producing extensive growth and lateral branching. Conidiophores and conidia formed in and on the corn silk. Temperature and relative humidity greatly influenced the growth of A. flavus & A. parasiticus and aflatoxin production.

  16. Anisotropic kernel p(μ → μ') for transport calculations of elastically scattered neutrons

    International Nuclear Information System (INIS)

    Stevenson, B.

    1985-01-01

    Literature in the area of anisotropic neutron scattering is by no means lacking. Attention, however, is usually devoted to the solution of some particular neutron transport problem, and the model employed is at best approximate. The present approach to the problem in general is classically exact and may be of particular value to individuals seeking exact numerical results in transport calculations. For neutrons originally directed along the unit vector Omega, it attempts the evaluation of p(theta'), defined such that p(theta') d theta' is the fraction of scattered neutrons that emerges in the vicinity of a cone, i.e., having been scattered to between angles theta' and theta' + d theta' with respect to the axis of preferred orientation i; Omega makes an angle theta with i. The relative simplicity of the final form of the solution for hydrogen, in spite of the complicated nature of the limits involved, is a trade-off that truly is not necessary. The exact general solution presented here in integral form has exceedingly simple limits, i.e., 0 ≤ theta' ≤ π regardless of the material involved; but the form of the final solution is extraordinarily complicated
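
    The geometry described in this record lends itself to a quick numerical cross-check. Below is a hedged Monte Carlo sketch of p(theta') for the illustrative case of elastic scattering from hydrogen assumed isotropic in the centre-of-mass frame (so the laboratory scattering cosine has density 2*mu on [0, 1]); it is not the author's exact integral solution, and the incident angle theta and sample size are arbitrary.

      import numpy as np

      def p_theta_prime(theta, n_samples=1_000_000, n_bins=90, seed=0):
          """Monte Carlo histogram of the angle theta' between the scattered direction
          and the preferred axis i, for neutrons incident at angle theta to i and
          scattered elastically from hydrogen (isotropic in the centre of mass)."""
          rng = np.random.default_rng(seed)
          mu0 = np.sqrt(rng.random(n_samples))          # lab scattering cosine, density 2*mu0
          phi = 2 * np.pi * rng.random(n_samples)       # azimuth about the incident direction
          # spherical addition theorem: cos(theta') from theta, scattering angle and azimuth
          cos_tp = np.cos(theta) * mu0 + np.sin(theta) * np.sqrt(1 - mu0**2) * np.cos(phi)
          theta_p = np.arccos(np.clip(cos_tp, -1.0, 1.0))
          dens, edges = np.histogram(theta_p, bins=n_bins, range=(0.0, np.pi), density=True)
          return dens, edges                            # dens integrates to 1 over [0, pi]

      dens, edges = p_theta_prime(theta=np.pi / 3)
      print(dens[:5])                                   # estimated p(theta') near theta' = 0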

  17. Kernel Korner : The Linux keyboard driver

    NARCIS (Netherlands)

    Brouwer, A.E.

    1995-01-01

    Our Kernel Korner series continues with an article describing the Linux keyboard driver. This article is not for "Kernel Hackers" only--in fact, it will be most useful to those who wish to use their own keyboard to its fullest potential, and those who want to write programs to take advantage of the

  18. The heating of UO_2 kernels in argon gas medium on the physical properties of sintered UO_2 kernels

    International Nuclear Information System (INIS)

    Damunir; Sri Rinanti Susilowati; Ariyani Kusuma Dewi

    2015-01-01

    The effect of heating UO_2 kernels in an argon gas medium on the physical properties of sintered UO_2 kernels was studied. The UO_2 kernels were heated in a bed-type sintering reactor. The sample used was UO_2 kernels obtained from reduction at 800 °C for 3 hours, with a density of 8.13 g/cm³, porosity of 0.26, O/U ratio of 2.05, diameter of 1146 μm and sphericity of 1.05. The sample was put into the sintering reactor, which was then evacuated and flushed with argon gas at 180 mmHg pressure to remove air from the reactor. After that, the cooling water and argon gas were flowed continuously at a pressure of 5 mPa and a velocity of 1.5 litre/minute. The reactor temperature was raised and varied between 1200 and 1500 °C for 1-4 hours. The sintered UO_2 kernels obtained in the study were analyzed in terms of their physical properties, including density, porosity, diameter, sphericity and specific surface area. The density was determined using a pycnometer with CCl_4 solution. The porosity was determined using the Haynes equation. The diameter and sphericity were measured using a Dino-Lite microscope. The specific surface area was determined using a Nova-1000 surface area meter. The results showed that heating UO_2 kernels in an argon gas medium influenced the physical properties of the sintered UO_2 kernels. The best product was obtained when heating was conducted at 1400 °C for 2 hours, yielding sintered UO_2 kernels with a density of 10.14 g/ml, porosity of 7%, diameter of 893 μm, sphericity of 1.07 and specific surface area of 4.68 m²/g, with a solidification shrinkage of 22%. (author)

  19. Single-Fiber Reflectance Spectroscopy of Isotropic-Scattering Medium: An Analytic Perspective to the Ratio-of-Remission in Steady-State Measurements

    Directory of Open Access Journals (Sweden)

    Daqing Piao

    2014-12-01

    Recent focused Monte Carlo and experimental studies on steady-state single-fiber reflectance spectroscopy (SfRS) from a biologically relevant scattering medium have revealed that, as the dimensionless reduced scattering of the medium increases, the SfRS intensity increases monotonically until reaching a plateau. The SfRS signal is semi-empirically decomposed into the product of three contributing factors, including a ratio-of-remission (RoR) term that refers to the ratio of photons remitting from the medium and crossing the fiber-medium interface over the total number of photons launched into the medium. The RoR is expressed with respect to the dimensionless reduced scattering parameter μ_s'·d_f, where μ_s' is the reduced scattering coefficient of the medium and d_f is the diameter of the probing fiber. We develop in this work, under the assumption of an isotropic-scattering medium, a method of analytical treatment that indicates the pattern of RoR as a function of the dimensionless reduced scattering of the medium. The RoR is derived in four cases, corresponding to in-medium (applied to interstitial probing of biological tissue) or surface-based (applied to contact probing of biological tissue) SfRS measurements using straight-polished or angle-polished fibers. The analytically derived surface-probing RoR corresponding to single-fiber probing using a 15° angle-polished fiber, over the range of dimensionless reduced scattering considered, agrees with a previously reported, similarly configured experimental measurement from a scattering medium that has a Henyey–Greenstein scattering phase function with an anisotropy factor of 0.8. In cases of a medium scattering light anisotropically, we propose how the treatment may be furthered to account for the scattering anisotropy using the result of a study of light scattering close to the point-of-entry by Vitkin et al. (Nat. Commun. 2011, doi:10.1038/ncomms1599).

  20. Single-scattering properties of ice particles in the microwave regime: Temperature effect on the ice refractive index with implications in remote sensing

    International Nuclear Information System (INIS)

    Ding, Jiachen; Bi, Lei; Yang, Ping; Kattawar, George W.; Weng, Fuzhong; Liu, Quanhua; Greenwald, Thomas

    2017-01-01

    An ice crystal single-scattering property database is developed in the microwave spectral region (1 to 874 GHz) to provide the scattering, absorption, and polarization properties of 12 ice crystal habits (10-plate aggregate, 5-plate aggregate, 8-column aggregate, solid hexagonal column, hollow hexagonal column, hexagonal plate, solid bullet rosette, hollow bullet rosette, droxtal, oblate spheroid, prolate spheroid, and sphere) with particle maximum dimensions from 2 µm to 10 mm. For each habit, four temperatures (160, 200, 230, and 270 K) are selected to account for temperature dependence of the ice refractive index. The microphysical and scattering properties include projected area, volume, extinction efficiency, single-scattering albedo, asymmetry factor, and six independent nonzero phase matrix elements (i.e., P11, P12, P22, P33, P43 and P44). The scattering properties are computed by the Invariant Imbedding T-Matrix (II-TM) method and the Improved Geometric Optics Method (IGOM). The computation results show that the temperature dependence of the ice single-scattering properties in the microwave region is significant, particularly at high frequencies. Potential active and passive remote sensing applications of the database are illustrated through radar reflectivity and radiative transfer calculations. For cloud radar applications, ignoring temperature dependence has little effect on ice water content measurements. For passive microwave remote sensing, ignoring temperature dependence may lead to brightness temperature biases up to 5 K in the case of a large ice water path. - Highlights: • Single-scattering properties of ice crystals are computed from 1 to 874 GHz. • Ice refractive index temperature dependence is considered at 160, 200, 230 and 270 K. • Potential applications of the database to microwave remote sensing are illustrated. • Ignoring temperature dependence of ice refractive index can lead to 5 K difference in IWP retrieval

  1. Realized kernels in practice

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Hansen, P. Reinhard; Lunde, Asger

    2009-01-01

    Realized kernels use high-frequency data to estimate the daily volatility of individual stock prices. They can be applied to either trade or quote data. Here we provide the details of how we suggest implementing them in practice. We compare the estimates based on trade and quote data for the same stock ... and find a remarkable level of agreement. We identify some features of the high-frequency data which are challenging for realized kernels: local trends in the data, over periods of around 10 minutes, where the prices and quotes are driven up or down. These can be associated...
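
    To make the construction concrete, here is a hedged sketch of a flat-top realized kernel with a Parzen weight function applied to intraday returns; the bandwidth H is fixed by hand rather than by the data-driven rule of the paper, and the price path is simulated, so the snippet only illustrates the form of the estimator.

      import numpy as np

      def parzen(x):
          """Parzen weight function k(x) on [0, 1]."""
          x = abs(x)
          if x <= 0.5:
              return 1 - 6 * x**2 + 6 * x**3
          if x <= 1.0:
              return 2 * (1 - x)**3
          return 0.0

      def realized_kernel(returns, H):
          """Flat-top realized kernel: gamma_0 + sum_h k((h-1)/H) * (gamma_h + gamma_{-h})."""
          r = np.asarray(returns, dtype=float)
          gamma = lambda h: np.dot(r[h:], r[:len(r) - h])   # realized autocovariance at lag h
          rk = gamma(0)
          for h in range(1, H + 1):
              rk += parzen((h - 1) / H) * 2 * gamma(h)      # gamma_h equals gamma_{-h} here
          return rk

      # Simulated one-second log prices: efficient price plus market microstructure noise.
      rng = np.random.default_rng(1)
      n = 23400
      efficient = np.cumsum(rng.normal(0.0, 0.2 / np.sqrt(252 * n), size=n))
      noisy = efficient + rng.normal(0.0, 1e-4, size=n)
      ret = np.diff(noisy)
      print("realized variance:", np.sum(ret**2))           # inflated by the noise
      print("realized kernel  :", realized_kernel(ret, H=30))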

  2. Anatomically-aided PET reconstruction using the kernel method.

    Science.gov (United States)

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T; Catana, Ciprian; Qi, Jinyi

    2016-09-21

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.
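
    A hedged sketch of the central data structure in this family of methods: a sparse kernel matrix K built from anatomical feature vectors via k-nearest neighbours and a Gaussian kernel, so that the emission image is represented as x = K * alpha. The feature construction, neighbour count and bandwidth below are illustrative choices, not the exact recipe of the cited paper.

      import numpy as np
      from scipy.sparse import csr_matrix
      from sklearn.neighbors import NearestNeighbors

      def build_kernel_matrix(anat_features, k=10, sigma=None):
          """Sparse kernel matrix K (n_voxels x n_voxels) from anatomical features;
          the PET image is then parameterized as x = K @ alpha."""
          f = np.asarray(anat_features, dtype=float)
          n = f.shape[0]
          dist, idx = NearestNeighbors(n_neighbors=k).fit(f).kneighbors(f)
          if sigma is None:
              sigma = dist.mean()                           # heuristic bandwidth
          w = np.exp(-dist**2 / (2 * sigma**2))             # Gaussian weights over neighbours
          rows = np.repeat(np.arange(n), k)
          K = csr_matrix((w.ravel(), (rows, idx.ravel())), shape=(n, n))
          row_sums = np.asarray(K.sum(axis=1)).ravel()
          return csr_matrix(K.multiply(1.0 / row_sums[:, None]))   # row-normalized weights

      rng = np.random.default_rng(0)
      anat = rng.random((1000, 1))                          # toy anatomical intensity per voxel
      K = build_kernel_matrix(anat)
      alpha = rng.random(1000)
      x = K @ alpha                                         # kernelized image representation
      print(K.shape, x.shape)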

  3. Embedded real-time operating system micro kernel design

    Science.gov (United States)

    Cheng, Xiao-hui; Li, Ming-qiang; Wang, Xin-zheng

    2005-12-01

    Embedded systems usually require real-time behaviour. Based on an 8051 microcontroller, an embedded real-time operating system micro kernel is proposed consisting of six parts: critical section handling, task scheduling, interrupt handling, semaphore and message mailbox communication, clock management and memory management. CPU time and other resources are distributed among tasks rationally according to their importance and urgency. The design proposed here provides the position, definition, function and principle of the micro kernel. The kernel runs on the platform of an ATMEL AT89C51 microcontroller. Simulation results prove that the designed micro kernel is stable and reliable and has quick response while operating in an application system.

  4. Kernel Temporal Differences for Neural Decoding

    Science.gov (United States)

    Bae, Jihye; Sanchez Giraldo, Luis G.; Pohlmeyer, Eric A.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.

    2015-01-01

    We study the feasibility and capability of the kernel temporal difference (KTD)(λ) algorithm for neural decoding. KTD(λ) is an online, kernel-based learning algorithm, which has been introduced to estimate value functions in reinforcement learning. This algorithm combines kernel-based representations with the temporal difference approach to learning. One of our key observations is that by using strictly positive definite kernels, the algorithm's convergence can be guaranteed for policy evaluation. The algorithm's nonlinear functional approximation capabilities are shown in both simulations of policy evaluation and neural decoding problems (policy improvement). KTD can handle high-dimensional neural states containing spatial-temporal information at a reasonable computational complexity, allowing real-time applications. When the algorithm seeks a proper mapping between a monkey's neural states and desired positions of a computer cursor or a robot arm, in both open-loop and closed-loop experiments, it can effectively learn the neural state to action mapping. Finally, a visualization of the coadaptation process between the decoder and the subject shows the algorithm's capabilities in reinforcement learning brain machine interfaces. PMID:25866504
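
    As a toy illustration of the kernel temporal difference idea (not the authors' implementation), the sketch below runs a kernelized TD(0) value estimator on a small random-walk chain: the value function is a kernel expansion over visited states, and every TD error adds a new weighted centre. The environment, step size and kernel bandwidth are arbitrary.

      import numpy as np

      def gauss_kernel(a, b, sigma=0.5):
          return np.exp(-(a - b) ** 2 / (2 * sigma ** 2))

      class KernelTD0:
          """Value estimate V(s) = sum_i alpha_i * k(s, c_i), updated from TD(0) errors."""
          def __init__(self, step=0.1, gamma=0.95):
              self.centres, self.alphas = [], []
              self.step, self.gamma = step, gamma

          def value(self, s):
              return sum(a * gauss_kernel(s, c) for a, c in zip(self.alphas, self.centres))

          def update(self, s, r, s_next, terminal):
              target = r if terminal else r + self.gamma * self.value(s_next)
              td_error = target - self.value(s)
              self.centres.append(s)                        # new kernel centre at the visited state
              self.alphas.append(self.step * td_error)

      # 5-state random walk on 0..4, reward 1 for reaching state 4 (both ends terminal).
      rng = np.random.default_rng(0)
      agent = KernelTD0()
      for _ in range(200):
          s = 2
          while s not in (0, 4):
              s_next = s + int(rng.choice((-1, 1)))
              r = 1.0 if s_next == 4 else 0.0
              agent.update(float(s), r, float(s_next), terminal=s_next in (0, 4))
              s = s_next
      print([round(agent.value(float(s)), 2) for s in range(5)])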

  5. Transfer by anisotropic scattering between subsets of the unit sphere of directions in linear transport theory

    International Nuclear Information System (INIS)

    Trombetti, T.

    1990-01-01

    The exact kernel method is presented for linear transport problems with azimuth-dependent angular fluxes. It is based on the evaluation of average scattering densities (ASD's) that fully describe the neutron (or particle) transfer between subsets of the unit sphere of directions by anisotropic scattering. Reciprocity and other ASD functional properties are proved and combined with the symmetry properties of suitable SN quadrature sets. This greatly reduces the number of independent ASD's to be computed and stored. An approach for performing ASD computations with reciprocity checks is presented. ASD expressions of the scattering source for typical 2D geometries are explicitly given. (author)

  6. Classification of maize kernels using NIR hyperspectral imaging

    DEFF Research Database (Denmark)

    Williams, Paul; Kucheryavskiy, Sergey V.

    2016-01-01

    NIR hyperspectral imaging was evaluated to classify maize kernels of three hardness categories: hard, medium and soft. Two approaches, pixel-wise and object-wise, were investigated to group kernels according to hardness. The pixel-wise classification assigned a class to every pixel from individual...... and specificity of 0.95 and 0.93). Both feature extraction methods can be recommended for classification of maize kernels on production scale....

  7. The scattering of low energy helium ions and atoms from a copper single crystal, ch. 2

    International Nuclear Information System (INIS)

    Verheij, L.K.; Poelsema, B.; Boers, A.L.

    1976-01-01

    The scattering of 4-10 keV helium ions from a copper surface cannot be completely described with elastic, single collisions. The general behaviour of the measured energy and width of the surface peak can be explained by differences in inelastic energy losses for scattering from an ideal surface and from surface structures (damage). Multiple scattering effects have a minor influence. Additional information about the inelastic processes is obtained from scattering experiments with a primary atom beam. For large angles of incidence, the energy of the reflected ions is reduced by about 20 eV if the primary beam consists of atoms instead of ions. An explanation of this effect, and of the different behaviour at small angles of incidence, is given. In the investigated energy range, the electronic stopping power might depend on the charge state of the primary particles. The experimental results are rather well explained by the Lindhard, Scharff, Schioett theory

  8. Evolution kernel for the Dirac field

    International Nuclear Information System (INIS)

    Baaquie, B.E.

    1982-06-01

    The evolution kernel for the free Dirac field is calculated using the Wilson lattice fermions. We discuss the difficulties due to which this calculation has not been previously performed in the continuum theory. The continuum limit is taken, and the complete energy eigenfunctions as well as the propagator are then evaluated in a new manner using the kernel. (author)

  9. Gradient-based adaptation of general gaussian kernels.

    Science.gov (United States)

    Glasmachers, Tobias; Igel, Christian

    2005-10-01

    Gradient-based optimization of Gaussian kernel functions is considered. The gradient for the adaptation of scaling and rotation of the input space is computed to achieve invariance against linear transformations. This is done by using the exponential map as a parameterization of the kernel parameter manifold. By restricting the optimization to a constant-trace subspace, the kernel size can be controlled. This is, for example, useful to prevent overfitting when minimizing radius-margin generalization performance measures. The concepts are demonstrated by training hard-margin support vector machines on toy data.

  10. Analyses of the energy-dependent single separable potential models for the NN scattering

    International Nuclear Information System (INIS)

    Ahmad, S.S.; Beghi, L.

    1981-08-01

    Starting from a systematic study of the salient features regarding the quantum-mechanical two-particle scattering off an energy-dependent (ED) single separable potential and its connection with the rank-2 energy-independent (EI) separable potential in the T-(K-) amplitude formulation, the present status of the ED single separable potential models due to Tabakin (M1), Garcilazo (M2) and Ahmad (M3) has been discussed. It turned out that the incorporation of a self-consistent optimization procedure considerably improves the results for the 1S0 and 3S1 scattering phase shifts for the models (M2) and (M3) up to the CM wave number q = 2.5 fm^-1, although the extrapolation of the results up to q = 10 fm^-1 reveals that the two models follow the typical behaviour of the well-known super-soft core potentials. It has been found that a variant of (M3) - i.e. (M4), involving one more parameter - gives phase shift results which are generally in excellent agreement with the data up to q = 2.5 fm^-1, and the extrapolation of the results for the 1S0 case in the higher wave number range not only follows the corresponding data qualitatively but also reflects a behaviour similar to the Reid soft core and Hamada-Johnston potentials, together with a good agreement with the recent [4/3] Pade fits. A brief discussion regarding the features resulting from the variations in the ED parts of all four models under consideration and their correlations with the inverse scattering theory methodology concludes the paper. (author)

  11. Coherent Anti-Stokes and Coherent Stokes in Raman Scattering by Superconducting Nanowire Single-Photon Detector for Temperature Measurement

    Directory of Open Access Journals (Sweden)

    Annepu Venkata Naga Vamsi

    2016-01-01

    We report the measurement of temperature using coherent anti-Stokes and coherent Stokes Raman scattering with a superconducting nanowire single-photon detector. The temperatures measured by both methods (coherent anti-Stokes and coherent Stokes Raman scattering) and by the TC 340 agree to within ± 5 K. The length of the pipeline under test can be increased by increasing the power of the pump laser. This methodology can be widely used to measure the temperature at individual positions along the test pipeline or the temperature of the entire pipeline under test.

  12. Effect of the X-ray scattering anisotropy on the diffusion of photons in the frame of the transport theory

    International Nuclear Information System (INIS)

    Fernandez, J.E.; Molinari, V.G.; Sumini, M.

    1988-01-01

    In the frame of the multiple applications of X-ray techniques, a detailed description of photon transport under several boundary conditions in condensed media is of utmost importance. In this work the photon transport equation for a homogeneous specimen of infinite thickness is considered and an exact iterative solution is reported, which is universally valid for all types of interactions because of its independence of the shape of the interaction kernel. As a test probe we use an especially simple elastic scattering expression that renders possible the exact calculation of the first two orders of the solution. It is shown that the second order does not produce any significant improvement over the first one. Due to its particular characteristics, the first-order solution for the simplified kernel can be extended to include the form factor, thus giving a more realistic description of the coherent scattering of monochromatic radiation by bound electrons. The relevant effects of the scattering anisotropy are also placed in evidence when they are contrasted with the isotropic solution calculated in the same way. (author)

  13. Open Problem: Kernel methods on manifolds and metric spaces

    DEFF Research Database (Denmark)

    Feragen, Aasa; Hauberg, Søren

    2016-01-01

    Radial kernels are well-suited for machine learning over general geodesic metric spaces, where pairwise distances are often the only computable quantity available. We have recently shown that geodesic exponential kernels are only positive definite for all bandwidths when the input space has strong linear properties. This negative result hints that radial kernels are perhaps not suitable over geodesic metric spaces after all. Here, however, we present evidence that large intervals of bandwidths exist where geodesic exponential kernels have high probability of being positive definite over finite datasets, while still having significant predictive power. From this we formulate conjectures on the probability of a positive definite kernel matrix for a finite random sample, depending on the geometry of the data space and the spread of the sample.
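
    The phenomenon described in this record is easy to probe numerically. The hedged sketch below samples random points on the unit sphere, builds geodesic exponential kernel matrices exp(-d^q / sigma) from arc-length distances for q = 1 and q = 2, and reports the smallest eigenvalue of each Gram matrix as a positive-definiteness check; the sample size and bandwidths are arbitrary.

      import numpy as np

      rng = np.random.default_rng(0)

      def sphere_sample(n, dim=3):
          x = rng.normal(size=(n, dim))
          return x / np.linalg.norm(x, axis=1, keepdims=True)

      def geodesic_exponential_kernel(points, sigma, q):
          """K_ij = exp(-d(x_i, x_j)^q / sigma) with d the great-circle (geodesic) distance."""
          g = np.clip(points @ points.T, -1.0, 1.0)
          d = np.arccos(g)
          return np.exp(-(d ** q) / sigma)

      pts = sphere_sample(200)
      for q in (1, 2):                                      # geodesic Laplacian vs. Gaussian kernel
          for sigma in (0.1, 1.0, 10.0):
              K = geodesic_exponential_kernel(pts, sigma, q)
              min_eig = np.linalg.eigvalsh(K).min()
              print(f"q={q} sigma={sigma:5.1f}  smallest eigenvalue = {min_eig:+.2e}")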

  14. Measurements of Nascent Soot Using a Cavity Attenuated Phase Shift (CAPS)-based Single Scattering Albedo Monitor

    Science.gov (United States)

    Freedman, A.; Onasch, T. B.; Renbaum-Wollf, L.; Lambe, A. T.; Davidovits, P.; Kebabian, P. L.

    2015-12-01

    Accurate, as compared to precise, measurement of aerosol absorption has always posed a significant problem for the particle radiative properties community. Filter-based instruments do not actually measure absorption but rather light transmission through the filter; absorption must be derived from this data using multiple corrections. The potential for matrix-induced effects is also great for organic-laden aerosols. The introduction of true in situ measurement instruments using photoacoustic or photothermal interferometric techniques represents a significant advance in the state-of-the-art. However, measurement artifacts caused by changes in humidity still represent a significant hurdle as does the lack of a good calibration standard at most measurement wavelengths. And, in the absence of any particle-based absorption standard, there is no way to demonstrate any real level of accuracy. We, along with others, have proposed that under the circumstance of low single scattering albedo (SSA), absorption is best determined by difference using measurement of total extinction and scattering. We discuss a robust, compact, field deployable instrument (the CAPS PMssa) that simultaneously measures airborne particle light extinction and scattering coefficients and thus the single scattering albedo (SSA) on the same sample volume. The extinction measurement is based on cavity attenuated phase shift (CAPS) techniques as employed in the CAPS PMex particle extinction monitor; scattering is measured using integrating nephelometry by incorporating a Lambertian integrating sphere within the sample cell. The scattering measurement is calibrated using the extinction measurement of non-absorbing particles. For small particles and low SSA, absorption can be measured with an accuracy of 6-8% at absorption levels as low as a few Mm-1. We present new results of the measurement of the mass absorption coefficient (MAC) of soot generated by an inverted methane diffusion flame at 630 nm. A value

  15. Test of Mie-based single-scattering properties of non-spherical dust aerosols in radiative flux calculations

    International Nuclear Information System (INIS)

    Fu, Q.; Thorsen, T.J.; Su, J.; Ge, J.M.; Huang, J.P.

    2009-01-01

    We simulate the single-scattering properties (SSPs) of dust aerosols with both spheroidal and spherical shapes at a wavelength of 0.55 μm for two refractive indices and four effective radii. Herein spheres are defined by preserving both projected area and volume of a non-spherical particle. It is shown that the relative errors of the spheres to approximate the spheroids are less than 1% in the extinction efficiency and single-scattering albedo, and less than 2% in the asymmetry factor. It is found that the scattering phase function of spheres agrees with spheroids better than the Henyey-Greenstein (HG) function for the scattering angle range of 0-90°. In the range of ∼90-180°, the HG function is systematically smaller than the spheroidal scattering phase function while the spherical scattering phase function is smaller from ∼90° to 145° but larger from ∼145° to 180°. We examine the errors in reflectivity and absorptivity due to the use of SSPs of equivalent spheres and HG functions for dust aerosols. The reference calculation is based on the delta-DISORT-256-stream scheme using the SSPs of the spheroids. It is found that the errors are mainly caused by the use of the HG function instead of the SSPs for spheres. By examining the errors associated with the delta-four- and delta-two-stream schemes using various approximate SSPs of dust aerosols, we find that the errors related to the HG function dominate in the delta-four-stream results, while the errors related to the radiative transfer scheme dominate in the delta-two-stream calculations. We show that the relative errors in the global reflectivity due to the use of sphere SSPs are always less than 5%. We conclude that Mie-based SSPs of non-spherical dust aerosols are well suited in radiative flux calculations.

  16. Kernel-based noise filtering of neutron detector signals

    International Nuclear Information System (INIS)

    Park, Moon Ghu; Shin, Ho Cheol; Lee, Eun Ki

    2007-01-01

    This paper describes recently developed techniques for effective filtering of neutron detector signal noise. In this paper, three kinds of noise filters are proposed and their performance is demonstrated for the estimation of reactivity. The tested filters are based on the unilateral kernel filter, unilateral kernel filter with adaptive bandwidth and bilateral filter to show their effectiveness in edge preservation. Filtering performance is compared with conventional low-pass and wavelet filters. The bilateral filter shows a remarkable improvement compared with unilateral kernel and wavelet filters. The effectiveness and simplicity of the unilateral kernel filter with adaptive bandwidth is also demonstrated by applying it to the reactivity measurement performed during reactor start-up physics tests
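
    To illustrate the distinction drawn above between a plain (unilateral) kernel smoother and an edge-preserving bilateral filter, here is a hedged sketch applying both to a synthetic noisy step signal; the kernel widths and the test signal are arbitrary and the snippet is not the authors' filter.

      import numpy as np

      def kernel_smooth(y, bandwidth):
          """Nadaraya-Watson (unilateral) Gaussian kernel smoother over the sample index."""
          t = np.arange(len(y))
          w = np.exp(-0.5 * ((t[:, None] - t[None, :]) / bandwidth) ** 2)
          return (w @ y) / w.sum(axis=1)

      def bilateral_smooth(y, bandwidth, value_bandwidth):
          """Bilateral filter: weights decay with both index distance and value distance,
          which tends to preserve sharp edges (e.g. step changes in the signal)."""
          t = np.arange(len(y))
          w = (np.exp(-0.5 * ((t[:, None] - t[None, :]) / bandwidth) ** 2)
               * np.exp(-0.5 * ((y[:, None] - y[None, :]) / value_bandwidth) ** 2))
          return (w @ y) / w.sum(axis=1)

      rng = np.random.default_rng(0)
      signal = np.r_[np.zeros(200), np.ones(200)]           # step-like transient
      noisy = signal + rng.normal(0.0, 0.2, size=signal.size)
      print("unilateral error at the edge:", abs(kernel_smooth(noisy, 10)[200] - 1.0))
      print("bilateral  error at the edge:", abs(bilateral_smooth(noisy, 10, 0.3)[200] - 1.0))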

  17. Nodal structure and phase shifts of zero-incident-energy wave functions: Multiparticle single-channel scattering

    International Nuclear Information System (INIS)

    Iwinski, Z.R.; Rosenberg, L.; Spruch, L.

    1986-01-01

    For potential scattering, with delta/sub L/(k) the phase shift modulo π for an incident wave number k, Levinson's theorem gives delta/sub L/(0)-delta/sub L/(infinity) in terms of N/sub L/, the number of bound states of angular momentum L, for delta/sub L/(k) assumed to be a continuous function of k. N/sub L/ also determines the number of nodes of the zero-energy wave function u/sub L/(r). A knowledge of the nodal structure and of the absolute value of delta/sub L/(0) is very useful in theoretical studies of low-energy potential scattering. Two preliminary attempts, one formal and one ''physical,'' are made to extend the above results to single-channel scattering by a compound system initially in its ground state. The nodal structure will be of greater interest to us here than an extension of Levinson's theorem

  18. A trace ratio maximization approach to multiple kernel-based dimensionality reduction.

    Science.gov (United States)

    Jiang, Wenhao; Chung, Fu-lai

    2014-01-01

    Most dimensionality reduction techniques are based on one metric or one kernel, hence it is necessary to select an appropriate kernel for kernel-based dimensionality reduction. Multiple kernel learning for dimensionality reduction (MKL-DR) has been recently proposed to learn a kernel from a set of base kernels which are seen as different descriptions of data. As MKL-DR does not involve regularization, it might be ill-posed under some conditions and consequently its applications are hindered. This paper proposes a multiple kernel learning framework for dimensionality reduction based on regularized trace ratio, termed as MKL-TR. Our method aims at learning a transformation into a space of lower dimension and a corresponding kernel from the given base kernels among which some may not be suitable for the given data. The solutions for the proposed framework can be found based on trace ratio maximization. The experimental results demonstrate its effectiveness in benchmark datasets, which include text, image and sound datasets, for supervised, unsupervised as well as semi-supervised settings. Copyright © 2013 Elsevier Ltd. All rights reserved.

  19. Predictive Model Equations for Palm Kernel (Elaeis guneensis J ...

    African Journals Online (AJOL)

    Estimated error of ± 0.18 and ± 0.2 are envisaged while applying the models for predicting palm kernel and sesame oil colours respectively. Keywords: Palm kernel, Sesame, Palm kernel, Oil Colour, Process Parameters, Model. Journal of Applied Science, Engineering and Technology Vol. 6 (1) 2006 pp. 34-38 ...

  20. Heat kernel analysis for Bessel operators on symmetric cones

    DEFF Research Database (Denmark)

    Möllers, Jan

    2014-01-01

    ... The heat kernel is explicitly given in terms of a multivariable $I$-Bessel function on $Ω$. Its corresponding heat kernel transform defines a continuous linear operator between $L^p$-spaces. The unitary image of the $L^2$-space under the heat kernel transform is characterized as a weighted Bergmann space...

  1. A multi-scale kernel bundle for LDDMM

    DEFF Research Database (Denmark)

    Sommer, Stefan Horst; Nielsen, Mads; Lauze, Francois Bernard

    2011-01-01

    The Large Deformation Diffeomorphic Metric Mapping framework constitutes a widely used and mathematically well-founded setup for registration in medical imaging. At its heart lies the notion of the regularization kernel, and the choice of kernel greatly affects the results of registrations...

  2. Calculation of the thermal utilization factor in a heterogeneous slab cell scattering neutrons anisotropically

    Energy Technology Data Exchange (ETDEWEB)

    Abdallah, A M; Elsherbiny, E M; Sobhy, M [Reactor departement, nuclear research centre, Inshaas, (Egypt)

    1995-10-01

    The Pn spatial expansion method has been used for calculating the one-speed transport utilization factor in heterogeneous slab cells in which neutrons may scatter anisotropically; by considering the P1 approximation with a two-term scattering kernel in both the fuel and moderator regions, an analytical expression for the disadvantage factor has been derived. The numerical results obtained have been shown to be much better than those calculated by the usual P1 and P3 approximations and comparable with those obtained by some exact methods. 3 tabs.

  3. Non-Markovian dynamics of a qubit due to single-photon scattering in a waveguide

    Science.gov (United States)

    Fang, Yao-Lung L.; Ciccarello, Francesco; Baranger, Harold U.

    2018-04-01

    We investigate the open dynamics of a qubit due to scattering of a single photon in an infinite or semi-infinite waveguide. Through an exact solution of the time-dependent multi-photon scattering problem, we find the qubit's dynamical map. Tools of open quantum systems theory allow us then to show the general features of this map, find the corresponding non-Lindbladian master equation, and assess in a rigorous way its non-Markovian nature. The qubit dynamics has distinctive features that, in particular, do not occur in emission processes. Two fundamental sources of non-Markovianity are present: the finite width of the photon wavepacket and the time delay for propagation between the qubit and the end of the semi-infinite waveguide.

  4. Training Lp norm multiple kernel learning in the primal.

    Science.gov (United States)

    Liang, Zhizheng; Xia, Shixiong; Zhou, Yong; Zhang, Lei

    2013-10-01

    Some multiple kernel learning (MKL) models are usually solved by utilizing the alternating optimization method, where one alternately solves SVMs in the dual and updates kernel weights. Since dual and primal optimization can achieve the same aim, it is valuable to explore how to perform Lp norm MKL in the primal. In this paper, we propose an Lp norm multiple kernel learning algorithm in the primal where we resort to the alternating optimization method: one cycle for solving SVMs in the primal by using the preconditioned conjugate gradient method and the other cycle for learning the kernel weights. It is interesting to note that the kernel weights in our method admit analytical solutions. Most importantly, the proposed method is well suited for the manifold regularization framework in the primal since solving LapSVMs in the primal is much more effective than solving LapSVMs in the dual. In addition, we also carry out theoretical analysis for multiple kernel learning in the primal in terms of the empirical Rademacher complexity. It is found that optimizing the empirical Rademacher complexity may yield a particular type of kernel weights. The experiments on some datasets are carried out to demonstrate the feasibility and effectiveness of the proposed method. Copyright © 2013 Elsevier Ltd. All rights reserved.

  5. Coupling individual kernel-filling processes with source-sink interactions into GREENLAB-Maize.

    Science.gov (United States)

    Ma, Yuntao; Chen, Youjia; Zhu, Jinyu; Meng, Lei; Guo, Yan; Li, Baoguo; Hoogenboom, Gerrit

    2018-02-13

    Failure to account for the variation of kernel growth in a cereal crop simulation model may cause serious deviations in the estimates of crop yield. The goal of this research was to revise the GREENLAB-Maize model to incorporate source- and sink-limited allocation approaches to simulate the dry matter accumulation of individual kernels of an ear (GREENLAB-Maize-Kernel). The model used potential individual kernel growth rates to characterize the individual potential sink demand. The remobilization of non-structural carbohydrates from reserve organs to kernels was also incorporated. Two years of field experiments were conducted to determine the model parameter values and to evaluate the model using two maize hybrids with different plant densities and pollination treatments. Detailed observations were made on the dimensions and dry weights of individual kernels and other above-ground plant organs throughout the seasons. Three basic traits characterizing an individual kernel were compared on simulated and measured individual kernels: (1) final kernel size; (2) kernel growth rate; and (3) duration of kernel filling. Simulations of individual kernel growth closely corresponded to experimental data. The model was able to reproduce the observed dry weight of plant organs well. Then, the source-sink dynamics and the remobilization of carbohydrates for kernel growth were quantified to show that remobilization processes accompanied source-sink dynamics during the kernel-filling process. We conclude that the model may be used to explore options for optimizing plant kernel yield by matching maize management to the environment, taking into account responses at the level of individual kernels. © The Author(s) 2018. Published by Oxford University Press on behalf of the Annals of Botany Company. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  6. Angularly-resolved elastic scatter from single particles collected over a large solid angle and with high resolution

    International Nuclear Information System (INIS)

    Aptowicz, Kevin B; Chang, Richard K

    2005-01-01

    Elastic light scattering from a single non-spherical particle of various morphologies has been measured simultaneously over a large angular range (90 deg. < θ < 165 deg. and 0 deg. < φ < 360 deg.) and with high angular resolution (1024 pixels in θ and 512 pixels in φ). Because the single-shot laser pulse is short (pulse duration of 70 ns), the tumbling and flowing particle can be treated as frozen in space. The large angle two-dimensional angular optical scattering (hereafter referred to as LA TAOS) intensity pattern, I(θ,φ), has been measured for a variety of particle morphologies, such as the following: (1) single polystyrene latex (PSL) sphere; (2) cluster of PSL spheres; (3) single Bacillus subtilis (BG) spore; (4) cluster of BG spores; (5) dried aggregates of bio-aerosols as well as background clutter aerosols. All these measurements were made using the second harmonic of a Nd:YAG laser (0.532 μm). Island structures in the LA TAOS patterns seem to be the prominent feature. Efforts are being made to extract metrics from these islands and compare them to theoretical results based on the T-matrix method

  7. Scattering integral equations and four nucleon problem

    International Nuclear Information System (INIS)

    Narodetskii, I.M.

    1980-01-01

    Existing results from the application of integral equation techniques to four-nucleon bound states and scattering are reviewed. The first numerical calculations of the four-body integral equations were done ten years ago. Yet, it is still widely believed that these equations are too complicated to solve numerically. The purpose of this review is to provide a clear and elementary introduction to the integral equation method and to demonstrate its usefulness in physical applications. The presentation is based on the quasiparticle approach. This permits a simple interpretation of the equations in terms of quasiparticle scattering. The mathematical basis for the quasiparticle approach is the Hilbert-Schmidt method of the Fredholm integral equation theory. The first part of this review contains a detailed discussion of the Hilbert-Schmidt expansion as applied to the 2-particle amplitudes and to the kernel of the four-body equations. The second part contains the discussion of the four-body quasiparticle equations and of the results obtained for bound states and scattering.

  8. Stochastic subset selection for learning with kernel machines.

    Science.gov (United States)

    Rhinelander, Jason; Liu, Xiaoping P

    2012-06-01

    Kernel machines have gained much popularity in applications of machine learning. Support vector machines (SVMs) are a subset of kernel machines and generalize well for classification, regression, and anomaly detection tasks. The training procedure for traditional SVMs involves solving a quadratic programming (QP) problem. The QP problem scales super linearly in computational effort with the number of training samples and is often used for the offline batch processing of data. Kernel machines operate by retaining a subset of observed data during training. The data vectors contained within this subset are referred to as support vectors (SVs). The work presented in this paper introduces a subset selection method for the use of kernel machines in online, changing environments. Our algorithm works by using a stochastic indexing technique when selecting a subset of SVs when computing the kernel expansion. The work described here is novel because it separates the selection of kernel basis functions from the training algorithm used. The subset selection algorithm presented here can be used in conjunction with any online training technique. It is important for online kernel machines to be computationally efficient due to the real-time requirements of online environments. Our algorithm is an important contribution because it scales linearly with the number of training samples and is compatible with current training techniques. Our algorithm outperforms standard techniques in terms of computational efficiency and provides increased recognition accuracy in our experiments. We provide results from experiments using both simulated and real-world data sets to verify our algorithm.

  9. RTOS kernel in portable electrocardiograph

    Science.gov (United States)

    Centeno, C. A.; Voos, J. A.; Riva, G. G.; Zerbini, C.; Gonzalez, E. A.

    2011-12-01

    This paper presents the use of a Real Time Operating System (RTOS) on a portable electrocardiograph based on a microcontroller platform. All medical device digital functions are performed by the microcontroller. The electrocardiograph CPU is based on the 18F4550 microcontroller, in which an uCOS-II RTOS can be embedded. The decision associated with the kernel use is based on its benefits, the license for educational use and its intrinsic time control and peripherals management. The feasibility of its use on the electrocardiograph is evaluated based on the minimum memory requirements due to the kernel structure. The kernel's own tools were used for time estimation and evaluation of resources used by each process. After this feasibility analysis, the migration from cyclic code to a structure based on separate processes or tasks able to synchronize events is used; resulting in an electrocardiograph running on one Central Processing Unit (CPU) based on RTOS.

  10. RTOS kernel in portable electrocardiograph

    International Nuclear Information System (INIS)

    Centeno, C A; Voos, J A; Riva, G G; Zerbini, C; Gonzalez, E A

    2011-01-01

    This paper presents the use of a Real Time Operating System (RTOS) on a portable electrocardiograph based on a microcontroller platform. All medical device digital functions are performed by the microcontroller. The electrocardiograph CPU is based on the 18F4550 microcontroller, in which an uCOS-II RTOS can be embedded. The decision associated with the kernel use is based on its benefits, the license for educational use and its intrinsic time control and peripherals management. The feasibility of its use on the electrocardiograph is evaluated based on the minimum memory requirements due to the kernel structure. The kernel's own tools were used for time estimation and evaluation of resources used by each process. After this feasibility analysis, the migration from cyclic code to a structure based on separate processes or tasks able to synchronize events is used; resulting in an electrocardiograph running on one Central Processing Unit (CPU) based on RTOS.

  11. RKRD: Runtime Kernel Rootkit Detection

    Science.gov (United States)

    Grover, Satyajit; Khosravi, Hormuzd; Kolar, Divya; Moffat, Samuel; Kounavis, Michael E.

    In this paper we address the problem of protecting computer systems against stealth malware. The problem is important because the number of known types of stealth malware increases exponentially. Existing approaches have some advantages for ensuring system integrity but sophisticated techniques utilized by stealthy malware can thwart them. We propose Runtime Kernel Rootkit Detection (RKRD), a hardware-based, event-driven, secure and inclusionary approach to kernel integrity that addresses some of the limitations of the state of the art. Our solution is based on the principles of using virtualization hardware for isolation, verifying signatures coming from trusted code as opposed to malware for scalability and performing system checks driven by events. Our RKRD implementation is guided by our goals of strong isolation, no modifications to target guest OS kernels, easy deployment, minimal infrastructure impact, and minimal performance overhead. We developed a system prototype and conducted a number of experiments which show that the performance impact of our solution is negligible.

  12. Denoising by semi-supervised kernel PCA preimaging

    DEFF Research Database (Denmark)

    Hansen, Toke Jansen; Abrahamsen, Trine Julie; Hansen, Lars Kai

    2014-01-01

    Kernel Principal Component Analysis (PCA) has proven a powerful tool for nonlinear feature extraction, and is often applied as a pre-processing step for classification algorithms. In denoising applications Kernel PCA provides the basis for dimensionality reduction, prior to the so-called pre-imag...
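
    A hedged sketch of the basic kernel PCA denoising pipeline (not the semi-supervised variant proposed in the record), using scikit-learn's KernelPCA with its built-in approximate pre-image map; the toy data and all parameter values are arbitrary.

      import numpy as np
      from sklearn.decomposition import KernelPCA

      rng = np.random.default_rng(0)

      # Noisy samples of a circle embedded in the plane.
      angles = rng.uniform(0, 2 * np.pi, size=300)
      clean = np.c_[np.cos(angles), np.sin(angles)]
      noisy = clean + rng.normal(0.0, 0.15, size=clean.shape)

      # fit_inverse_transform=True learns a ridge-regression pre-image map back to input space.
      kpca = KernelPCA(n_components=4, kernel="rbf", gamma=2.0,
                       fit_inverse_transform=True, alpha=0.1)
      codes = kpca.fit_transform(noisy)                 # project onto leading kernel components
      denoised = kpca.inverse_transform(codes)          # approximate pre-images

      print("noisy    RMSE:", np.sqrt(np.mean((noisy - clean) ** 2)))
      print("denoised RMSE:", np.sqrt(np.mean((denoised - clean) ** 2)))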

  13. Sentiment classification with interpolated information diffusion kernels

    NARCIS (Netherlands)

    Raaijmakers, S.

    2007-01-01

    Information diffusion kernels - similarity metrics in non-Euclidean information spaces - have been found to produce state of the art results for document classification. In this paper, we present a novel approach to global sentiment classification using these kernels. We carry out a large array of

  14. SCATLAW: a code of scattering law and cross sections calculation for liquids and solids

    International Nuclear Information System (INIS)

    Padureanu, I.; Rapeanu, S.; Rotarascu, G.; Craciun, C.

    1978-11-01

    A code for the calculation of the scattering law S(Q,ω), differential and double differential cross sections and scattering kernels in the energy range E (0-683 meV) and wave-vector transfer range Q (0-40 Å^-1) is presented. The code can be used both for solids and for liquids which are coherent or incoherent scatterers. For liquids the calculations are based on the most recent theoretical models involving the correlation functions and the generalized field approach. The phonon expansion model and the free gas model are also analysed in terms of frequency spectra obtained from inelastic neutron scattering using the time-of-flight technique. Several results on liquid sodium at T = 233 deg C and on liquid bismuth at T = 286 deg C and T = 402 deg C are presented. (author)

  15. Observations and calculations of two-dimensional angular optical scattering (TAOS) patterns of a single levitated cluster of two and four microspheres

    International Nuclear Information System (INIS)

    Krieger, U.K.; Meier, P.

    2011-01-01

    We use single bi-sphere particles levitated in an electrodynamic balance to record two-dimensional angular scattering patterns at different orientations of the coordinate system of the aggregate relative to the incident laser beam. Due to Brownian motion the particle covers the whole set of possible orientations with time, which allows patterns with high symmetry to be selected for analysis. These are qualitatively compared to numerical calculations. A small cluster of four spheres shows complex scattering patterns; comparison with computations suggests a low compactness for these clusters. An experimental procedure is proposed for studying restructuring effects occurring in mixed particles upon evaporation. - Research highlights: → Single levitated bi-sphere particle. → Two-dimensional angular scattering pattern. → Comparison of experiment with computations.

  16. Common spatial pattern combined with kernel linear discriminate and generalized radial basis function for motor imagery-based brain computer interface applications

    Science.gov (United States)

    Hekmatmanesh, Amin; Jamaloo, Fatemeh; Wu, Huapeng; Handroos, Heikki; Kilpeläinen, Asko

    2018-04-01

    Brain Computer Interface (BCI) can be a challenge for the development of robotic, prosthesis and human-controlled systems. This work focuses on the implementation of a common spatial pattern (CSP) based algorithm to detect event-related desynchronization patterns. Building on well-known previous work in this area, features are extracted by the filter bank with common spatial pattern (FBCSP) method, and then weighted by a sensitive learning vector quantization (SLVQ) algorithm. In the current work, application of the radial basis function (RBF) as the mapping kernel of kernel linear discriminant analysis (KLDA) on the weighted features allows the data to be transferred into a higher dimension for better discrimination. Afterwards, a support vector machine (SVM) with a generalized radial basis function (GRBF) kernel is employed to improve the efficiency and robustness of the classification. On average, 89.60% accuracy and 74.19% robustness are achieved. The BCI Competition III, IVa data set is used to evaluate the algorithm for detecting right hand and foot imagery movement patterns. Results show that the combination of KLDA with the SVM-GRBF classifier yields improvements of 8.9% and 14.19% in accuracy and robustness, respectively. For all subjects, it is concluded that mapping the CSP features into a higher dimension by RBF and utilizing GRBF as the SVM kernel improve the accuracy and reliability of the proposed method.
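
    A hedged, much-simplified sketch of the front end of such a pipeline: CSP spatial filters obtained from two-class covariance matrices via a generalized eigenproblem, log-variance features, and an RBF-kernel SVM standing in for the KLDA/GRBF stages of the record. The data are synthetic and every setting is illustrative.

      import numpy as np
      from scipy.linalg import eigh
      from sklearn.svm import SVC

      def csp_filters(epochs, labels, n_pairs=2):
          """CSP filters from two-class trials; epochs has shape (n_trials, n_ch, n_samples)."""
          covs = {c: np.mean([e @ e.T / np.trace(e @ e.T)
                              for e in epochs[labels == c]], axis=0) for c in (0, 1)}
          # generalized eigenproblem C0 w = lambda (C0 + C1) w; keep the extreme eigenvectors
          vals, vecs = eigh(covs[0], covs[0] + covs[1])
          order = np.argsort(vals)
          picks = np.r_[order[:n_pairs], order[-n_pairs:]]
          return vecs[:, picks].T                            # (2*n_pairs, n_ch)

      def log_var_features(epochs, W):
          proj = np.einsum('fc,tcs->tfs', W, epochs)         # spatially filter every trial
          var = proj.var(axis=2)
          return np.log(var / var.sum(axis=1, keepdims=True))

      # Synthetic two-class "EEG": class 1 carries extra power on channel 0.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(120, 8, 256))
      y = rng.integers(0, 2, size=120)
      X[y == 1, 0, :] *= 3.0

      W = csp_filters(X[:80], y[:80])
      clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(log_var_features(X[:80], W), y[:80])
      print("held-out accuracy:", clf.score(log_var_features(X[80:], W), y[80:]))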

  17. Linear and kernel methods for multivariate change detection

    DEFF Research Database (Denmark)

    Canty, Morton J.; Nielsen, Allan Aasbjerg

    2012-01-01

    ... as well as maximum autocorrelation factor (MAF) and minimum noise fraction (MNF) analyses of IR-MAD images, both linear and kernel-based (nonlinear), may further enhance change signals relative to no-change background. IDL (Interactive Data Language) implementations of IR-MAD, automatic radiometric normalization, and kernel PCA/MAF/MNF transformations are presented that function as transparent and fully integrated extensions of the ENVI remote sensing image analysis environment. The train/test approach to kernel PCA is evaluated against a Hebbian learning procedure. Matlab code is also available that allows fast data exploration and experimentation with smaller datasets. New, multiresolution versions of IR-MAD that accelerate convergence and that further reduce no-change background noise are introduced. Computationally expensive matrix diagonalization and kernel image projections are programmed...

  18. Panel data specifications in nonparametric kernel regression

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard; Henningsen, Arne

    parametric panel data estimators to analyse the production technology of Polish crop farms. The results of our nonparametric kernel regressions generally differ from the estimates of the parametric models but they only slightly depend on the choice of the kernel functions. Based on economic reasoning, we...

  19. Scuba: scalable kernel-based gene prioritization.

    Science.gov (United States)

    Zampieri, Guido; Tran, Dinh Van; Donini, Michele; Navarin, Nicolò; Aiolli, Fabio; Sperduti, Alessandro; Valle, Giorgio

    2018-01-25

    The uncovering of genes linked to human diseases is a pressing challenge in molecular biology and precision medicine. This task is often hindered by the large number of candidate genes and by the heterogeneity of the available information. Computational methods for the prioritization of candidate genes can help to cope with these problems. In particular, kernel-based methods are a powerful resource for the integration of heterogeneous biological knowledge, however, their practical implementation is often precluded by their limited scalability. We propose Scuba, a scalable kernel-based method for gene prioritization. It implements a novel multiple kernel learning approach, based on a semi-supervised perspective and on the optimization of the margin distribution. Scuba is optimized to cope with strongly unbalanced settings where known disease genes are few and large scale predictions are required. Importantly, it is able to efficiently deal both with a large amount of candidate genes and with an arbitrary number of data sources. As a direct consequence of scalability, Scuba integrates also a new efficient strategy to select optimal kernel parameters for each data source. We performed cross-validation experiments and simulated a realistic usage setting, showing that Scuba outperforms a wide range of state-of-the-art methods. Scuba achieves state-of-the-art performance and has enhanced scalability compared to existing kernel-based approaches for genomic data. This method can be useful to prioritize candidate genes, particularly when their number is large or when input data is highly heterogeneous. The code is freely available at https://github.com/gzampieri/Scuba .

  20. MULTITASKER, Multitasking Kernel for C and FORTRAN Under UNIX

    International Nuclear Information System (INIS)

    Brooks, E.D. III

    1988-01-01

    1 - Description of program or function: MULTITASKER implements a multitasking kernel for the C and FORTRAN programming languages that runs under UNIX. The kernel provides a multitasking environment which serves two purposes. The first is to provide an efficient portable environment for the development, debugging, and execution of production multiprocessor programs. The second is to provide a means of evaluating the performance of a multitasking program on model multiprocessor hardware. The performance evaluation features require no changes in the application program source and are implemented as a set of compile- and run-time options in the kernel. 2 - Method of solution: The FORTRAN interface to the kernel is identical in function to the CRI multitasking package provided for the Cray XMP. This provides a migration path to high speed (but small N) multiprocessors once the application has been coded and debugged. With use of the UNIX m4 macro preprocessor, source compatibility can be achieved between the UNIX code development system and the target Cray multiprocessor. The kernel also provides a means of evaluating a program's performance on model multiprocessors. Execution traces may be obtained which allow the user to determine kernel overhead, memory conflicts between various tasks, and the average concurrency being exploited. The kernel may also be made to switch tasks every cpu instruction with a random execution ordering. This allows the user to look for unprotected critical regions in the program. These features, implemented as a set of compile- and run-time options, cause extra execution overhead which is not present in the standard production version of the kernel

  1. Kernel Methods for Mining Instance Data in Ontologies

    Science.gov (United States)

    Bloehdorn, Stephan; Sure, York

    The amount of ontologies and meta data available on the Web is constantly growing. The successful application of machine learning techniques for learning of ontologies from textual data, i.e. mining for the Semantic Web, contributes to this trend. However, no principled approaches exist so far for mining from the Semantic Web. We investigate how machine learning algorithms can be made amenable for directly taking advantage of the rich knowledge expressed in ontologies and associated instance data. Kernel methods have been successfully employed in various learning tasks and provide a clean framework for interfacing between non-vectorial data and machine learning algorithms. In this spirit, we express the problem of mining instances in ontologies as the problem of defining valid corresponding kernels. We present a principled framework for designing such kernels by means of decomposing the kernel computation into specialized kernels for selected characteristics of an ontology which can be flexibly assembled and tuned. Initial experiments on real world Semantic Web data show promising results and demonstrate the usefulness of our approach.

  2. Ridge Regression and Other Kernels for Genomic Selection with R Package rrBLUP

    Directory of Open Access Journals (Sweden)

    Jeffrey B. Endelman

    2011-11-01

    Full Text Available Many important traits in plant breeding are polygenic and therefore recalcitrant to traditional marker-assisted selection. Genomic selection addresses this complexity by including all markers in the prediction model. A key method for the genomic prediction of breeding values is ridge regression (RR), which is equivalent to best linear unbiased prediction (BLUP) when the genetic covariance between lines is proportional to their similarity in genotype space. This additive model can be broadened to include epistatic effects by using other kernels, such as the Gaussian, which represent inner products in a complex feature space. To facilitate the use of RR and nonadditive kernels in plant breeding, a new software package for R called rrBLUP has been developed. At its core is a fast maximum-likelihood algorithm for mixed models with a single variance component besides the residual error, which allows for efficient prediction with unreplicated training data. Use of the rrBLUP software is demonstrated through several examples, including the identification of optimal crosses based on superior progeny value. In cross-validation tests, the prediction accuracy with nonadditive kernels was significantly higher than RR for wheat (Triticum aestivum L.) grain yield but equivalent for several maize (Zea mays L.) traits.
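
    The contrast between the additive (linear) kernel and a Gaussian kernel can be sketched with scikit-learn's KernelRidge as a rough stand-in for rrBLUP's REML-based mixed-model solver. Marker data, phenotypes and hyperparameters below are synthetic and purely illustrative; for real analyses the rrBLUP package itself should be used.

      # Linear (RR/GBLUP-like) kernel versus Gaussian kernel for genomic prediction,
      # using kernel ridge regression as a simple stand-in for the mixed model.
      import numpy as np
      from sklearn.kernel_ridge import KernelRidge
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(4)
      M = rng.integers(-1, 2, size=(300, 500)).astype(float)   # markers coded -1/0/1
      beta = rng.normal(scale=0.1, size=500)
      y = M @ beta + rng.normal(scale=1.0, size=300)           # simulated phenotypes

      additive = KernelRidge(kernel="linear", alpha=1.0)
      gaussian = KernelRidge(kernel="rbf", gamma=1.0 / 500, alpha=1.0)
      for name, model in [("additive", additive), ("gaussian", gaussian)]:
          r2 = cross_val_score(model, M, y, cv=5, scoring="r2").mean()
          print(name, round(r2, 3))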

  3. Performance and Emission of VCR-CI Engine with palm kernel and eucalyptus blends

    Directory of Open Access Journals (Sweden)

    Srinivas kommana

    2016-09-01

    Full Text Available This study aims at the complete replacement of conventional diesel fuel by biodiesel. In order to achieve that, a palm kernel oil and eucalyptus oil blend has been chosen. Eucalyptus oil was blended with methyl ester of palm kernel oil in 5%, 10% and 15% by volume. Tests were conducted with diesel fuel and the blends on a 4-stroke VCR diesel engine for comparative analysis at 220 bar injection pressure and a 19:1 compression ratio. All the test fuels were used in a computerized 4-stroke single-cylinder variable compression ratio engine at five different loads (0, 6, 12, 18 and 24 N m). The present investigation shows improved combustion and reduced emissions for the PKO85% + EuO15% blend when compared to diesel at full-load conditions.

  4. The integral first collision kernel method for gamma-ray skyshine analysis[Skyshine; Gamma-ray; First collision kernel; Monte Carlo calculation

    Energy Technology Data Exchange (ETDEWEB)

    Sheu, R.-D.; Chui, C.-S.; Jiang, S.-H. E-mail: shjiang@mx.nthu.edu.tw

    2003-12-01

    A simplified method, based on the integral of the first collision kernel, is presented for performing gamma-ray skyshine calculations for collimated sources. The first collision kernels were calculated in air for a reference air density by use of the EGS4 Monte Carlo code. These kernels can be applied to other air densities by applying density corrections. The integral first collision kernel (IFCK) method has been used to calculate two of the ANSI/ANS skyshine benchmark problems and the results were compared with a number of other commonly used codes. Our results were generally in good agreement with the others, but required only a small fraction of the computation time needed by the Monte Carlo calculations. The scheme of the IFCK method for dealing with a variety of source collimation geometries is also presented in this study.
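
    The sketch below is a deliberately crude numerical version of the idea: the collimated beam is discretized into scattering points in air, the uncollided fluence reaching each point is attenuated exponentially, and a first-collision kernel carries the scattered contribution to the detector, with a simple density rescaling. The attenuation coefficient, densities, geometry and the kernel itself are placeholders, not the EGS4-generated kernels of the paper.

      # Toy integral-first-collision-kernel estimate along a vertically collimated beam.
      import numpy as np

      MU_REF = 8.7e-3                 # 1/m at reference air density (placeholder value)
      RHO_REF, RHO = 1.205, 1.10      # reference and actual air densities, kg/m^3
      mu = MU_REF * RHO / RHO_REF     # density-corrected attenuation coefficient

      def first_collision_kernel(r):
          # placeholder kernel: scattered dose per unit fluence at distance r (m)
          return np.exp(-mu * r) / (4.0 * np.pi * r ** 2)

      source = np.array([0.0, 0.0, 0.0])
      detector = np.array([100.0, 0.0, 1.0])
      heights = np.linspace(1.0, 300.0, 300)    # scatter points along the beam axis
      dz = heights[1] - heights[0]

      dose = 0.0
      for h in heights:
          p = np.array([0.0, 0.0, h])
          r1 = np.linalg.norm(p - source)       # source -> scatter point
          r2 = np.linalg.norm(detector - p)     # scatter point -> detector
          fluence = np.exp(-mu * r1) / (4.0 * np.pi * r1 ** 2)
          dose += fluence * first_collision_kernel(r2) * dz
      print(dose)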

  5. BrachyTPS -Interactive point kernel code package for brachytherapy treatment planning of gynaecological cancers

    International Nuclear Information System (INIS)

    Thilagam, L.; Subbaiah, K.V.

    2008-01-01

    Brachytherapy treatment planning systems (TPS) are always recommended to account for the effect of tissue, applicator and shielding material heterogeneities existing in intracavitary brachytherapy (ICBT) applicators. Most commercially available brachytherapy TPS software packages estimate the absorbed dose at a point by considering only the contributions of the individual sources and the source distribution, neglecting the dose perturbations arising from the applicator design and construction. The doses they estimate are therefore not very accurate under realistic clinical conditions. In this regard, an interactive point kernel code (BrachyTPS) has been developed to perform independent dose calculations that take the effect of these heterogeneities into account using the two-region build-up factors proposed by Kalos. As primary input data, the code takes the patient's planning data, including the source specifications, dwell positions and dwell times, and it computes the doses at reference points by dose point kernel formalisms, with multi-layer shield build-up factors accounting for the contributions from scattered radiation. In addition to performing dose distribution calculations, this code package is capable of overlaying isodose distribution curves on the patient anatomy images. The primary aim of this study is to validate the developed point kernel code, integrated with treatment planning systems, against the other tools available in the market. In the present work, three brachytherapy applicators commonly used in the treatment of uterine cervical carcinoma, the Board of Radiation and Isotope Technology (BRIT) low dose rate (LDR) applicator, the Fletcher Green type LDR applicator and the Fletcher Williamson high dose rate (HDR) applicator, were studied to test the accuracy of the software
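
    As a point of reference for what a point-kernel dose engine does, the sketch below sums the contribution of each dwell position using inverse-square geometry, exponential attenuation and a crude linear build-up factor. It is not BrachyTPS and does not implement the Kalos two-region build-up formalism; the attenuation coefficient, dose-rate constant and dwell data are placeholders.

      # Minimal point-kernel dose sum over dwell positions (illustrative only).
      import numpy as np

      MU_WATER = 0.11      # 1/cm, effective attenuation coefficient (placeholder)
      GAMMA = 1.0          # dose-rate constant in arbitrary units (placeholder)

      def buildup(mu_r):
          # very simple linear build-up approximation B(mu*r) = 1 + mu*r
          return 1.0 + mu_r

      def dose_at(point, dwell_positions, dwell_times):
          d = 0.0
          for pos, t in zip(dwell_positions, dwell_times):
              r = np.linalg.norm(np.asarray(point) - np.asarray(pos))   # cm
              mu_r = MU_WATER * r
              d += GAMMA * t * np.exp(-mu_r) * buildup(mu_r) / (4.0 * np.pi * r ** 2)
          return d

      dwells = [(0.0, 0.0, z) for z in np.arange(-1.0, 1.01, 0.5)]   # cm
      times = [10.0] * len(dwells)                                   # s
      print(dose_at((2.0, 0.0, 0.0), dwells, times))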

  6. Lattice and Molecular Vibrations in Single Crystal I2 at 77 K by Inelastic Neutron Scattering

    DEFF Research Database (Denmark)

    Smith, H. G.; Nielsen, Mourits; Clark, C. B.

    1975-01-01

    Phonon dispersion curves of single crystal iodine at 77 K have been measured by one-phonon coherent inelastic neutron scattering techniques. The data are analysed in terms of two Buckingham-six intermolecular potentials; one to represent the shortest intermolecular interaction (3.5 Å) and the other...

  7. The single-collision thermalization approximation for application to cold neutron moderation problems

    International Nuclear Information System (INIS)

    Ritenour, R.L.

    1989-01-01

    The single collision thermalization (SCT) approximation models the thermalization process by assuming that neutrons attain a thermalized distribution with only a single collision within the moderating material, independent of the neutron's incident energy. The physical intuition on which this approximation is based is that the salient properties of neutron thermalization are accounted for in the first collision, and the effects of subsequent collisions tend to average out statistically. The independence of the neutron incident and outscattering energy leads to variable separability in the scattering kernel and, thus, significant simplification of the neutron thermalization problem. The approximation also addresses detailed balance and neutron conservation concerns. All of the tests performed on the SCT approximation yielded excellent results. The significance of the SCT approximation is that it greatly simplifies thermalization calculations for CNS design. Preliminary investigations with cases involving strong absorbers also indicate that this approximation may have broader applicability, as in the upgrading of the thermalization codes
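
    The separability mentioned above can be made concrete with a small sketch: under the SCT assumption the group-to-group scattering matrix factorizes into a per-group scattering cross section times a single thermalized outgoing spectrum that does not depend on the incident group. The group structure, cross sections and Maxwellian shape below are illustrative numbers only.

      # Separable (SCT-style) scattering kernel: sigma_s(E') * M(E).
      import numpy as np

      n_groups = 8
      E = np.logspace(-4, 0, n_groups)            # group energies in eV (illustrative)
      sigma_s = np.full(n_groups, 4.0)            # scattering cross section per group

      kT = 0.0253                                 # thermal energy at ~293 K, eV
      maxwellian = E * np.exp(-E / kT)            # outgoing thermal spectrum shape
      maxwellian /= maxwellian.sum()              # normalize to conserve neutrons

      kernel = np.outer(sigma_s, maxwellian)      # rows: incident group, cols: outgoing
      print(np.allclose(kernel.sum(axis=1), sigma_s))   # neutron conservation check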

  8. Landslide Susceptibility Mapping Based on Particle Swarm Optimization of Multiple Kernel Relevance Vector Machines: Case of a Low Hill Area in Sichuan Province, China

    Directory of Open Access Journals (Sweden)

    Yongliang Lin

    2016-10-01

    Full Text Available In this paper, we propose a multiple kernel relevance vector machine (RVM) method based on the adaptive cloud particle swarm optimization (PSO) algorithm to map landslide susceptibility in the low hill area of Sichuan Province, China. In the multi-kernel structure, the kernel selection problem can be solved by adjusting the kernel weight, which determines the single kernel contribution of the final kernel mapping. The weights and parameters of the multi-kernel function were optimized using the PSO algorithm. In addition, the convergence speed of the PSO algorithm was increased using cloud theory. To ensure the stability of the prediction model, the result of a five-fold cross-validation method was used as the fitness of the PSO algorithm. To verify the results, receiver operating characteristic curves (ROC) and landslide dot density (LDD) were used. The results show that the model that used a heterogeneous kernel (a combination of two different kernel functions) had a larger area under the ROC curve (0.7616) and a lower prediction error ratio (0.28%) than did the other types of kernel models employed in this study. In addition, both the sum of two high susceptibility zone LDDs (6.71/100 km2) and the sum of two low susceptibility zone LDDs (0.82/100 km2) demonstrated that the landslide susceptibility map based on the heterogeneous kernel model was closest to the historical landslide distribution. In conclusion, the results obtained in this study can provide very useful information for disaster prevention and land-use planning in the study area.

  9. A kernel adaptive algorithm for quaternion-valued inputs.

    Science.gov (United States)

    Paul, Thomas K; Ogunfunmi, Tokunbo

    2015-10-01

    The use of quaternion data can provide benefit in applications like robotics and image recognition, and particularly for performing transforms in 3-D space. Here, we describe a kernel adaptive algorithm for quaternions. A least mean square (LMS)-based method was used, resulting in the derivation of the quaternion kernel LMS (Quat-KLMS) algorithm. Deriving this algorithm required describing the idea of a quaternion reproducing kernel Hilbert space (RKHS), as well as kernel functions suitable with quaternions. A modified HR calculus for Hilbert spaces was used to find the gradient of cost functions defined on a quaternion RKHS. In addition, the use of widely linear (or augmented) filtering is proposed to improve performance. The benefit of the Quat-KLMS and widely linear forms in learning nonlinear transformations of quaternion data is illustrated with simulations.
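
    To show the general structure of KLMS-type algorithms (a growing dictionary of kernel centers with error-driven weights), the sketch below implements a plain real-valued kernel LMS with a Gaussian kernel. The quaternion algebra, HR calculus and widely linear filtering of Quat-KLMS are not reproduced; data, step size and kernel width are arbitrary.

      # Plain real-valued kernel LMS (illustrative analogue of the KLMS family).
      import numpy as np

      def gaussian(x, c, sigma=1.0):
          return np.exp(-np.sum((x - c) ** 2) / (2 * sigma ** 2))

      def klms(X, d, eta=0.2, sigma=1.0):
          centers, alphas, errors = [], [], []
          for x, target in zip(X, d):
              y = sum(a * gaussian(x, c, sigma) for a, c in zip(alphas, centers))
              e = target - y
              centers.append(x)        # every input becomes a kernel center
              alphas.append(eta * e)   # with a weight proportional to the error
              errors.append(e)
          return np.array(errors)

      rng = np.random.default_rng(5)
      X = rng.uniform(-1, 1, size=(300, 2))
      d = np.sin(3 * X[:, 0]) * X[:, 1] + 0.05 * rng.normal(size=300)
      err = klms(X, d)
      print(np.abs(err[:20]).mean(), np.abs(err[-20:]).mean())   # later errors are typically smaller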

  10. Improving the Bandwidth Selection in Kernel Equating

    Science.gov (United States)

    Andersson, Björn; von Davier, Alina A.

    2014-01-01

    We investigate the current bandwidth selection methods in kernel equating and propose a method based on Silverman's rule of thumb for selecting the bandwidth parameters. In kernel equating, the bandwidth parameters have previously been obtained by minimizing a penalty function. This minimization process has been criticized by practitioners…
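
    Silverman's rule of thumb referred to above has a simple closed form for a Gaussian kernel, h = 0.9 min(s, IQR/1.34) n^(-1/5), where s is the sample standard deviation. The sketch below computes it for synthetic scores; the penalty-based procedure used in kernel equating itself is not reproduced.

      # Silverman's rule-of-thumb bandwidth for a Gaussian kernel.
      import numpy as np

      def silverman_bandwidth(x):
          x = np.asarray(x, dtype=float)
          n = x.size
          iqr = np.subtract(*np.percentile(x, [75, 25]))   # interquartile range
          sigma = min(x.std(ddof=1), iqr / 1.34)
          return 0.9 * sigma * n ** (-0.2)

      scores = np.random.default_rng(6).normal(loc=50, scale=10, size=500)
      print(silverman_bandwidth(scores))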

  11. Scattered-field FDTD and PSTD algorithms with CPML absorbing boundary conditions for light scattering by aerosols

    International Nuclear Information System (INIS)

    Sun, Wenbo; Videen, Gorden; Fu, Qiang; Hu, Yongxiang

    2013-01-01

    As fundamental parameters for polarized-radiative-transfer calculations, the single-scattering phase matrix of irregularly shaped aerosol particles must be accurately modeled. In this study, a scattered-field finite-difference time-domain (FDTD) model and a scattered-field pseudo-spectral time-domain (PSTD) model are developed for light scattering by arbitrarily shaped dielectric aerosols. The convolutional perfectly matched layer (CPML) absorbing boundary condition (ABC) is used to truncate the computational domain. It is found that the PSTD method is generally more accurate than the FDTD in calculation of the single-scattering properties given similar spatial cell sizes. Since the PSTD can use a coarser grid for large particles, it can lower the memory requirement in the calculation. However, the Fourier transformations in the PSTD need significantly more CPU time than simple subtractions in the FDTD, and the fast Fourier transform requires a power of 2 elements in calculations, thus using the PSTD could not significantly reduce the CPU time required in the numerical modeling. Furthermore, because the scattered-field FDTD/PSTD equations include incident-wave source terms, the FDTD/PSTD model allows for the inclusion of an arbitrarily incident wave source, including a plane parallel wave or a Gaussian beam like those emitted by lasers usually used in laboratory particle characterizations, etc. The scattered-field FDTD and PSTD light-scattering models can be used to calculate single-scattering properties of arbitrarily shaped aerosol particles over broad size and wavelength ranges. -- Highlights: • Scattered-field FDTD and PSTD models are developed for light scattering by aerosols. • Convolutional perfectly matched layer absorbing boundary condition is used. • PSTD is generally more accurate than FDTD in calculating single-scattering properties. • Using same spatial resolution, PSTD requires much larger CPU time than FDTD

  12. A new method by steering kernel-based Richardson–Lucy algorithm for neutron imaging restoration

    International Nuclear Information System (INIS)

    Qiao, Shuang; Wang, Qiao; Sun, Jia-ning; Huang, Ji-peng

    2014-01-01

    Motivated by industrial applications, neutron radiography has become a powerful tool for non-destructive investigation. However, as a combined effect of limited neutron flux, beam collimation, the limited spatial resolution of the detector, scattering and other factors, images made with neutrons are severely degraded by blur and noise. To deal with this, we present a novel restoration method that integrates steering kernel regression into the Richardson–Lucy approach and is capable of suppressing noise while efficiently restoring details of the blurred imaging result. Experimental results show that compared with the other methods, the proposed method can improve the restoration quality both visually and quantitatively
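
    For orientation, the sketch below implements the standard Richardson–Lucy iteration with a fixed Gaussian point-spread function, i.e. the baseline that the steering-kernel variant builds on; the steering kernel regression step itself is not reproduced, and the PSF and image are synthetic.

      # Standard Richardson-Lucy deconvolution with a fixed Gaussian PSF.
      import numpy as np
      from scipy.ndimage import convolve

      def richardson_lucy(blurred, psf, n_iter=30, eps=1e-12):
          estimate = np.full_like(blurred, blurred.mean())
          psf_mirror = psf[::-1, ::-1]
          for _ in range(n_iter):
              denom = convolve(estimate, psf, mode="reflect")
              ratio = blurred / np.maximum(denom, eps)
              estimate *= convolve(ratio, psf_mirror, mode="reflect")
          return estimate

      xx, yy = np.mgrid[-3:4, -3:4]
      psf = np.exp(-(xx ** 2 + yy ** 2) / 2.0)
      psf /= psf.sum()
      img = np.zeros((64, 64))
      img[20:40, 25:35] = 1.0
      blurred = convolve(img, psf, mode="reflect")
      restored = richardson_lucy(blurred, psf)
      print(float(np.abs(restored - img).mean()))   # residual error after restoration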

  13. Calculating the reduced scattering coefficient of turbid media from a single optical reflectance signal

    Science.gov (United States)

    Johns, Maureen; Liu, Hanli

    2003-07-01

    When light interacts with tissue, it can be absorbed, scattered or reflected. Such quantitative information can be used to characterize the optical properties of tissue, differentiate tissue types in vivo, and identify normal versus diseased tissue. The purpose of this research is to develop an algorithm that determines the reduced scattering coefficient (μs′) of tissues from a single optical reflectance spectrum with a small source-detector separation. The basic relationship between μs′ and optical reflectance was developed using Monte Carlo simulations. This produced an analytical equation containing μs′ as a function of reflectance. To experimentally validate this relationship, a 1.3-mm diameter fiber optic probe containing two 400-micron diameter fibers was used to deliver light to and collect light from Intralipid solutions of various concentrations. Simultaneous measurements from optical reflectance and an ISS oximeter were performed to validate the calculated μs′ values determined by the reflectance measurement against the 'gold standard' ISS readings. The calculated μs′ values deviate from the expected values by approximately ±5% with Intralipid concentrations between 0.5 - 2.5%. The scattering properties within this concentration range are similar to those of in vivo tissues. Additional calculations are performed to determine the scattering properties of rat brain tissues and to discuss accuracy of the algorithm for measured samples with a broad range of the absorption coefficient (μa).

  14. Online learning control using adaptive critic designs with sparse kernel machines.

    Science.gov (United States)

    Xu, Xin; Hou, Zhongsheng; Lian, Chuanqiang; He, Haibo

    2013-05-01

    In the past decade, adaptive critic designs (ACDs), including heuristic dynamic programming (HDP), dual heuristic programming (DHP), and their action-dependent ones, have been widely studied to realize online learning control of dynamical systems. However, because neural networks with manually designed features are commonly used to deal with continuous state and action spaces, the generalization capability and learning efficiency of previous ACDs still need to be improved. In this paper, a novel framework of ACDs with sparse kernel machines is presented by integrating kernel methods into the critic of ACDs. To improve the generalization capability as well as the computational efficiency of kernel machines, a sparsification method based on the approximately linear dependence analysis is used. Using the sparse kernel machines, two kernel-based ACD algorithms, that is, kernel HDP (KHDP) and kernel DHP (KDHP), are proposed and their performance is analyzed both theoretically and empirically. Because of the representation learning and generalization capability of sparse kernel machines, KHDP and KDHP can obtain much better performance than previous HDP and DHP with manually designed neural networks. Simulation and experimental results of two nonlinear control problems, that is, a continuous-action inverted pendulum problem and a ball and plate control problem, demonstrate the effectiveness of the proposed kernel ACD methods.

  15. Wheat kernel dimensions: how do they contribute to kernel weight at ...

    Indian Academy of Sciences (India)

    2011-12-02

    Dec 2, 2011 ... yield components, is greatly influenced by kernel dimensions. (KD), such as ..... six linkage gaps, and it covered 3010.70 cM of the whole genome with an ...... Ersoz E. et al. 2009 The Genetic architecture of maize flowering.

  16. A multi-label learning based kernel automatic recommendation method for support vector machine.

    Science.gov (United States)

    Zhang, Xueying; Song, Qinbao

    2015-01-01

    Choosing an appropriate kernel is very important and critical when classifying a new problem with Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function, but less to kernel selection. Furthermore, most current kernel selection methods focus on seeking a best kernel with the highest classification accuracy via cross-validation; they are time-consuming and ignore the differences among the number of support vectors and the CPU time of SVM with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select those appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on the data characteristics. For each data set, the meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge data base with the multi-label classification method. Finally, the appropriate kernel functions are recommended to a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with the existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance.

  17. Using the Intel Math Kernel Library on Peregrine

    Science.gov (United States)

    Learn how to use the Intel Math Kernel Library (MKL) with Peregrine system software. Core math functions in MKL include BLAS, LAPACK, ScaLAPACK, sparse solvers, and fast Fourier transforms.

  18. Scattering by two spheres: Theory and experiment

    DEFF Research Database (Denmark)

    Bjørnø, Irina; Jensen, Leif Bjørnø

    1998-01-01

    of suspended sediments. The scattering properties of single regular-shaped particles have been studied in depth by several authors in the past. However, single particle scattering cannot explain all features of scattering by suspended sediment. When the concentration of particles exceeds a certain limit...... on three issues: (1) to develop a simplified theory for scattering by two elastical spheres; (2) to measure the scattering by two spheres in a water tank, and (3) to compare the theoretical/numerical results with the measured data. A number of factors influencing multiple scattering, including...

  19. Protein fold recognition using geometric kernel data fusion.

    Science.gov (United States)

    Zakeri, Pooya; Jeuris, Ben; Vandebril, Raf; Moreau, Yves

    2014-07-01

    Various approaches based on features extracted from protein sequences and often machine learning methods have been used in the prediction of protein folds. Finding an efficient technique for integrating these different protein features has received increasing attention. In particular, kernel methods are an interesting class of techniques for integrating heterogeneous data. Various methods have been proposed to fuse multiple kernels. Most techniques for multiple kernel learning focus on learning a convex linear combination of base kernels. In addition to the limitation of linear combinations, working with such approaches could cause a loss of potentially useful information. We design several techniques to combine kernel matrices by taking more involved, geometry inspired means of these matrices instead of convex linear combinations. We consider various sequence-based protein features including information extracted directly from position-specific scoring matrices and local sequence alignment. We evaluate our methods for classification on the SCOP PDB-40D benchmark dataset for protein fold recognition. The best overall accuracy on the protein fold recognition test set obtained by our methods is ∼ 86.7%. This is an improvement over the results of the best existing approach. Moreover, our computational model has been developed by incorporating the functional domain composition of proteins through a hybridization model. It is observed that by using our proposed hybridization model, the protein fold recognition accuracy is further improved to 89.30%. Furthermore, we investigate the performance of our approach on the protein remote homology detection problem by fusing multiple string kernels. The MATLAB code used for our proposed geometric kernel fusion frameworks is publicly available at http://people.cs.kuleuven.be/∼raf.vandebril/homepage/software/geomean.php?menu=5/. © The Author 2014. Published by Oxford University Press.
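
    One example of a geometry-inspired matrix mean is the log-Euclidean mean of two kernel matrices, expm((logm(K1) + logm(K2))/2), contrasted below with the plain arithmetic mean. This is only meant to illustrate the idea of fusing kernels through matrix means rather than convex linear combinations; the paper's exact operators may differ, and the feature data and jitter term are fabricated.

      # Log-Euclidean mean of two kernel matrices versus their arithmetic mean.
      import numpy as np
      from scipy.linalg import expm, logm
      from sklearn.metrics.pairwise import rbf_kernel, linear_kernel

      rng = np.random.default_rng(7)
      X1 = rng.normal(size=(30, 5))            # one feature representation
      X2 = rng.normal(size=(30, 8))            # another feature representation
      jitter = 1e-8 * np.eye(30)               # keeps the matrices positive definite
      K1 = rbf_kernel(X1) + jitter
      K2 = linear_kernel(X2) / X2.shape[1] + jitter

      arithmetic = 0.5 * (K1 + K2)
      log_euclidean = expm(0.5 * (logm(K1) + logm(K2))).real
      print(np.allclose(log_euclidean, log_euclidean.T, atol=1e-6))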

  20. Kernel bundle EPDiff

    DEFF Research Database (Denmark)

    Sommer, Stefan Horst; Lauze, Francois Bernard; Nielsen, Mads

    2011-01-01

    In the LDDMM framework, optimal warps for image registration are found as end-points of critical paths for an energy functional, and the EPDiff equations describe the evolution along such paths. The Large Deformation Diffeomorphic Kernel Bundle Mapping (LDDKBM) extension of LDDMM allows scale space...

  1. Coherent anti-Stokes Raman scattering microscopy of single nanodiamonds

    Science.gov (United States)

    Pope, Iestyn; Payne, Lukas; Zoriniants, George; Thomas, Evan; Williams, Oliver; Watson, Peter; Langbein, Wolfgang; Borri, Paola

    2014-11-01

    Nanoparticles have attracted enormous attention for biomedical applications as optical labels, drug-delivery vehicles and contrast agents in vivo. In the quest for superior photostability and biocompatibility, nanodiamonds are considered one of the best choices due to their unique structural, chemical, mechanical and optical properties. So far, mainly fluorescent nanodiamonds have been utilized for cell imaging. However, their use is limited by the efficiency and costs in reliably producing fluorescent defect centres with stable optical properties. Here, we show that single non-fluorescing nanodiamonds exhibit strong coherent anti-Stokes Raman scattering (CARS) at the sp3 vibrational resonance of diamond. Using correlative light and electron microscopy, the relationship between CARS signal strength and nanodiamond size is quantified. The calibrated CARS signal in turn enables the analysis of the number and size of nanodiamonds internalized in living cells in situ, which opens the exciting prospect of following complex cellular trafficking pathways quantitatively.

  2. Coherent anti-Stokes Raman scattering microscopy of single nanodiamonds.

    Science.gov (United States)

    Pope, Iestyn; Payne, Lukas; Zoriniants, George; Thomas, Evan; Williams, Oliver; Watson, Peter; Langbein, Wolfgang; Borri, Paola

    2014-11-01

    Nanoparticles have attracted enormous attention for biomedical applications as optical labels, drug-delivery vehicles and contrast agents in vivo. In the quest for superior photostability and biocompatibility, nanodiamonds are considered one of the best choices due to their unique structural, chemical, mechanical and optical properties. So far, mainly fluorescent nanodiamonds have been utilized for cell imaging. However, their use is limited by the efficiency and costs in reliably producing fluorescent defect centres with stable optical properties. Here, we show that single non-fluorescing nanodiamonds exhibit strong coherent anti-Stokes Raman scattering (CARS) at the sp(3) vibrational resonance of diamond. Using correlative light and electron microscopy, the relationship between CARS signal strength and nanodiamond size is quantified. The calibrated CARS signal in turn enables the analysis of the number and size of nanodiamonds internalized in living cells in situ, which opens the exciting prospect of following complex cellular trafficking pathways quantitatively.

  3. Localized Multiple Kernel Learning Via Sample-Wise Alternating Optimization.

    Science.gov (United States)

    Han, Yina; Yang, Kunde; Ma, Yuanliang; Liu, Guizhong

    2014-01-01

    Our objective is to train support vector machines (SVM)-based localized multiple kernel learning (LMKL), using the alternating optimization between the standard SVM solvers with the local combination of base kernels and the sample-specific kernel weights. The advantage of alternating optimization developed from the state-of-the-art MKL is the SVM-tied overall complexity and the simultaneous optimization on both the kernel weights and the classifier. Unfortunately, in LMKL, the sample-specific character makes the updating of kernel weights a difficult quadratic nonconvex problem. In this paper, starting from a new primal-dual equivalence, the canonical objective on which state-of-the-art methods are based is first decomposed into an ensemble of objectives corresponding to each sample, namely, sample-wise objectives. Then, the associated sample-wise alternating optimization method is conducted, in which the localized kernel weights can be independently obtained by solving their exclusive sample-wise objectives, either linear programming (for l1-norm) or with closed-form solutions (for lp-norm). At test time, the learnt kernel weights for the training data are deployed based on the nearest-neighbor rule. Hence, to guarantee their generality among the test part, we introduce the neighborhood information and incorporate it into the empirical loss when deriving the sample-wise objectives. Extensive experiments on four benchmark machine learning datasets and two real-world computer vision datasets demonstrate the effectiveness and efficiency of the proposed algorithm.

  4. Control Transfer in Operating System Kernels

    Science.gov (United States)

    1994-05-13

    microkernel system that runs less code in the kernel address space. To realize the performance benefit of allocating stacks in unmapped kseg0 memory, the ... review how I modified the Mach 3.0 kernel to use continuations. Because of Mach’s message-passing microkernel structure, interprocess communication was ... critical control transfer paths, deeply-nested call chains are undesirable in any case because of the function call overhead. 4.1.3 Microkernel Operating

  5. Bivariate discrete beta Kernel graduation of mortality data.

    Science.gov (United States)

    Mazza, Angelo; Punzo, Antonio

    2015-07-01

    Various parametric/nonparametric techniques have been proposed in the literature to graduate mortality data as a function of age. Nonparametric approaches, as for example kernel smoothing regression, are often preferred because they do not assume any particular mortality law. Among the existing kernel smoothing approaches, the recently proposed (univariate) discrete beta kernel smoother has been shown to provide some benefits. Bivariate graduation, over age and calendar years or durations, is common practice in demography and actuarial sciences. In this paper, we generalize the discrete beta kernel smoother to the bivariate case, and we introduce an adaptive bandwidth variant that may provide additional benefits when data on exposures to the risk of death are available; furthermore, we outline a cross-validation procedure for bandwidth selection. Using simulation studies, we compare the bivariate approach proposed here with its corresponding univariate formulation and with two popular nonparametric bivariate graduation techniques, based on Epanechnikov kernels and on P-splines. To make simulations realistic, a bivariate dataset, based on probabilities of dying recorded for the US males, is used. Simulations have confirmed the gain in performance of the new bivariate approach with respect to both the univariate and the bivariate competitors.

  6. A framework for optimal kernel-based manifold embedding of medical image data.

    Science.gov (United States)

    Zimmer, Veronika A; Lekadir, Karim; Hoogendoorn, Corné; Frangi, Alejandro F; Piella, Gemma

    2015-04-01

    Kernel-based dimensionality reduction is a widely used technique in medical image analysis. To fully unravel the underlying nonlinear manifold the selection of an adequate kernel function and of its free parameters is critical. In practice, however, the kernel function is generally chosen as Gaussian or polynomial and such standard kernels might not always be optimal for a given image dataset or application. In this paper, we present a study on the effect of the kernel functions in nonlinear manifold embedding of medical image data. To this end, we first carry out a literature review on existing advanced kernels developed in the statistics, machine learning, and signal processing communities. In addition, we implement kernel-based formulations of well-known nonlinear dimensional reduction techniques such as Isomap and Locally Linear Embedding, thus obtaining a unified framework for manifold embedding using kernels. Subsequently, we present a method to automatically choose a kernel function and its associated parameters from a pool of kernel candidates, with the aim to generate the most optimal manifold embeddings. Furthermore, we show how the calculated selection measures can be extended to take into account the spatial relationships in images, or used to combine several kernels to further improve the embedding results. Experiments are then carried out on various synthetic and phantom datasets for numerical assessment of the methods. Furthermore, the workflow is applied to real data that include brain manifolds and multispectral images to demonstrate the importance of the kernel selection in the analysis of high-dimensional medical images. Copyright © 2014 Elsevier Ltd. All rights reserved.

  7. Measurement of Weight of Kernels in a Simulated Cylindrical Fuel Compact for HTGR

    International Nuclear Information System (INIS)

    Kim, Woong Ki; Lee, Young Woo; Kim, Young Min; Kim, Yeon Ku; Eom, Sung Ho; Jeong, Kyung Chai; Cho, Moon Sung; Cho, Hyo Jin; Kim, Joo Hee

    2011-01-01

    The TRISO-coated fuel particle for the high temperature gas-cooled reactor (HTGR) is composed of a nuclear fuel kernel and outer coating layers. The coated particles are mixed with graphite matrix to make the HTGR fuel element. The weight of fuel kernels in an element is generally measured by chemical analysis or a gamma-ray spectrometer. Although it is accurate to measure the weight of kernels by chemical analysis, the samples used in the analysis cannot be returned to the fabrication process. Furthermore, radioactive wastes are generated during the inspection procedure. The gamma-ray spectrometer requires an elaborate reference sample to reduce measurement errors induced from the different geometric shape of the test sample from that of the reference sample. X-ray computed tomography (CT) is an alternative to measure the weight of kernels in a compact nondestructively. In this study, X-ray CT is applied to measure the weight of kernels in a cylindrical compact containing simulated TRISO-coated particles with ZrO2 kernels. The volume of kernels as well as the number of kernels in the simulated compact is measured from the 3-D density information. The weight of kernels was calculated from the volume of kernels or the number of kernels. Also, the weight of kernels was measured by extracting the kernels from a compact to review the result of the X-ray CT application

  8. 3-D waveform tomography sensitivity kernels for anisotropic media

    KAUST Repository

    Djebbi, Ramzi

    2014-01-01

    The complications in anisotropic multi-parameter inversion lie in the trade-off between the different anisotropy parameters. We compute the tomographic waveform sensitivity kernels for a VTI acoustic medium perturbation as a tool to investigate this ambiguity between the different parameters. We use dynamic ray tracing to efficiently handle the expensive computational cost for 3-D anisotropic models. Ray tracing provides also the ray direction information necessary for conditioning the sensitivity kernels to handle anisotropy. The NMO velocity and η parameter kernels showed a maximum sensitivity for diving waves which results in a relevant choice of those parameters in wave equation tomography. The δ parameter kernel showed zero sensitivity; therefore it can serve as a secondary parameter to fit the amplitude in the acoustic anisotropic inversion. Considering the limited penetration depth of diving waves, migration velocity analysis based kernels are introduced to fix the depth ambiguity with reflections and compute sensitivity maps in the deeper parts of the model.

  9. A Fourier-series-based kernel-independent fast multipole method

    International Nuclear Information System (INIS)

    Zhang Bo; Huang Jingfang; Pitsianis, Nikos P.; Sun Xiaobai

    2011-01-01

    We present in this paper a new kernel-independent fast multipole method (FMM), named as FKI-FMM, for pairwise particle interactions with translation-invariant kernel functions. FKI-FMM creates, using numerical techniques, sufficiently accurate and compressive representations of a given kernel function over multi-scale interaction regions in the form of a truncated Fourier series. It provides also economic operators for the multipole-to-multipole, multipole-to-local, and local-to-local translations that are typical and essential in the FMM algorithms. The multipole-to-local translation operator, in particular, is readily diagonal and does not dominate in arithmetic operations. FKI-FMM provides an alternative and competitive option, among other kernel-independent FMM algorithms, for an efficient application of the FMM, especially for applications where the kernel function consists of multi-physics and multi-scale components as those arising in recent studies of biological systems. We present the complexity analysis and demonstrate with experimental results the FKI-FMM performance in accuracy and efficiency.

  10. Resummed memory kernels in generalized system-bath master equations

    International Nuclear Information System (INIS)

    Mavros, Michael G.; Van Voorhis, Troy

    2014-01-01

    Generalized master equations provide a concise formalism for studying reduced population dynamics. Usually, these master equations require a perturbative expansion of the memory kernels governing the dynamics; in order to prevent divergences, these expansions must be resummed. Resummation techniques of perturbation series are ubiquitous in physics, but they have not been readily studied for the time-dependent memory kernels used in generalized master equations. In this paper, we present a comparison of different resummation techniques for such memory kernels up to fourth order. We study specifically the spin-boson Hamiltonian as a model system bath Hamiltonian, treating the diabatic coupling between the two states as a perturbation. A novel derivation of the fourth-order memory kernel for the spin-boson problem is presented; then, the second- and fourth-order kernels are evaluated numerically for a variety of spin-boson parameter regimes. We find that resumming the kernels through fourth order using a Padé approximant results in divergent populations in the strong electronic coupling regime due to a singularity introduced by the nature of the resummation, and thus recommend a non-divergent exponential resummation (the “Landau-Zener resummation” of previous work). The inclusion of fourth-order effects in a Landau-Zener-resummed kernel is shown to improve both the dephasing rate and the obedience of detailed balance over simpler prescriptions like the non-interacting blip approximation, showing a relatively quick convergence on the exact answer. The results suggest that including higher-order contributions to the memory kernel of a generalized master equation and performing an appropriate resummation can provide a numerically-exact solution to system-bath dynamics for a general spectral density, opening the way to a new class of methods for treating system-bath dynamics

  11. The dipole form of the gluon part of the BFKL kernel

    International Nuclear Information System (INIS)

    Fadin, V.S.; Fiore, R.; Grabovsky, A.V.; Papa, A.

    2007-01-01

    The dipole form of the gluon part of the color singlet BFKL kernel in the next-to-leading order (NLO) is obtained in the coordinate representation by direct transfer from the momentum representation, where the kernel was calculated before. With this paper the transformation of the NLO BFKL kernel to the dipole form, started a few months ago with the quark part of the kernel, is completed

  12. Improving prediction of heterodimeric protein complexes using combination with pairwise kernel.

    Science.gov (United States)

    Ruan, Peiying; Hayashida, Morihiro; Akutsu, Tatsuya; Vert, Jean-Philippe

    2018-02-19

    Since many proteins become functional only after they interact with their partner proteins and form protein complexes, it is essential to identify the sets of proteins that form complexes. Therefore, several computational methods have been proposed to predict complexes from the topology and structure of the experimental protein-protein interaction (PPI) network. These methods work well to predict complexes involving at least three proteins, but generally fail at identifying complexes involving only two different proteins, called heterodimeric complexes or heterodimers. There is however an urgent need for efficient methods to predict heterodimers, since the majority of known protein complexes are precisely heterodimers. In this paper, we use three promising kernel functions, Min kernel and two pairwise kernels, which are Metric Learning Pairwise Kernel (MLPK) and Tensor Product Pairwise Kernel (TPPK). We also consider the normalization forms of Min kernel. Then, we combine Min kernel or its normalization form and one of the pairwise kernels by plugging. We applied kernels based on PPI, domain, phylogenetic profile, and subcellular localization properties to predicting heterodimers. Then, we evaluate our method by employing C-Support Vector Classification (C-SVC), carrying out 10-fold cross-validation, and calculating the average F-measures. The results suggest that the combination of normalized-Min-kernel and MLPK leads to the best F-measure and improved the performance of our previous work, which had been the best existing method so far. We propose new methods to predict heterodimers, using a machine learning-based approach. We train a support vector machine (SVM) to discriminate interacting vs non-interacting protein pairs, based on information extracted from PPI, domain, phylogenetic profiles and subcellular localization. We evaluate in detail new kernel functions to encode these data, and report prediction performance that outperforms the state-of-the-art.
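
    Two of the kernels named above have short, standard formulas: the Min kernel sums element-wise minima of non-negative feature vectors, and the tensor product pairwise kernel (TPPK) turns a kernel on single proteins into a kernel on unordered protein pairs via K((a,b),(c,d)) = K(a,c)K(b,d) + K(a,d)K(b,c). The sketch below implements just these two; the MLPK, the normalizations and the "plugging" combination are not reproduced, and the feature vectors are toy data.

      # Min kernel and tensor product pairwise kernel (TPPK), illustrative only.
      import numpy as np

      def min_kernel(x, y):
          # histogram-intersection style kernel on non-negative vectors
          return np.minimum(x, y).sum()

      def tppk(k, a, b, c, d):
          # pairwise kernel on unordered pairs (a, b) and (c, d) from base kernel k
          return k(a, c) * k(b, d) + k(a, d) * k(b, c)

      rng = np.random.default_rng(8)
      p1, p2, p3, p4 = rng.uniform(0, 1, size=(4, 6))   # toy protein feature vectors
      print(min_kernel(p1, p2))
      print(tppk(min_kernel, p1, p2, p3, p4))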

  13. Time-domain single-source integral equations for analyzing scattering from homogeneous penetrable objects

    KAUST Repository

    Valdés, Felipe

    2013-03-01

    Single-source time-domain electric- and magnetic-field integral equations for analyzing scattering from homogeneous penetrable objects are presented. Their temporal discretization is effected by using shifted piecewise polynomial temporal basis functions and a collocation testing procedure, thus allowing for a marching-on-in-time (MOT) solution scheme. Unlike dual-source formulations, single-source equations involve space-time domain operator products, for which spatial discretization techniques developed for standalone operators do not apply. Here, the spatial discretization of the single-source time-domain integral equations is achieved by using the high-order divergence-conforming basis functions developed by Graglia alongside the high-order divergence- and quasi-curl-conforming (DQCC) basis functions of Valdés. The combination of these two sets allows for a well-conditioned mapping from div- to curl-conforming function spaces that fully respects the space-mapping properties of the space-time operators involved. Numerical results corroborate the fact that the proposed procedure guarantees accuracy and stability of the MOT scheme. © 2012 IEEE.

  14. A new discrete dipole kernel for quantitative susceptibility mapping.

    Science.gov (United States)

    Milovic, Carlos; Acosta-Cabronero, Julio; Pinto, José Miguel; Mattern, Hendrik; Andia, Marcelo; Uribe, Sergio; Tejos, Cristian

    2018-09-01

    Most approaches for quantitative susceptibility mapping (QSM) are based on a forward model approximation that employs a continuous Fourier transform operator to solve a differential equation system. Such formulation, however, is prone to high-frequency aliasing. The aim of this study was to reduce such errors using an alternative dipole kernel formulation based on the discrete Fourier transform and discrete operators. The impact of such an approach on forward model calculation and susceptibility inversion was evaluated in contrast to the continuous formulation both with synthetic phantoms and in vivo MRI data. The discrete kernel demonstrated systematically better fits to analytic field solutions, and showed less over-oscillations and aliasing artifacts while preserving low- and medium-frequency responses relative to those obtained with the continuous kernel. In the context of QSM estimation, the use of the proposed discrete kernel resulted in error reduction and increased sharpness. This proof-of-concept study demonstrated that discretizing the dipole kernel is advantageous for QSM. The impact on small or narrow structures such as the venous vasculature might by particularly relevant to high-resolution QSM applications with ultra-high field MRI - a topic for future investigations. The proposed dipole kernel has a straightforward implementation to existing QSM routines. Copyright © 2018 Elsevier Inc. All rights reserved.
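
    For context, the continuous-Fourier baseline that the discrete kernel is designed to improve is the standard dipole kernel D(k) = 1/3 - kz^2/|k|^2 sampled on the image grid, with the forward field model field = IFFT(D * FFT(susceptibility)). The sketch below evaluates that baseline on an arbitrary grid; the discrete kernel proposed in the record is not reproduced here.

      # Continuous-Fourier dipole kernel sampled on a grid (the baseline formulation).
      import numpy as np

      def continuous_dipole_kernel(shape, voxel_size=(1.0, 1.0, 1.0)):
          kx, ky, kz = [np.fft.fftfreq(n, d=d) for n, d in zip(shape, voxel_size)]
          KX, KY, KZ = np.meshgrid(kx, ky, kz, indexing="ij")
          k2 = KX ** 2 + KY ** 2 + KZ ** 2
          with np.errstate(divide="ignore", invalid="ignore"):
              D = 1.0 / 3.0 - KZ ** 2 / k2
          D[k2 == 0] = 0.0               # conventional choice for the k = 0 term
          return D

      D = continuous_dipole_kernel((64, 64, 64))
      chi = np.zeros((64, 64, 64))
      chi[28:36, 28:36, 28:36] = 1.0     # toy susceptibility distribution
      field = np.fft.ifftn(D * np.fft.fftn(chi)).real
      print(field.shape, float(field.max()))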

  15. Genetic Analysis of Kernel Traits in Maize-Teosinte Introgression Populations

    Directory of Open Access Journals (Sweden)

    Zhengbin Liu

    2016-08-01

    Full Text Available Seed traits have been targeted by human selection during the domestication of crop species as a way to increase the caloric and nutritional content of food during the transition from hunter-gatherer to early farming societies. The primary seed trait under selection was likely seed size/weight as it is most directly related to overall grain yield. Additional seed traits involved in seed shape may have also contributed to larger grain. Maize (Zea mays ssp. mays) kernel weight has increased more than 10-fold in the 9000 years since domestication from its wild ancestor, teosinte (Z. mays ssp. parviglumis). In order to study how size and shape affect kernel weight, we analyzed kernel morphometric traits in a set of 10 maize-teosinte introgression populations using digital imaging software. We identified quantitative trait loci (QTL) for kernel area and length with moderate allelic effects that colocalize with kernel weight QTL. Several genomic regions with strong effects during maize domestication were detected, and a genetic framework for kernel traits was characterized by complex pleiotropic interactions. Our results both confirm prior reports of kernel domestication loci and identify previously uncharacterized QTL with a range of allelic effects, enabling future research into the genetic basis of these traits.

  16. SU-E-T-154: Calculation of Tissue Dose Point Kernels Using GATE Monte Carlo Simulation Toolkit to Compare with Water Dose Point Kernel

    Energy Technology Data Exchange (ETDEWEB)

    Khazaee, M [shahid beheshti university, Tehran, Tehran (Iran, Islamic Republic of); Asl, A Kamali [Shahid Beheshti University, Tehran, Iran., Tehran, Tehran (Iran, Islamic Republic of); Geramifar, P [Shariati Hospital, Tehran, Iran., Tehran, Tehran (Iran, Islamic Republic of)

    2015-06-15

    Purpose: The objective of this study was to assess utilizing the water dose point kernel (DPK) instead of tissue dose point kernels in convolution algorithms. To the best of our knowledge, in providing the 3D distribution of absorbed dose from a 3D distribution of activity, the human body is considered equivalent to water; as a result, tissue variations are not considered in patient-specific dosimetry. Methods: In this study GATE v7.0 was used to calculate tissue dose point kernels. The beta-emitter radionuclides taken into consideration in this simulation include Y-90, Lu-177 and P-32, which are commonly used in nuclear medicine. The comparison has been performed for dose point kernels of adipose, bone, breast, heart, intestine, kidney, liver, lung and spleen versus the water dose point kernel. Results: In order to validate the simulation, the results of the 90Y DPK in water were compared with published results of Papadimitroulas et al (Med. Phys., 2012). The results showed that the mean differences between the water DPK and other soft-tissue DPKs range between 0.6% and 1.96% for 90Y, except for lung and bone, where the observed discrepancies are 6.3% and 12.19%, respectively. The range of DPK difference for 32P is between 1.74% for breast and 18.85% for bone. For 177Lu, the highest difference belongs to bone, equal to 16.91%; for other soft tissues the least discrepancy is observed in kidney, with 1.68%. Conclusion: In all tissues except for lung and bone, the GATE results for the dose point kernel were comparable to the water dose point kernel, which demonstrates the appropriateness of applying the water dose point kernel instead of soft-tissue kernels in the field of nuclear medicine.

  17. SU-E-T-154: Calculation of Tissue Dose Point Kernels Using GATE Monte Carlo Simulation Toolkit to Compare with Water Dose Point Kernel

    International Nuclear Information System (INIS)

    Khazaee, M; Asl, A Kamali; Geramifar, P

    2015-01-01

    Purpose: The objective of this study was to assess utilizing the water dose point kernel (DPK) instead of tissue dose point kernels in convolution algorithms. To the best of our knowledge, in providing the 3D distribution of absorbed dose from a 3D distribution of activity, the human body is considered equivalent to water; as a result, tissue variations are not considered in patient-specific dosimetry. Methods: In this study GATE v7.0 was used to calculate tissue dose point kernels. The beta-emitter radionuclides taken into consideration in this simulation include Y-90, Lu-177 and P-32, which are commonly used in nuclear medicine. The comparison has been performed for dose point kernels of adipose, bone, breast, heart, intestine, kidney, liver, lung and spleen versus the water dose point kernel. Results: In order to validate the simulation, the results of the 90Y DPK in water were compared with published results of Papadimitroulas et al (Med. Phys., 2012). The results showed that the mean differences between the water DPK and other soft-tissue DPKs range between 0.6% and 1.96% for 90Y, except for lung and bone, where the observed discrepancies are 6.3% and 12.19%, respectively. The range of DPK difference for 32P is between 1.74% for breast and 18.85% for bone. For 177Lu, the highest difference belongs to bone, equal to 16.91%; for other soft tissues the least discrepancy is observed in kidney, with 1.68%. Conclusion: In all tissues except for lung and bone, the GATE results for the dose point kernel were comparable to the water dose point kernel, which demonstrates the appropriateness of applying the water dose point kernel instead of soft-tissue kernels in the field of nuclear medicine

  18. Comparison of electron dose-point kernels in water generated by the Monte Carlo codes, PENELOPE, GEANT4, MCNPX, and ETRAN.

    Science.gov (United States)

    Uusijärvi, Helena; Chouin, Nicolas; Bernhardt, Peter; Ferrer, Ludovic; Bardiès, Manuel; Forssell-Aronsson, Eva

    2009-08-01

    Point kernels describe the energy deposited at a certain distance from an isotropic point source and are useful for nuclear medicine dosimetry. They can be used for absorbed-dose calculations for sources of various shapes and are also a useful tool when comparing different Monte Carlo (MC) codes. The aim of this study was to compare point kernels calculated by using the mixed MC code, PENELOPE (v. 2006), with point kernels calculated by using the condensed-history MC codes, ETRAN, GEANT4 (v. 8.2), and MCNPX (v. 2.5.0). Point kernels for electrons with initial energies of 10, 100, 500, and 1 MeV were simulated with PENELOPE. Spherical shells were placed around an isotropic point source at distances from 0 to 1.2 times the continuous-slowing-down-approximation range (R(CSDA)). Detailed (event-by-event) simulations were performed for electrons with initial energies of less than 1 MeV. For 1-MeV electrons, multiple scattering was included for energy losses less than 10 keV. Energy losses greater than 10 keV were simulated in a detailed way. The point kernels generated were used to calculate cellular S-values for monoenergetic electron sources. The point kernels obtained by using PENELOPE and ETRAN were also used to calculate cellular S-values for the high-energy beta-emitter, 90Y, the medium-energy beta-emitter, 177Lu, and the low-energy electron emitter, 103mRh. These S-values were also compared with the Medical Internal Radiation Dose (MIRD) cellular S-values. The greatest differences between the point kernels (mean difference calculated for distances, electrons was 1.4%, 2.5%, and 6.9% for ETRAN, GEANT4, and MCNPX, respectively, compared to PENELOPE, if omitting the S-values when the activity was distributed on the cell surface for 10-keV electrons. The largest difference between the cellular S-values for the radionuclides, between PENELOPE and ETRAN, was seen for 177Lu (1.2%). There were large differences between the MIRD cellular S-values and those obtained from

  19. Scientific opinion on the acute health risks related to the presence of cyanogenic glycosides in raw apricot kernels and products derived from raw apricot kernels

    DEFF Research Database (Denmark)

    Petersen, Annette

    of kernels promoted (10 and 60 kernels/day for the general population and cancer patients, respectively), exposures exceeded the ARfD 17–413 and 3–71 times in toddlers and adults, respectively. The estimated maximum quantity of apricot kernels (or raw apricot material) that can be consumed without exceeding...

  20. A portable high-field pulsed-magnet system for single-crystal x-ray scattering studies

    International Nuclear Information System (INIS)

    Islam, Zahirul; Lang, Jonathan C.; Ruff, Jacob P. C.; Ross, Kathryn A.; Gaulin, Bruce D.; Nojiri, Hiroyuki; Matsuda, Yasuhiro H.; Qu Zhe

    2009-01-01

    We present a portable pulsed-magnet system for x-ray studies of materials in high magnetic fields (up to 30 T). The apparatus consists of a split-pair of minicoils cooled on a closed-cycle cryostat, which is used for x-ray diffraction studies with applied field normal to the scattering plane. A second independent closed-cycle cryostat is used for cooling the sample to near liquid helium temperatures. Pulsed magnetic fields (∼1 ms in total duration) are generated by discharging a configurable capacitor bank into the magnet coils. Time-resolved scattering data are collected using a combination of a fast single-photon counting detector, a multichannel scaler, and a high-resolution digital storage oscilloscope. The capabilities of this instrument are used to study a geometrically frustrated system revealing strong magnetostrictive effects in the spin-liquid state.

  1. Scattering of polarized electrons from polarized targets: Coincidence reactions and prescriptions for polarized half-off-shell single-nucleon cross sections

    International Nuclear Information System (INIS)

    Caballero, J.A.; Massachusetts Inst. of Tech., Cambridge, MA; Donnelly, T.W.; Massachusetts Inst. of Tech., Cambridge, MA; Poulis, G.I.; Massachusetts Inst. of Tech., Cambridge, MA

    1993-01-01

    Coincidence reactions of the type A(e,e'N)B with a polarized electron beam and a polarized target are discussed within the context of the plane-wave impulse approximation. Prescriptions are developed for polarized half-off-shell single-nucleon cross sections; the different prescriptions are compared for typical quasi-free kinematics. Illustrative results are presented for coincidence polarized electron scattering from typical polarized nuclei. (orig.)

  2. Broken rice kernels and the kinetics of rice hydration and texture during cooking.

    Science.gov (United States)

    Saleh, Mohammed; Meullenet, Jean-Francois

    2013-05-01

    During rice milling and processing, broken kernels are inevitably present, although to date it has been unclear as to how the presence of broken kernels affects rice hydration and cooked rice texture. Therefore, this work intended to study the effect of broken kernels in a rice sample on rice hydration and texture during cooking. Two medium-grain and two long-grain rice cultivars were harvested, dried and milled, and the broken kernels were separated from unbroken kernels. Broken rice kernels were subsequently combined with unbroken rice kernels forming treatments of 0, 40, 150, 350 or 1000 g kg(-1) broken kernels ratio. Rice samples were then cooked and the moisture content of the cooked rice, the moisture uptake rate, and rice hardness and stickiness were measured. As the amount of broken rice kernels increased, rice sample texture became increasingly softer (P < 0.05), and hardness was negatively correlated to the percentage of broken kernels in rice samples. Differences in the proportions of broken rice in a milled rice sample play a major role in determining the texture properties of cooked rice. Variations in the moisture migration kinetics between broken and unbroken kernels caused faster hydration of the cores of broken rice kernels, with greater starch leach-out during cooking affecting the texture of the cooked rice. The texture of cooked rice can be controlled, to some extent, by varying the proportion of broken kernels in milled rice. © 2012 Society of Chemical Industry.

  3. Local coding based matching kernel method for image classification.

    Directory of Open Access Journals (Sweden)

    Yan Song

    This paper mainly focuses on how to effectively and efficiently measure visual similarity for local feature based representation. Among existing methods, metrics based on Bag of Visual Word (BoV) techniques are efficient and conceptually simple, at the expense of effectiveness. By contrast, kernel based metrics are more effective, but at the cost of greater computational complexity and increased storage requirements. We show that a unified visual matching framework can be developed to encompass both BoV and kernel based metrics, in which local kernel plays an important role between feature pairs or between features and their reconstruction. Generally, local kernels are defined using Euclidean distance or its derivatives, based either explicitly or implicitly on an assumption of Gaussian noise. However, local features such as SIFT and HoG often follow a heavy-tailed distribution which tends to undermine the motivation behind Euclidean metrics. Motivated by recent advances in feature coding techniques, a novel efficient local coding based matching kernel (LCMK) method is proposed. This exploits the manifold structures in Hilbert space derived from local kernels. The proposed method combines advantages of both BoV and kernel based metrics, and achieves a linear computational complexity. This enables efficient and scalable visual matching to be performed on large scale image sets. To evaluate the effectiveness of the proposed LCMK method, we conduct extensive experiments with widely used benchmark datasets, including 15-Scenes, Caltech101/256, PASCAL VOC 2007 and 2011 datasets. Experimental results confirm the effectiveness of the relatively efficient LCMK method.
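
    For orientation, a toy sketch of the kind of kernel-based set metric that the paper contrasts with BoV histograms: a "sum-match" similarity that averages a local Gaussian kernel over all descriptor pairs of two images. This is a generic baseline, not the proposed LCMK; the descriptors and parameters are placeholders.

```python
import numpy as np

def sum_match_kernel(X, Y, gamma=0.1):
    """Average the local Gaussian kernel k(x, y) = exp(-gamma * ||x - y||^2)
    over all descriptor pairs of two local-feature sets X and Y."""
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-gamma * sq_dists).mean()

rng = np.random.default_rng(0)
img_a = rng.normal(size=(50, 128))   # 50 SIFT-like descriptors of dimension 128
img_b = rng.normal(size=(60, 128))
print("set similarity:", sum_match_kernel(img_a, img_b))
```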

  4. Multivariate realised kernels

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole; Hansen, Peter Reinhard; Lunde, Asger

    We propose a multivariate realised kernel to estimate the ex-post covariation of log-prices. We show this new consistent estimator is guaranteed to be positive semi-definite and is robust to measurement noise of certain types and can also handle non-synchronous trading. It is the first estimator...
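
    A minimal sketch of a realised-kernel-type estimator for synchronised, evenly spaced returns, using Parzen weights on realised autocovariance matrices. The authors' estimator additionally handles market-microstructure noise, end effects and non-synchronous trading (e.g. via refresh-time sampling), none of which is modelled in this simplified form.

```python
import numpy as np

def parzen(u):
    # Parzen weight function, a common smooth kernel choice for realised kernels
    u = abs(u)
    if u <= 0.5:
        return 1 - 6 * u**2 + 6 * u**3
    if u <= 1.0:
        return 2 * (1 - u) ** 3
    return 0.0

def realised_kernel(returns, H):
    # returns: (n, d) array of high-frequency log-returns for d assets
    n, d = returns.shape
    def gamma(h):                       # h-th realised autocovariance matrix
        return returns[h:].T @ returns[: n - h]
    K = gamma(0).copy()
    for h in range(1, H + 1):
        w = parzen(h / (H + 1.0))
        K += w * (gamma(h) + gamma(h).T)
    return K

rng = np.random.default_rng(1)
r = rng.normal(scale=1e-3, size=(780, 3))   # illustrative intraday returns
print(realised_kernel(r, H=20))
```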

  5. Process for producing metal oxide kernels and kernels so obtained

    International Nuclear Information System (INIS)

    Lelievre, Bernard; Feugier, Andre.

    1974-01-01

    The process described is for producing fissile or fertile metal oxide kernels used in the fabrication of fuels for high-temperature nuclear reactors. It consists of adding, to an aqueous solution of at least one metallic salt (particularly actinide nitrates), at least one chemical compound capable of releasing ammonia, then dispersing the solution thus obtained drop by drop into a hot organic phase to gel the drops and transform them into solid particles. These particles are then washed, dried and treated to turn them into oxide kernels. The organic phase used for the gel reaction is a mixture of two organic liquids, one acting as solvent and the other being a product capable of extracting the anions from the metallic salt of the drop at the time of gelling. Preferably an amine is used as the extracting product. Additionally, an alcohol that causes partial dehydration of the drops can be employed as solvent, thus helping to increase the resistance of the particles [fr

  6. Efficiently GPU-accelerating long kernel convolutions in 3-D DIRECT TOF PET reconstruction via memory cache optimization

    Energy Technology Data Exchange (ETDEWEB)

    Ha, Sungsoo; Mueller, Klaus [Stony Brook Univ., NY (United States). Center for Visual Computing; Matej, Samuel [Pennsylvania Univ., Philadelphia, PA (United States). Dept. of Radiology

    2011-07-01

    The DIRECT represents a novel approach for 3-D Time-of-Flight (TOF) PET reconstruction. Its novelty stems from the fact that it performs all iterative predictor-corrector operations directly in image space. The projection operations now amount to convolutions in image space, using long TOF (resolution) kernels. While for spatially invariant kernels the computational complexity can be algorithmically overcome by replacing spatial convolution with multiplication in Fourier space, spatially variant kernels cannot use this shortcut. Therefore in this paper, we describe a GPU-accelerated approach for this task. However, the intricate parallel architecture of GPUs poses its own challenges, and careful memory and thread management is the key to obtaining optimal results. As convolution is mainly memory-bound we focus on the former, proposing two types of memory caching schemes that warrant best cache memory re-use by the parallel threads. In contrast to our previous two-stage algorithm, the schemes presented here are both single-stage which is more accurate. (orig.)
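
    A small illustration of the shortcut mentioned above for spatially invariant kernels: convolution with a long 1-D TOF-like kernel carried out through FFTs (here via SciPy on a toy 3-D image). Spatially variant kernels cannot be handled this way, which is what motivates the GPU implementation; the sizes and kernel width below are arbitrary.

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)
image = rng.random((64, 64, 64))                 # toy 3-D image volume

z = np.arange(-15, 16)
tof_kernel = np.exp(-0.5 * (z / 5.0) ** 2)       # long 1-D Gaussian TOF-like kernel
tof_kernel /= tof_kernel.sum()

# Spatially invariant convolution: multiplication in Fourier space under the hood
blurred = fftconvolve(image, tof_kernel[None, None, :], mode="same")
print(blurred.shape)
```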

  7. Geodesic exponential kernels: When Curvature and Linearity Conflict

    DEFF Research Database (Denmark)

    Feragen, Aase; Lauze, François; Hauberg, Søren

    2015-01-01

    manifold, the geodesic Gaussian kernel is only positive definite if the Riemannian manifold is Euclidean. This implies that any attempt to design geodesic Gaussian kernels on curved Riemannian manifolds is futile. However, we show that for spaces with conditionally negative definite distances the geodesic...

  8. Real time kernel performance monitoring with SystemTap

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    SystemTap is a dynamic method of monitoring and tracing the operation of a running Linux kernel. In this talk I will present a few practical use cases where SystemTap allowed me to turn otherwise complex userland monitoring tasks into simple kernel probes.

  9. Prediction of protein subcellular localization using support vector machine with the choice of proper kernel

    Directory of Open Access Journals (Sweden)

    Al Mehedi Hasan

    2017-07-01

    subcellular localization prediction to find out which kernel is the best for SVM. We have evaluated our system on a combined dataset containing 5447 single-localized proteins (originally published as part of the Höglund dataset) and 3056 multi-localized proteins (originally published as part of the DBMLoc set). This dataset was used by Briesemeister et al. in their extensive comparison of multilocalization prediction systems. The experimental results indicate that the system based on SVM with the Laplace kernel, termed LKLoc, not only achieves a higher accuracy than the system using other kernels but also shows significantly better results than those obtained from other top systems (MDLoc, BNCs, YLoc+). The source code of this prediction system is available upon request.
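
    A hedged sketch of the kind of comparison described above using scikit-learn: an SVM with a Laplace (Laplacian) kernel supplied as a callable. The feature matrix and labels are random placeholders rather than the Höglund/DBMLoc protein data.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import laplacian_kernel
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.random((200, 40))                 # e.g. amino-acid composition features (placeholder)
y = rng.integers(0, 4, size=200)          # e.g. four subcellular locations (placeholder)

def laplace(A, B):
    # Laplace kernel k(x, y) = exp(-gamma * ||x - y||_1)
    return laplacian_kernel(A, B, gamma=0.5)

clf = SVC(kernel=laplace, C=10.0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```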

  10. Comparative Analysis of Kernel Methods for Statistical Shape Learning

    National Research Council Canada - National Science Library

    Rathi, Yogesh; Dambreville, Samuel; Tannenbaum, Allen

    2006-01-01

    .... In this work, we perform a comparative analysis of shape learning techniques such as linear PCA, kernel PCA, locally linear embedding and propose a new method, kernelized locally linear embedding...
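
    As a rough illustration of two of the techniques named above, the snippet below contrasts linear PCA with kernel PCA on placeholder "shape" vectors using scikit-learn; the actual shape representation (e.g. signed distance functions) and data from the report are not reproduced here.

```python
import numpy as np
from sklearn.decomposition import PCA, KernelPCA

rng = np.random.default_rng(0)
shapes = rng.normal(size=(100, 256))      # 100 training shapes, 256 features each (placeholder)

linear = PCA(n_components=5).fit(shapes)
kpca = KernelPCA(n_components=5, kernel="rbf", gamma=1e-3,
                 fit_inverse_transform=True).fit(shapes)

coeffs = kpca.transform(shapes[:1])       # project a shape into kernel PCA space
recon = kpca.inverse_transform(coeffs)    # approximate pre-image (shape prior)
print(linear.explained_variance_ratio_.round(3), coeffs.shape, recon.shape)
```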

  11. Semi-supervised learning for ordinal Kernel Discriminant Analysis.

    Science.gov (United States)

    Pérez-Ortiz, M; Gutiérrez, P A; Carbonero-Ruz, M; Hervás-Martínez, C

    2016-12-01

    Ordinal classification considers those classification problems where the labels of the variable to predict follow a given order. Naturally, labelled data is scarce or difficult to obtain in this type of problems because, in many cases, ordinal labels are given by a user or expert (e.g. in recommendation systems). Firstly, this paper develops a new strategy for ordinal classification where both labelled and unlabelled data are used in the model construction step (a scheme which is referred to as semi-supervised learning). More specifically, the ordinal version of kernel discriminant learning is extended for this setting considering the neighbourhood information of unlabelled data, which is proposed to be computed in the feature space induced by the kernel function. Secondly, a new method for semi-supervised kernel learning is devised in the context of ordinal classification, which is combined with our developed classification strategy to optimise the kernel parameters. The experiments conducted compare 6 different approaches for semi-supervised learning in the context of ordinal classification in a battery of 30 datasets, showing (1) the good synergy of the ordinal version of discriminant analysis and the use of unlabelled data and (2) the advantage of computing distances in the feature space induced by the kernel function. Copyright © 2016 Elsevier Ltd. All rights reserved.
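
    One ingredient discussed above, sketched numerically: pairwise distances computed in the feature space induced by a kernel, d^2(x, y) = k(x, x) + k(y, y) - 2 k(x, y), which can then define neighbourhoods of unlabelled points. The data and kernel parameters are placeholders.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X = rng.random((10, 6))
K = rbf_kernel(X, gamma=0.8)

# Squared distances in the kernel-induced feature space
d2 = np.diag(K)[:, None] + np.diag(K)[None, :] - 2 * K
neighbours = np.argsort(d2, axis=1)[:, 1:4]   # 3 nearest neighbours of each point
print(neighbours)
```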

  12. Analysis and Implementation of Particle-to-Particle (P2P) Graphics Processor Unit (GPU) Kernel for Black-Box Adaptive Fast Multipole Method

    Science.gov (United States)

    2015-06-01

    implementation of the direct interaction, called the particle-to-particle (P2P) kernel, for a shared-memory single GPU device using the Compute Unified Device Architecture (CUDA) … the GPU-defined P2P kernel we developed using CUDA … The computing environment used for this work is a 64-node heterogeneous cluster consisting of 48 IBM dx360M4 nodes, each with one Intel Phi

  13. Parameter optimization in the regularized kernel minimum noise fraction transformation

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Vestergaard, Jacob Schack

    2012-01-01

    Based on the original, linear minimum noise fraction (MNF) transformation and kernel principal component analysis, a kernel version of the MNF transformation was recently introduced. Inspired by earlier work, we here give a simple method for finding optimal parameters in a regularized version of kernel MNF analysis. We consider the model signal-to-noise ratio (SNR) as a function of the kernel parameters and the regularization parameter. In 2-4 steps of increasingly refined grid searches we find the parameters that maximize the model SNR. An example based on data from the DLR 3K camera system is given.

  14. Validation of a dose-point kernel convolution technique for internal dosimetry

    International Nuclear Information System (INIS)

    Giap, H.B.; Macey, D.J.; Bayouth, J.E.; Boyer, A.L.

    1995-01-01

    The objective of this study was to validate a dose-point kernel convolution technique that provides a three-dimensional (3D) distribution of absorbed dose from a 3D distribution of the radionuclide 131I. A dose-point kernel for the penetrating radiations was calculated by a Monte Carlo simulation and cast in a 3D rectangular matrix. This matrix was convolved with the 3D activity map furnished by quantitative single-photon-emission computed tomography (SPECT) to provide a 3D distribution of absorbed dose. The convolution calculation was performed using a 3D fast Fourier transform (FFT) technique, which takes less than 40 s for a 128 x 128 x 16 matrix on an Intel 486 DX2 (66 MHz) personal computer. The calculated photon absorbed dose was compared with values measured by thermoluminescent dosimeters (TLDs) inserted along the diameter of a 22 cm diameter annular source of 131I. The mean and standard deviation of the percentage difference between the measurements and the calculations were equal to -1% and 3.6% respectively. This convolution method was also used to calculate the 3D dose distribution in an Alderson abdominal phantom containing a liver, a spleen, and a spherical tumour volume loaded with various concentrations of 131I. By averaging the dose calculated throughout the liver, spleen, and tumour, the dose-point kernel approach was compared with values derived using the MIRD formalism, and found to agree to better than 15%. (author)
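
    A minimal sketch of the convolution step described above: a 3-D dose-point kernel convolved with a 3-D activity map via FFTs. This toy version uses circular convolution and an arbitrary placeholder kernel; a real implementation would zero-pad and use the Monte Carlo-generated 131I kernel.

```python
import numpy as np

shape = (128, 128, 16)
rng = np.random.default_rng(0)
activity = rng.random(shape)                       # SPECT-derived activity map (a.u., placeholder)

# Voxel distances from the kernel centre, laid out in FFT (wrap-around) order
zz, yy, xx = np.meshgrid(*[np.fft.fftfreq(n) * n for n in shape], indexing="ij")
r = np.sqrt(xx**2 + yy**2 + zz**2) + 0.5
kernel = np.exp(-r / 20.0) / r**2                  # placeholder point kernel, not 131I data
kernel /= kernel.sum()

dose = np.real(np.fft.ifftn(np.fft.fftn(activity) * np.fft.fftn(kernel)))
print(dose.shape)
```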

  15. Application of the Galerkin's method to the solution of the one-dimensional integral transport equation: generalized collision probabilities taken in account the flux gradient and the linearly anisotropic scattering

    International Nuclear Information System (INIS)

    Sanchez, Richard.

    1975-04-01

    For one-dimensional geometries, the transport equation with linearly anisotropic scattering can be reduced to a single integral equation; this is a singular-kernel Fredholm equation of the second kind. When a conventional projective method, that of Galerkin, is applied to the solution of this equation, the well-known collision probability algorithm is obtained. Piecewise polynomial expansions are used to represent the flux. In the ANILINE code, the flux is assumed to be linear in plane geometry and parabolic in both cylindrical and spherical geometries. An integral relationship was found between the one-dimensional isotropic and anisotropic kernels; this makes it possible to reduce the new matrix elements (issuing from the anisotropic kernel) to classic collision probabilities of the isotropic scattering equation. For cylindrical and spherical geometries an approximate representation of the current was used to avoid an additional numerical integration. Reflective boundary conditions were considered; in plane geometry the reflection is assumed specular, while for the other geometries the isotropic reflection hypothesis has been adopted. Further, the ANILINE code can handle an incoming isotropic current. Numerous checks were performed in monokinetic theory. Critical radii and albedos were calculated for homogeneous slabs, cylinders and spheres. For heterogeneous media, the thermal utilization factor obtained by this method was compared with the theoretical result based upon a formula by Benoist. Finally, ANILINE was incorporated into the multigroup APOLLO code, which made it possible to analyse the MINERVA experimental reactor in transport theory with 99 groups. The ANILINE method is particularly suited to the treatment of strongly anisotropic media with considerable flux gradients. It is also well adapted to the calculation of reflectors and, in general, to the exact analysis of anisotropic effects in large-sized media [fr

  16. On flame kernel formation and propagation in premixed gases

    Energy Technology Data Exchange (ETDEWEB)

    Eisazadeh-Far, Kian; Metghalchi, Hameed [Northeastern University, Mechanical and Industrial Engineering Department, Boston, MA 02115 (United States); Parsinejad, Farzan [Chevron Oronite Company LLC, Richmond, CA 94801 (United States); Keck, James C. [Massachusetts Institute of Technology, Cambridge, MA 02139 (United States)

    2010-12-15

    Flame kernel formation and propagation in premixed gases have been studied experimentally and theoretically. The experiments have been carried out at constant pressure and temperature in a constant volume vessel located in a high speed shadowgraph system. The formation and propagation of the hot plasma kernel has been simulated for inert gas mixtures using a thermodynamic model. The effects of various parameters including the discharge energy, radiation losses, initial temperature and initial volume of the plasma have been studied in detail. The experiments have been extended to flame kernel formation and propagation of methane/air mixtures. The effect of energy terms including spark energy, chemical energy and energy losses on flame kernel formation and propagation has been investigated. The inputs for this model are the initial conditions of the mixture and experimental data for flame radii. It is concluded that these are the most important parameters affecting plasma kernel growth. The results of laminar burning speeds have been compared with previously published results and are in good agreement. (author)

  17. Insights from Classifying Visual Concepts with Multiple Kernel Learning

    Science.gov (United States)

    Binder, Alexander; Nakajima, Shinichi; Kloft, Marius; Müller, Christina; Samek, Wojciech; Brefeld, Ulf; Müller, Klaus-Robert; Kawanabe, Motoaki

    2012-01-01

    Combining information from various image features has become a standard technique in concept recognition tasks. However, the optimal way of fusing the resulting kernel functions is usually unknown in practical applications. Multiple kernel learning (MKL) techniques make it possible to determine an optimal linear combination of such similarity matrices. Classical approaches to MKL promote sparse mixtures. Unfortunately, 1-norm regularized MKL variants are often observed to be outperformed by an unweighted sum kernel. The main contributions of this paper are the following: we apply a recently developed non-sparse MKL variant to state-of-the-art concept recognition tasks from the application domain of computer vision. We provide insights on benefits and limits of non-sparse MKL and compare it against its direct competitors, the sum-kernel SVM and sparse MKL. We report empirical results for the PASCAL VOC 2009 Classification and ImageCLEF2010 Photo Annotation challenge data sets. Data sets (kernel matrices) as well as further information are available at http://doc.ml.tu-berlin.de/image_mkl/ (Accessed 2012 Jun 25). PMID:22936970
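
    A toy sketch of the unweighted sum-kernel baseline referred to above: Gram matrices from two different feature types are averaged and fed to an SVM as a precomputed kernel (MKL would instead learn the combination weights). The features, labels and kernel choices are placeholders.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel, chi2_kernel

rng = np.random.default_rng(0)
feat_colour = rng.random((120, 64))       # placeholder colour-histogram features (non-negative)
feat_shape = rng.random((120, 128))       # placeholder HoG-like features
y = rng.integers(0, 2, size=120)

# Unweighted sum (here: average) of two base kernels
K = 0.5 * rbf_kernel(feat_shape, gamma=0.05) + 0.5 * chi2_kernel(feat_colour, gamma=1.0)

clf = SVC(kernel="precomputed", C=1.0).fit(K, y)
print("train accuracy:", clf.score(K, y))
```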

  18. Double hard scattering without double counting

    Energy Technology Data Exchange (ETDEWEB)

    Diehl, Markus [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Gaunt, Jonathan R. [VU Univ. Amsterdam (Netherlands). NIKHEF Theory Group; Schoenwald, Kay [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany)

    2017-02-15

    Double parton scattering in proton-proton collisions includes kinematic regions in which two partons inside a proton originate from the perturbative splitting of a single parton. This leads to a double counting problem between single and double hard scattering. We present a solution to this problem, which allows for the definition of double parton distributions as operator matrix elements in a proton, and which can be used at higher orders in perturbation theory. We show how the evaluation of double hard scattering in this framework can provide a rough estimate for the size of the higher-order contributions to single hard scattering that are affected by double counting. In a numeric study, we identify situations in which these higher-order contributions must be explicitly calculated and included if one wants to attain an accuracy at which double hard scattering becomes relevant, and other situations where such contributions may be neglected.

  19. Double hard scattering without double counting

    International Nuclear Information System (INIS)

    Diehl, Markus; Gaunt, Jonathan R.

    2017-02-01

    Double parton scattering in proton-proton collisions includes kinematic regions in which two partons inside a proton originate from the perturbative splitting of a single parton. This leads to a double counting problem between single and double hard scattering. We present a solution to this problem, which allows for the definition of double parton distributions as operator matrix elements in a proton, and which can be used at higher orders in perturbation theory. We show how the evaluation of double hard scattering in this framework can provide a rough estimate for the size of the higher-order contributions to single hard scattering that are affected by double counting. In a numeric study, we identify situations in which these higher-order contributions must be explicitly calculated and included if one wants to attain an accuracy at which double hard scattering becomes relevant, and other situations where such contributions may be neglected.

  20. Fast analytical scatter estimation using graphics processing units.

    Science.gov (United States)

    Ingleby, Harry; Lippuner, Jonas; Rickey, Daniel W; Li, Yue; Elbakri, Idris

    2015-01-01

    To develop a fast patient-specific analytical estimator of first-order Compton and Rayleigh scatter in cone-beam computed tomography, implemented using graphics processing units. The authors developed an analytical estimator for first-order Compton and Rayleigh scatter in a cone-beam computed tomography geometry. The estimator was coded using NVIDIA's CUDA environment for execution on an NVIDIA graphics processing unit. Performance of the analytical estimator was validated by comparison with high-count Monte Carlo simulations for two different numerical phantoms. Monoenergetic analytical simulations were compared with monoenergetic and polyenergetic Monte Carlo simulations. Analytical and Monte Carlo scatter estimates were compared both qualitatively, from visual inspection of images and profiles, and quantitatively, using a scaled root-mean-square difference metric. Reconstruction of simulated cone-beam projection data of an anthropomorphic breast phantom illustrated the potential of this method as a component of a scatter correction algorithm. The monoenergetic analytical and Monte Carlo scatter estimates showed very good agreement. The monoenergetic analytical estimates showed good agreement for Compton single scatter and reasonable agreement for Rayleigh single scatter when compared with polyenergetic Monte Carlo estimates. For a voxelized phantom with dimensions 128 × 128 × 128 voxels and a detector with 256 × 256 pixels, the analytical estimator required 669 seconds for a single projection, using a single NVIDIA 9800 GX2 video card. Accounting for first order scatter in cone-beam image reconstruction improves the contrast to noise ratio of the reconstructed images. The analytical scatter estimator, implemented using graphics processing units, provides rapid and accurate estimates of single scatter and with further acceleration and a method to account for multiple scatter may be useful for practical scatter correction schemes.
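
    One physics building block of such a first-order Compton estimate, sketched under the assumption of scattering off a free electron: the Compton-shifted photon energy and the Klein-Nishina differential cross-section. Ray tracing, attenuation and detector response, which make up most of the authors' estimator, are not modelled here.

```python
import numpy as np

r_e = 2.8179403262e-15          # classical electron radius, m
m_e_c2 = 511.0                  # electron rest energy, keV

def compton(E_keV, theta):
    """Scattered photon energy and Klein-Nishina differential cross-section
    (m^2/sr) for incident energy E_keV and scattering angle theta (rad)."""
    k = E_keV / m_e_c2
    E_out = E_keV / (1.0 + k * (1.0 - np.cos(theta)))
    ratio = E_out / E_keV
    dsigma = 0.5 * r_e**2 * ratio**2 * (ratio + 1.0 / ratio - np.sin(theta) ** 2)
    return E_out, dsigma

theta = np.deg2rad(np.arange(0, 181, 30))
print(compton(60.0, theta))
```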

  1. Multi-scattering inversion for low model wavenumbers

    KAUST Repository

    Alkhalifah, Tariq Ali

    2015-08-19

    A successful full wavenumber inversion (FWI) implementation updates the low wavenumber model components first for a proper wavefield propagation description, and slowly adds the high-wavenumber, potentially scattering parts of the model. The low-wavenumber components can be extracted from the transmission parts of the recorded data given by direct arrivals or from the transmission parts of the single- and double-scattering wavefields developed from a predicted scatter field. We develop a combined inversion of data modeled from the source and those corresponding to single and double scattering to update both the velocity model and the component of the velocity (perturbation) responsible for the single and double scattering. The combined inversion helps us access most of the potential model wavenumber information that may be embedded in the data. A scattering angle filter is used to divide the gradient of the combined inversion so that initially the high-wavenumber (low scattering angle) components of the gradient are directed to the perturbation model and the low-wavenumber (high scattering angle) components to the velocity model. As our background velocity matures, the scattering angle divide is slowly lowered to allow more of the higher wavenumbers to contribute to the velocity model.

  2. Threshold and maximum power evolution of stimulated Brillouin scattering and Rayleigh backscattering in a single mode fiber segment

    International Nuclear Information System (INIS)

    Sanchez-Lara, R; Alvarez-Chavez, J A; Mendez-Martinez, F; De la Cruz-May, L; Perez-Sanchez, G G

    2015-01-01

    The behavior of stimulated Brillouin scattering (SBS) and Rayleigh backscattering phenomena, which limit the forward transmission power in modern, ultra-long haul optical communication systems such as dense wavelength division multiplexing systems, is analyzed via simulation and experimental investigation of threshold and maximum power. The evolution of SBS, Rayleigh scattering and forward powers is experimentally investigated with a 25 km segment of single mode fiber. Also, a simple algorithm to predict the generation of SBS is proposed, where two power-threshold criteria were used for comparison with experimental data. (paper)
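
    A back-of-the-envelope estimate of the SBS threshold using the common approximation P_th ≈ 21·K·A_eff/(g_B·L_eff) for a narrow-linewidth source; the fibre parameters below are typical textbook values, not those of the fibre measured in the paper.

```python
import numpy as np

alpha_dB_km = 0.2
alpha = alpha_dB_km / 4.343 / 1e3          # attenuation, 1/m
L = 25e3                                   # fibre length, m (25 km span as above)
A_eff = 80e-12                             # effective mode area, m^2 (assumed)
g_B = 5e-11                                # Brillouin gain coefficient, m/W (assumed)
K = 2.0                                    # polarisation factor (assumed)

L_eff = (1.0 - np.exp(-alpha * L)) / alpha # effective interaction length
P_th = 21.0 * K * A_eff / (g_B * L_eff)
print(f"effective length: {L_eff / 1e3:.1f} km, SBS threshold: {P_th * 1e3:.1f} mW")
```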

  3. Magnetic small-angle scattering of subthermal neutrons by internal stress fields in work-hardened nickel single crystals oriented for multiple glide

    International Nuclear Information System (INIS)

    Vorbrugg, W.; Schaerpf, O.

    1975-01-01

    The small-angle scattering of Ni single crystals with (111) and (100) axis orientation is measured by a photographic method in the work-hardened state after tensile deformation. Parameters are the external magnetic field H applied parallel to the axis, the plastic shear stress reached during deformation, and the elastic stress τ_el (0 ≤ τ_el ≤ τ_pl) applied to the deformed crystals during the experiments. The scattering is found to be anisotropic and characteristic for the chosen orientation. The quantitative photometric analysis shows that the parameters mentioned above only influence the intensity but not the distribution of the scattered neutrons. The scattering increases with the elastic stress and decreases with the magnetic field. In particular, in the unloaded state there is a linear relation between the scattered intensity and the plastic shear stress. (author)

  4. A method for manufacturing kernels of metallic oxides and the thus obtained kernels

    International Nuclear Information System (INIS)

    Lelievre Bernard; Feugier, Andre.

    1973-01-01

    A method is described for manufacturing fissile or fertile metal oxide kernels, consisting of adding at least one chemical compound capable of releasing ammonia to an aqueous solution of actinide nitrates, dispersing the thus obtained solution dropwise in a hot organic phase so as to gelify the drops and transform them into solid particles, and washing, drying and treating said particles so as to transform them into oxide kernels. Such a method is characterized in that the organic phase used in the gel-forming reaction comprises a mixture of two organic liquids, one of which acts as a solvent, whereas the other is a product capable of extracting the metal-salt anions from the drops while the gel-forming reaction is taking place. This can be applied to the so-called high-temperature nuclear reactors [fr

  5. New Fukui, dual and hyper-dual kernels as bond reactivity descriptors.

    Science.gov (United States)

    Franco-Pérez, Marco; Polanco-Ramírez, Carlos-A; Ayers, Paul W; Gázquez, José L; Vela, Alberto

    2017-06-21

    We define three new linear response indices with promising applications for bond reactivity using the mathematical framework of τ-CRT (finite temperature chemical reactivity theory). The τ-Fukui kernel is defined as the ratio between the fluctuations of the average electron density at two different points in space and the fluctuations in the average electron number, and is designed to integrate to the finite-temperature definition of the electronic Fukui function. When this kernel is condensed, it can be interpreted as a site-reactivity descriptor of the boundary region between two atoms. The τ-dual kernel corresponds to the first order response of the Fukui kernel and is designed to integrate to the finite temperature definition of the dual descriptor; it indicates the ambiphilic reactivity of a specific bond and enriches the traditional dual descriptor by allowing one to distinguish between the electron-accepting and electron-donating processes. Finally, the τ-hyper dual kernel is defined as the second-order derivative of the Fukui kernel and is proposed as a measure of the strength of ambiphilic bonding interactions. Although these quantities have not been proposed before, our results for the τ-Fukui kernel and the τ-dual kernel can also be derived in the zero-temperature formulation of chemical reactivity theory with, among other things, the widely-used parabolic interpolation model.

  6. Optimal kernel shape and bandwidth for atomistic support of continuum stress

    International Nuclear Information System (INIS)

    Ulz, Manfred H; Moran, Sean J

    2013-01-01

    The treatment of atomistic scale interactions via molecular dynamics simulations has recently found favour for multiscale modelling within engineering. The estimation of stress at a continuum point on the atomistic scale requires a pre-defined kernel function. This kernel function derives the stress at a continuum point by averaging the contribution from atoms within a region surrounding the continuum point. This averaging volume, and therefore the associated stress at a continuum point, is highly dependent on the bandwidth and shape of the kernel. In this paper we propose an effective and entirely data-driven strategy for simultaneously computing the optimal shape and bandwidth for the kernel. We thoroughly evaluate our proposed approach on copper using three classical elasticity problems. Our evaluation yields three key findings: firstly, our technique can provide a physically meaningful estimation of kernel bandwidth; secondly, we show that a uniform kernel is preferred, thereby justifying the default selection of this kernel shape in future work; and thirdly, we can reliably estimate both of these attributes in a data-driven manner, obtaining values that lead to an accurate estimation of the stress at a continuum point. (paper)
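
    A schematic of the kernel averaging being optimised above, using the uniform (top-hat) kernel that the paper finds preferable: per-atom stress contributions within a bandwidth h of the continuum point are averaged over the kernel support volume. Atom positions and per-atom stresses are random placeholders, not molecular dynamics output.

```python
import numpy as np

rng = np.random.default_rng(0)
positions = rng.random((5000, 3)) * 50.0           # atom positions in a 50 x 50 x 50 box
atom_stress = rng.normal(size=(5000, 3, 3))        # placeholder per-atom stress tensors

def uniform_kernel_stress(x, h):
    """Continuum stress at point x from atoms within bandwidth h (top-hat kernel)."""
    d = np.linalg.norm(positions - x, axis=1)
    w = (d <= h).astype(float)
    w /= (4.0 / 3.0) * np.pi * h**3                # normalise by the kernel support volume
    return np.tensordot(w, atom_stress, axes=1)

print(uniform_kernel_stress(np.array([25.0, 25.0, 25.0]), h=6.0))
```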

  7. Multivariable Christoffel-Darboux Kernels and Characteristic Polynomials of Random Hermitian Matrices

    Directory of Open Access Journals (Sweden)

    Hjalmar Rosengren

    2006-12-01

    We study multivariable Christoffel-Darboux kernels, which may be viewed as reproducing kernels for antisymmetric orthogonal polynomials, and also as correlation functions for products of characteristic polynomials of random Hermitian matrices. Using their interpretation as reproducing kernels, we obtain simple proofs of Pfaffian and determinant formulas, as well as Schur polynomial expansions, for such kernels. In subsequent work, these results are applied in combinatorics (enumeration of marked shifted tableaux and number theory (representation of integers as sums of squares.
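
    For orientation, the classical one-variable Christoffel-Darboux kernel that these multivariable kernels generalise, stated for orthonormal polynomials p_j with leading coefficients k_j (a standard identity included as background, not a formula from the paper):

```latex
% One-variable Christoffel--Darboux kernel (standard identity)
K_n(x,y) \;=\; \sum_{j=0}^{n} p_j(x)\,p_j(y)
        \;=\; \frac{k_n}{k_{n+1}}\,
              \frac{p_{n+1}(x)\,p_n(y) - p_n(x)\,p_{n+1}(y)}{x-y}.
```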

  8. A multi-resolution approach to heat kernels on discrete surfaces

    KAUST Repository

    Vaxman, Amir

    2010-07-26

    Studying the behavior of the heat diffusion process on a manifold is emerging as an important tool for analyzing the geometry of the manifold. Unfortunately, the high complexity of the computation of the heat kernel - the key to the diffusion process - limits this type of analysis to 3D models of modest resolution. We show how to use the unique properties of the heat kernel of a discrete two dimensional manifold to overcome these limitations. Combining a multi-resolution approach with a novel approximation method for the heat kernel at short times results in an efficient and robust algorithm for computing the heat kernels of detailed models. We show experimentally that our method can achieve good approximations in a fraction of the time required by traditional algorithms. Finally, we demonstrate how these heat kernels can be used to improve a diffusion-based feature extraction algorithm. © 2010 ACM.
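
    A brute-force sketch of the object being approximated above: the heat kernel of a discretised surface built from the eigendecomposition of a (here, purely combinatorial) graph Laplacian, H_t = Phi exp(-t Lambda) Phi^T. The paper's contribution is computing this efficiently for detailed models, which this direct computation on a toy graph is not.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
A = (rng.random((n, n)) < 0.02).astype(float)   # toy mesh connectivity (placeholder)
A = np.maximum(A, A.T)                          # symmetrise the adjacency matrix
np.fill_diagonal(A, 0.0)
L = np.diag(A.sum(axis=1)) - A                  # combinatorial graph Laplacian

vals, vecs = np.linalg.eigh(L)                  # full spectrum (fine at this toy size)

def heat_kernel(t, k=50):
    # Truncate to the k smallest eigenpairs, as is common for large meshes
    return (vecs[:, :k] * np.exp(-t * vals[:k])) @ vecs[:, :k].T

print(heat_kernel(0.1)[0, :5])                  # heat kernel row for vertex 0 at t = 0.1
```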

  9. Compactly Supported Basis Functions as Support Vector Kernels for Classification.

    Science.gov (United States)

    Wittek, Peter; Tan, Chew Lim

    2011-10-01

    Wavelet kernels have been introduced for both support vector regression and classification. Most of these wavelet kernels do not use the inner product of the embedding space, but use wavelets in a similar fashion to radial basis function kernels. Wavelet analysis is typically carried out on data with a temporal or spatial relation between consecutive data points. We argue that it is possible to order the features of a general data set so that consecutive features are statistically related to each other, thus enabling us to interpret the vector representation of an object as a series of equally or randomly spaced observations of a hypothetical continuous signal. By approximating the signal with compactly supported basis functions and employing the inner product of the embedding L2 space, we gain a new family of wavelet kernels. Empirical results show a clear advantage in favor of these kernels.

  10. Particle levitation and laboratory scattering

    International Nuclear Information System (INIS)

    Reid, Jonathan P.

    2009-01-01

    Measurements of light scattering from aerosol particles can provide a non-intrusive in situ method for characterising particle size distributions, composition, refractive index, phase and morphology. When coupled with techniques for isolating single particles, considerable information on the evolution of the properties of a single particle can be gained during changes in environmental conditions or chemical processing. Electrostatic, acoustic and optical techniques have been developed over many decades for capturing and levitating single particles. In this review, we will focus on studies of particles in the Mie size regime and consider the complementarity of electrostatic and optical techniques for levitating particles and elastic and inelastic light scattering methods for characterising particles. In particular, we will review the specific advantages of establishing a single-beam gradient force optical trap (optical tweezers) for manipulating single particles or arrays of particles. Recent developments in characterising the nature of the optical trap, in applying elastic and inelastic light scattering measurements for characterising trapped particles, and in manipulating particles will be considered.

  11. Genome-wide Association Analysis of Kernel Weight in Hard Winter Wheat

    Science.gov (United States)

    Wheat kernel weight is an important and heritable component of wheat grain yield and a key predictor of flour extraction. Genome-wide association analysis was conducted to identify genomic regions associated with kernel weight and kernel weight environmental response in 8 trials of 299 hard winter ...

  12. Enhanced light scattering of the forbidden longitudinal optical phonon mode studied by micro-Raman spectroscopy on single InN nanowires

    International Nuclear Information System (INIS)

    Schaefer-Nolte, E O; Stoica, T; Gotschke, T; Limbach, F A; Gruetzmacher, D; Calarco, R; Sutter, E; Sutter, P

    2010-01-01

    In the literature, there are controversies on the interpretation of the appearance in InN Raman spectra of a strong scattering peak in the energy region of the unscreened longitudinal optical (LO) phonons, although a shift caused by the phonon-plasmon interaction is expected for the high conductance observed in this material. Most measurements on light scattering are performed on ensembles of InN nanowires (NWs). However, it is important to investigate the behavior of individual nanowires and here we report on micro-Raman measurements on single nanowires. When changing the polarization direction of the incident light from parallel to perpendicular to the wire, the expected reduction of the Raman scattering was observed for transversal optical (TO) and E2 phonon scattering modes, while a strong symmetry-forbidden LO mode was observed independently of the laser polarization direction. Single Mg- and Si-doped crystalline InN nanowires were also investigated. Magnesium doping results in a sharpening of the Raman peaks, while silicon doping leads to an asymmetric broadening of the LO peak. The results can be explained based on the influence of the high electron concentration with a strong contribution of the surface accumulation layer and the associated internal electric field.

  13. Enhanced Light Scattering of the Forbidden longitudinal Optical Phonon Mode Studied by Micro-Raman Spectroscopy on Single InN nanowires

    International Nuclear Information System (INIS)

    Sutter, E.; Schafer-Nolte, E.O.; Stoica, T.; Gotschke, T.; Limbach, F.A.; Sutter, P.; Grutzmacher, D.; Calarco, R.

    2010-01-01

    In the literature, there are controversies on the interpretation of the appearance in InN Raman spectra of a strong scattering peak in the energy region of the unscreened longitudinal optical (LO) phonons, although a shift caused by the phonon-plasmon interaction is expected for the high conductance observed in this material. Most measurements on light scattering are performed on ensembles of InN nanowires (NWs). However, it is important to investigate the behavior of individual nanowires and here we report on micro-Raman measurements on single nanowires. When changing the polarization direction of the incident light from parallel to perpendicular to the wire, the expected reduction of the Raman scattering was observed for transversal optical (TO) and E2 phonon scattering modes, while a strong symmetry-forbidden LO mode was observed independently of the laser polarization direction. Single Mg- and Si-doped crystalline InN nanowires were also investigated. Magnesium doping results in a sharpening of the Raman peaks, while silicon doping leads to an asymmetric broadening of the LO peak. The results can be explained based on the influence of the high electron concentration with a strong contribution of the surface accumulation layer and the associated internal electric field.

  14. Enhanced light scattering of the forbidden longitudinal optical phonon mode studied by micro-Raman spectroscopy on single InN nanowires.

    Science.gov (United States)

    Schäfer-Nolte, E O; Stoica, T; Gotschke, T; Limbach, F A; Sutter, E; Sutter, P; Grützmacher, D; Calarco, R

    2010-08-06

    In the literature, there are controversies on the interpretation of the appearance in InN Raman spectra of a strong scattering peak in the energy region of the unscreened longitudinal optical (LO) phonons, although a shift caused by the phonon-plasmon interaction is expected for the high conductance observed in this material. Most measurements on light scattering are performed on ensembles of InN nanowires (NWs). However, it is important to investigate the behavior of individual nanowires and here we report on micro-Raman measurements on single nanowires. When changing the polarization direction of the incident light from parallel to perpendicular to the wire, the expected reduction of the Raman scattering was observed for transversal optical (TO) and E(2) phonon scattering modes, while a strong symmetry-forbidden LO mode was observed independently of the laser polarization direction. Single Mg- and Si-doped crystalline InN nanowires were also investigated. Magnesium doping results in a sharpening of the Raman peaks, while silicon doping leads to an asymmetric broadening of the LO peak. The results can be explained based on the influence of the high electron concentration with a strong contribution of the surface accumulation layer and the associated internal electric field.

  15. Tailoring surface plasmon resonance and dipole cavity plasmon modes of scattering cross section spectra on the single solid-gold/gold-shell nanorod

    International Nuclear Information System (INIS)

    Chou Chau, Yuan-Fong; Lim, Chee Ming; Kumara, N. T. R. N.; Yoong, Voo Nyuk; Lee, Chuanyo; Huang, Hung Ji; Lin, Chun-Ting; Chiang, Hai-Pang

    2016-01-01

    Tunable surface plasmon resonance (SPR) and dipole cavity plasmon modes of the scattering cross section (SCS) spectra on the single solid-gold/gold-shell nanorod have been numerically investigated by using the finite element method. Various effects, such as the influence of SCS spectra under x- and y-polarizations on the surface of the single solid-gold/gold-shell nanorod, are discussed in detail. With the single gold-shell nanorod, one can independently tune the relative SCS spectrum width by controlling the rod length and rod diameter, and the surface scattering by varying the shell thickness and polarization direction, as well as the dipole peak energy. These behaviors are consistent with the properties of localized SPRs and offer a way to optically control and produce selected emission wavelengths from the single solid-gold/gold-shell nanorod. The electric and magnetic field distributions provide a qualitative idea of the geometrical properties of the single solid-gold/gold-shell nanorod at plasmon resonance.

  16. A particle swarm optimized kernel-based clustering method for crop mapping from multi-temporal polarimetric L-band SAR observations

    Science.gov (United States)

    Tamiminia, Haifa; Homayouni, Saeid; McNairn, Heather; Safari, Abdoreza

    2017-06-01

    Polarimetric Synthetic Aperture Radar (PolSAR) data, thanks to their specific characteristics such as high resolution, weather and daylight independence, have become a valuable source of information for environment monitoring and management. The discrimination capability of observations acquired by these sensors can be used for land cover classification and mapping. The aim of this paper is to propose an optimized kernel-based C-means clustering algorithm for agriculture crop mapping from multi-temporal PolSAR data. Firstly, several polarimetric features are extracted from preprocessed data. These features are linear polarization intensities, and several statistical and physics-based decompositions such as Cloude-Pottier, Freeman-Durden and Yamaguchi techniques. Then, the kernelized version of hard and fuzzy C-means clustering algorithms are applied to these polarimetric features in order to identify crop types. The kernel function, unlike conventional partitioning clustering algorithms, handles non-spherical and non-linear patterns in the data structure, allowing them to be clustered easily. In addition, in order to enhance the results, Particle Swarm Optimization (PSO) algorithm is used to tune the kernel parameters, cluster centers and to optimize features selection. The efficiency of this method was evaluated by using multi-temporal UAVSAR L-band images acquired over an agricultural area near Winnipeg, Manitoba, Canada, during June and July in 2012. The results demonstrate more accurate crop maps using the proposed method when compared to the classical approaches (e.g. a 12% improvement in general). In addition, when the optimization technique is used, greater improvement is observed in crop classification, e.g. 5% overall. Furthermore, a strong relationship between Freeman-Durden volume scattering component, which is related to canopy structure, and phenological growth stages is observed.
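
    A bare-bones sketch of the kernelised hard C-means assignment step underlying the proposed method (without the PSO tuning of kernel parameters, cluster centres and feature selection): points are assigned by their distance to cluster means computed implicitly in the kernel-induced feature space. The data are toy points, not PolSAR features.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (80, 5)), rng.normal(4, 1, (80, 5))])   # toy "pixels"
K = rbf_kernel(X, gamma=0.2)
labels = rng.integers(0, 2, size=len(X))         # random initial clustering

for _ in range(20):
    d2 = np.zeros((len(X), 2))
    for c in range(2):
        idx = labels == c
        # Squared feature-space distance to the implicit centroid of cluster c
        d2[:, c] = (np.diag(K)
                    - 2.0 * K[:, idx].mean(axis=1)
                    + K[np.ix_(idx, idx)].mean())
    labels = d2.argmin(axis=1)

print(np.bincount(labels))
```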

  17. Generalized synthetic kernel approximation for elastic moderation of fast neutrons

    International Nuclear Information System (INIS)

    Yamamoto, Koji; Sekiya, Tamotsu; Yamamura, Yasunori.

    1975-01-01

    A method of synthetic kernel approximation is examined in some detail with a view to simplifying the treatment of the elastic moderation of fast neutrons. A sequence of unified kernels (f_N) is introduced, which is then divided into two subsequences (W_n) and (G_n) according to whether N is odd (W_n = f_(2n-1), n = 1, 2, ...) or even (G_n = f_(2n), n = 0, 1, ...). The W_1 and G_1 kernels correspond to the usual Wigner and GG kernels, respectively, and the W_n and G_n kernels for n ≥ 2 represent generalizations thereof. It is shown that the W_n kernel solution with a relatively small n (≥ 2) is superior on the whole to the G_n kernel solution for the same index n, while both converge to the exact values with increasing n. To evaluate the collision density numerically and rapidly, a simple recurrence formula is derived. In the asymptotic region (except near resonances), this recurrence formula allows calculation with a relatively coarse mesh width whenever h_a ≤ 0.05 at least. For calculations in the transient lethargy region, a mesh width of order ε/10 is small enough to evaluate the approximate collision density ψ_N with an accuracy comparable to that obtained analytically. It is shown that, with the present method, an order of approximation of about n = 7 should yield a practically correct solution deviating not more than 1% in collision density. (auth.)

  18. Rapid, green synthesis and surface-enhanced Raman scattering effect of single-crystal silver nanocubes

    Science.gov (United States)

    Mao, Aiqin; Jin, Xia; Gu, Xiaolong; Wei, Xiaoqing; Yang, Guojing

    2012-08-01

    Single-crystal silver (Ag) nanocubes have been synthesized by a rapid and green method at room temperature by adding sodium hydroxide solution to the mixed solutions of silver nitrate, glucose and polyvinylpyrrolidone (PVP). X-ray diffraction (XRD), ultraviolet-visible (UV-visible) spectroscopy and transmission electron microscopy (TEM) were used to characterize the phase composition and morphology. The results showed that the as-prepared particles were single-crystal Ag nanocubes with edge lengths of around 77 nm and a growth direction along {1 0 0} facets. Using the as-prepared Ag nanocubes as substrates for surface-enhanced Raman scattering (SERS) experiments on crystal violet (CV), the SERS enhancement factor was measured to be 5.5 × 10⁴, indicating potential applications in chemical and biological analysis.

  19. Estimation of Single-Crystal Elastic Constants of Polycrystalline Materials from Back-Scattered Grain Noise

    International Nuclear Information System (INIS)

    Haldipur, P.; Margetan, F. J.; Thompson, R. B.

    2006-01-01

    Single-crystal elastic stiffness constants are important input parameters for many calculations in material science. There are well established methods to measure these constants using single-crystal specimens, but such specimens are not always readily available. The ultrasonic properties of metal polycrystals, such as velocity, attenuation, and backscattered grain noise characteristics, depend in part on the single-crystal elastic constants. In this work we consider the estimation of elastic constants from UT measurements and grain-sizing data. We confine ourselves to a class of particularly simple polycrystalline microstructures, found in some jet-engine Nickel alloys, which are single-phase, cubic, equiaxed, and untextured. In past work we described a method to estimate the single-crystal elastic constants from measured ultrasonic velocity and attenuation data accompanied by metallographic analysis of grain size. However, that methodology assumes that all attenuation is due to grain scattering, and thus is not valid if appreciable absorption is present. In this work we describe an alternative approach which uses backscattered grain noise data in place of attenuation data. Efforts to validate the method using a pure copper specimen are discussed, and new results for two jet-engine Nickel alloys are presented

  20. Effect of Palm Kernel Cake Replacement and Enzyme ...

    African Journals Online (AJOL)

    A feeding trial which lasted for twelve weeks was conducted to study the performance of finisher pigs fed five different levels of palm kernel cake replacement for maize (0%, 40%, 40%, 60%, 60%) in a maize-palm kernel cake based ration with or without enzyme supplementation. It was a completely randomized design ...