WorldWideScience

Sample records for isotropic scattering kernel

  1. Biasing anisotropic scattering kernels for deep-penetration Monte Carlo calculations

    International Nuclear Information System (INIS)

    Carter, L.L.; Hendricks, J.S.

    1983-01-01

The exponential transform is often used to improve the efficiency of deep-penetration Monte Carlo calculations. This technique is usually implemented by biasing the distance-to-collision kernel of the transport equation while leaving the scattering kernel unchanged. Dwivedi obtained significant improvements in efficiency by biasing an isotropic scattering kernel as well as the distance-to-collision kernel. This idea is extended here to anisotropic scattering, particularly the highly forward-peaked Klein-Nishina scattering of gamma rays.
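As a rough illustration of the distance-to-collision biasing described above (a minimal sketch, not code from the paper), the following samples a flight distance from the exponentially transformed pdf and returns the compensating weight; `sigma_t`, `p`, and `mu` denote the total cross section, the transform parameter, and the direction cosine with respect to the preferred direction.

```python
import math
import random

def sample_biased_distance(sigma_t, p, mu, rng):
    """Sample a distance-to-collision from the exponentially transformed pdf
    p*(s) = sigma* exp(-sigma* s), with sigma* = sigma_t * (1 - p * mu),
    and return (s, w), where w is the weight factor that keeps the weighted
    sample unbiased with respect to the analog pdf sigma_t exp(-sigma_t s)."""
    sigma_star = sigma_t * (1.0 - p * mu)
    s = -math.log(1.0 - rng.random()) / sigma_star   # inverse-CDF sampling
    # weight = analog pdf / biased pdf, evaluated at the sampled distance
    w = (sigma_t / sigma_star) * math.exp(-(sigma_t - sigma_star) * s)
    return s, w
```

For `mu` near +1 the biased cross section shrinks and flights stretch toward the region of interest, with the weight compensating so that any weighted tally remains unbiased.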

  2. Point kernels and superposition methods for scatter dose calculations in brachytherapy

    International Nuclear Information System (INIS)

    Carlsson, A.K.

    2000-01-01

Point kernels have been generated and applied for calculation of scatter dose distributions around monoenergetic point sources for photon energies ranging from 28 to 662 keV. Three different approaches for dose calculations have been compared: a single-kernel superposition method, a single-kernel superposition method in which the point kernels are approximated as isotropic, and a novel 'successive-scattering' superposition method for improved modelling of the dose from multiply scattered photons. An extended version of the EGS4 Monte Carlo code was used for generating the kernels and for benchmarking the absorbed dose distributions calculated with the superposition methods. It is shown that dose calculation by superposition at and below 100 keV can be simplified by using isotropic point kernels. Compared to the assumption of full in-scattering made by algorithms currently in clinical use, the single-kernel superposition method improves dose calculations in a half-phantom consisting of air and water. Further improvements are obtained using the successive-scattering superposition method, which reduces the overestimates of dose close to the phantom surface usually associated with kernel superposition methods at brachytherapy photon energies. It is also shown that scatter dose point kernels can be parametrized as biexponential functions, making them suitable for use with an effective implementation of the collapsed cone superposition algorithm. (author)
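The single-kernel superposition idea can be sketched in a few lines: an isotropic radial kernel, parametrized here as a biexponential over inverse-square falloff, is summed over all source points. The coefficients below are illustrative placeholders, not fitted values from the paper.

```python
import math

def scatter_kernel(r, A=0.9, a=0.15, B=0.1, b=0.03):
    """Biexponential parametrization of a radial scatter-dose point kernel,
        k(r) = (A exp(-a r) + B exp(-b r)) / r^2.
    The coefficients A, a, B, b are hypothetical, for illustration only."""
    return (A * math.exp(-a * r) + B * math.exp(-b * r)) / (r * r)

def superpose(sources, point):
    """Single-kernel superposition: sum the isotropic point kernel over all
    source points, weighted by each source's strength."""
    return sum(strength * scatter_kernel(math.dist(pos, point))
               for pos, strength in sources)
```

A successive-scattering method, as in the abstract, would replace the single kernel by a sum of per-scatter-order kernels; the superposition loop itself is unchanged.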

  3. Method for calculating anisotropic neutron transport using scattering kernel without polynomial expansion

    International Nuclear Information System (INIS)

    Takahashi, Akito; Yamamoto, Junji; Ebisuya, Mituo; Sumita, Kenji

    1979-01-01

A new method for calculating anisotropic neutron transport is proposed for the angular spectral analysis of D-T fusion reactor neutronics. The method is based on the transport equation with a new type of anisotropic scattering kernel formulated as a single function I_i(μ′, μ) instead of a polynomial expansion such as Legendre polynomials. When angular flux spectra are calculated using scattering kernels expanded in Legendre polynomials, oscillations with negative flux are often observed; in principle, these oscillations disappear with the new method. In this work, anisotropic scattering kernels are discussed for elastic scattering and for inelastic scattering exciting discrete energy levels; the other scattering processes are included in isotropic kernels. An approximation method, using a first-collision source written in terms of the I_i(μ′, μ) function, was introduced to attenuate the oscillations when scattering kernels with the Legendre polynomial expansion must be used. Calculated results with this approximation showed remarkable improvement in the analysis of angular flux spectra in a slab system of lithium metal with a D-T neutron source. (author)

  4. SCAP-82, Single Scattering, Albedo Scattering, Point-Kernel Analysis in Complex Geometry

    International Nuclear Information System (INIS)

    Disney, R.K.; Vogtman, S.E.

    1987-01-01

1 - Description of problem or function: SCAP solves for radiation transport in complex geometries using the single-scatter or albedo-scatter point kernel method. The program is designed to calculate the neutron or gamma-ray radiation level at detector points located within or outside a complex radiation scatter source geometry or a user-specified discrete scattering volume. Geometry is describable by zones bounded by intersecting quadratic surfaces, with an arbitrary maximum number of boundary surfaces per zone. Anisotropic point sources are describable as pointwise energy-dependent distributions of polar angles on a meridian; isotropic point sources may also be specified. The attenuation function for gamma rays is an exponential function on the primary source leg and the scatter leg, with a buildup-factor approximation to account for multiple scatter on the scatter leg. The neutron attenuation function is an exponential function using neutron removal cross sections on the primary source leg and scatter leg. Line or volumetric sources can be represented as a distribution of isotropic point sources, with uncollided line-of-sight attenuation and buildup calculated between each source point and the detector point. 2 - Method of solution: A point kernel method with an anisotropic or isotropic point-source representation is used; line-of-sight material attenuation and inverse-square spatial attenuation are applied between the source point and scatter points, and between the scatter points and the detector point. A direct summation of individual point-source results is obtained. 3 - Restrictions on the complexity of the problem: The SCAP program is written with completely flexible dimensioning, so that no restrictions are imposed on the number of energy groups or geometric zones. The geometric zone description is restricted to zones defined by boundary surfaces given by the general quadratic equation or one of its degenerate forms.
The only restriction in the program is that the total
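The direct-summation point-kernel scheme described above can be sketched as follows. This is a simplified illustration, not SCAP code: the linear buildup factor B(μd) = 1 + μd is a placeholder, whereas SCAP uses tabulated buildup approximations.

```python
import math

def point_kernel_response(sources, detector, mu, buildup=lambda x: 1.0 + x):
    """Direct summation of isotropic point-source kernels in the spirit of a
    point-kernel code: exponential line-of-sight attenuation exp(-mu*d),
    inverse-square spreading 1/(4 pi d^2), and a buildup-factor
    approximation for multiple scatter on the path."""
    total = 0.0
    for pos, strength in sources:
        d = math.dist(pos, detector)
        md = mu * d
        total += strength * buildup(md) * math.exp(-md) / (4.0 * math.pi * d * d)
    return total
```

A single-scatter calculation applies the same kernel twice, once on the source leg to each scatter point and once on the scatter leg to the detector, summing over the scattering volume.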

  5. Kernel integration scatter model for parallel beam gamma camera and SPECT point source response

    International Nuclear Information System (INIS)

    Marinkovic, P.M.

    2001-01-01

Scatter correction is a prerequisite for quantitative single photon emission computed tomography (SPECT). In this paper a kernel integration scatter model for the parallel-beam gamma camera and SPECT point-source response, based on the Klein-Nishina formula, is proposed. The method models the primary photon distribution as well as first-order Compton scattering. It also includes a correction for multiple scattering, applying a point isotropic single-medium buildup factor for the path segment between the point of scatter and the point of detection. Gamma-ray attenuation in the imaged object, based on a known μ-map distribution, is considered as well. The intrinsic spatial resolution of the camera is approximated by a simple Gaussian function. The collimator is modeled simply, using acceptance angles derived from its physical dimensions: any gamma ray within this acceptance angle is passed through the collimator to the crystal. Septal penetration and scatter in the collimator were not included in the model. The method was validated by comparison with MCNP-4a Monte Carlo numerical phantom simulations, with excellent agreement. Physical phantom experiments to confirm the method are planned. (author)

  6. Ideal gas scattering kernel for energy dependent cross-sections

    International Nuclear Information System (INIS)

    Rothenstein, W.; Dagan, R.

    1998-01-01

A third, and final, paper on the calculation of the joint kernel for neutron scattering by an ideal gas in thermal agitation is presented, for the case in which the scattering cross-section is energy dependent. The kernel is a function of the neutron energy after scattering and of the cosine of the scattering angle, as in the case of the ideal gas kernel for a constant bound-atom scattering cross-section. The final expression is suitable for numerical calculations.

  7. Ideal Gas Resonance Scattering Kernel Routine for the NJOY Code

    International Nuclear Information System (INIS)

    Rothenstein, W.

    1999-01-01

In a recent publication an expression for the temperature-dependent double-differential ideal gas scattering kernel is derived for the case of scattering cross sections that are energy dependent. Some tabulations and graphical representations of the characteristics of these kernels are presented in Ref. 2. They demonstrate the increased probability that neutron scattering by a heavy nuclide near one of its pronounced resonances will bring the neutron energy nearer to the resonance peak. This enhances upscattering when a neutron with energy just below that of the resonance peak collides with such a nuclide. A routine for using the new kernel has now been introduced into the NJOY code. Here, its principal features are described, followed by comparisons between scattering data obtained with the new kernel and with the standard ideal gas kernel, when such comparisons are meaningful (i.e., for constant values of the scattering cross section at 0 K). The new ideal gas kernel for a variable 0 K cross section σ_s(E) leads to the correct Doppler-broadened σ_s(E, T) at temperature T.
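The ideal-gas averaging that underlies these kernels can be illustrated numerically. The sketch below (a toy model of the physics only; NJOY does not work this way internally) estimates the effective cross section seen by a neutron of speed `v_n` by averaging the reaction rate over Maxwellian target velocities.

```python
import math
import random

def effective_sigma(sigma, v_n, kT_over_m, n=100000, seed=7):
    """Monte Carlo estimate of the ideal-gas (free-gas) effective scattering
    cross section seen by a neutron of speed v_n,
        sigma_eff(v_n) = < v_r * sigma(v_r) > / v_n,
    averaging over target velocities V drawn from a Maxwellian with the given
    kT/m for the target nuclide; v_r = |v_n - V| is the relative speed."""
    rng = random.Random(seed)
    s = math.sqrt(kT_over_m)  # std dev of each Cartesian velocity component
    acc = 0.0
    for _ in range(n):
        vx, vy, vz = (rng.gauss(0.0, s) for _ in range(3))
        v_r = math.sqrt((v_n - vx) ** 2 + vy ** 2 + vz ** 2)
        acc += v_r * sigma(v_r)
    return acc / (n * v_n)
```

A 1/v cross section is exactly invariant under this averaging, while a constant cross section is enhanced at low neutron speed; for a resonant σ(v_r), the weighting by the cross section at the relative speed is precisely what produces the enhanced upscattering discussed in the abstract.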

  8. Graphical analyses of connected-kernel scattering equations

    International Nuclear Information System (INIS)

    Picklesimer, A.

    1982-10-01

    Simple graphical techniques are employed to obtain a new (simultaneous) derivation of a large class of connected-kernel scattering equations. This class includes the Rosenberg, Bencze-Redish-Sloan, and connected-kernel multiple scattering equations as well as a host of generalizations of these and other equations. The graphical method also leads to a new, simplified form for some members of the class and elucidates the general structural features of the entire class

  9. Graphical analyses of connected-kernel scattering equations

    International Nuclear Information System (INIS)

    Picklesimer, A.

    1983-01-01

    Simple graphical techniques are employed to obtain a new (simultaneous) derivation of a large class of connected-kernel scattering equations. This class includes the Rosenberg, Bencze-Redish-Sloan, and connected-kernel multiple scattering equations as well as a host of generalizations of these and other equations. The basic result is the application of graphical methods to the derivation of interaction-set equations. This yields a new, simplified form for some members of the class and elucidates the general structural features of the entire class

  10. Calculation of the thermal neutron scattering kernel using the synthetic model. Pt. 2. Zero-order energy transfer kernel

    International Nuclear Information System (INIS)

    Drozdowicz, K.

    1995-01-01

A comprehensive unified description of the application of Granada's Synthetic Model to slow-neutron scattering by molecular systems is continued. Detailed formulae for the zero-order energy transfer kernel are presented, based on the general formalism of the model. An explicit analytical formula for the total scattering cross section as a function of the incident neutron energy is also obtained. Expressions of the free gas model for the zero-order scattering kernel and for the total scattering kernel are considered as a sub-case of the Synthetic Model. (author). 10 refs

  11. Study on the scattering law and scattering kernel of hydrogen in zirconium hydride

    International Nuclear Information System (INIS)

    Jiang Xinbiao; Chen Wei; Chen Da; Yin Banghua; Xie Zhongsheng

    1999-01-01

The nuclear analytical model for calculating the scattering law and scattering kernel for the uranium zirconium hydride reactor is described. Based on the acoustic and optical model of zirconium hydride, its frequency distribution function f(ω) is given, and the scattering law of hydrogen in zirconium hydride is obtained with GASKET. The scattering kernel σ_l(E₀→E) of hydrogen bound in zirconium hydride is provided by the SMP code in the standard WIMS cross section library. With this library, WIMS is used to calculate the thermal neutron energy spectrum of the fuel cell. The results are satisfactory

  12. Analytic scattering kernels for neutron thermalization studies

    International Nuclear Information System (INIS)

    Sears, V.F.

    1990-01-01

Current plans call for the inclusion of a liquid hydrogen or deuterium cold source in the NRU replacement vessel. This report is part of an ongoing study of neutron thermalization in such a cold source. Here, we develop a simple analytical model for the scattering kernel of monatomic and diatomic liquids. We also present the results of extensive numerical calculations based on this model for liquid hydrogen, liquid deuterium, and mixtures of the two. These calculations demonstrate the dependence of the scattering kernel on the incident and scattered-neutron energies, the behavior near rotational thresholds, the dependence on the centre-of-mass pair correlations, the dependence on the ortho concentration, and the dependence on the deuterium concentration in H2/D2 mixtures. The total scattering cross sections are also calculated and compared with available experimental results.

  13. Scattering kernels and cross sections working group

    International Nuclear Information System (INIS)

    Russell, G.; MacFarlane, B.; Brun, T.

    1998-01-01

    Topics addressed by this working group are: (1) immediate needs of the cold-moderator community and how to fill them; (2) synthetic scattering kernels; (3) very simple synthetic scattering functions; (4) measurements of interest; and (5) general issues. Brief summaries are given for each of these topics

  14. Proof of the formula for the ideal gas scattering kernel for nuclides with strongly energy dependent scattering cross sections

    International Nuclear Information System (INIS)

    Rothenstein, W.

    2004-01-01

The current study is a sequel to a paper by Rothenstein and Dagan [Ann. Nucl. Energy 25 (1998) 209], where the ideal-gas-based kernel for scatterers with internal structure was introduced. This double-differential kernel includes the neutron energy after scattering as well as the cosine of the scattering angle for isotopes with strong scattering resonances. A new mathematical formalism enables the inclusion of the new kernel in NJOY [MacFarlane, R.E., Muir, D.W., 1994. The NJOY Nuclear Data Processing System Version 91 (LA-12740-M)]. Moreover, the computational time of the new kernel is reduced significantly, making it feasible for practical application. The completeness of the new kernel is proven mathematically and demonstrated numerically. Modifications necessary to remove the existing inconsistency of the secondary energy distribution in NJOY are presented

  15. Scatter kernel estimation with an edge-spread function method for cone-beam computed tomography imaging

    International Nuclear Information System (INIS)

    Li Heng; Mohan, Radhe; Zhu, X Ronald

    2008-01-01

    The clinical applications of kilovoltage x-ray cone-beam computed tomography (CBCT) have been compromised by the limited quality of CBCT images, which typically is due to a substantial scatter component in the projection data. In this paper, we describe an experimental method of deriving the scatter kernel of a CBCT imaging system. The estimated scatter kernel can be used to remove the scatter component from the CBCT projection images, thus improving the quality of the reconstructed image. The scattered radiation was approximated as depth-dependent, pencil-beam kernels, which were derived using an edge-spread function (ESF) method. The ESF geometry was achieved with a half-beam block created by a 3 mm thick lead sheet placed on a stack of slab solid-water phantoms. Measurements for ten water-equivalent thicknesses (WET) ranging from 0 cm to 41 cm were taken with (half-blocked) and without (unblocked) the lead sheet, and corresponding pencil-beam scatter kernels or point-spread functions (PSFs) were then derived without assuming any empirical trial function. The derived scatter kernels were verified with phantom studies. Scatter correction was then incorporated into the reconstruction process to improve image quality. For a 32 cm diameter cylinder phantom, the flatness of the reconstructed image was improved from 22% to 5%. When the method was applied to CBCT images for patients undergoing image-guided therapy of the pelvis and lung, the variation in selected regions of interest (ROIs) was reduced from >300 HU to <100 HU. We conclude that the scatter reduction technique utilizing the scatter kernel effectively suppresses the artifact caused by scatter in CBCT.
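The core step of an edge-spread-function measurement is differentiating the measured edge profile to obtain a line-spread function. The following sketch (a generic illustration, not the authors' code) does this with central differences; the CBCT scatter kernels above are derived analogously from half-blocked versus unblocked projections, without assuming an empirical trial function.

```python
import math

def lsf_from_esf(xs, esf):
    """Numerically differentiate a measured edge-spread function (ESF),
    sampled at positions xs, to estimate the line-spread function (LSF)
    at the interior sample points (central differences)."""
    mid_xs = xs[1:-1]
    lsf = [(esf[i + 1] - esf[i - 1]) / (xs[i + 1] - xs[i - 1])
           for i in range(1, len(xs) - 1)]
    return mid_xs, lsf
```

With a synthetic erf-shaped edge, the recovered LSF is a Gaussian centered at the edge position and integrating to the step height.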

  16. ZZ THERMOS, Multigroup P0 to P5 Thermal Scattering Kernels from ENDF/B Scattering Law Data

    International Nuclear Information System (INIS)

    McCrosson, F.J.; Finch, D.R.

    1975-01-01

1 - Description of problem or function: Number of groups: 30-group THERMOS thermal scattering kernels. Nuclides: Molecular H2O, Molecular D2O, Graphite, Polyethylene, Benzene, Zr bound in ZrHx, H bound in ZrHx, Beryllium-9, Beryllium Oxide, Uranium Dioxide. Origin: ENDF/B library. Weighting spectrum: yes. These data are 30-group THERMOS thermal scattering kernels for P0 to P5 Legendre orders, for every temperature of every material with S(α,β) data stored in the ENDF/B library. These scattering kernels were generated using the FLANGE2 computer code (NESC Abstract 368). To test the kernels, the integral properties of each set of kernels were determined by a precision integration of the diffusion length equation and compared to experimental measurements of these properties. In general, the agreement was very good. Details of the methods used and results obtained are contained in the reference. The scattering kernels are organized into a two-volume magnetic tape library from which they may be retrieved easily for use in any 30-group THERMOS library. The contents of the tapes are as follows (Material: ZA / Temperatures (K)): Molecular H2O: 100.0 / 296, 350, 400, 450, 500, 600; Molecular D2O: 101.0 / 296, 350, 400, 450, 500, 600; Graphite: 6000.0 / 296, 400, 500, 600, 700, 800; Polyethylene: 205.0 / 296, 350; Benzene: 106.0 / 296, 350, 400, 450, 500, 600; Zr bound in ZrHx: 203.0 / 296, 400, 500, 600, 700, 800; H bound in ZrHx: 230.0 / 296, 400, 500, 600, 700, 800; Beryllium-9: 4009.0 / 296, 400, 500, 600, 700, 800; Beryllium Oxide: 200.0 / 296, 400, 500, 600, 700, 800; Uranium Dioxide: 207.0 / 296, 400, 500, 600, 700, 800. 2 - Method of solution: Kernel generation is performed by direct integration of the thermal scattering law data to obtain the differential scattering cross sections for each Legendre order.
The integral parameter calculation is done by precision integration of the diffusion length equation for several moderator absorption cross sections followed by a

  17. Development of Cold Neutron Scattering Kernels for Advanced Moderators

    International Nuclear Information System (INIS)

    Granada, J. R.; Cantargi, F.

    2010-01-01

The development of scattering kernels for a number of molecular systems was performed, including a set of hydrogenous methylated aromatics such as toluene, mesitylene, and mixtures of those. In order to partially validate those new libraries, we compared predicted total cross sections with experimental data obtained in our laboratory. In addition, we have introduced a new model to describe the interaction of slow neutrons with solid methane in phase II (the stable phase below T = 20.4 K at atmospheric pressure). Very recently, a new scattering kernel describing the interaction of slow neutrons with solid deuterium was also developed. The main dynamical characteristics of that system are contained in the formalism, the elastic processes involving coherent and incoherent contributions are fully described, and the spin-correlation effects are included.

  18. Reconstruction of atomic effective potentials from isotropic scattering factors

    International Nuclear Information System (INIS)

    Romera, E.; Angulo, J.C.; Torres, J.J.

    2002-01-01

    We present a method for the approximate determination of one-electron effective potentials of many-electron systems from a finite number of values of the isotropic scattering factor. The method is based on the minimum cross-entropy technique. An application to some neutral ground-state atomic systems has been done within a Hartree-Fock framework

  19. The collapsed cone algorithm for (192)Ir dosimetry using phantom-size adaptive multiple-scatter point kernels.

    Science.gov (United States)

    Tedgren, Åsa Carlsson; Plamondon, Mathieu; Beaulieu, Luc

    2015-07-07

The aim of this work was to investigate how dose distributions calculated with the collapsed cone (CC) algorithm depend on the size of the water phantom used in deriving the point kernel for multiple scatter. A research version of the CC algorithm was used, equipped with a set of selectable point kernels for multiple-scatter dose that had initially been derived in water phantoms of various dimensions. The new point kernels were generated using EGSnrc in spherical water phantoms of radii 5 cm, 7.5 cm, 10 cm, 15 cm, 20 cm, 30 cm and 50 cm. Dose distributions derived with CC in water phantoms of different dimensions and in a CT-based clinical breast geometry were compared to Monte Carlo (MC) simulations using the Geant4-based brachytherapy-specific MC code Algebra. Agreement with MC within 1% was obtained when the dimensions of the phantom used to derive the multiple-scatter kernel were similar to those of the calculation phantom. Doses are overestimated at phantom edges when kernels are derived in larger phantoms and underestimated when derived in smaller phantoms (by around 2% to 7%, depending on the distance from the source and the phantom dimensions). CC agrees well with MC in the high-dose region of a breast implant and is superior to TG43 in determining skin doses for all multiple-scatter point kernel sizes. Increased agreement between CC and MC is achieved when the point kernel is comparable to breast dimensions. The investigated approximation in multiple-scatter dose depends on the choice of point kernel in relation to phantom size and yields a significant fraction of the total dose only at distances of several centimeters from a source/implant, which correspond to volumes of low dose. The current implementation of the CC algorithm utilizes a point kernel derived in a comparatively large (radius 20 cm) water phantom. A fixed point kernel leads to predictable behaviour of the algorithm, with the worst case being a source/implant located well within a patient.

  20. A Stochastic Proof of the Resonant Scattering Kernel and its Applications for Gen IV Reactors Type

    International Nuclear Information System (INIS)

    Becker, B.; Dagan, R.; Broeders, C.H.M.; Lohnert, G.

    2008-01-01

Monte Carlo codes such as MCNP are widely accepted as near-reference tools for reactor analysis. A Monte Carlo code should therefore use as few approximations as possible in order to produce 'experimental-level' calculations. In this study we deal with one of the most problematic approximations made in MCNP, in which resonances are ignored in the secondary neutron energy distribution, namely the change of the energy and angular direction of the neutron after interaction with a heavy isotope with pronounced resonances. The endeavour of exploring the influence of the resonances on the scattering kernel goes back to 1944, when E. Wigner and J. Wilkins developed the first temperature-dependent scattering kernel. However, only in 1998 was the full analytical solution for the double-differential resonance-dependent scattering kernel suggested by W. Rothenstein and R. Dagan. An independent stochastic approach is presented for the first time to confirm the above analytical kernel with a completely different methodology. Moreover, by manipulating in a subtle manner the scattering subroutine COLIDN of MCNP, it is proven that this very subroutine is, to some extent, inappropriate, as is the relevant explanation in the MCNP manual. The impact of this improved resonance-dependent scattering kernel on diverse types of reactors, in particular the Generation IV innovative HTR core design, is shown to be significant. (authors)

  1. Finite frequency traveltime sensitivity kernels for acoustic anisotropic media: Angle dependent bananas

    KAUST Repository

    Djebbi, Ramzi

    2013-08-19

Anisotropy is an inherent character of the Earth's subsurface. It should be considered for modeling and inversion. The acoustic VTI wave equation approximates the wave behavior in anisotropic media, especially its kinematic characteristics. To analyze which parts of the model affect the traveltime for anisotropic traveltime inversion methods, especially wave equation tomography (WET), we derive the sensitivity kernels for anisotropic media using the VTI acoustic wave equation. A Born scattering approximation is first derived using the Fourier-domain acoustic wave equation as a function of perturbations in three anisotropy parameters. Using the instantaneous traveltime, which unwraps the phase, we compute the kernels. These kernels resemble those for isotropic media, with the η kernel directionally dependent. They also have a maximum sensitivity along the geometrical ray, which is more realistic compared to the cross-correlation-based kernels. Focusing on diving waves, which are used more often, especially recently in waveform inversion, we show the sensitivity kernels in anisotropic media for this case.

  2. Finite frequency traveltime sensitivity kernels for acoustic anisotropic media: Angle dependent bananas

    KAUST Repository

    Djebbi, Ramzi; Alkhalifah, Tariq Ali

    2013-01-01

Anisotropy is an inherent character of the Earth's subsurface. It should be considered for modeling and inversion. The acoustic VTI wave equation approximates the wave behavior in anisotropic media, especially its kinematic characteristics. To analyze which parts of the model affect the traveltime for anisotropic traveltime inversion methods, especially wave equation tomography (WET), we derive the sensitivity kernels for anisotropic media using the VTI acoustic wave equation. A Born scattering approximation is first derived using the Fourier-domain acoustic wave equation as a function of perturbations in three anisotropy parameters. Using the instantaneous traveltime, which unwraps the phase, we compute the kernels. These kernels resemble those for isotropic media, with the η kernel directionally dependent. They also have a maximum sensitivity along the geometrical ray, which is more realistic compared to the cross-correlation-based kernels. Focusing on diving waves, which are used more often, especially recently in waveform inversion, we show the sensitivity kernels in anisotropic media for this case.

  3. Multi-parameter Analysis and Inversion for Anisotropic Media Using the Scattering Integral Method

    KAUST Repository

    Djebbi, Ramzi

    2017-01-01

    the model. I study the prospect of applying a scattering integral approach for multi-parameter inversion for a transversely isotropic model with a vertical axis of symmetry. I mainly analyze the sensitivity kernels to understand the sensitivity of seismic

  4. Multi-parameter Analysis and Inversion for Anisotropic Media Using the Scattering Integral Method

    KAUST Repository

    Djebbi, Ramzi

    2017-10-24

The main goal in seismic exploration is to identify locations of hydrocarbon reservoirs and give insights on where to drill new wells. Therefore, estimating an Earth model that represents the right physics of the Earth's subsurface is crucial in identifying these targets. Recent seismic data, with long-offset and wide-azimuth features, are more sensitive to anisotropy. Accordingly, multiple anisotropic parameters need to be extracted from the data recorded on the surface to properly describe the model. I study the prospect of applying a scattering integral approach for multi-parameter inversion for a transversely isotropic model with a vertical axis of symmetry. I mainly analyze the sensitivity kernels to understand the sensitivity of seismic data to anisotropy parameters. Then, I use a frequency-domain scattering integral approach to invert for the optimal parameterization. The scattering integral approach is based on the explicit computation of the sensitivity kernels. I present a new method to compute the traveltime sensitivity kernels for wave equation tomography using the unwrapped phase. I show that the new kernels are a better alternative to conventional cross-correlation/Rytov kernels. I also derive and analyze the sensitivity kernels for a transversely isotropic model with a vertical axis of symmetry. The kernels' structure, for various opening/scattering angles, highlights the trade-off regions between the parameters. For surface-recorded data, I show that the normal move-out velocity v_n, η and δ parameterization is suitable for a simultaneous inversion of diving waves and reflections. Moreover, when seismic data are inverted hierarchically, the horizontal velocity v_h, η and ε is the parameterization with the least trade-off. In the frequency domain, the hierarchical inversion approach is naturally implemented using frequency continuation, which makes the v_h, η and ε parameterization attractive. I formulate the multi-parameter inversion using the

  5. The energy-dependent backward-forward-isotropic scattering model with some applications to the neutron transport equation

    International Nuclear Information System (INIS)

    Williams, M.M.R.

    1985-01-01

    A multigroup formalism is developed for the backward-forward-isotropic scattering model of neutron transport. Some exact solutions are obtained in two-group theory for slab and spherical geometry. The results are useful for benchmark problems involving multigroup anisotropic scattering. (author)
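The backward-forward-isotropic model admits a very compact sampling rule for the scattering cosine, sketched below (an illustration of the model, not code from the paper): scatter straight forward with probability `alpha`, straight back with probability `beta`, and isotropically otherwise.

```python
import random

def sample_bfi_cosine(alpha, beta, rng):
    """Sample a scattering cosine mu from the backward-forward-isotropic
    model: mu = +1 with probability alpha, mu = -1 with probability beta,
    and mu uniform on [-1, 1] with the remaining probability
    1 - alpha - beta."""
    u = rng.random()
    if u < alpha:
        return 1.0
    if u < alpha + beta:
        return -1.0
    return 2.0 * rng.random() - 1.0
```

Since the isotropic component averages to zero, the mean scattering cosine of this kernel is simply alpha - beta, which is why the model is convenient for constructing benchmark problems with controlled anisotropy.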

  6. Analysis of the traveltime sensitivity kernels for an acoustic transversely isotropic medium with a vertical axis of symmetry

    KAUST Repository

    Djebbi, Ramzi

    2016-02-05

In anisotropic media, several parameters govern the propagation of the compressional waves. To correctly invert surface-recorded seismic data in anisotropic media, a multi-parameter inversion is required. However, a trade-off between parameters exists because several models can explain the same dataset. To understand these trade-offs, diffraction/reflection and transmission-type sensitivity-kernel analyses are carried out. Such analyses can help us to choose the appropriate parameterization for inversion. In tomography, the sensitivity kernels represent the effect of a parameter along the wave path between a source and a receiver. At a given illumination angle, similarities between sensitivity kernels highlight the trade-off between the parameters. To discuss the parameterization choice in the context of finite-frequency tomography, we compute the sensitivity kernels of the instantaneous traveltimes derived from the seismic data traces. We consider the transmission case with no encounter of an interface between a source and a receiver; with surface seismic data, this corresponds to a diving wave path. We also consider the diffraction/reflection case, when the wave path is formed by two parts: one from the source to a sub-surface point and the other from the sub-surface point to the receiver. We illustrate the different parameter sensitivities for an acoustic transversely isotropic medium with a vertical axis of symmetry. The sensitivity kernels depend on the parameterization choice. By comparing different parameterizations, we explain why the parameterization with the normal move-out velocity, the anelliptic parameter η, and the δ parameter is attractive when we invert diving and reflected events recorded in an active surface seismic experiment. © 2016 European Association of Geoscientists & Engineers.

  7. Electromagnetic illusion with isotropic and homogeneous materials through scattering manipulation

    International Nuclear Information System (INIS)

    Yang, Fan; Mei, Zhong Lei; Jiang, Wei Xiang; Cui, Tie Jun

    2015-01-01

A new isotropic and homogeneous illusion device for electromagnetic waves is proposed. This single-shelled device can change the fingerprint of the covered object into another one by manipulating the scattering of the composite structure. We show that an electrically small sphere can be disguised as another small one with different electromagnetic parameters. The device can even make an electrically small dielectric sphere behave like a conducting one. Full-wave simulations confirm the performance of the proposed illusion device. (paper)

  8. The slab albedo problem for the triplet scattering kernel with modified F{sub N} method

    Energy Technology Data Exchange (ETDEWEB)

    Tuereci, Demet [Ministry of Education, 75th year Anatolia High School, Ankara (Turkey)

    2016-12-15

The one-speed, time-independent neutron transport equation for slab geometry with a quadratically anisotropic scattering kernel is considered. The albedo and the transmission factor are calculated by the modified F{sub N} method. The numerical results obtained are listed for different scattering coefficients.

  9. Interactively variable isotropic resolution in computed tomography

    International Nuclear Information System (INIS)

    Lapp, Robert M; Kyriakou, Yiannis; Kachelriess, Marc; Wilharm, Sylvia; Kalender, Willi A

    2008-01-01

    An individual balancing between spatial resolution and image noise is necessary to fulfil the diagnostic requirements in medical CT imaging. In order to change influencing parameters, such as reconstruction kernel or effective slice thickness, additional raw-data-dependent image reconstructions have to be performed. Therefore, the noise versus resolution trade-off is time consuming and not interactively applicable. Furthermore, isotropic resolution, expressed by an equivalent point spread function (PSF) in every spatial direction, is important for the undistorted visualization and quantitative evaluation of small structures independent of the viewing plane. Theoretically, isotropic resolution can be obtained by matching the in-plane and through-plane resolution with the aforementioned parameters. Practically, however, the user is not assisted in doing so by current reconstruction systems and therefore isotropic resolution is not commonly achieved, in particular not at the desired resolution level. In this paper, an integrated approach is presented for equalizing the in-plane and through-plane spatial resolution by image filtering. The required filter kernels are calculated from previously measured PSFs in x/y- and z-direction. The concepts derived are combined with a variable resolution filtering technique. Both approaches are independent of CT raw data and operate only on reconstructed images which allows for their application in real time. Thereby, the aim of interactively variable, isotropic resolution is achieved. Results were evaluated quantitatively by measuring PSFs and image noise, and qualitatively by comparing the images to direct reconstructions regarded as the gold standard. Filtered images matched direct reconstructions with arbitrary reconstruction kernels with standard deviations in difference images of typically between 1 and 17 HU. Isotropic resolution was achieved within 5% of the selected resolution level. 
Processing times of 20-100 ms per frame were achieved, making the approach suitable for interactive use.
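As an aside, the resolution-equalization step described in this record can be sketched under the simplifying assumption of Gaussian PSFs (all function and variable names here are illustrative, not from the paper): because Gaussian widths add in quadrature under convolution, the through-plane axis can be degraded to match a coarser in-plane PSF with a single 1-D Gaussian filter.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def matching_sigma(fwhm_current, fwhm_target):
    """Sigma of the Gaussian filter that degrades a Gaussian PSF of
    fwhm_current to fwhm_target (Gaussian widths add in quadrature)."""
    if fwhm_target < fwhm_current:
        raise ValueError("cannot sharpen by blurring")
    to_sigma = 1.0 / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM -> sigma
    return np.sqrt(fwhm_target**2 - fwhm_current**2) * to_sigma

def equalize_resolution(volume, fwhm_xy, fwhm_z, voxel_z):
    """Blur the z-axis of a reconstructed volume so its through-plane
    PSF matches a coarser in-plane PSF; axis 0 is assumed to be z."""
    sigma_vox = matching_sigma(fwhm_z, fwhm_xy) / voxel_z
    return gaussian_filter1d(volume, sigma_vox, axis=0)
```

This sketch covers only the isotropy-matching idea; the paper's method additionally derives the filter kernels from measured (not assumed) PSFs.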

  10. Topics in bound-state dynamical processes: semiclassical eigenvalues, reactive scattering kernels and gas-surface scattering models

    International Nuclear Information System (INIS)

    Adams, J.E.

    1979-05-01

The difficulty of applying the WKB approximation to problems involving arbitrary potentials has been confronted. Recent work has produced a convenient expression for the potential correction term. However, this approach does not yield a unique correction term and hence cannot be used to construct the proper modification. An attempt is made to overcome the uniqueness difficulties by imposing a criterion which permits identification of the correct modification. Sections of this work are: semiclassical eigenvalues for potentials defined on a finite interval; reactive scattering exchange kernels; a unified model for elastic and inelastic scattering from a solid surface; and selective adsorption on a solid surface.

  11. The suite of analytical benchmarks for neutral particle transport in infinite isotropically scattering media

    International Nuclear Information System (INIS)

    Kornreich, D.E.; Ganapol, B.D.

    1997-01-01

    The linear Boltzmann equation for the transport of neutral particles is investigated with the objective of generating benchmark-quality evaluations of solutions for homogeneous infinite media. In all cases, the problems are stationary, of one energy group, and the scattering is isotropic. The solutions are generally obtained through the use of Fourier transform methods with the numerical inversions constructed from standard numerical techniques such as Gauss-Legendre quadrature, summation of infinite series, and convergence acceleration. Consideration of the suite of benchmarks in infinite homogeneous media begins with the standard one-dimensional problems: an isotropic point source, an isotropic planar source, and an isotropic infinite line source. The physical and mathematical relationships between these source configurations are investigated. The progression of complexity then leads to multidimensional problems with source configurations that also emit particles isotropically: the finite line source, the disk source, and the rectangular source. The scalar flux from the finite isotropic line and disk sources will have a two-dimensional spatial variation, whereas a finite rectangular source will have a three-dimensional variation in the scalar flux. Next, sources emitting particles anisotropically are considered. The most basic such source is the point beam giving rise to the Green's function, which is physically the most fundamental transport problem, yet may be constructed from the isotropic point source solution. Finally, the anisotropic plane and anisotropically emitting infinite line sources are considered. Thus, a firm theoretical and numerical base is established for the most fundamental neutral particle benchmarks in infinite homogeneous media
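The numerical machinery mentioned in this record (Fourier-transform solutions inverted numerically with, e.g., Gauss-Legendre quadrature) can be illustrated with a toy example; the transform pair below is chosen purely for illustration and is not one of the paper's benchmarks:

```python
import numpy as np

def inverse_cosine_transform(F, x, kmax=200.0, n=2000):
    """Recover f(x) = (1/pi) * int_0^kmax F(k) cos(k x) dk by
    Gauss-Legendre quadrature (F assumed even in k, integral truncated
    at kmax where the transform has decayed)."""
    nodes, weights = np.polynomial.legendre.leggauss(n)
    k = 0.5 * kmax * (nodes + 1.0)   # map [-1, 1] -> [0, kmax]
    w = 0.5 * kmax * weights
    return np.sum(w * F(k) * np.cos(k * x)) / np.pi

# known transform pair: f(x) = exp(-|x|)  <->  F(k) = 2 / (1 + k^2)
F = lambda k: 2.0 / (1.0 + k**2)
```

The benchmark solutions in the record follow the same pattern (transform, then controlled numerical inversion), but with transport-specific transforms and additional convergence acceleration.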

  12. Slab albedo for linearly and quadratically anisotropic scattering kernel with modified F{sub N} method

    Energy Technology Data Exchange (ETDEWEB)

    Tuereci, R. Goekhan [Kirikkale Univ. (Turkey). Kirikkale Vocational School; Tuereci, D. [Ministry of Education, Ankara (Turkey). 75th year Anatolia High School

    2017-11-15

The one-speed, time-independent neutron transport equation for a homogeneous medium is solved with an anisotropic scattering kernel that includes both linearly and quadratically anisotropic terms. Using Case's eigenfunctions and the orthogonality relations among them, the slab albedo problem is investigated numerically by the modified F{sub N} method. Selected numerical results are presented in tables.

  13. COMPUTATIONAL EFFICIENCY OF A MODIFIED SCATTERING KERNEL FOR FULL-COUPLED PHOTON-ELECTRON TRANSPORT PARALLEL COMPUTING WITH UNSTRUCTURED TETRAHEDRAL MESHES

    Directory of Open Access Journals (Sweden)

    JONG WOON KIM

    2014-04-01

In this paper, we introduce a modified scattering kernel approach that avoids the unnecessarily repeated calculations involved in the scattering source computation, and use it with parallel computing to effectively reduce the computation time. Its computational efficiency was tested on three-dimensional fully coupled photon-electron transport problems using our computer program, which solves the multi-group discrete ordinates transport equation by the discontinuous finite element method with unstructured tetrahedral meshes for complicated geometries. The numerical tests show speedups of 17-42 times in the elapsed time per iteration with the modified scattering kernel, not only in single-CPU calculations but also in parallel computing with several CPUs.

  14. Impact of the Improved Resonance Scattering Kernel on HTR Calculations

    International Nuclear Information System (INIS)

    Becker, B.; Dagan, R.; Broeders, C.H.M.; Lohnert, G.

    2008-01-01

The importance of an advanced neutron scattering model for heavy isotopes with strongly energy-dependent cross sections, such as the pronounced resonances of U-238, has been discussed in various publications where the full double differential scattering kernel was derived. In this study we quantify the effect of the new scattering model for specific innovative types of High Temperature Reactor (HTR) systems, which commonly exhibit a higher degree of heterogeneity and higher fuel temperatures, hence increasing the importance of the secondary neutron energy distribution. In particular, the impact on the multiplication factor (k∞) and the Doppler reactivity coefficient is presented in view of the packing factors and operating temperatures. A considerable reduction of k∞ (up to 600 pcm) and an increased Doppler reactivity (up to 10%) is observed. An increase of up to 2.3% in the Pu-239 inventory is noticed at 90 MWd/tHM burnup due to enhanced neutron absorption by U-238. These effects are more pronounced for design cases in which the neutron flux spectrum is hardened towards the resolved resonance range. (authors)

  15. Anisotropic hydrodynamics with a scalar collisional kernel

    Science.gov (United States)

    Almaalol, Dekrayat; Strickland, Michael

    2018-04-01

Prior studies of nonequilibrium dynamics using anisotropic hydrodynamics have used the relativistic Anderson-Witting scattering kernel or some variant thereof. In this paper, we make the first study of the impact of using a more realistic scattering kernel. For this purpose, we consider a conformal system undergoing transversally homogeneous and boost-invariant Bjorken expansion and take the collisional kernel to be given by the leading-order 2 ↔ 2 scattering kernel in scalar λφ⁴ theory. We consider both classical and quantum statistics to assess the impact of Bose enhancement on the dynamics. We also determine the anisotropic nonequilibrium attractor of a system subject to this collisional kernel. We find that, when the near-equilibrium relaxation times in the Anderson-Witting and scalar collisional kernels are matched, the scalar kernel results in a higher degree of momentum-space anisotropy during the system's evolution, given the same initial conditions. Additionally, we find that taking into account Bose enhancement further increases the dynamically generated momentum-space anisotropy.

  16. A Calculation of the Angular Moments of the Kernel for a Monatomic Gas Scatterer

    Energy Technology Data Exchange (ETDEWEB)

    Haakansson, Rune

    1964-07-15

    B. Davison has given in an unpublished paper a method of calculating the moments of the monatomic gas scattering kernel. We present here this method and apply it to calculate the first four moments. Numerical results for these moments for the masses M = 1 and 3.6 are also given.

  17. Proof and implementation of the stochastic formula for ideal gas, energy dependent scattering kernel

    International Nuclear Information System (INIS)

    Becker, B.; Dagan, R.; Lohnert, G.

    2009-01-01

The ideal-gas scattering kernel for heavy nuclei with pronounced resonances was developed [Rothenstein, W., Dagan, R., 1998. Ann. Nucl. Energy 25, 209-222], proved and implemented [Rothenstein, W., 2004. Ann. Nucl. Energy 31, 9-23] in the data processing code NJOY [MacFarlane, R.E., Muir, D.W., 1994. The NJOY Nuclear Data Processing System Version 91, LA-12740-M], from which the scattering probability tables were prepared [Dagan, R., 2005. Ann. Nucl. Energy 32, 367-377]. Those tables were introduced into the well-known MCNP code [X-5 Monte Carlo Team. MCNP - A General Monte Carlo N-Particle Transport Code, version 5, LA-UR-03-1987] via the 'mt' input cards, in the same manner as is done for light nuclei in the thermal energy range. In this study we present an alternative methodology for solving the double differential energy-dependent scattering kernel which is based solely on stochastic considerations as far as the scattering probabilities are concerned. The solution scheme is based on an alternative rejection scheme suggested by Rothenstein [Rothenstein, W., ENS conference 1994, Tel Aviv]. Comparison with the above-mentioned analytical (probability S(α,β)-table) approach confirms that the suggested rejection scheme provides accurate results. The bias due to the enhanced multiple rejections during the sampling procedure is shown to lie within 1-2 standard deviations for all practical cases that were analysed.
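The stochastic (rejection) approach referred to in this record can be illustrated generically; the target and envelope densities below are illustrative stand-ins, not the actual resonance scattering kernel, and the function names are invented for the sketch:

```python
import random, math

def sample_by_rejection(pdf, envelope_sample, envelope_pdf, c, rng=random):
    """Draw one sample from pdf by rejection against c * envelope_pdf,
    where pdf(x) <= c * envelope_pdf(x) for all x.  Also returns the
    number of rejections so the sampling overhead can be monitored."""
    rejections = 0
    while True:
        x = envelope_sample(rng)
        if rng.random() * c * envelope_pdf(x) <= pdf(x):
            return x, rejections
        rejections += 1

# illustrative target: half-normal on [0, inf), enveloped by Exp(1)
target = lambda x: math.sqrt(2.0 / math.pi) * math.exp(-0.5 * x * x)
env_pdf = lambda x: math.exp(-x)
env_sample = lambda rng: -math.log(1.0 - rng.random())  # inversion method
c = math.sqrt(2.0 * math.e / math.pi)  # max of target/env_pdf, at x = 1
```

The record's concern about bias from "enhanced multiple rejections" corresponds to monitoring the rejection count returned here: a valid envelope gives unbiased samples regardless of how many rejections occur, at the cost of efficiency.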

  18. A scatter model for fast neutron beams using convolution of diffusion kernels

    International Nuclear Information System (INIS)

    Moyers, M.F.; Horton, J.L.; Boyer, A.L.

    1988-01-01

    A new model is proposed to calculate dose distributions in materials irradiated with fast neutron beams. Scattered neutrons are transported away from the point of production within the irradiated material in the forward, lateral and backward directions, while recoil protons are transported in the forward and lateral directions. The calculation of dose distributions, such as for radiotherapy planning, is accomplished by convolving a primary attenuation distribution with a diffusion kernel. The primary attenuation distribution may be quickly calculated for any given set of beam and material conditions as it describes only the magnitude and distribution of first interaction sites. The calculation of energy diffusion kernels is very time consuming but must be calculated only once for a given energy. Energy diffusion distributions shown in this paper have been calculated using a Monte Carlo type of program. To decrease beam calculation time, convolutions are performed using a Fast Fourier Transform technique. (author)
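The central convolution step of this model (a primary attenuation distribution convolved with a precomputed diffusion kernel via Fast Fourier Transforms) can be sketched as follows; the grids and the circular-boundary treatment are illustrative assumptions, not details from the paper:

```python
import numpy as np

def convolve_fft(primary, kernel):
    """Circular convolution of a primary-attenuation map with a dose
    diffusion kernel via the FFT.  Both arrays share one grid and the
    kernel origin sits at index (0, 0); zero-pad beforehand if
    wrap-around from the periodic boundary must be avoided."""
    return np.real(np.fft.ifftn(np.fft.fftn(primary) * np.fft.fftn(kernel)))
```

A kernel normalized to unit sum conserves the total deposited energy: the sum of the output equals the sum of the primary distribution.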

  19. Unified heat kernel regression for diffusion, kernel smoothing and wavelets on manifolds and its application to mandible growth modeling in CT images.

    Science.gov (United States)

    Chung, Moo K; Qiu, Anqi; Seo, Seongho; Vorperian, Houri K

    2015-05-01

    We present a novel kernel regression framework for smoothing scalar surface data using the Laplace-Beltrami eigenfunctions. Starting with the heat kernel constructed from the eigenfunctions, we formulate a new bivariate kernel regression framework as a weighted eigenfunction expansion with the heat kernel as the weights. The new kernel method is mathematically equivalent to isotropic heat diffusion, kernel smoothing and recently popular diffusion wavelets. The numerical implementation is validated on a unit sphere using spherical harmonics. As an illustration, the method is applied to characterize the localized growth pattern of mandible surfaces obtained in CT images between ages 0 and 20 by regressing the length of displacement vectors with respect to a surface template. Copyright © 2015 Elsevier B.V. All rights reserved.
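On a uniformly sampled circle, where the Laplacian eigenfunctions reduce to the Fourier basis with eigenvalues j², the heat-kernel regression described above can be sketched as follows (a simplified stand-in for the surface-mesh setting of the paper):

```python
import numpy as np

def heat_kernel_smooth(signal, t):
    """Heat-kernel regression of a uniformly sampled periodic signal:
    expand in the Laplacian eigenbasis (here the Fourier basis on the
    circle) and damp coefficient j by the heat-kernel weight
    exp(-j^2 * t); t = 0 returns the signal unchanged."""
    n = signal.size
    coeffs = np.fft.fft(signal)
    j = np.fft.fftfreq(n, d=1.0 / n)   # integer frequencies
    return np.real(np.fft.ifft(coeffs * np.exp(-(j**2) * t)))
```

This makes concrete the record's equivalence claim: the same weighted eigenfunction expansion is simultaneously kernel smoothing and a solution of the isotropic heat diffusion equation at time t.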

  20. Neutron Transport in Finite Random Media with Pure-Triplet Scattering

    International Nuclear Information System (INIS)

    Sallaha, M.; Hendi, A.A.

    2008-01-01

The solution of the one-speed neutron transport equation in a finite slab random medium with pure-triplet anisotropic scattering is studied. The stochastic medium is assumed to consist of two randomly mixed immiscible fluids. The cross section and the scattering kernel are treated as discrete random variables that obey the same statistics as Markovian processes with exponential chord-length statistics. The medium boundaries are considered to have specular reflectivities, with angular-dependent externally incident flux. The deterministic solution is obtained by using the Pomraning-Eddington approximation. Numerical results are calculated for the average reflectivity and average transmissivity for different values of the single scattering albedo, varying the parameters which characterize the random medium. The results compare well with those obtained by Adams et al. for isotropic scattering, which were based on the Monte Carlo technique.

  1. Preliminary scattering kernels for ethane and triphenylmethane at cryogenic temperatures

    Science.gov (United States)

    Cantargi, F.; Granada, J. R.; Damián, J. I. Márquez

    2017-09-01

Two potential cold moderator materials were studied: ethane and triphenylmethane. The first, ethane (C2H6), is an organic compound which is very interesting from the neutronic point of view, in some respects better than liquid methane for producing subthermal neutrons, not only because it remains in the liquid phase over a wider temperature range (Tf = 90.4 K, Tb = 184.6 K), but also because of its high protonic density together with a frequency spectrum with a low rotational energy band. The other material, triphenylmethane, is a hydrocarbon with formula C19H16 which has already been proposed as a good candidate for a cold moderator. Following one of the main research topics of the Neutron Physics Department of Centro Atómico Bariloche, we present here two ways to estimate the frequency spectrum needed to feed the NJOY nuclear data processing system in order to generate the scattering law of each material. For ethane, molecular dynamics simulations were performed, while for triphenylmethane existing experimental and calculated data were used to produce a new scattering kernel. With these models, cross section libraries were generated and applied to neutron spectra calculations.

  2. Application of Galerkin's method to the solution of the one-dimensional integral transport equation: generalized collision probabilities taking into account the flux gradient and linearly anisotropic scattering

    International Nuclear Information System (INIS)

    Sanchez, Richard.

    1975-04-01

For one-dimensional geometries, the transport equation with linearly anisotropic scattering can be reduced to a single integral equation: a Fredholm equation of the second kind with a singular kernel. Applying a conventional projective method, that of Galerkin, to the solution of this equation yields the well-known collision probability algorithm. Piecewise polynomial expansions are used to represent the flux. In the ANILINE code, the flux is assumed to be linear in plane geometry and parabolic in both cylindrical and spherical geometries. An integral relationship was found between the one-dimensional isotropic and anisotropic kernels; this allows the new matrix elements (arising from the anisotropic kernel) to be reduced to the classic collision probabilities of the isotropic scattering equation. For cylindrical and spherical geometries, an approximate representation of the current was used to avoid an additional numerical integration. Reflective boundary conditions were considered: in plane geometry the reflection is assumed specular, while for the other geometries the isotropic reflection hypothesis was adopted. Furthermore, the ANILINE code can handle an incoming isotropic current. Numerous checks were performed in monokinetic theory. Critical radii and albedos were calculated for homogeneous slabs, cylinders and spheres. For heterogeneous media, the thermal utilization factor obtained by this method was compared with the theoretical result based upon a formula by Benoist. Finally, ANILINE was incorporated into the multigroup APOLLO code, which made it possible to analyse the MINERVA experimental reactor in transport theory with 99 groups. The ANILINE method is particularly suited to the treatment of strongly anisotropic media with considerable flux gradients. It is also well adapted to the calculation of reflectors and, in general, to the exact analysis of anisotropic effects in large-sized media [fr]

  3. Mitigation of artifacts in rtm with migration kernel decomposition

    KAUST Repository

    Zhan, Ge; Schuster, Gerard T.

    2012-01-01

The migration kernel for reverse-time migration (RTM) can be decomposed into four component kernels using Born scattering and migration theory. Each component kernel has a unique physical interpretation and can be interpreted differently.

  4. Uranium kernel formation via internal gelation

    International Nuclear Information System (INIS)

    Hunt, R.D.; Collins, J.L.

    2004-01-01

In the 1970s and 1980s, the U.S. Department of Energy (DOE) conducted numerous studies on the fabrication of nuclear fuel particles using the internal gelation process. These amorphous kernels were prone to flaking or breaking when gases tried to escape from the kernels during calcination and sintering. These earlier kernels would not meet today's proposed specifications for reactor fuel. In the interim, the internal gelation process has been used to create hydrous metal oxide microspheres for the treatment of nuclear waste. With the renewed interest in advanced nuclear fuel by the DOE, the lessons learned from the nuclear waste studies were recently applied to the fabrication of uranium kernels, which will become tristructural-isotropic (TRISO) fuel particles. These process improvements included equipment modifications, small changes to the feed formulations, and a new temperature profile for the calcination and sintering. The modifications to the laboratory-scale equipment and its operation, as well as small changes to the feed composition, increased the product yield from 60% to 80%-99%. The new kernels were substantially less glassy, and no evidence of flaking was found. Finally, key process parameters were identified, and their effects on the uranium microspheres and kernels are discussed. (orig.)

  5. Preliminary scattering kernels for ethane and triphenylmethane at cryogenic temperatures

    Directory of Open Access Journals (Sweden)

    Cantargi F.

    2017-01-01

Two potential cold moderator materials were studied: ethane and triphenylmethane. The first, ethane (C2H6), is an organic compound which is very interesting from the neutronic point of view, in some respects better than liquid methane for producing subthermal neutrons, not only because it remains in the liquid phase over a wider temperature range (Tf = 90.4 K, Tb = 184.6 K), but also because of its high protonic density together with a frequency spectrum with a low rotational energy band. The other material, triphenylmethane, is a hydrocarbon with formula C19H16 which has already been proposed as a good candidate for a cold moderator. Following one of the main research topics of the Neutron Physics Department of Centro Atómico Bariloche, we present here two ways to estimate the frequency spectrum needed to feed the NJOY nuclear data processing system in order to generate the scattering law of each material. For ethane, molecular dynamics simulations were performed, while for triphenylmethane existing experimental and calculated data were used to produce a new scattering kernel. With these models, cross section libraries were generated and applied to neutron spectra calculations.

  6. The isotropic radio background revisited

    Energy Technology Data Exchange (ETDEWEB)

    Fornengo, Nicolao; Regis, Marco [Dipartimento di Fisica Teorica, Università di Torino, via P. Giuria 1, I–10125 Torino (Italy); Lineros, Roberto A. [Instituto de Física Corpuscular – CSIC/U. Valencia, Parc Científic, calle Catedrático José Beltrán, 2, E-46980 Paterna (Spain); Taoso, Marco, E-mail: fornengo@to.infn.it, E-mail: rlineros@ific.uv.es, E-mail: regis@to.infn.it, E-mail: taoso@cea.fr [Institut de Physique Théorique, CEA/Saclay, F-91191 Gif-sur-Yvette Cédex (France)

    2014-04-01

    We present an extensive analysis on the determination of the isotropic radio background. We consider six different radio maps, ranging from 22 MHz to 2.3 GHz and covering a large fraction of the sky. The large scale emission is modeled as a linear combination of an isotropic component plus the Galactic synchrotron radiation and thermal bremsstrahlung. Point-like and extended sources are either masked or accounted for by means of a template. We find a robust estimate of the isotropic radio background, with limited scatter among different Galactic models. The level of the isotropic background lies significantly above the contribution obtained by integrating the number counts of observed extragalactic sources. Since the isotropic component dominates at high latitudes, thus making the profile of the total emission flat, a Galactic origin for such excess appears unlikely. We conclude that, unless a systematic offset is present in the maps, and provided that our current understanding of the Galactic synchrotron emission is reasonable, extragalactic sources well below the current experimental threshold seem to account for the majority of the brightness of the extragalactic radio sky.
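The linear decomposition described in this record (total emission modeled as an isotropic component plus Galactic templates) amounts, in its simplest form, to a least-squares fit over unmasked pixels; the sketch below uses synthetic data, not the actual radio maps:

```python
import numpy as np

def fit_isotropic_background(sky, templates):
    """Least-squares fit of a sky map (1-D array of unmasked pixel
    brightnesses) to a constant isotropic level plus a linear
    combination of Galactic templates; returns the isotropic level
    and the template coefficients."""
    A = np.column_stack([np.ones_like(sky)] + list(templates))
    coef, *_ = np.linalg.lstsq(A, sky, rcond=None)
    return coef[0], coef[1:]
```

The fitted constant plays the role of the isotropic background estimate, whose robustness the authors assess across different Galactic models and frequency maps.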

  7. The isotropic radio background revisited

    International Nuclear Information System (INIS)

    Fornengo, Nicolao; Regis, Marco; Lineros, Roberto A.; Taoso, Marco

    2014-01-01

    We present an extensive analysis on the determination of the isotropic radio background. We consider six different radio maps, ranging from 22 MHz to 2.3 GHz and covering a large fraction of the sky. The large scale emission is modeled as a linear combination of an isotropic component plus the Galactic synchrotron radiation and thermal bremsstrahlung. Point-like and extended sources are either masked or accounted for by means of a template. We find a robust estimate of the isotropic radio background, with limited scatter among different Galactic models. The level of the isotropic background lies significantly above the contribution obtained by integrating the number counts of observed extragalactic sources. Since the isotropic component dominates at high latitudes, thus making the profile of the total emission flat, a Galactic origin for such excess appears unlikely. We conclude that, unless a systematic offset is present in the maps, and provided that our current understanding of the Galactic synchrotron emission is reasonable, extragalactic sources well below the current experimental threshold seem to account for the majority of the brightness of the extragalactic radio sky

  8. Effect of the X-ray scattering anisotropy on the diffusion of photons in the frame of the transport theory

    International Nuclear Information System (INIS)

    Fernandez, J.E.; Molinari, V.G.; Sumini, M.

    1988-01-01

In the context of the many applications of X-ray techniques, a detailed description of photon transport in condensed media under various boundary conditions is of utmost importance. In this work the photon transport equation for a homogeneous specimen of infinite thickness is considered and an exact iterative solution is reported, which is universally valid for all types of interactions because it is independent of the shape of the interaction kernel. As a test probe we use an especially simple elastic scattering expression that makes possible the exact calculation of the first two orders of the solution. It is shown that the second order does not produce any significant improvement over the first. Due to its particular characteristics, the first-order solution for the simplified kernel can be extended to include the form factor, thus giving a more realistic description of the coherent scattering of monochromatic radiation by bound electrons. The relevant effects of the scattering anisotropy are also placed in evidence when they are contrasted with the isotropic solution calculated in the same way. (author) [pt]

  9. Calculation of the scattering kernel for thermal neutrons in H2O and D2O

    International Nuclear Information System (INIS)

    Leal, L.C.; Assis, J.T. de

    1981-01-01

A computer code using the Nelkin and Butler models for calculating the scattering kernel was developed. Thermal neutron fluxes in a homogeneous, infinite medium with a 1/v absorber were calculated in 30 energy groups and compared with experimental data. The reactor parameters calculated by the HAMMER code (in the original version and with the new library generated by the authors' code) are presented. (E.G) [pt]

  10. Comparison of electron dose-point kernels in water generated by the Monte Carlo codes, PENELOPE, GEANT4, MCNPX, and ETRAN.

    Science.gov (United States)

    Uusijärvi, Helena; Chouin, Nicolas; Bernhardt, Peter; Ferrer, Ludovic; Bardiès, Manuel; Forssell-Aronsson, Eva

    2009-08-01

Point kernels describe the energy deposited at a certain distance from an isotropic point source and are useful for nuclear medicine dosimetry. They can be used for absorbed-dose calculations for sources of various shapes and are also a useful tool when comparing different Monte Carlo (MC) codes. The aim of this study was to compare point kernels calculated by using the mixed MC code, PENELOPE (v. 2006), with point kernels calculated by using the condensed-history MC codes, ETRAN, GEANT4 (v. 8.2), and MCNPX (v. 2.5.0). Point kernels for electrons with initial energies of 10, 100, 500, and 1 MeV were simulated with PENELOPE. Spherical shells were placed around an isotropic point source at distances from 0 to 1.2 times the continuous-slowing-down-approximation range (R(CSDA)). Detailed (event-by-event) simulations were performed for electrons with initial energies of less than 1 MeV. For 1-MeV electrons, multiple scattering was included for energy losses less than 10 keV. Energy losses greater than 10 keV were simulated in a detailed way. The point kernels generated were used to calculate cellular S-values for monoenergetic electron sources. The point kernels obtained by using PENELOPE and ETRAN were also used to calculate cellular S-values for the high-energy beta-emitter, 90Y, the medium-energy beta-emitter, 177Lu, and the low-energy electron emitter, 103mRh. These S-values were also compared with the Medical Internal Radiation Dose (MIRD) cellular S-values. The mean differences in the cellular S-values relative to PENELOPE were 1.4%, 2.5%, and 6.9% for ETRAN, GEANT4, and MCNPX, respectively, omitting the S-values for 10-keV electrons with the activity distributed on the cell surface. The largest difference between the cellular S-values for the radionuclides, between PENELOPE and ETRAN, was seen for 177Lu (1.2%). There were also large differences between the MIRD cellular S-values and those obtained from the calculated point kernels.

  11. Cold moderator scattering kernels

    International Nuclear Information System (INIS)

    MacFarlane, R.E.

    1989-01-01

New thermal-scattering-law files in ENDF format have been developed for solid methane, liquid methane, liquid ortho- and para-hydrogen, and liquid ortho- and para-deuterium using up-to-date models that include such effects as incoherent elastic scattering in the solid, diffusion and hindered vibrations and rotations in the liquids, and spin correlations for hydrogen and deuterium. These files were generated with the new LEAPR module of the NJOY Nuclear Data Processing System. Other modules of this system were used to produce cross sections for these moderators in the correct format for the continuous-energy Monte Carlo code (MCNP) being used for cold-moderator-design calculations at the Los Alamos Neutron Scattering Center (LANSCE). 20 refs., 14 figs.

  12. Mitigation of artifacts in RTM with migration kernel decomposition

    KAUST Repository

    Zhan, Ge

    2012-01-01

    The migration kernel for reverse-time migration (RTM) can be decomposed into four component kernels using Born scattering and migration theory. Each component kernel has a unique physical interpretation. In this paper, we present a generalized diffraction-stack migration approach for reducing RTM artifacts via decomposition of the migration kernel. The decomposition leads to an improved understanding of migration artifacts and therefore presents us with opportunities for improving the quality of RTM images.

  13. Three-dimensional radiative transfer in an isotropically scattering, plane-parallel medium: generalized X- and Y-functions

    International Nuclear Information System (INIS)

    Mueller, D.W.; Crosbie, A.L.

    2005-01-01

    The topic of this work is the generalized X- and Y-functions of multidimensional radiative transfer. The physical problem considered is spatially varying, collimated radiation incident on the upper boundary of an isotropically scattering, plane-parallel medium. An integral transform is used to reduce the three-dimensional transport equation to a one-dimensional form, and a modified Ambarzumian's method is used to derive coupled, integro-differential equations for the source functions at the boundaries of the medium. The resulting equations are said to be in double-integral form because the integration is over both angular variables. Numerical results are presented to illustrate the computational characteristics of the formulation

  14. Thermal neutron scattering kernels for sapphire and silicon single crystals

    International Nuclear Information System (INIS)

    Cantargi, F.; Granada, J.R.; Mayer, R.E.

    2015-01-01

    Highlights: • Thermal cross section libraries for sapphire and silicon single crystals were generated. • The Debye model was used to represent the vibrational frequency spectra fed to the NJOY code. • The sapphire total cross section was measured at Centro Atómico Bariloche. • The cross section libraries were validated with available experimental data. - Abstract: Sapphire and silicon are materials usually employed as filters in facilities with thermal neutron beams. Because the corresponding thermal cross section libraries, which are needed in calculations that optimize beams for specific applications, were not previously available, we present here the generation of new thermal neutron scattering kernels for these materials. The Debye model was used in both cases to represent the vibrational frequency spectra required to feed the NJOY nuclear data processing system, which produced the corresponding libraries in ENDF and ACE format. These libraries were validated with available experimental data, some from the literature and others obtained at the pulsed neutron source at Centro Atómico Bariloche.
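A minimal sketch of the Debye vibrational density of states of the kind fed to NJOY/LEAPR-type processing, rho(w) = 3 w^2 / w_D^3 up to the Debye cutoff w_D; the cutoff energy below is an illustrative placeholder, not the fitted value for sapphire or silicon.

```python
import numpy as np

def debye_spectrum(omega, omega_d):
    """Debye vibrational density of states:
    rho(w) = 3 * w**2 / w_d**3 for 0 <= w <= w_d, zero above the cutoff;
    normalized to unit area by construction."""
    omega = np.asarray(omega, dtype=float)
    rho = 3.0 * omega**2 / omega_d**3
    return np.where((omega >= 0.0) & (omega <= omega_d), rho, 0.0)

# Illustrative cutoff energy (eV); the actual Debye energies of sapphire and
# silicon would be chosen to reproduce measured data.
omega_d = 0.060
w = np.linspace(0.0, omega_d, 6001)
rho = debye_spectrum(w, omega_d)
area = np.sum((rho[1:] + rho[:-1]) / 2 * np.diff(w))  # trapezoidal integral
print(round(area, 4))  # 1.0: the spectrum integrates to unity
```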

  15. Thermochemical equilibrium in a kernel of a UN TRISO coated fuel particle

    International Nuclear Information System (INIS)

    Kim, Young Min; Jo, C. K.; Lim, H. S.; Cho, M. S.; Lee, W. J.

    2012-01-01

    A coated fuel particle (CFP) with a uranium mononitride (UN) kernel has recently been considered as an advanced fuel option, such as in fully ceramic microencapsulated (FCM) replacement fuel for light water reactors (LWRs). In FCM fuel, a large number of tri-structural isotropic (TRISO) coated fuel particles are embedded in a silicon carbide (SiC) matrix. Thermochemical equilibrium calculations can predict the chemical behavior of a kernel in a TRISO particle of FCM fuel during irradiation; they give information on the kind and quantity of gases generated in a kernel during irradiation. This study presents a quantitative analysis of thermochemical equilibrium in a UN TRISO particle of FCM LWR fuel using the HSC software

  16. Generation of gamma-ray streaming kernels through cylindrical ducts via Monte Carlo method

    International Nuclear Information System (INIS)

    Kim, Dong Su

    1992-02-01

    Since radiation streaming through penetrations is often the critical consideration in protecting personnel in a nuclear facility from exposure, it has been of great concern in radiation shielding design and analysis. Several methods have been developed and applied to the analysis of radiation streaming in the past, such as the ray analysis method, the single scattering method, the albedo method, and the Monte Carlo method. The first three, however, are suitable only for order-of-magnitude calculations where sufficient margin is available, while the Monte Carlo method is accurate but requires substantial computing time. This study developed a Monte Carlo method and constructed a library of Monte Carlo solutions for radiation streaming through a straight cylindrical duct in a concrete wall due to a broad, monodirectional, monoenergetic gamma-ray beam of unit intensity. The solution, named the plane streaming kernel, is the average dose rate at the duct outlet; it was evaluated for 20 source energies from 0 to 10 MeV, 36 source incident angles from 0 to 70 degrees, 5 duct radii from 10 to 30 cm, and 16 wall thicknesses from 0 to 100 cm. It was demonstrated that the average dose rate due to an isotropic point source at an arbitrary position can be well approximated using the plane streaming kernels with acceptable error. Thus, the library of plane streaming kernels can be used for the accurate and efficient analysis of radiation streaming through a straight cylindrical duct in concrete walls due to arbitrary distributions of gamma-ray sources

  17. New developments in the C_N method

    International Nuclear Information System (INIS)

    Grandjean, Paul; Kavenoky, Alain.

    1975-01-01

    The most recent developments of the C_N method used for solving transport equations are presented: treatment of the Rayleigh scattering kernel in plane geometry and of cylindrical problems with an isotropic scattering law. [fr]

  18. Primary and scattering contributions to beta scaled dose point kernels by means of Monte Carlo simulations

    International Nuclear Information System (INIS)

    Valente, Mauro; Botta, Francesca; Pedroli, Guido

    2012-01-01

    Beta emitters have proved to be appropriate for radioimmunotherapy, and the dosimetric characterization of each radionuclide has to be carefully investigated. One usual and practical dosimetric approach is the calculation of the dose distribution from a unit point source emitting particles according to the radionuclide of interest, known as the dose point kernel. Absorbed dose distributions are due to primary and scattered radiation contributions. This work presents a method for computing dose distributions for nuclear medicine dosimetry by means of Monte Carlo methods. Dedicated subroutines have been developed to separately compute the primary and scattering contributions to the total absorbed dose, following particle transport down to energies of 1 keV or lower. The suitability of the calculation method was first verified for monoenergetic sources, with satisfactory results, and it was then applied to the characterization of different beta-minus radionuclides of interest for radioimmunotherapy in nuclear medicine. (author)

  19. Fabrication and Characterization of Surrogate TRISO Particles Using 800 μm ZrO2 Kernels

    Energy Technology Data Exchange (ETDEWEB)

    Jolly, Brian C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Helmreich, Grant [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Cooley, Kevin M. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Dyer, John [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Terrani, Kurt [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-07-01

    In support of fully ceramic microencapsulated (FCM) fuel development, coating development work is ongoing at Oak Ridge National Laboratory (ORNL) to produce tri-structural isotropic (TRISO) coated fuel particles with both UN kernels and surrogate (uranium-free) kernels. The nitride kernels are used to increase the fissile density in these SiC-matrix fuel pellets, with details described elsewhere. The surrogate TRISO particles are necessary for separate-effects testing and for use in consolidation process development. This report focuses on the fabrication and characterization of surrogate TRISO particles that use 800 μm diameter ZrO2 microspheres as the kernel.

  20. A relationship between Gel'fand-Levitan and Marchenko kernels

    International Nuclear Information System (INIS)

    Kirst, T.; Von Geramb, H.V.; Amos, K.A.

    1989-01-01

    An integral equation which relates the output kernels of the Gel'fand-Levitan and Marchenko inverse scattering equations is specified. Structural details of this integral equation are studied when the S-matrix is a rational function, and the output kernels are separable in terms of Bessel, Hankel and Jost solutions. 4 refs

  1. Retrieval of collision kernels from the change of droplet size distributions with linear inversion

    Energy Technology Data Exchange (ETDEWEB)

    Onishi, Ryo; Takahashi, Keiko [Earth Simulator Center, Japan Agency for Marine-Earth Science and Technology, 3173-25 Showa-machi, Kanazawa-ku, Yokohama Kanagawa 236-0001 (Japan); Matsuda, Keigo; Kurose, Ryoichi; Komori, Satoru [Department of Mechanical Engineering and Science, Kyoto University, Yoshida-honmachi, Sakyo-ku, Kyoto 606-8501 (Japan)], E-mail: onishi.ryo@jamstec.go.jp, E-mail: matsuda.keigo@t03.mbox.media.kyoto-u.ac.jp, E-mail: takahasi@jamstec.go.jp, E-mail: kurose@mech.kyoto-u.ac.jp, E-mail: komori@mech.kyoto-u.ac.jp

    2008-12-15

    We have developed a new simple inversion scheme for retrieving collision kernels from the change of droplet size distribution due to collision growth. Three-dimensional direct numerical simulations (DNS) of steady isotropic turbulence with colliding droplets are carried out in order to investigate the validity of the developed inversion scheme. In the DNS, air turbulence is calculated using a quasi-spectral method; droplet motions are tracked in a Lagrangian manner. The initial droplet size distribution is set to be equivalent to that obtained in a wind tunnel experiment. Collision kernels retrieved by the developed inversion scheme are compared to those obtained by the DNS. The comparison shows that the collision kernels can be retrieved within 15% error. This verifies the feasibility of retrieving collision kernels using the present inversion scheme.
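The property such an inversion exploits is that the rate of change of a discretized droplet size distribution is linear in the (symmetric) collision-kernel entries, so they can be retrieved by least squares. A minimal two-bin sketch, with invented kernel values and droplet states (the actual scheme of the paper is more elaborate and works with measured distributions):

```python
import numpy as np

# Toy two-bin coagulation model: collisions 1+1 -> 2, while 1+2 and 2+2
# products fall outside the two tracked bins. The unknowns are the kernel
# entries K = (K11, K12, K22); all numbers here are invented for illustration.
def rates(n1, n2, K11, K12, K22):
    dn1 = -K11 * n1**2 - K12 * n1 * n2
    dn2 = 0.5 * K11 * n1**2 - K12 * n1 * n2 - K22 * n2**2
    return dn1, dn2

K_true = (2.0, 1.5, 0.8)  # hypothetical "true" kernel values

# Observed states (n1, n2) and the corresponding distribution changes; in
# practice these would come from size distributions measured at several times.
states = [(1.0, 0.5), (0.8, 0.7), (0.6, 0.9), (1.2, 0.3)]
A, b = [], []
for n1, n2 in states:
    dn1, dn2 = rates(n1, n2, *K_true)
    # dn1/dt and dn2/dt are linear in (K11, K12, K22):
    A.append([-n1**2, -n1 * n2, 0.0]);         b.append(dn1)
    A.append([0.5 * n1**2, -n1 * n2, -n2**2]); b.append(dn2)

K_est, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
print(K_est)  # recovers (2.0, 1.5, 0.8)
```

With noisy observations the same least-squares system is simply overdetermined, which is where the quoted retrieval error enters.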

  2. Arbitrary quadratures determination of the monoenergetic neutron density in a homogeneous finite sphere with isotropic scattering

    International Nuclear Information System (INIS)

    Sanchez G, J.

    2015-09-01

    The solution of the so-called canonical problems of neutron transport theory was given by Case, who developed a method akin to the classical eigenfunction expansion procedure, extended to admit singular eigenfunctions. The solution is given as a set consisting of a Fredholm integral equation coupled with a transcendental equation, which has to be solved for the expansion coefficients by iteration. Case's method makes extensive use of the theory of functions of a complex variable, and many successful approaches to solving the above-mentioned set approximately have been reported in the literature. We present here an entirely different approach which deals with the canonical problems in a more direct and elementary manner. As far as we know, the original idea for the latter method is due to Carlvik, who devised the escape probability approximation to the solution of the neutron transport equation in its integral form. In essence, the procedure consists in assuming a sectionally constant form of the neutron density, which in turn yields a set of linear algebraic equations obeyed by the assumed constant values of the density. Well-established techniques of numerical analysis for the solution of integral equations generalize the sectionally constant approach by assuming a sectionally low-degree polynomial for the unknown function. This procedure, also known as the arbitrary quadratures method, is especially suited to cases where the kernel of the integral equation is singular. The author presents the results obtained with the arbitrary quadratures method for the numerical calculation of the monoenergetic neutron density in a critical, homogeneous sphere of finite radius with isotropic scattering. The singular integral equation obeyed by the neutron density in the critical sphere is introduced, an outline of the method's main features is given, and tables and graphs of the density
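The algebraic structure of the sectionally constant approach can be sketched on a simple Fredholm equation with a separable kernel and a known exact solution; the toy kernel below is an assumption for illustration only and stands in for the (singular) transport kernel, but the recipe is the same: integrate the kernel analytically over each cell, collocate at cell midpoints, and solve the resulting linear system.

```python
import numpy as np

# Sectionally constant (piecewise-constant) solution of the Fredholm equation
#     phi(x) = 1 + x * \int_0^1 y * phi(y) dy,
# a toy separable-kernel stand-in for the singular transport kernel.
# The exact solution is phi(x) = 1 + 0.75 * x.
N = 200
edges = np.linspace(0.0, 1.0, N + 1)
mid = 0.5 * (edges[:-1] + edges[1:])          # collocation points (cell midpoints)
w = 0.5 * (edges[1:]**2 - edges[:-1]**2)      # \int_cell y dy, done analytically

# phi_i = 1 + x_i * sum_j w_j * phi_j   ->   (I - x w^T) phi = 1
M = np.eye(N) - np.outer(mid, w)
phi = np.linalg.solve(M, np.ones(N))

exact = 1.0 + 0.75 * mid
print(np.max(np.abs(phi - exact)))  # small discretization error
```

For the transport problem the cell integrals of the singular kernel replace `w`, which is precisely why the analytic per-cell integration of the arbitrary quadratures method pays off.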

  3. Arbitrary quadratures determination of the monoenergetic neutron density in a homogeneous finite sphere with isotropic scattering

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez G, J., E-mail: julian.sanchez@inin.gob.mx [ININ, Carretera Mexico-Toluca s/n, 52750 Ocoyoacac, Estado de Mexico (Mexico)

    2015-09-15

    The solution of the so-called canonical problems of neutron transport theory was given by Case, who developed a method akin to the classical eigenfunction expansion procedure, extended to admit singular eigenfunctions. The solution is given as a set consisting of a Fredholm integral equation coupled with a transcendental equation, which has to be solved for the expansion coefficients by iteration. Case's method makes extensive use of the theory of functions of a complex variable, and many successful approaches to solving the above-mentioned set approximately have been reported in the literature. We present here an entirely different approach which deals with the canonical problems in a more direct and elementary manner. As far as we know, the original idea for the latter method is due to Carlvik, who devised the escape probability approximation to the solution of the neutron transport equation in its integral form. In essence, the procedure consists in assuming a sectionally constant form of the neutron density, which in turn yields a set of linear algebraic equations obeyed by the assumed constant values of the density. Well-established techniques of numerical analysis for the solution of integral equations generalize the sectionally constant approach by assuming a sectionally low-degree polynomial for the unknown function. This procedure, also known as the arbitrary quadratures method, is especially suited to cases where the kernel of the integral equation is singular. The author presents the results obtained with the arbitrary quadratures method for the numerical calculation of the monoenergetic neutron density in a critical, homogeneous sphere of finite radius with isotropic scattering. The singular integral equation obeyed by the neutron density in the critical sphere is introduced, an outline of the method's main features is given, and tables and graphs of the density

  4. Ceramography of Irradiated tristructural isotropic (TRISO) Fuel from the AGR-2 Experiment

    Energy Technology Data Exchange (ETDEWEB)

    Rice, Francine Joyce [Idaho National Lab. (INL), Idaho Falls, ID (United States); Stempien, John Dennis [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-09-01

    Ceramography was performed on cross sections from four tristructural isotropic (TRISO) coated particle fuel compacts taken from the AGR-2 experiment, which was irradiated between June 2010 and October 2013 in the Advanced Test Reactor (ATR). The fuel compacts examined in this study contained TRISO-coated particles with either uranium oxide (UO2) kernels or uranium oxide/uranium carbide (UCO) kernels that were irradiated to final burnup values between 9.0 and 11.1% FIMA. These examinations are intended to explore kernel and coating morphology evolution during irradiation, including kernel porosity, swelling, and migration, as well as irradiation-induced coating fracture and separation. Variations in behavior within a specific cross section, which could be related to temperature or burnup gradients within the fuel compact, are also explored. The criteria for categorizing post-irradiation particle morphologies developed for the AGR-1 ceramographic exams were applied to the particles examined in the AGR-2 compacts. Results are compared with similar investigations performed as part of the earlier AGR-1 irradiation experiment. This paper presents the results of the AGR-2 examinations and discusses the key implications for fuel irradiation performance.

  5. Improved age-diffusion model for low-energy electron transport in solids. I. Theory

    International Nuclear Information System (INIS)

    Devooght, J.; Dubus, A.; Dehaes, J.C.

    1987-01-01

    We have developed in this paper a semianalytical electron transport model designed for parametric studies of secondary-electron emission induced by low-energy electrons (keV range) and by fast light ions (100 keV range). The primary-particle transport is assumed to be known and to give rise to an internal electron source. The nearly isotropic elastic scattering in the secondary-electron energy range (about 50 eV) and the slowing-down process strongly reduce the influence of the anisotropy of the internal electron source, and the internal electron flux is nearly isotropic, as is evidenced by the experimental results. The differential energy behavior of the inelastic scattering kernel is very complicated, and the real kernel is replaced by a synthetic scattering kernel whose parameters are obtained by conservation of energy and angle moments. Through a P1 approximation and the use of the synthetic scattering kernel, the Boltzmann equation is approximated by a diffusion-slowing-down equation for the isotropic part of the internal electron flux. The energy-dependent partial-reflection boundary condition reduces to a Neumann-Dirichlet boundary condition. An analytical expression for the Green's function of the diffusion-slowing-down equation with the surface boundary condition is obtained by means of approximations close to age-diffusion theory, and the model allows for transient conditions. Independently of the "improved age-diffusion" model, a correction formula is developed to take into account the backscattering of primary electrons in an incident-electron problem

  6. Single-Fiber Reflectance Spectroscopy of Isotropic-Scattering Medium: An Analytic Perspective to the Ratio-of-Remission in Steady-State Measurements

    Directory of Open Access Journals (Sweden)

    Daqing Piao

    2014-12-01

    Recent focused Monte Carlo and experimental studies on steady-state single-fiber reflectance spectroscopy (SfRS) from a biologically relevant scattering medium have revealed that, as the dimensionless reduced scattering of the medium increases, the SfRS intensity increases monotonically until reaching a plateau. The SfRS signal is semi-empirically decomposed into the product of three contributing factors, including a ratio-of-remission (RoR) term that refers to the ratio of photons remitting from the medium and crossing the fiber-medium interface over the total number of photons launched into the medium. The RoR is expressed with respect to the dimensionless reduced scattering parameter μs'df, where μs' is the reduced scattering coefficient of the medium and df is the diameter of the probing fiber. We develop in this work, under the assumption of an isotropic-scattering medium, a method of analytical treatment that indicates the pattern of RoR as a function of the dimensionless reduced scattering of the medium. The RoR is derived in four cases, corresponding to in-medium (applied to interstitial probing of biological tissue) or surface-based (applied to contact probing of biological tissue) SfRS measurements using straight-polished or angle-polished fibers. The analytically derived surface-probing RoR corresponding to single-fiber probing using a 15° angle-polished fiber, over the studied range of μs'df, agrees with a previously reported, similarly configured experimental measurement from a scattering medium that has a Henyey-Greenstein scattering phase function with an anisotropy factor of 0.8. In cases of a medium scattering light anisotropically, we propose how the treatment may be extended to account for the scattering anisotropy using the result of a study of light scattering close to the point-of-entry by Vitkin et al. (Nat. Commun. 2011, doi:10.1038/ncomms1599).

  7. Singular characteristic tracking algorithm for improved solution accuracy of the discrete ordinates method with isotropic scattering

    International Nuclear Information System (INIS)

    Duo, J. I.; Azmy, Y. Y.

    2007-01-01

    A new method, the Singular Characteristics Tracking algorithm, is developed to account for potential non-smoothness across the singular characteristics in the exact solution of the discrete ordinates approximation of the transport equation. Numerical results show improved rate of convergence of the solution to the discrete ordinates equations in two spatial dimensions with isotropic scattering using the proposed methodology. Unlike the standard Weighted Diamond Difference methods, the new algorithm achieves local convergence in the case of discontinuous angular flux along the singular characteristics. The method also significantly reduces the error for problems where the angular flux presents discontinuous spatial derivatives across these lines. For purposes of verifying the results, the Method of Manufactured Solutions is used to generate analytical reference solutions that permit estimating the local error in the numerical solution. (authors)

  8. Interparameter trade-off quantification and reduction in isotropic-elastic full-waveform inversion: synthetic experiments and Hussar land data set application

    Science.gov (United States)

    Pan, Wenyong; Geng, Yu; Innanen, Kristopher A.

    2018-05-01

    The problem of inverting for multiple physical parameters in the subsurface using seismic full-waveform inversion (FWI) is complicated by interparameter trade-off arising from inherent ambiguities between different physical parameters. Parameter resolution is often characterized using scattering radiation patterns, but these neglect some important aspects of interparameter trade-off. More general analysis and mitigation of interparameter trade-off in isotropic-elastic FWI is possible through judiciously chosen multiparameter Hessian matrix-vector products. We show that products of multiparameter Hessian off-diagonal blocks with model perturbation vectors, referred to as interparameter contamination kernels, are central to the approach. We apply the multiparameter Hessian to various vectors designed to provide information regarding the strengths and characteristics of interparameter contamination, both locally and within the whole volume. With numerical experiments, we observe that S-wave velocity perturbations introduce strong contaminations into density and phase-reversed contaminations into P-wave velocity, but themselves experience only limited contaminations from other parameters. Based on these findings, we introduce a novel strategy to mitigate the influence of interparameter trade-off with approximate contamination kernels. Furthermore, we recommend that the local spatial and interparameter trade-off of the inverted models be quantified using extended multiparameter point spread functions (EMPSFs) obtained with a preconditioned conjugate-gradient algorithm. Compared to traditional point spread functions, the EMPSFs appear to provide more accurate measurements for resolution analysis, by de-blurring the estimations, scaling magnitudes and mitigating interparameter contamination. Approximate eigenvalue volumes constructed with a stochastic probing approach are proposed to evaluate the resolution of the inverted models within the whole model. With a synthetic

  9. Numerical prediction of heat transfer by natural convection and radiation in an enclosure filled with an isotropic scattering medium

    International Nuclear Information System (INIS)

    Moufekkir, F.; Moussaoui, M.A.; Mezrhab, A.; Naji, H.; Lemonnier, D.

    2012-01-01

    This paper deals with the numerical solution of natural convection and volumetric radiation in an isotropic scattering medium within a heated square cavity using a hybrid thermal lattice Boltzmann method (HTLBM). The multiple-relaxation-time lattice Boltzmann method (MRT-LBM) has been coupled to the finite difference method (FDM) to solve the momentum and energy equations, while the discrete ordinates method (DOM) has been adopted to solve the radiative transfer equation (RTE) using the S8 quadrature. Based on these approaches, the effects of various influencing parameters, such as the Rayleigh number (Ra), the wall emissivity (ε), the Planck number (Pl), and the scattering albedo (ω), have been considered. The results, presented in terms of isotherms, streamlines, and averaged Nusselt numbers, show that in the absence of radiation the temperature and flow fields are centro-symmetric and the cavity core is thermally stratified. However, radiation causes an overall increase in the temperature and velocity gradients along both thermally active walls. The maximum heat transfer rate is obtained when the surfaces of the enclosure walls are regarded as blackbodies. It is also seen that the scattering medium can generate a multicellular flow.

  10. Analysis of the multi-component pseudo-pure-mode qP-wave inversion in vertical transverse isotropic (VTI) media

    KAUST Repository

    Djebbi, Ramzi

    2014-08-05

    Multi-parameter inversion in anisotropic media suffers from the inherent trade-off between the anisotropic parameters, even under the acoustic assumption. Multi-component data, now often acquired in ocean-bottom and land surveys, provide additional information capable of resolving anisotropic parameters under the acoustic approximation. Based on the Born scattering approximation, we develop formulas capable of characterizing the radiation patterns for the acoustic pseudo-pure-mode P-waves. Though displacement fields are commonly reserved for the elastic case, we use them to constrain the acoustic vertical transverse isotropic (VTI) representation of the medium. Using the asymptotic Green's functions and a horizontal reflector, we derive the radiation patterns for perturbations in the anisotropic medium. The radiation pattern for the anellipticity parameter η is identically zero for the horizontal displacement, which allows us to dedicate this component to inverting for velocity and δ. Computing the traveltime sensitivity kernels based on the unwrapped phase confirms the radiation-pattern observations and provides the model wavenumber behavior of the update.

  11. Collision kernels in the eikonal approximation for Lennard-Jones interaction potential

    International Nuclear Information System (INIS)

    Zielinska, S.

    1985-03-01

    Velocity-changing collisions are conveniently described by collision kernels. These kernels depend on an interaction potential, and there is a need to evaluate them for realistic interatomic potentials. Using the collision kernels, we are able to investigate the redistribution of atomic populations caused by laser light and velocity-changing collisions. In this paper we present a method of evaluating the collision kernels in the eikonal approximation. We discuss the influence of the potential parameters R_o^(i) and ε_o^(i) on the kernel width for a given atomic state. It turns out that, unlike the collision kernel for the hard-sphere model of scattering, the Lennard-Jones kernel is not as sensitive to changes of R_o^(i). Contrary to the general tendency to approximate collision kernels by a Gaussian curve, kernels for the Lennard-Jones potential do not exhibit such behaviour. (author)
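The basic ingredient of the eikonal approximation is the phase accumulated along a straight-line trajectory at impact parameter b, chi(b) = -(1/ħv) ∫ V(sqrt(b² + z²)) dz. For the Lennard-Jones potential this z-integral has a closed form, which the sketch below uses as a check; the reduced units ħv = ε = σ = 1 are an illustrative assumption, not values from the paper.

```python
import numpy as np
from math import pi

# Eikonal phase along a straight line, chi(b) = -(1/hbar v) \int V(sqrt(b^2+z^2)) dz,
# for the Lennard-Jones potential V(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6).
# Reduced units hbar*v = eps = sigma = 1 are an illustrative assumption.
def lj(r, eps=1.0, sigma=1.0):
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6**2 - sr6)

def chi_numeric(b, zmax=40.0, n=400001):
    z = np.linspace(-zmax, zmax, n)
    v = lj(np.sqrt(b**2 + z**2))
    return -np.sum((v[1:] + v[:-1]) / 2 * np.diff(z))  # trapezoidal integral

def chi_exact(b, eps=1.0, sigma=1.0):
    # \int dz (b^2+z^2)^(-n/2) = b^(1-n) sqrt(pi) Gamma((n-1)/2) / Gamma(n/2)
    # gives 63*pi/256 * b^-11 for n = 12 and 3*pi/8 * b^-5 for n = 6.
    return -4.0 * eps * (63.0 * pi / 256.0 * sigma**12 / b**11
                         - 3.0 * pi / 8.0 * sigma**6 / b**5)

print(chi_numeric(1.0), chi_exact(1.0))  # both ~ 33*pi/64 ≈ 1.62
```

The kernel itself would then follow from integrating exp(i·chi) over impact parameters, which is the part specific to the paper's method.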

  12. Multiregion, multigroup collision probability method with white boundary condition for light water reactor thermalization calculations

    International Nuclear Information System (INIS)

    Ozgener, B.; Ozgener, H.A.

    2005-01-01

    A multiregion, multigroup collision probability method with white boundary condition is developed for thermalization calculations of light water moderated reactors. Hydrogen scatterings are treated by Nelkin's kernel while scatterings from other nuclei are assumed to obey the free-gas scattering kernel. The isotropic return (white) boundary condition is applied directly by using the appropriate collision probabilities. Comparisons with alternate numerical methods show the validity of the present formulation. Comparisons with some experimental results indicate that the present formulation is capable of calculating disadvantage factors which are closer to the experimental results than alternative methods

  13. Assessing opportunities for physical activity in the built environment of children: interrelation between kernel density and neighborhood scale.

    Science.gov (United States)

    Buck, Christoph; Kneib, Thomas; Tkaczick, Tobias; Konstabel, Kenn; Pigeot, Iris

    2015-12-22

    Built environment studies provide broad evidence that urban characteristics influence physical activity (PA). However, findings are still difficult to compare, due to inconsistent measures assessing urban point characteristics and varying definitions of spatial scale. Both were found to influence the strength of the association between the built environment and PA. We simultaneously evaluated the effect of kernel approaches and network-distances to investigate the association between urban characteristics and physical activity depending on spatial scale and intensity measure. We assessed urban measures of point characteristics such as intersections, public transit stations, and public open spaces in ego-centered network-dependent neighborhoods, based on geographical data of one German study region of the IDEFICS study. We calculated point intensities using the simple intensity and kernel approaches based on fixed bandwidths, cross-validated bandwidths including isotropic and anisotropic kernel functions, and adaptive bandwidths that adjust for residential density. We distinguished six network-distances from 500 m up to 2 km to calculate each intensity measure. A log-gamma regression model was used to investigate the effect of each urban measure on the moderate-to-vigorous physical activity (MVPA) of 400 children aged 2 to 9.9 years who participated in the IDEFICS study. Models were stratified by sex and age group, i.e. pre-school children and school children. The smallest variation in effect estimates over network-distances was found for kernel intensity measures based on isotropic and anisotropic cross-validated bandwidth selection. We found a strong variation in the association between the built environment and the PA of children depending on the choice of intensity measure and network-distance. Kernel intensity measures provided stable results over various scales and improved the assessment compared to the simple intensity measure. Considering different spatial
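The contrast between a simple intensity and a kernel-weighted intensity can be sketched as follows; Euclidean distances stand in for the study's network distances, and the coordinates, buffer radius, and bandwidth are invented for illustration:

```python
import numpy as np

# Simple vs kernel-based intensity of urban point features (e.g. intersections)
# around a residence. Euclidean distances stand in for network distances; all
# coordinates below are invented for illustration.
rng = np.random.default_rng(0)
points = rng.uniform(-1000.0, 1000.0, size=(200, 2))   # feature locations (m)
home = np.array([0.0, 0.0])
d = np.linalg.norm(points - home, axis=1)

def simple_intensity(dist, radius):
    """Feature count per km^2 within a circular buffer (all features weighted 1)."""
    area_km2 = np.pi * radius**2 / 1e6
    return np.sum(dist <= radius) / area_km2

def kernel_intensity(dist, radius, h):
    """Gaussian-kernel-weighted intensity (per km^2): nearby features count
    more than features close to the buffer edge; h is the bandwidth (m)."""
    w = np.exp(-0.5 * (dist / h) ** 2)
    area_km2 = np.pi * radius**2 / 1e6
    return np.sum(w[dist <= radius]) / area_km2

r = 500.0   # buffer distance (m), one of several scales one would compare
print(simple_intensity(d, r), kernel_intensity(d, r, h=250.0))
```

Because the Gaussian weights never exceed one, the kernel intensity is bounded by the simple intensity; varying `h` (fixed, cross-validated, or adaptive) is what distinguishes the bandwidth schemes compared in the study.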

  14. Automatic performance tuning of parallel and accelerated seismic imaging kernels

    KAUST Repository

    Haberdar, Hakan

    2014-01-01

    With the increased complexity and diversity of mainstream high performance computing systems, significant effort is required to tune parallel applications in order to achieve the best possible performance on each particular platform. This task is becoming more and more challenging, requiring an ever larger set of skills. Automatic performance tuning is becoming a must for optimizing applications such as Reverse Time Migration (RTM), widely used in seismic imaging for oil and gas exploration. An empirical-search-based auto-tuning approach is applied to the MPI communication operations of the parallel isotropic and tilted transverse isotropic kernels. The application of auto-tuning using the Abstract Data and Communication Library improved the performance of the MPI communications as well as developer productivity by providing a higher level of abstraction. Keeping productivity in mind, we opted for pragma-based programming for accelerated computation on the latest accelerator architectures such as GPUs, using the fairly new OpenACC standard. The same auto-tuning approach is also applied to the OpenACC-accelerated seismic code, optimizing the compute-intensive kernel of the Reverse Time Migration application. This technique improved the performance of the original code and its ability to adapt to different execution environments.

  15. Unconventional application of the two-flux approximation for the calculation of the Ambartsumyan-Chandrasekhar function and the angular spectrum of the backward-scattered radiation for a semi-infinite isotropically scattering medium

    Science.gov (United States)

    Remizovich, V. S.

    2010-06-01

    It is commonly accepted that the Schwarzschild-Schuster two-flux approximation (1905, 1914) can be employed only for the calculation of the energy characteristics of the radiation field (energy density and energy flux density) and cannot be used to characterize the angular distribution of the radiation field. However, such an inference is not valid. In several cases, one can calculate the radiation intensity inside matter and the reflected radiation with the aid of this simplest approximation in transport theory. In this work, we use the results of the simplest one-parameter variant of the two-flux approximation to calculate the angular distribution (reflection function) of the radiation reflected by a semi-infinite isotropically scattering dissipative medium when a relatively broad beam is incident on the medium at an arbitrary angle relative to the surface. We do not employ the invariance principle and demonstrate that the reflection function exhibits the multiplicative property. It can be represented as a product of three functions: the reflection function corresponding to single scattering and two identical h functions, which have the same physical meaning as the Ambartsumyan-Chandrasekhar function (H) has. This circumstance allows a relatively easy derivation of simple analytical expressions for the H function, the total reflectance, and the reflection function. We can easily determine the relative contribution of true single scattering to photon backscattering at an arbitrary probability of photon survival Λ. We compare all of the parameters of the backscattered radiation with the data resulting from calculations using the exact theory of Ambartsumyan, Chandrasekhar, et al., which was developed decades after the two-flux approximation. Thus, we avoid the application of fine mathematical methods (the Wiener-Hopf method, the Case method of singular functions, etc.) and obtain simple analytical expressions for the parameters of the scattered radiation
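
    The multiplicative structure described above rests on the Ambartsumyan-Chandrasekhar H function. For isotropic scattering with photon survival probability Λ it satisfies the classical nonlinear integral equation H(μ) = 1 + (Λ/2) μ H(μ) ∫₀¹ H(t)/(μ + t) dt, which can be solved by fixed-point iteration. A minimal Python sketch (grid resolution and iteration count are arbitrary choices, not taken from the paper), checked against the exact zeroth moment ∫₀¹ H(μ) dμ = (2/Λ)(1 − √(1 − Λ)):

```python
def h_function(albedo, n=100, iters=200):
    """Chandrasekhar H function for isotropic scattering, solved by
    fixed-point iteration of H = 1 / (1 - (albedo/2)*mu*int H(t)/(mu+t) dt)
    on a midpoint grid over (0, 1]."""
    mus = [(i + 0.5) / n for i in range(n)]
    w = 1.0 / n                        # midpoint quadrature weight
    H = [1.0] * n
    for _ in range(iters):
        H = [1.0 / (1.0 - 0.5 * albedo * mu *
                    sum(Hj * w / (mu + t) for Hj, t in zip(H, mus)))
             for mu in mus]
    return mus, H

mus, H = h_function(0.9)               # photon survival probability 0.9
alpha0 = sum(H) / len(H)               # numerical zeroth moment of H
```

    The reflection function of the semi-infinite medium then factorizes, per the abstract, into the single-scattering term times H(μ₀)H(μ).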

  16. Robust Kernel (Cross-) Covariance Operators in Reproducing Kernel Hilbert Space toward Kernel Methods

    OpenAIRE

    Alam, Md. Ashad; Fukumizu, Kenji; Wang, Yu-Ping

    2016-01-01

    To the best of our knowledge, there are no general well-founded robust methods for statistical unsupervised learning. Most of the unsupervised methods explicitly or implicitly depend on the kernel covariance operator (kernel CO) or the kernel cross-covariance operator (kernel CCO). They are sensitive to contaminated data, even when using bounded positive definite kernels. First, we propose a robust kernel covariance operator (robust kernel CO) and a robust kernel cross-covariance operator (robust kern...
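
    The kernel cross-covariance operator these methods build on has a concrete empirical form: with doubly centered Gram matrices K̃ and L̃, the squared Hilbert-Schmidt norm of the empirical operator is the HSIC statistic tr(K̃L̃)/n². A small self-contained sketch with a Gaussian kernel and toy scalar data (bandwidth and data are illustrative; the robust variants proposed in the paper replace this plain estimator with a weighted one):

```python
import math

def gram(xs, sigma=1.0):
    """Gaussian-kernel Gram matrix for scalar inputs."""
    return [[math.exp(-(a - b) ** 2 / (2.0 * sigma ** 2)) for b in xs]
            for a in xs]

def center(K):
    """Doubly center a Gram matrix: K - row means - col means + grand mean."""
    n = len(K)
    row = [sum(r) / n for r in K]
    col = [sum(K[i][j] for i in range(n)) / n for j in range(n)]
    tot = sum(row) / n
    return [[K[i][j] - row[i] - col[j] + tot for j in range(n)]
            for i in range(n)]

def hsic(xs, ys, sigma=1.0):
    """Empirical HSIC = tr(K~ L~) / n^2, the squared Hilbert-Schmidt norm
    of the empirical kernel cross-covariance operator."""
    K, L = center(gram(xs, sigma)), center(gram(ys, sigma))
    n = len(xs)
    return sum(K[i][j] * L[j][i] for i in range(n) for j in range(n)) / n ** 2

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
dep = hsic(xs, xs)             # fully dependent pair
indep = hsic(xs, [1.0] * 5)    # a constant y carries no dependence
```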

  17. Evaluation of a scattering correction method for high energy tomography

    Science.gov (United States)

    Tisseur, David; Bhatia, Navnina; Estre, Nicolas; Berge, Léonie; Eck, Daniel; Payan, Emmanuel

    2018-01-01

    One of the main drawbacks of Cone Beam Computed Tomography (CBCT) is the contribution of the scattered photons due to the object and the detector. Scattered photons are deflected from their original path after their interaction with the object. This additional contribution of the scattered photons results in increased measured intensities, since the scattered intensity simply adds to the transmitted intensity. This effect is seen as an overestimation in the measured intensity, thus corresponding to an underestimation of absorption. This results in artifacts like cupping, shading, streaks, etc. on the reconstructed images. Moreover, the scattered radiation introduces a bias into quantitative tomographic reconstruction (for example, atomic number and volumetric mass density measurement with the dual-energy technique). The effect can be significant, and difficult to correct, in the MeV energy range for large objects due to the higher Scatter-to-Primary Ratio (SPR). Additionally, incident high-energy photons which are scattered by the Compton effect are more forward directed and hence more likely to reach the detector. Moreover, in the MeV energy range, the contribution of photons produced by pair production and the Bremsstrahlung process also becomes important. We propose an evaluation of a scattering correction technique based on the method named Scatter Kernel Superposition (SKS). The algorithm uses a continuously thickness-adapted kernels method. The analytical parameterizations of the scatter kernels are derived in terms of material thickness, to form continuously thickness-adapted kernel maps in order to correct the projections. This approach has proved to be efficient in producing better sampling of the kernels with respect to the object thickness. This technique offers applicability over a wide range of imaging conditions and gives users an additional advantage. Moreover, since no extra hardware is required by this approach, it forms a major advantage especially in those cases where
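
    The SKS correction itself has a simple structure: each projection pixel spreads scatter to its neighbours through a kernel selected by the local object thickness, and the superposed scatter map is subtracted from the measurement. A 1-D Python sketch (the kernel shape and its coefficients are invented stand-ins, not the paper's fitted parameterizations):

```python
import math

def scatter_kernel(thickness, r, a=0.01, b=0.05):
    """Hypothetical thickness-adapted kernel: amplitude grows with the
    local object thickness, shape decays with detector distance r."""
    return a * thickness * math.exp(-b * r)

def estimate_scatter(primary, thickness, pitch=1.0):
    """Scatter Kernel Superposition in 1-D: every pixel spreads scatter
    to all pixels through the kernel selected by its local thickness."""
    n = len(primary)
    scatter = [0.0] * n
    for j in range(n):                 # pixel acting as scatter source
        for i in range(n):             # pixel receiving scatter
            scatter[i] += primary[j] * scatter_kernel(
                thickness[j], abs(i - j) * pitch)
    return scatter

primary = [100.0, 80.0, 60.0, 80.0, 100.0]   # measured projection (a.u.)
thickness = [1.0, 2.0, 3.0, 2.0, 1.0]        # object thickness map (cm)
scatter = estimate_scatter(primary, thickness)
corrected = [p - s for p, s in zip(primary, scatter)]
```

    In the paper this superposition runs in 2-D over the projections, with kernels parameterized analytically in material thickness.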

  18. Diffraction of SH-waves by topographic features in a layered transversely isotropic half-space

    Science.gov (United States)

    Ba, Zhenning; Liang, Jianwen; Zhang, Yanju

    2017-01-01

    The scattering of plane SH-waves by topographic features in a layered transversely isotropic (TI) half-space is investigated by using an indirect boundary element method (IBEM). Firstly, the anti-plane dynamic stiffness matrix of the layered TI half-space is established and the free fields are solved by using the direct stiffness method. Then, Green's functions are derived for uniformly distributed loads acting on an inclined line in a layered TI half-space and the scattered fields are constructed with the deduced Green's functions. Finally, the free fields are added to the scattered ones to obtain the global dynamic responses. The method is verified by comparing results with the published isotropic ones. Both the steady-state and transient dynamic responses are evaluated and discussed. Numerical results in the frequency domain show that surface motions for the TI media can be significantly different from those for the isotropic case, which are strongly dependent on the anisotropy property, incident angle and incident frequency. Results in the time domain show that the material anisotropy has important effects on the maximum duration and maximum amplitudes of the time histories.

  19. Multigroup computation of the temperature-dependent Resonance Scattering Model (RSM) and its implementation

    Energy Technology Data Exchange (ETDEWEB)

    Ghrayeb, S. Z. [Dept. of Mechanical and Nuclear Engineering, Pennsylvania State Univ., 230 Reber Building, Univ. Park, PA 16802 (United States); Ouisloumen, M. [Westinghouse Electric Company, 1000 Westinghouse Drive, Cranberry Township, PA 16066 (United States); Ougouag, A. M. [Idaho National Laboratory, MS-3860, PO Box 1625, Idaho Falls, ID 83415 (United States); Ivanov, K. N.

    2012-07-01

    A multi-group formulation for the exact neutron elastic scattering kernel is developed. This formulation is intended for implementation into a lattice physics code. The correct accounting for the crystal lattice effects influences the estimated values for the probability of neutron absorption and scattering, which in turn affect the estimation of core reactivity and burnup characteristics. A computer program has been written to test the formulation for various nuclides. Results of the multi-group code have been verified against the correct analytic scattering kernel. In both cases neutrons were started at various energies and temperatures and the corresponding scattering kernels were tallied. (authors)

  20. Microscopic description of the collisions between nuclei. [Generator coordinate kernels

    Energy Technology Data Exchange (ETDEWEB)

    Canto, L F; Brink, D M [Oxford Univ. (UK). Dept. of Theoretical Physics

    1977-03-21

    The equivalence of the generator coordinate method and the resonating group method is used in the derivation of two new methods to describe the scattering of spin-zero fragments. Both these methods use generator coordinate kernels, but avoid the problem of calculating the generator coordinate weight function in the asymptotic region. The scattering of two α-particles is studied as an illustration.

  1. Isotropic blackbody cosmic microwave background radiation as evidence for a homogeneous universe.

    Science.gov (United States)

    Clifton, Timothy; Clarkson, Chris; Bull, Philip

    2012-08-03

    The question of whether the Universe is spatially homogeneous and isotropic on the largest scales is of fundamental importance to cosmology but has not yet been answered decisively. Surprisingly, neither an isotropic primary cosmic microwave background (CMB) nor combined observations of luminosity distances and galaxy number counts are sufficient to establish such a result. The inclusion of the Sunyaev-Zel'dovich effect in CMB observations, however, dramatically improves this situation. We show that even a solitary observer who sees an isotropic blackbody CMB can conclude that the Universe is homogeneous and isotropic in their causal past when the Sunyaev-Zel'dovich effect is present. Critically, however, the CMB must either be viewed for an extended period of time, or CMB photons that have scattered more than once must be detected. This result provides a theoretical underpinning for testing the cosmological principle with observations of the CMB alone.

  2. Generation of point isotropic source dose buildup factor data for the PFBR special concretes in a form compatible for usage in point kernel computer code QAD-CGGP

    International Nuclear Information System (INIS)

    Radhakrishnan, G.

    2003-01-01

    Full text: Around the PFBR (Prototype Fast Breeder Reactor) reactor assembly, special concretes of density 2.4 g/cm³ and 3.6 g/cm³ are to be used in complex geometrical shapes in the peripheral shields. A point-kernel computer code like QAD-CGGP, written for complex shield geometries, comes in handy for the shield design optimization of peripheral shields. QAD-CGGP requires a database of buildup factor data, and it contains only ordinary concrete of density 2.3 g/cm³. In order to extend the database to the PFBR special concretes, point isotropic source dose buildup factors have been generated by the Monte Carlo method using the computer code MCNP-4A. For the above-mentioned special concretes, buildup factor data have been generated in the energy range 0.5 MeV to 10.0 MeV, with thicknesses ranging from 1 mean free path (mfp) to 40 mfp. A fit of the buildup factor data to Capo's formula, compatible with QAD-CGGP, has been attempted
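
    The point-kernel dose that a code like QAD-CGGP accumulates per source point has the form Φ = S · B(μr) · e^(−μr) / (4πr²), with the buildup factor B evaluated at the shield thickness in mean free paths, μr. A sketch with an illustrative two-exponential buildup fit (the coefficients are invented for demonstration, not Capo's actual fitted values for these concretes):

```python
import math

def point_kernel_dose(S, mu, r, buildup):
    """Point-kernel flux at distance r (cm) from an isotropic point source
    of strength S with attenuation coefficient mu (1/cm); the buildup
    factor is evaluated at the thickness in mean free paths, mu*r."""
    mfp = mu * r
    return S * buildup(mfp) * math.exp(-mfp) / (4.0 * math.pi * r ** 2)

def two_exp_buildup(x, A=10.0, a1=-0.1, a2=0.05):
    """Illustrative two-exponential (Taylor-form) buildup fit; the
    coefficients here are invented, not fitted values."""
    return A * math.exp(-a1 * x) + (1.0 - A) * math.exp(-a2 * x)

dose_bare = point_kernel_dose(1e6, 0.2, 50.0, lambda x: 1.0)  # no buildup
dose_built = point_kernel_dose(1e6, 0.2, 50.0, two_exp_buildup)
```

    Tabulated Monte Carlo buildup factors, as generated here with MCNP-4A, supply the `buildup` function over 1-40 mfp.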

  3. Low energy neutron scattering for energy dependent cross sections. General considerations

    Energy Technology Data Exchange (ETDEWEB)

    Rothenstein, W; Dagan, R [Technion-Israel Inst. of Tech., Haifa (Israel). Dept. of Mechanical Engineering

    1996-12-01

    We consider in this paper some aspects related to neutron scattering at low energies by nuclei which are subject to thermal agitation. The scattering is determined by a temperature dependent joint scattering kernel, or the corresponding joint probability density, which is a function of two variables, the neutron energy after scattering, and the cosine of the angle of scattering, for a specified energy and direction of motion of the neutron, before the interaction takes place. This joint probability density is easy to calculate, when the nucleus which causes the scattering of the neutron is at rest. It can be expressed by a delta function, since there is a one to one correspondence between the neutron energy change, and the cosine of the scattering angle. If the thermal motion of the target nucleus is taken into account, the calculation is rather more complicated. The delta function relation between the cosine of the angle of scattering and the neutron energy change is now averaged over the spectrum of velocities of the target nucleus, and becomes a joint kernel depending on both these variables. This function has a simple form, if the target nucleus behaves as an ideal gas, which has a scattering cross section independent of energy. An energy dependent scattering cross section complicates the treatment further. An analytic expression is no longer obtained for the ideal gas temperature dependent joint scattering kernel as a function of the neutron energy after the interaction and the cosine of the scattering angle. Instead the kernel is expressed by an inverse Fourier Transform of a complex integrand, which is averaged over the velocity spectrum of the target nucleus. (Abstract Truncated)
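
    The ideal-gas limit with an energy-independent cross section described above is straightforward to sample by Monte Carlo: draw a target velocity from a Maxwellian, scatter isotropically in the centre of mass, and tally the outgoing energy. A hedged Python sketch (units chosen so that energy = speed squared; the target mass ratio, temperature, and sample count are illustrative):

```python
import math, random

def sample_E_out(E, A, kT, rng):
    """One free-gas elastic scattering event (constant cross section):
    draw the target velocity from a Maxwellian, scatter isotropically in
    the centre of mass, return the lab-frame outgoing energy.
    Units are chosen so that energy = speed**2."""
    v_n = math.sqrt(E)                       # neutron moving along x
    s = math.sqrt(kT / (2.0 * A))            # Maxwellian component std dev
    v_t = [rng.gauss(0.0, s) for _ in range(3)]
    v_rel = [v_n - v_t[0], -v_t[1], -v_t[2]]
    v_cm = [(v_n + A * v_t[0]) / (1.0 + A),
            A * v_t[1] / (1.0 + A),
            A * v_t[2] / (1.0 + A)]
    speed_rel = math.sqrt(sum(c * c for c in v_rel))
    mu = 2.0 * rng.random() - 1.0            # isotropic CM direction
    phi = 2.0 * math.pi * rng.random()
    st = math.sqrt(1.0 - mu * mu)
    d = [mu, st * math.cos(phi), st * math.sin(phi)]
    v_out = [v_cm[k] + A / (1.0 + A) * speed_rel * d[k] for k in range(3)]
    return sum(c * c for c in v_out)

rng = random.Random(0)
E, A, kT = 1.0, 16.0, 0.0253                 # eV; oxygen-like target, 293 K
samples = [sample_E_out(E, A, kT, rng) for _ in range(20000)]
mean_out = sum(samples) / len(samples)
```

    For E ≫ kT this reproduces the stationary-target expectation ⟨E'/E⟩ = (1 + α)/2 with α = ((A−1)/(A+1))²; a 2-D histogram of outgoing energy and scattering cosine over many samples tallies the joint kernel the abstract describes.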

  4. New statistical model of inelastic fast neutron scattering

    International Nuclear Information System (INIS)

    Stancicj, V.

    1975-07-01

    A new statistical model for treating fast neutron inelastic scattering has been proposed by using the general expressions of the double differential cross section in the impulse approximation. The use of the Fermi-Dirac distribution of nucleons makes it possible to derive an analytical expression for the fast neutron inelastic scattering kernel, including the angular momenta coupling. The values of the inelastic fast neutron cross section calculated from the derived expression of the scattering kernel are in good agreement with experiments. A main advantage of the derived expressions lies in their simplicity for practical calculations

  5. Geometrical effects in X-mode scattering

    International Nuclear Information System (INIS)

    Bretz, N.

    1986-10-01

    One technique to extend microwave scattering as a probe of long wavelength density fluctuations in magnetically confined plasmas is to consider the launching and scattering of extraordinary (X-mode) waves nearly perpendicular to the field. When the incident frequency is less than the electron cyclotron frequency, this mode can penetrate beyond the ordinary mode cutoff at the plasma frequency and avoid significant distortions from density gradients typical of tokamak plasmas. In the more familiar case, where the incident and scattered waves are ordinary, the scattering is isotropic perpendicular to the field. However, because the X-mode polarization depends on the frequency ratios and the ray angle to the magnetic field, the coupling between the incident and scattered waves is complicated. This geometrical form factor must be unfolded from the observed scattering in order to interpret the scattering due to density fluctuations alone. The geometrical factor is calculated here for the special case of scattering perpendicular to the magnetic field. For frequencies above the ordinary mode cutoff the scattering is relatively isotropic, while below cutoff there are minima in the forward and backward directions which go to zero at approximately half the ordinary mode cutoff density

  6. Primary and scattering contributions to beta scaled dose point kernels by means of Monte Carlo simulations; Contribuicoes primaria e espalhada para dosimetria beta calculadas pelo dose point kernels empregando simulacoes pelo Metodo Monte Carlo

    Energy Technology Data Exchange (ETDEWEB)

    Valente, Mauro [CONICET - Consejo Nacional de Investigaciones Cientificas y Tecnicas de la Republica Argentina (CONICET), Buenos Aires (Argentina); Botta, Francesca; Pedroli, Guido [European Institute of Oncology, Milan (Italy). Medical Physics Department; Perez, Pedro, E-mail: valente@famaf.unc.edu.ar [Universidad Nacional de Cordoba, Cordoba (Argentina). Fac. de Matematica, Astronomia y Fisica (FaMAF)

    2012-07-01

    Beta-emitters have proved to be appropriate for radioimmunotherapy. The dosimetric characterization of each radionuclide has to be carefully investigated. One usual and practical dosimetric approach is the calculation of the dose distribution from a unit point source emitting particles according to any radionuclide of interest, which is known as the dose point kernel. Absorbed dose distributions are due to primary and scattered radiation contributions. This work presents a method capable of producing dose distributions for nuclear medicine dosimetry by means of Monte Carlo methods. Dedicated subroutines have been developed in order to separately compute the primary and scattering contributions to the total absorbed dose, transporting particles down to 1 keV or lower. The suitability of the calculation method was first verified for monoenergetic sources, and it was further applied to the characterization of different beta-minus radionuclides of nuclear medicine interest for radioimmunotherapy. (author)

  7. Determination of point isotropic buildup factors of gamma rays including incoherent and coherent scattering for aluminum, iron, lead, and water by discrete ordinates method

    International Nuclear Information System (INIS)

    Kitsos, S.; Assad, A.; Diop, C.M.; Nimal, J.C.

    1994-01-01

    Exposure and energy absorption buildup factors for aluminum, iron, lead, and water are calculated by the SNID discrete ordinates code for an isotropic point source in a homogeneous medium. The calculation of the buildup factors takes into account the effects of both bound-electron Compton (incoherent) and coherent (Rayleigh) scattering. A comparison with buildup factors from the literature shows that these two effects greatly increase the buildup factors for energies below a few hundred kilo-electron-volts, and thus the new results are improved relative to experiment. This greater accuracy is due to the increase in the linear attenuation coefficient, which leads to the calculation of the buildup factors for a mean free path with a smaller shield thickness. On the other hand, for the same shield thickness, exposure increases when only incoherent scattering is included and decreases when only coherent scattering is included, so that the exposure finally decreases when both effects are included. Great care must also be taken when checking the approximations used for gamma-ray deep-penetration transport calculations, as well as the treatment and origin of the cross sections

  8. Simulation of isotropic scattering of charged particles by composed potentials

    CERN Document Server

    Gerasimov, O Y

    2003-01-01

    The analytical model of scattering of charged particles by a multicentered adiabatic potential, which consists of long-range Coulomb and short-range potentials, is used for the parametrization of experiments on elastic low-energy proton-deuteron scattering. For energies of 2.26-13 MeV, analytical expressions for the scattering phase function are obtained in terms of identical parameters which depend on the proton-proton and proton-neutron scattering lengths and effective radii and on the effective size of the deuteron. The results are in good qualitative accordance with experiments.

  9. Time-dependent integral transport equation kernels, leakage rates and collision rates for plane and spherical geometry

    International Nuclear Information System (INIS)

    Henderson, D.L.

    1987-01-01

    Time-dependent integral transport equation flux and current kernels for plane and spherical geometry are derived for homogeneous media. Using the multiple collision formalism, isotropic sources that are delta distributions in time are considered for four different problems. The plane geometry flux kernel is applied to a uniformly distributed source within an infinite medium and to a surface source in a semi-infinite medium. The spherical flux kernel is applied to a point source in an infinite medium and to a point source at the origin of a finite sphere. The time-dependent first-flight leakage rates corresponding to the existing steady state first-flight escape probabilities are computed by the Laplace transform technique assuming a delta distribution source in time. The case of a constant source emitting neutrons over a time interval, Δt, for a spatially uniform source is obtained for a slab and a sphere. Time-dependent first-flight leakage rates are also determined for the general two region spherical medium problem for isotropic sources with a delta distribution in time uniformly distributed throughout both the inner and outer regions. The time-dependent collision rates due to the uncollided neutrons are computed for a slab and a sphere using the time-dependent first-flight leakage rates and the time-dependent continuity equation. The case of a constant source emitting neutrons over a time interval, Δt, is also considered

  10. Estimation of biological parameters of marine organisms using linear and nonlinear acoustic scattering model-based inversion methods.

    Science.gov (United States)

    Chu, Dezhang; Lawson, Gareth L; Wiebe, Peter H

    2016-05-01

    The linear inversion commonly used in fisheries and zooplankton acoustics assumes a constant inversion kernel and ignores the uncertainties associated with the shape and behavior of the scattering targets, as well as other relevant animal parameters. Here, errors of the linear inversion due to uncertainty associated with the inversion kernel are quantified. A scattering model-based nonlinear inversion method is presented that takes into account the nonlinearity of the inverse problem and is able to estimate simultaneously animal abundance and the parameters associated with the scattering model inherent to the kernel. It uses sophisticated scattering models to estimate first, the abundance, and second, the relevant shape and behavioral parameters of the target organisms. Numerical simulations demonstrate that the abundance, size, and behavior (tilt angle) parameters of marine animals (fish or zooplankton) can be accurately inferred from the inversion by using multi-frequency acoustic data. The influence of the singularity and uncertainty in the inversion kernel on the inversion results can be mitigated by examining the singular values for linear inverse problems and employing a non-linear inversion involving a scattering model-based kernel.
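
    The contrast between the two inversions can be made concrete with a toy forward model: the linear inversion fixes the kernel (here, the size parameter) and solves only for abundance, while the nonlinear inversion re-evaluates the kernel for every candidate size. A sketch using a grid search in place of a formal nonlinear least-squares solver (the backscatter model and all numbers are invented for illustration):

```python
def backscatter(f, size):
    """Toy scattering-model kernel, nonlinear in animal size (a stand-in
    for a physics-based fish or zooplankton scattering model)."""
    x = f * size
    return x * x / (1.0 + x * x)

def forward(freqs, abundance, size):
    return [abundance * backscatter(f, size) for f in freqs]

def nonlinear_invert(freqs, data, sizes, abundances):
    """Grid search over (abundance, size): unlike the linear inversion,
    the kernel is re-evaluated for every candidate size."""
    best = None
    for size in sizes:
        for n in abundances:
            pred = forward(freqs, n, size)
            err = sum((p - d) ** 2 for p, d in zip(pred, data))
            if best is None or err < best[0]:
                best = (err, n, size)
    return best

freqs = [38.0, 70.0, 120.0, 200.0]        # kHz, a typical multi-frequency set
data = forward(freqs, 50.0, 0.01)         # synthetic "measurements"
err, n_hat, size_hat = nonlinear_invert(
    freqs, data,
    sizes=[0.005 + 0.001 * i for i in range(11)],
    abundances=[10.0 * i for i in range(1, 11)])
```

    Multi-frequency data are what make the joint estimate identifiable: abundance scales the spectrum while size changes its shape.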

  11. Multiple scattering processes: inverse and direct

    International Nuclear Information System (INIS)

    Kagiwada, H.H.; Kalaba, R.; Ueno, S.

    1975-01-01

    The purpose of the work is to formulate inverse problems in radiative transfer, to introduce the functions b and h as parameters of internal intensity in homogeneous slabs, and to derive initial value problems to replace the more traditional boundary value problems and integral equations of multiple scattering with high computational efficiency. The discussion covers multiple scattering processes in a one-dimensional medium; isotropic scattering in homogeneous slabs illuminated by parallel rays of radiation; the theory of functions b and h in homogeneous slabs illuminated by isotropic sources of radiation either at the top or at the bottom; inverse and direct problems of multiple scattering in slabs including internal sources; multiple scattering in inhomogeneous media, with particular reference to inverse problems for estimation of layers and total thickness of inhomogeneous slabs and to multiple scattering problems with Lambert's law and specular reflectors underlying slabs; and anisotropic scattering with reduction of the number of relevant arguments through axially symmetric fields and expansion in Legendre functions. Gaussian quadrature data for a seven point formula, a FORTRAN program for computing the functions b and h, and tables of these functions supplement the text

  12. The scattering properties of anisotropic dielectric spheres on electromagnetic waves

    International Nuclear Information System (INIS)

    Chen Hui; Zhang Weiyi; Wang Zhenlin; Ming Naiben

    2004-01-01

    The scattering coefficients of spheres with dielectric anisotropy are calculated analytically in this paper using the perturbation method. It is found that the different modes of vector spherical harmonics and polarizations are coupled together in the scattering coefficients (c-matrix) in contrast to the isotropic case where all modes are decoupled from each other. The generalized c-matrix is then incorporated into our codes for a vector wave multiple scattering program; the preliminary results on face centred cubic structure show that dielectric anisotropy reduces the symmetry of the scattering c-matrix and removes the degeneracy in photonic band structures composed of isotropic dielectric spheres

  13. Effective exchange potentials for electronically inelastic scattering

    International Nuclear Information System (INIS)

    Schwenke, D.W.; Staszewska, G.; Truhlar, D.G.

    1983-01-01

    We propose new methods for solving the electron scattering close coupling equations employing equivalent local exchange potentials in place of the continuum-multiconfiguration-Hartree-Fock-type exchange kernels. The local exchange potentials are Hermitian. They have the correct symmetry for any symmetries of excited electronic states included in the close coupling expansion, and they have the same limit at very high energy as previously employed exchange potentials. Comparison of numerical calculations employing the new exchange potentials with the results obtained with the standard nonlocal exchange kernels shows that the new exchange potentials are more accurate than the local exchange approximations previously available for electronically inelastic scattering. We anticipate that the new approximations will be most useful for intermediate-energy electronically inelastic electron--molecule scattering

  14. Quantum scattering at low energies

    DEFF Research Database (Denmark)

    Derezinski, Jan; Skibsted, Erik

    2009-01-01

    For a class of negative slowly decaying potentials, including V(x) := −γ|x|^(−μ) with 0 < μ < 2, we study quantum mechanical scattering theory in the low-energy regime. Using appropriate modifiers of the Isozaki–Kitada type we show that scattering theory is well behaved on the whole continuous spectrum...... of the Hamiltonian, including the energy 0. We show that the modified scattering matrices S(λ) are well-defined and strongly continuous down to the zero energy threshold. Similarly, we prove that the modified wave matrices and generalized eigenfunctions are norm continuous down to the zero energy if we use...... of the kernel of S(λ) experiences an abrupt change in passing from positive energies λ to the limiting energy λ = 0. This change corresponds to the behaviour of the classical orbits. Under stronger conditions one can extract the leading term of the asymptotics of the kernel of S(λ) at its singularities.

  15. Numerical study of the thermal degradation of isotropic and anisotropic polymeric materials

    Energy Technology Data Exchange (ETDEWEB)

    Soler, E. [Departamento de Lenguajes y Ciencias de la Computacion, ETSI Informatica, Universidad de Malaga, 29071 Malaga (Spain); Ramos, J.I. [Room I-320-D, ETS Ingenieros Industriales, Universidad de Malaga, Plaza El Ejido, s/n, 29013 Malaga (Spain)

    2005-08-01

    The thermal degradation of two-dimensional isotropic, orthotropic and anisotropic polymeric materials is studied numerically by means of a second-order accurate (in both space and time) linearly implicit finite difference formulation which results in linear algebraic equations at each time step. It is shown that, for both isotropic and orthotropic composites, the monomer mass diffusion tensor plays a role in initiating the polymerization kinetics, the formation of a polymerization kernel and the initial front propagation, whereas the later stages of the polymerization are nearly independent of the monomer mass diffusion tensor. In anisotropic polymeric composites, it has been found that the monomer mass diffusion tensor plays a paramount role in determining the initial stages of the polymerization and the subsequent propagation of the polymerization front, the direction and speed of propagation of which are found to be related to the principal directions of both the monomer mass and the heat diffusion tensors. It is also shown that the polymerization time and temperatures depend strongly on the anisotropy of the mass and heat diffusion tensors. (authors)
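
    The role of the diffusion tensors can be illustrated with a minimal explicit finite-difference scheme. The paper uses a second-order linearly implicit formulation; this forward-Euler sketch with made-up tensor values only shows how anisotropy steers the spread of heat away from an initial kernel:

```python
def step(T, D, dt, h=1.0):
    """One explicit finite-difference step of dT/dt = div(D grad T) for a
    constant symmetric 2x2 diffusion tensor D; boundary values stay fixed."""
    dxx, dxy, dyy = D[0][0], D[0][1], D[1][1]
    n, m = len(T), len(T[0])
    new = [row[:] for row in T]
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            txx = T[i + 1][j] - 2.0 * T[i][j] + T[i - 1][j]
            tyy = T[i][j + 1] - 2.0 * T[i][j] + T[i][j - 1]
            txy = (T[i + 1][j + 1] - T[i + 1][j - 1]
                   - T[i - 1][j + 1] + T[i - 1][j - 1]) / 4.0
            new[i][j] = T[i][j] + dt / h ** 2 * (
                dxx * txx + 2.0 * dxy * txy + dyy * tyy)
    return new

n = 11
T = [[0.0] * n for _ in range(n)]
T[5][5] = 1.0                      # initial hot spot: the reaction kernel
D = [[0.2, 0.05], [0.05, 0.1]]     # anisotropic tensor, tilted principal axis
for _ in range(20):
    T = step(T, D, dt=0.5)
total = sum(sum(row) for row in T)
```

    With dxx > dyy the front propagates faster along the first grid direction, mirroring the paper's observation that propagation direction and speed follow the principal directions of the diffusion tensors.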

  16. Anisotropy function for proton-proton elastic scattering

    International Nuclear Information System (INIS)

    Saleem, Mohammad; Fazal-e-Aleem; Azhar, I.A.

    1990-01-01

    By using the generalized Chou-Yang model and the experimental data on pp elastic scattering at 53 GeV, the anisotropy function which reflects the non-isotropic nature of elastic scattering is computed for the reaction pp→pp. (author)

  17. A general framework and review of scatter correction methods in cone beam CT. Part 2: Scatter estimation approaches

    International Nuclear Information System (INIS)

    Ruehrnschopf, Ernst-Peter; Klingenbeck, Klaus

    2011-01-01

    The main components of scatter correction procedures are scatter estimation and a scatter compensation algorithm. This paper completes a previous paper where a general framework for scatter compensation was presented under the prerequisite that a scatter estimation method is already available. In the current paper, the authors give a systematic review of the variety of scatter estimation approaches. Scatter estimation methods are based on measurements, mathematical-physical models, or combinations of both. For completeness they present an overview of measurement-based methods, but the main topic is the theoretically more demanding models, such as analytical, Monte-Carlo, and hybrid models. Further classifications are 3D image-based and 2D projection-based approaches. The authors present a system-theoretic framework, which allows one to proceed top-down from a general 3D formulation, by successive approximations, to efficient 2D approaches. A widely useful method is the beam-scatter-kernel superposition approach. Together with the review of standard methods, the authors discuss their limitations and how to take into account the issues of object dependency, spatial variance, deformation of scatter kernels, and external and internal absorbers. Open questions for further investigation are indicated. Finally, the authors comment on some special issues and applications, such as the bow-tie filter, offset detector, truncated data, and dual-source CT.

  18. Anisotropy function for proton-proton elastic scattering

    Energy Technology Data Exchange (ETDEWEB)

    Saleem, Mohammad; Fazal-e-Aleem; Azhar, I.A. (Punjab Univ., Lahore (Pakistan). Centre for High Energy Physics)

    1990-07-01

    By using the generalized Chou-Yang model and the experimental data on pp elastic scattering at 53 GeV, the anisotropy function which reflects the non-isotropic nature of elastic scattering is computed for the reaction pp→pp. (author).

  19. Volume integral equation for electromagnetic scattering: Rigorous derivation and analysis for a set of multilayered particles with piecewise-smooth boundaries in a passive host medium

    Science.gov (United States)

    Yurkin, Maxim A.; Mishchenko, Michael I.

    2018-04-01

    We present a general derivation of the frequency-domain volume integral equation (VIE) for the electric field inside a nonmagnetic scattering object from the differential Maxwell equations, transmission boundary conditions, radiation condition at infinity, and locally-finite-energy condition. The derivation applies to an arbitrary spatially finite group of particles made of isotropic materials and embedded in a passive host medium, including those with edges, corners, and intersecting internal interfaces. This is a substantially more general type of scatterer than in all previous derivations. We explicitly treat the strong singularity of the integral kernel, but keep the entire discussion accessible to the applied scattering community. We also consider the known results on the existence and uniqueness of VIE solution and conjecture a general sufficient condition for that. Finally, we discuss an alternative way of deriving the VIE for an arbitrary object by means of a continuous transformation of the everywhere smooth refractive-index function into a discontinuous one. Overall, the paper examines and pushes forward the state-of-the-art understanding of various analytical aspects of the VIE.

  20. Inverse radiation problem of temperature distribution in one-dimensional isotropically scattering participating slab with variable refractive index

    International Nuclear Information System (INIS)

    Namjoo, A.; Sarvari, S.M. Hosseini; Behzadmehr, A.; Mansouri, S.H.

    2009-01-01

    In this paper, an inverse analysis is performed to estimate the source term distribution from measured exit radiation intensities at the boundary surfaces of a one-dimensional absorbing, emitting and isotropically scattering medium between two parallel plates with variable refractive index. The variation of refractive index is assumed to be linear. The radiative transfer equation is solved by the constant quadrature discrete ordinate method. The inverse problem is formulated as an optimization problem that minimizes an objective function expressed as the sum of squared deviations between measured and estimated exit radiation intensities at the boundary surfaces. The conjugate gradient method is used to solve the inverse problem through an iterative procedure. The effects of various variables on source estimation are investigated, such as the type of source function, errors in the measured data and system parameters, the gradient of refractive index across the medium, optical thickness, single scattering albedo and boundary emissivities. The results show that in the case of noisy input data, variation of system parameters may affect the inverse solution, especially at high error values in the measured data. The error in measured data plays a more important role than the error in radiative system parameters, except for the refractive index distribution; the accuracy of source estimation is very sensitive to error in the refractive index distribution. Therefore, the refractive index distribution and measured exit intensities should be measured accurately, with a limited error bound, in order to obtain an accurate estimate of the source term in a graded index medium.
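A minimal sketch of the optimization step described above: conjugate gradients applied to minimizing F(s) = ||I_meas − A s||², with a stand-in random matrix A playing the role of the discrete-ordinates forward model. The forward operator, grid sizes, and all names are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_src, n_det = 20, 30
A = rng.random((n_det, n_src)) * 0.1           # stand-in linear forward model
s_true = np.sin(np.linspace(0, np.pi, n_src))  # "true" source profile
I_meas = A @ s_true                            # noiseless exit intensities

def cg_least_squares(A, b, iters=200):
    """Conjugate gradients on the normal equations A^T A s = A^T b,
    i.e. minimizing F(s) = ||b - A s||^2."""
    x = np.zeros(A.shape[1])
    r = A.T @ (b - A @ x)
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = A.T @ (A @ p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r = r - alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < 1e-14:   # converged
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

s_est = cg_least_squares(A, I_meas)
```

With noisy data one would stop the iteration early (the paper's iterative procedure plays a regularizing role); here the noiseless problem is simply driven to convergence.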

  1. Locally linear approximation for Kernel methods : the Railway Kernel

    OpenAIRE

    Muñoz, Alberto; González, Javier

    2008-01-01

    In this paper we present a new kernel, the Railway Kernel, that works properly for general (nonlinear) classification problems, with the interesting property that it acts locally as a linear kernel. In this way, we avoid potential problems due to the use of a general-purpose kernel, like the RBF kernel, such as the high dimension of the induced feature space. As a consequence, following our methodology the number of support vectors is much lower and, therefore, the generalization capab...

  2. Four-particle scattering with three-particle interactions

    International Nuclear Information System (INIS)

    Adhikari, S.K.

    1979-01-01

    The four-particle scattering formalism proposed independently by Alessandrini, by Mitra et al., by Rosenberg, and by Takahashi and Mishima is extended to include a possible three-particle interaction. The kernel of the new equations contains both two- and three-body connected parts and becomes four-body connected after one iteration. On the other hand, the kernel of the original equations in the absence of three-particle interactions does not have a two-body connected part. We also write scattering equations for the transition operators connecting the two-body fragmentation channels. These are generalizations of the Sloan equations in the presence of three-particle interactions. We indicate how to include approximately the effect of a weak three-particle interaction in a practical four-particle scattering calculation

  3. Selection and properties of alternative forming fluids for TRISO fuel kernel production

    Energy Technology Data Exchange (ETDEWEB)

    Baker, M.P. [Colorado School of Mines, 1500 Illinois St., Golden, CO 80401 (United States); King, J.C., E-mail: kingjc@mines.edu [Colorado School of Mines, 1500 Illinois St., Golden, CO 80401 (United States); Gorman, B.P. [Colorado School of Mines, 1500 Illinois St., Golden, CO 80401 (United States); Marshall, D.W. [Idaho National Laboratory, 2525 N. Fremont Avenue, P.O. Box 1625, Idaho Falls, ID 83415 (United States)

    2013-01-15

    Highlights: • Forming fluid selection criteria developed for TRISO kernel production. • Ten candidates selected for further study. • Density, viscosity, and surface tension measured for the first time. • Settling velocity and heat transfer rates calculated. • Three fluids recommended for kernel production testing. - Abstract: Current Very High Temperature Reactor (VHTR) designs incorporate TRi-structural ISOtropic (TRISO) fuel, which consists of a spherical fissile fuel kernel surrounded by layers of pyrolytic carbon and silicon carbide. An internal sol-gel process forms the fuel kernel using wet chemistry to produce uranium oxyhydroxide gel spheres by dropping a cold precursor solution into a hot column of trichloroethylene (TCE). Over time, gelation byproducts inhibit complete gelation, and the TCE must be purified or discarded. The resulting TCE waste stream contains both radioactive and hazardous materials and is thus considered a mixed hazardous waste. Changing the forming fluid to a non-hazardous alternative could greatly improve the economics of TRISO fuel kernel production. Selection criteria for a replacement forming fluid narrowed a list of ~10,800 chemicals to ten potential replacement forming fluids: 1-bromododecane, 1-bromotetradecane, 1-bromoundecane, 1-chlorooctadecane, 1-chlorotetradecane, 1-iododecane, 1-iodododecane, 1-iodohexadecane, 1-iodooctadecane, and squalane. The density, viscosity, and surface tension of each potential replacement forming fluid were measured as a function of temperature between 25 °C and 80 °C. Calculated settling velocities and heat transfer rates give an overall column height approximation. 1-bromotetradecane, 1-chlorooctadecane, and 1-iodododecane show the greatest promise as replacements, and future tests will verify their ability to form satisfactory
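The settling-velocity and column-height estimate mentioned above can be sketched with Stokes' law. All property values below are assumed for illustration and are not the paper's data; note that the resulting Reynolds number falls outside the Stokes regime (Re < 1), so an actual selection study would switch to an intermediate-regime drag correlation.

```python
# Stokes-law settling velocity of a gel droplet falling through a forming
# fluid, and the column height needed for a given gelation residence time.
# All property values are illustrative assumptions, not data from the paper.

def stokes_velocity(d, rho_drop, rho_fluid, mu, g=9.81):
    """Terminal velocity (m/s) of a sphere of diameter d (m), Stokes regime."""
    return g * d**2 * (rho_drop - rho_fluid) / (18.0 * mu)

def reynolds_number(v, d, rho_fluid, mu):
    """Check whether the Stokes assumption (Re < 1) actually holds."""
    return rho_fluid * v * d / mu

def column_height(v_settle, residence_time):
    """Height (m) so the droplet stays in the hot column long enough to gel."""
    return v_settle * residence_time

d = 1.0e-3          # 1 mm droplet (assumed)
rho_drop = 1500.0   # kg/m^3, precursor droplet density (assumed)
rho_fluid = 1000.0  # kg/m^3, candidate forming fluid density (assumed)
mu = 2.0e-3         # Pa*s, fluid viscosity (assumed)

v = stokes_velocity(d, rho_drop, rho_fluid, mu)
h = column_height(v, residence_time=5.0)
re = reynolds_number(v, d, rho_fluid, mu)   # here Re >> 1: Stokes not valid
```

The Re check is the point of the sketch: with these numbers the droplet leaves the creeping-flow regime, which is why drag correlations matter in a real column-height calculation.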

  4. Geometrical considerations in analyzing isotropic or anisotropic surface reflections.

    Science.gov (United States)

    Simonot, Lionel; Obein, Gael

    2007-05-10

    The bidirectional reflectance distribution function (BRDF) represents the evolution of the reflectance with the directions of incidence and observation. Today BRDF measurements are increasingly applied and have become important to the study of the appearance of surfaces. The representation and the analysis of BRDF data are discussed, and the distortions caused by the traditional representation of the BRDF in a Fourier plane are pointed out and illustrated for two theoretical cases: an isotropic surface and a brushed surface. These considerations will help characterize either the specular peak width of an isotropic rough surface or the main directions of the light scattered by an anisotropic rough surface without misinterpretations. Finally, what is believed to be a new space is suggested for the representation of the BRDF, which avoids the geometrical deformations and in numerous cases is more convenient for BRDF analysis.

  5. Extreme Scale FMM-Accelerated Boundary Integral Equation Solver for Wave Scattering

    KAUST Repository

    AbdulJabbar, Mustafa Abdulmajeed; Al Farhan, Mohammed; Al-Harthi, Noha A.; Chen, Rui; Yokota, Rio; Bagci, Hakan; Keyes, David E.

    2018-01-01

    scattering, which uses FMM as a matrix-vector multiplication inside the GMRES iterative method. Our FMM Helmholtz kernels treat nontrivial singular and near-field integration points. We implement highly optimized kernels for both shared and distributed memory

  6. Anisotropy function for pion-proton elastic scattering

    Energy Technology Data Exchange (ETDEWEB)

    Saleem, Mohammad; Fazal-e-Aleem; Rashid, Haris

    1988-09-01

    By using the generalised Chou-Yang model and the experimental data on π⁻p elastic scattering at 200 GeV/c, the anisotropy function which reflects the non-isotropic nature of elastic scattering is computed for the reaction π⁻p → π⁻p.

  7. Anisotropy function for pion-proton elastic scattering

    International Nuclear Information System (INIS)

    Saleem, Mohammad; Fazal-e-Aleem; Rashid, Haris

    1988-01-01

    By using the generalised Chou-Yang model and the experimental data on π - p elastic scattering at 200 GeV/c, the anisotropy function which reflects the non-isotropic nature of elastic scattering is computed for the reaction π - p → π - p. (author)

  8. Application of Van Hove theory to fast neutron inelastic scattering

    International Nuclear Information System (INIS)

    Stanicicj, V.

    1974-11-01

    The Van Hove general theory of the double differential scattering cross section has been used to derive particular expressions for the inelastic fast-neutron scattering kernel and scattering cross section. Since the energies of the incoming neutrons considered are less than 10 MeV, the Fermi gas model of nucleons can be used. In this case it was easy to derive an analytical expression for the time-dependent correlation function of the nucleus. Further, by using an impulse approximation and a short-collision-time approach, it was possible to derive analytical expressions for the scattering kernel and scattering cross section for fast-neutron inelastic scattering. The obtained expressions have been applied to the Fe nucleus and show surprisingly good agreement with experiment. The main advantage of this theory is its simplicity, both for practical calculations and for theoretical investigations of nuclear processes

  9. An Extreme Learning Machine Based on the Mixed Kernel Function of Triangular Kernel and Generalized Hermite Dirichlet Kernel

    Directory of Open Access Journals (Sweden)

    Senyue Zhang

    2016-01-01

    Full Text Available Because the kernel function of an extreme learning machine (ELM) is strongly correlated with its performance, a novel extreme learning machine based on a generalized triangle Hermitian kernel function is proposed in this paper. First, the generalized triangle Hermitian kernel function was constructed as the product of a triangular kernel and a generalized Hermite Dirichlet kernel, and the proposed kernel function was proved to be a valid kernel function for an extreme learning machine. Then, the learning methodology of the extreme learning machine based on the proposed kernel function was presented. The biggest advantage of the proposed kernel is that its kernel parameter values are chosen only from the natural numbers, which can greatly shorten the computational time of parameter optimization and retain more of the sample data structure information. Experiments were performed on a number of binary classification, multiclass classification, and regression datasets from the UCI benchmark repository. The experimental results demonstrate that the robustness and generalization performance of the proposed method outperform those of other extreme learning machines with different kernels. Furthermore, the learning speed of the proposed method is faster than that of support vector machine (SVM) methods.

  10. Kernel Machine SNP-set Testing under Multiple Candidate Kernels

    Science.gov (United States)

    Wu, Michael C.; Maity, Arnab; Lee, Seunggeun; Simmons, Elizabeth M.; Harmon, Quaker E.; Lin, Xinyi; Engel, Stephanie M.; Molldrem, Jeffrey J.; Armistead, Paul M.

    2013-01-01

    Joint testing for the cumulative effect of multiple single nucleotide polymorphisms grouped on the basis of prior biological knowledge has become a popular and powerful strategy for the analysis of large scale genetic association studies. The kernel machine (KM) testing framework is a useful approach that has been proposed for testing associations between multiple genetic variants and many different types of complex traits by comparing pairwise similarity in phenotype between subjects to pairwise similarity in genotype, with similarity in genotype defined via a kernel function. An advantage of the KM framework is its flexibility: choosing different kernel functions allows for different assumptions concerning the underlying model and can allow for improved power. In practice, it is difficult to know which kernel to use a priori since this depends on the unknown underlying trait architecture and selecting the kernel which gives the lowest p-value can lead to inflated type I error. Therefore, we propose practical strategies for KM testing when multiple candidate kernels are present based on constructing composite kernels and based on efficient perturbation procedures. We demonstrate through simulations and real data applications that the procedures protect the type I error rate and can lead to substantially improved power over poor choices of kernels and only modest differences in power versus using the best candidate kernel. PMID:23471868
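One simple way to form a composite kernel from several candidates, as discussed above, is a convex combination of the candidate kernel matrices. The two toy kernels and uniform weights below are illustrative assumptions; the paper's perturbation-based procedures for protecting type I error are more elaborate.

```python
import numpy as np

def linear_kernel(G):
    """Linear (additive-genetic-style) kernel: similarity = inner product."""
    return G @ G.T

def ibs_like_kernel(G):
    """Toy IBS-style similarity: 1 - normalized mean absolute genotype difference."""
    n = G.shape[0]
    gmax = G.max() if G.max() > 0 else 1.0
    K = np.zeros((n, n))
    for i in range(n):
        K[i] = 1.0 - np.abs(G - G[i]).mean(axis=1) / gmax
    return K

def composite_kernel(G, kernels, weights=None):
    """Convex combination of candidate kernel matrices (uniform by default)."""
    mats = [k(G) for k in kernels]
    if weights is None:
        weights = np.full(len(mats), 1.0 / len(mats))
    return sum(w * K for w, K in zip(weights, mats))

rng = np.random.default_rng(1)
G = rng.integers(0, 3, size=(10, 50)).astype(float)  # 10 subjects, 50 SNPs (0/1/2)
K = composite_kernel(G, [linear_kernel, ibs_like_kernel])
```

The composite matrix K would then be plugged into the usual KM variance-component score test in place of a single candidate kernel.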

  11. Accurate palm vein recognition based on wavelet scattering and spectral regression kernel discriminant analysis

    Science.gov (United States)

    Elnasir, Selma; Shamsuddin, Siti Mariyam; Farokhi, Sajad

    2015-01-01

    Palm vein recognition (PVR) is a promising new biometric that has been applied successfully as a method of access control by many organizations, which has even further potential in the field of forensics. The palm vein pattern has highly discriminative features that are difficult to forge because of its subcutaneous position in the palm. Despite considerable progress and a few practical issues, providing accurate palm vein readings has remained an unsolved issue in biometrics. We propose a robust and more accurate PVR method based on the combination of wavelet scattering (WS) with spectral regression kernel discriminant analysis (SRKDA). As the dimension of WS generated features is quite large, SRKDA is required to reduce the extracted features to enhance the discrimination. The results based on two public databases-PolyU Hyper Spectral Palmprint public database and PolyU Multi Spectral Palmprint-show the high performance of the proposed scheme in comparison with state-of-the-art methods. The proposed approach scored a 99.44% identification rate and a 99.90% verification rate [equal error rate (EER)=0.1%] for the hyperspectral database and a 99.97% identification rate and a 99.98% verification rate (EER=0.019%) for the multispectral database.

  12. Angle-domain inverse scattering migration/inversion in isotropic media

    Science.gov (United States)

    Li, Wuqun; Mao, Weijian; Li, Xuelei; Ouyang, Wei; Liang, Quan

    2018-07-01

    The classical seismic asymptotic inversion can be transformed into a problem of inversion of the generalized Radon transform (GRT). In such methods, the combined parameters are linearly related to the scattered wave-field by the Born approximation and recovered by applying an inverse GRT operator to the scattered wave-field data. A typical GRT-style true-amplitude inversion procedure contains an amplitude compensation process after the weighted migration, via division by an illumination-associated matrix whose elements are integrals over scattering angles. It is intuitive to some extent to perform the generalized linear inversion and the inversion of the GRT together in this process for direct inversion. However, it is imprecise to carry out such an operation when the illumination at the image point is limited, which easily leads to inaccuracy and instability of the matrix. This paper formulates the GRT true-amplitude inversion framework in an angle-domain version, which naturally eliminates the external integral term related to the illumination in the conventional case. We solve the linearized integral equation for combined parameters at different fixed scattering-angle values. With this step, we obtain high-quality angle-domain common-image gathers (CIGs) in the migration loop, which provide correct amplitude-versus-angle (AVA) behavior and a reasonable illumination range for subsurface image points. Then we deal with the over-determined problem to solve for each parameter in the combination by a standard optimization operation. The angle-domain GRT inversion method avoids calculating the inaccurate and unstable illumination matrix. Compared with the conventional method, the angle-domain method can obtain more accurate amplitude information and a wider amplitude-preserved range. Several model tests demonstrate its effectiveness and practicability.

  13. The Green's function approach to the neutron-inelastic-scattering determination of magnon dispersion relations for isotropic disordered magnets

    International Nuclear Information System (INIS)

    Czachor, A.; Al-Wahsh, H.

    1999-01-01

    Complete text of publication follows. To determine the neutron inelastic coherent scattering (MS) cross section for disordered magnets, a system of equations of motion for the Green functions (GF) related to the localized-spin correlation functions has been exploited. The higher-order Green functions are decoupled using a symmetric 'equal access' (EA) form of the RPA decoupling scheme. The quasi-crystal approximation (QCA) was applied to construct the space-time Fourier-transformed GF G_Q(ω) related to neutron scattering. On assuming isotropy of the magnetic structure and a short-range coupling between the spins (on-the-sphere approximation, OSA), we have found an explicit analytic form of this function. Poles of G_Q(ω) determine the dispersion relation ω = ω_Q for elementary excitations, as they are seen in the MS experiment: the positions of the MS profile maxima in the ω-Q space. The single formula for the dispersion relations derived here covers a variety of isotropic spin structures: in particular disordered 'longitudinal' ferromagnets (ω ∼ Q², Q→0), disordered 'transverse' spin structures (ω ∼ Q, Q→0), and some intermediate cases. For a system of identically coupled spins, the magnetization and the magnetic susceptibility calculated within the present EA-RPA approach agree with the results of exact calculations. This provides an interesting insight into the nature of the RPA treatment of localized-spin dynamics. (author)

  14. Validation of Born Traveltime Kernels

    Science.gov (United States)

    Baig, A. M.; Dahlen, F. A.; Hung, S.

    2001-12-01

    Most inversions for Earth structure using seismic traveltimes rely on linear ray theory to translate observed traveltime anomalies into seismic velocity anomalies distributed throughout the mantle. However, ray theory is not an appropriate tool to use when velocity anomalies have scale lengths less than the width of the Fresnel zone. In the presence of these structures, we need to turn to a scattering theory in order to adequately describe all of the features observed in the waveform. By coupling the Born approximation to ray theory, the first-order dependence of the cross-correlated traveltimes on heterogeneity (described by the Fréchet derivative or, more colourfully, the banana-doughnut kernel) may be determined. To determine for what range of parameters these banana-doughnut kernels outperform linear ray theory, we generate several random media specified by their statistical properties, namely the RMS slowness perturbation and the scale length of the heterogeneity. Acoustic waves are numerically generated from a point source using a 3-D pseudo-spectral wave propagation code. These waves are then recorded at a variety of propagation distances from the source, introducing a third parameter to the problem: the number of wavelengths traversed by the wave. When all of the heterogeneity has scale lengths larger than the width of the Fresnel zone, ray theory does as good a job at predicting the cross-correlated traveltime as the banana-doughnut kernels do. Below this limit, wavefront healing becomes a significant effect and ray theory ceases to be effective, even though the kernels remain relatively accurate provided the heterogeneity is weak. The study of wave propagation in random media is of more general interest, and we will also show how our measurements of the velocity shift and the variance of traveltime compare with various theoretical predictions in a given regime.

  15. Evaluation of sintering effects on SiC-incorporated UO2 kernels under Ar and Ar–4%H2 environments

    International Nuclear Information System (INIS)

    Silva, Chinthaka M.; Lindemer, Terrence B.; Hunt, Rodney D.; Collins, Jack L.; Terrani, Kurt A.; Snead, Lance L.

    2013-01-01

    Silicon carbide (SiC) is suggested as an oxygen getter in UO2 kernels used for tristructural isotropic (TRISO) particle fuels and to prevent kernel migration during irradiation. Scanning electron microscopy and X-ray diffractometry analyses performed on sintered kernels verified that an internal gelation process can be used to incorporate SiC in UO2 fuel kernels. Even though the presence of UC in either argon (Ar) or Ar–4%H2 sintered samples suggested a lowering of the SiC content to 3.5 and 1.4 mol%, respectively, the presence of other silicon-related chemical phases indicates the preservation of silicon in the kernels during the sintering process. UC formation was presumed to occur by two reactions. The first was the reaction of SiC with its protective SiO2 oxide layer on SiC grains to produce volatile SiO and free carbon that subsequently reacted with UO2 to form UC. The second was direct UO2 reaction with SiC grains to form SiO, CO, and UC. A slightly higher density and UC content were observed in the sample sintered in Ar–4%H2, but both atmospheres produced kernels with ∼95% of theoretical density. It is suggested that incorporating CO in the sintering gas could prevent UC formation and preserve the initial SiC content

  16. A new kernel discriminant analysis framework for electronic nose recognition

    International Nuclear Information System (INIS)

    Zhang, Lei; Tian, Feng-Chun

    2014-01-01

    Graphical abstract: - Highlights: • This paper proposes a new discriminant analysis framework for feature extraction and recognition. • The principle of the proposed NDA is derived mathematically. • The NDA framework is coupled with kernel PCA for classification. • The proposed KNDA is compared with state-of-the-art e-Nose recognition methods. • The proposed KNDA shows the best performance in e-Nose experiments. - Abstract: Electronic nose (e-Nose) technology based on metal oxide semiconductor gas sensor arrays is widely studied for the detection of gas components. This paper proposes a new discriminant analysis framework (NDA) for dimension reduction and e-Nose recognition. In the NDA, the between-class and within-class Laplacian scatter matrices are designed from sample to sample, respectively, to characterize the between-class separability and the within-class compactness, by seeking a discriminant matrix that simultaneously maximizes the between-class Laplacian scatter and minimizes the within-class Laplacian scatter. In terms of the linear separability in the high-dimensional kernel mapping space and the dimension reduction of principal component analysis (PCA), an effective kernel PCA plus NDA method (KNDA) is proposed for rapid detection of gas mixture components by an e-Nose. The NDA framework is derived in this paper, as well as the specific implementations of the proposed KNDA method in the training and recognition process. The KNDA is examined on e-Nose datasets of six kinds of gas components and compared with state-of-the-art e-Nose classification methods. Experimental results demonstrate that the proposed KNDA method shows the best performance, with an average recognition rate and total recognition rate of 94.14% and 95.06%, which leads to promising feature extraction and multi-class recognition in e-Nose applications

  17. Sigma set scattering equations in nuclear reaction theory

    International Nuclear Information System (INIS)

    Kowalski, K.L.; Picklesimer, A.

    1982-01-01

    The practical applications of partially summed versions of the Rosenberg equations involving only special subsets (sigma sets) of the physical amplitudes are investigated with special attention to the Pauli principle. The requisite properties of the transformations from the pair labels to the set of partitions labeling the sigma set of asymptotic channels are established. New, well-defined, scattering integral equations for the antisymmetrized transition operators are found which possess much less coupling among the physically distinct channels than hitherto expected for equations with kernels of equal complexity. In several cases of physical interest in nuclear physics, a single connected-kernel equation is obtained for the relevant antisymmetrized elastic scattering amplitude

  18. Scattering of obliquely incident standing wave by a rotating transversely isotropic cylinder

    CSIR Research Space (South Africa)

    Shatalov, MY

    2006-05-01

    Full Text Available Abstract: It is known that vibrating patterns of an isotropic cylinder, subjected to inertial rotation over the symmetry axis, precess in the direction...

  19. Absorption line profiles in a moving atmosphere - A single scattering linear perturbation theory

    Science.gov (United States)

    Hays, P. B.; Abreu, V. J.

    1989-01-01

    An integral equation is derived which linearly relates Doppler perturbations in the spectrum of atmospheric absorption features to the wind system which creates them. The perturbation theory is developed using a single scattering model, which is validated against a multiple scattering calculation. The nature and basic properties of the kernels in the integral equation are examined. It is concluded that the kernels are well behaved and that wind velocity profiles can be recovered using standard inversion techniques.
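The recovery step can be sketched as a discretized linear inverse problem d = K w solved by Tikhonov regularization, one of the standard inversion techniques the authors mention. The smooth Gaussian kernel, grids, and all names below are made up for illustration; the paper's kernels come from the single-scattering radiative model.

```python
import numpy as np

# Discretized integral equation d = K w: Doppler perturbations d at n_obs
# observation geometries, winds w at n_lvl altitude levels. The kernel
# below is a made-up smooth, well-behaved weighting, for illustration only.
n_obs, n_lvl = 40, 25
z = np.linspace(0.0, 1.0, n_lvl)     # normalized wind-profile altitudes
zt = np.linspace(0.0, 1.0, n_obs)    # normalized observation altitudes
K = np.exp(-((zt[:, None] - z[None, :]) ** 2) / 0.02)

w_true = np.sin(2 * np.pi * z)       # synthetic wind profile
d = K @ w_true                       # noiseless "observed" Doppler perturbations

# Tikhonov-regularized inversion: minimize ||d - K w||^2 + lam * ||w||^2
lam = 1e-6
w_rec = np.linalg.solve(K.T @ K + lam * np.eye(n_lvl), K.T @ d)
```

Because the paper's kernels are well behaved, a small regularization parameter suffices here; with noisy spectra, lam would be chosen to balance data fit against profile smoothness.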

  20. Data-variant kernel analysis

    CERN Document Server

    Motai, Yuichi

    2015-01-01

    Describes and discusses the variants of kernel analysis methods for data types that have been intensely studied in recent years This book covers kernel analysis topics ranging from the fundamental theory of kernel functions to its applications. The book surveys the current status, popular trends, and developments in kernel analysis studies. The author discusses multiple kernel learning algorithms and how to choose the appropriate kernels during the learning phase. Data-Variant Kernel Analysis is a new pattern analysis framework for different types of data configurations. The chapters include

  1. Homogenous isotropic invisible cloak based on geometrical optics.

    Science.gov (United States)

    Sun, Jingbo; Zhou, Ji; Kang, Lei

    2008-10-27

    An invisible cloak derived from a coordinate transformation requires its constitutive material to be anisotropic. In this work, we present a cloak of graded-index isotropic material based on geometrical optics theory. The cloak is realized by a concentric multilayered structure with a designed refractive index profile to achieve low scattering and smooth power flow. Full-wave simulations of such a design of a cylindrical cloak are performed to demonstrate its cloaking ability for incident waves of any polarization. Using normal natural material with isotropy and low absorption, the cloak sheds light on a practical path to stealth technology, especially in the optical range.

  2. Homogenization of linearly anisotropic scattering cross sections in a consistent B1 heterogeneous leakage model

    International Nuclear Information System (INIS)

    Marleau, G.; Debos, E.

    1998-01-01

    One of the main problems encountered in cell calculations is that of spatial homogenization, where one associates with a heterogeneous cell a homogeneous set of cross sections. The homogenization process is in fact trivial when a totally reflected cell without leakage is fully homogenized, since it involves only a flux-volume weighting of the isotropic cross sections. When anisotropic leakage models are considered, in addition to homogenizing isotropic cross sections, the anisotropic scattering cross section must also be considered. The simple option, which consists of using the same homogenization procedure for both the isotropic and anisotropic components of the scattering cross section, leads to inconsistencies between the homogeneous and homogenized transport equations. Here we present a method for homogenizing the anisotropic scattering cross sections that resolves these inconsistencies. (author)
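The trivial no-leakage case described above, flux-volume weighting of isotropic cross sections, can be written in a few lines. The region fluxes, volumes, and cross sections below are illustrative numbers, not data from the paper.

```python
import numpy as np

# Flux-volume weighting: sigma_hom = sum(phi_i * V_i * sigma_i) / sum(phi_i * V_i)
phi = np.array([1.0, 0.8, 0.6])    # region-averaged scalar fluxes (assumed)
V = np.array([2.0, 1.0, 1.0])      # region volumes (assumed)
sigma = np.array([0.5, 0.7, 0.9])  # region isotropic cross sections (assumed)

sigma_hom = np.sum(phi * V * sigma) / np.sum(phi * V)
```

The paper's point is that this recipe preserves reaction rates for the isotropic component but cannot simply be reused for the anisotropic scattering component once a leakage model is present.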

  3. Fracture analysis of a transversely isotropic high temperature superconductor strip based on real fundamental solutions

    Science.gov (United States)

    Gao, Zhiwen; Zhou, Youhe

    2015-04-01

    A real fundamental solution for the fracture problem of a transversely isotropic high temperature superconductor (HTS) strip is obtained. The superconductor E-J constitutive law is characterized by the Bean model, in which the critical current density is independent of the flux density. Fracture analysis is performed by the method of singular integral equations, which are solved numerically by the Gauss-Lobatto-Chebyshev (GLC) collocation method. To guarantee satisfactory accuracy, the convergence behavior of the kernel function is investigated. Numerical results for the fracture parameters are obtained, and the effects of the geometric characteristics, applied magnetic field and critical current density on the stress intensity factors (SIF) are discussed.

  4. Formalism for neutron cross section covariances in the resonance region using kernel approximation

    Energy Technology Data Exchange (ETDEWEB)

    Oblozinsky, P.; Cho,Y-S.; Matoon,C.M.; Mughabghab,S.F.

    2010-04-09

    We describe analytical formalism for estimating neutron radiative capture and elastic scattering cross section covariances in the resolved resonance region. We use capture and scattering kernels as the starting point and show how to get average cross sections in broader energy bins, derive analytical expressions for cross section sensitivities, and deduce cross section covariances from the resonance parameter uncertainties in the recently published Atlas of Neutron Resonances. The formalism elucidates the role of resonance parameter correlations which become important if several strong resonances are located in one energy group. Importance of potential scattering uncertainty as well as correlation between potential scattering and resonance scattering is also examined. Practical application of the formalism is illustrated on {sup 55}Mn(n,{gamma}) and {sup 55}Mn(n,el).
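A toy version of the sensitivity-based ("sandwich rule") covariance propagation described above, for one energy group containing two resonances. The simplified kernel form (statistical g-factor omitted) and all widths and uncertainties are illustrative assumptions, not values from the Atlas of Neutron Resonances.

```python
import numpy as np

def capture_kernel(Gn, Gg):
    """Simplified capture kernel ~ Gn*Gg/(Gn+Gg); g-factor and constants omitted."""
    return Gn * Gg / (Gn + Gg)

# Two resonances in one energy group; parameters p = [Gn1, Gg1, Gn2, Gg2] (eV, assumed)
p = np.array([0.4, 0.2, 0.1, 0.25])
rel_unc = np.array([0.05, 0.10, 0.08, 0.10])   # assumed relative uncertainties
cov_p = np.diag((p * rel_unc) ** 2)            # uncorrelated parameters here

def group_sigma(p):
    """Group quantity = sum of kernels of the resonances in the group."""
    return capture_kernel(p[0], p[1]) + capture_kernel(p[2], p[3])

# Numerical sensitivities S_i = d(sigma_g)/d(p_i), then Var = S Cov(p) S^T
eps = 1e-6
S = np.array([(group_sigma(p + eps * np.eye(4)[i]) - group_sigma(p)) / eps
              for i in range(4)])[None, :]
var_sigma = (S @ cov_p @ S.T)[0, 0]
rel_sigma_unc = np.sqrt(var_sigma) / group_sigma(p)
```

Correlations between parameters of strong resonances sharing a group, the effect the paper emphasizes, would enter through off-diagonal terms in cov_p.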

  5. Anatomical image-guided fluorescence molecular tomography reconstruction using kernel method

    Science.gov (United States)

    Baikejiang, Reheman; Zhao, Yue; Fite, Brett Z.; Ferrara, Katherine W.; Li, Changqing

    2017-01-01

    Abstract. Fluorescence molecular tomography (FMT) is an important in vivo imaging modality to visualize physiological and pathological processes in small animals. However, FMT reconstruction is ill-posed and ill-conditioned due to strong optical scattering in deep tissues, which results in poor spatial resolution. It is well known that FMT image quality can be improved substantially by applying the structural guidance in the FMT reconstruction. An approach to introducing anatomical information into the FMT reconstruction is presented using the kernel method. In contrast to conventional methods that incorporate anatomical information with a Laplacian-type regularization matrix, the proposed method introduces the anatomical guidance into the projection model of FMT. The primary advantage of the proposed method is that it does not require segmentation of targets in the anatomical images. Numerical simulations and phantom experiments have been performed to demonstrate the proposed approach’s feasibility. Numerical simulation results indicate that the proposed kernel method can separate two FMT targets with an edge-to-edge distance of 1 mm and is robust to false-positive guidance and inhomogeneity in the anatomical image. For the phantom experiments with two FMT targets, the kernel method has reconstructed both targets successfully, which further validates the proposed kernel method. PMID:28464120
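The core of the kernel method can be sketched as representing the image as x = Kα, with K built from anatomical feature similarity, and solving the forward model for the coefficients α instead of x directly. The radial Gaussian kernel, toy dimensions, and all names below are assumptions for illustration, not the paper's exact construction.

```python
import numpy as np

def anatomical_kernel(features, sigma=0.3):
    """K[i, j] = exp(-||f_i - f_j||^2 / (2 sigma^2)) from anatomical feature vectors."""
    diff = features[:, None, :] - features[None, :, :]
    return np.exp(-np.sum(diff**2, axis=2) / (2.0 * sigma**2))

rng = np.random.default_rng(2)
n_vox = 50
anat = rng.random((n_vox, 3))        # toy anatomical feature vectors per voxel
K = anatomical_kernel(anat)

# Kernel-guided image model x = K @ alpha. The reconstruction solves the
# forward model y = A x = (A K) alpha for alpha, then maps back to x.
A = rng.random((30, n_vox))          # stand-in linear forward model
x_true = rng.random(n_vox)
y = A @ x_true                       # noiseless "measurements"

alpha, *_ = np.linalg.lstsq(A @ K, y, rcond=None)
x_rec = K @ alpha
```

The anatomical guidance enters only through K, which is why no segmentation of targets in the anatomical image is required.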

  6. Analysis of the elastic moderation of fast neutrons through exact solutions involving synthetic kernels

    International Nuclear Information System (INIS)

    Moura Neto, C.; Chung, F.L.; Amorim, E.S.

    1979-07-01

    The computational difficulties in solving the transport equation for fast reactors can be reduced by developing approximate models, assuming that continuous moderation holds. Two approximations were studied: the first based on an expansion in Taylor series (the Fermi, Wigner, Greuling and Goertzel models), and the second involving synthetic kernels (the Walti, Turinsky, Becker and Malaviya models). The flux obtained by the exact method is compared with the fluxes from the different synthetic-kernel models. The present treatment is verified to be realistic for energies below the inelastic scattering threshold, as well as in the resonance region. (Author) [pt]

  7. Generalization of Asaoka method to linearly anisotropic scattering: benchmark data in cylindrical geometry

    International Nuclear Information System (INIS)

    Sanchez, Richard.

    1975-11-01

    The integral transform method for the neutron transport equation has been developed in recent years by Asaoka and others. The method uses Fourier transform techniques to solve isotropic one-dimensional transport problems in homogeneous media. Here the method is extended to linearly anisotropic transport in one-dimensional homogeneous media. Series expansions were also obtained, using Hembd techniques, for the new anisotropic matrix elements in cylindrical geometry. Carlvik's spatial spherical-harmonics method was generalized to solve the same problem. By applying a relation between the isotropic and anisotropic one-dimensional kernels, it was demonstrated that the anisotropic matrix elements can be calculated as a linear combination of a few isotropic matrix elements. In practice, this means that the anisotropic problem of order N can be solved with N+2 isotropic matrix elements for plane and spherical geometries, and with N+1 for cylindrical geometry. A method of solving linearly anisotropic one-dimensional transport problems in homogeneous media was then defined by applying the observations of Mika and Stankiewicz: isotropic matrix elements are computed by Hembd series and anisotropic matrix elements are then calculated from recursive relations. The method has been applied to albedo and critical problems in cylindrical geometry. Finally, a number of results were computed with 12-digit accuracy for use as benchmarks [fr]

  8. Approximate kernel competitive learning.

    Science.gov (United States)

    Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang

    2015-03-01

    Kernel competitive learning (KCL) has been successfully used to achieve robust clustering. However, KCL is not scalable for large-scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be computed and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large-scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL) method, which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation works for kernel competitive learning, and furthermore we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) method based on a set-based kernel competitive learning strategy, which overcomes the obstacle to using parallel programming in kernel competitive learning and significantly accelerates approximate kernel competitive learning for large-scale clustering. Empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. The proposed methods also achieve more effective clustering performance, in terms of clustering precision, than related approximate clustering approaches.
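The paper's AKCL construction is specific to competitive learning, but the general idea it relies on, approximating a kernel method in a subspace spanned by sampled points so the full Gram matrix never has to be stored, can be sketched with a standard Nyström low-rank approximation. This is a generic illustration under that assumption, not the authors' exact scheme; the RBF kernel, sigma, and landmark count are illustrative choices:

```python
import numpy as np

def rbf(A, B, sigma=1.0):
    """Gaussian (RBF) kernel block between rows of A and rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma**2))

def nystrom(X, m, sigma=1.0, seed=0):
    """Nystrom approximation K ~= C @ pinv(W) @ C.T from m sampled landmarks.

    Only the n x m block C and the m x m block W are ever formed, which is
    what makes kernel methods scalable when n is large and m << n.
    """
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X), size=m, replace=False)
    C = rbf(X, X[idx], sigma)        # n x m cross-kernel block
    W = rbf(X[idx], X[idx], sigma)   # m x m landmark kernel block
    return C @ np.linalg.pinv(W) @ C.T

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))
K_full = rbf(X, X)
K_apx = nystrom(X, m=50)  # with m = n the approximation is exact
```

With m much smaller than n the approximation is no longer exact, but downstream learning (here, the competitive-learning updates) can operate on the factors C and W directly at far lower cost.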

  9. Spin-isotropic continuum of spin excitations in antiferromagnetically ordered Fe1.07Te

    Science.gov (United States)

    Song, Yu; Lu, Xingye; Regnault, L.-P.; Su, Yixi; Lai, Hsin-Hua; Hu, Wen-Jun; Si, Qimiao; Dai, Pengcheng

    2018-02-01

    Unconventional superconductivity typically emerges in the presence of quasidegenerate ground states, and the associated intense fluctuations are likely responsible for generating the superconducting state. Here we use polarized neutron scattering to study the spin-space anisotropy of spin excitations in Fe1.07Te exhibiting bicollinear antiferromagnetic (AF) order, the parent compound of the FeTe1-xSex superconductors. We confirm that the low-energy spin excitations are transverse spin waves, consistent with a local-moment origin of the bicollinear AF order. While the ordered moments lie in the ab plane in Fe1.07Te, it takes less energy for them to fluctuate out of plane, similar to BaFe2As2 and NaFeAs. At energies E ≳ 20 meV, we find magnetic scattering to be dominated by an isotropic continuum that persists up to at least 50 meV. Although the isotropic spin excitations cannot be ascribed to spin waves from a long-range-ordered local-moment antiferromagnet, the continuum can result from the bicollinear magnetic order ground state of Fe1.07Te being quasidegenerate with plaquette magnetic order.

  10. Covariant meson-baryon scattering with chiral and large Nc constraints

    International Nuclear Information System (INIS)

    Lutz, M.F.M.; Kolomeitsev, E.E.

    2001-05-01

    We give a review of recent progress on the application of the relativistic chiral SU(3) Lagrangian to meson-baryon scattering. It is shown that a combined chiral and 1/N_c expansion of the Bethe-Salpeter interaction kernel leads to a good description of the kaon-nucleon, antikaon-nucleon and pion-nucleon scattering data, typically up to laboratory momenta of p_lab ≅ 500 MeV. We solve the covariant coupled-channel Bethe-Salpeter equation with the interaction kernel truncated to chiral order Q^3, where we include only those terms which are leading in the large-N_c limit of QCD. (orig.)

  11. Classification With Truncated Distance Kernel.

    Science.gov (United States)

    Huang, Xiaolin; Suykens, Johan A K; Wang, Shuning; Hornegger, Joachim; Maier, Andreas

    2018-05-01

    This brief proposes a truncated distance (TL1) kernel, which results in a classifier that is nonlinear in the global region but linear in each subregion. With this kernel, the subregion structure can be trained using all the training data, and local linear classifiers can be established simultaneously. The TL1 kernel adapts well to nonlinearity and is suitable for problems which require different nonlinearities in different areas. Though the TL1 kernel is not positive semidefinite, some classical kernel learning methods are still applicable, which means that the TL1 kernel can be used directly in standard toolboxes by replacing the kernel evaluation. In numerical experiments, the TL1 kernel with a pregiven parameter achieves similar or better performance than the radial basis function kernel with its parameter tuned by cross validation, implying that the TL1 kernel is a promising nonlinear kernel for classification tasks.
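As a concrete sketch: the truncated distance kernel has the simple closed form k(x, y) = max(ρ − ‖x − y‖₁, 0) (the parameter name ρ and the demo data below are illustrative). Because the kernel vanishes once the L1 distance exceeds ρ, each sample influences only a local region, which is the source of the locally linear behavior described above:

```python
import numpy as np

def tl1_kernel(X, Y, rho):
    """Truncated L1-distance kernel: k(x, y) = max(rho - ||x - y||_1, 0).

    Piecewise linear in its arguments, and exactly zero once the L1
    distance exceeds rho, so each point only interacts with nearby points.
    Note: not positive semidefinite in general.
    """
    d1 = np.abs(X[:, None, :] - Y[None, :, :]).sum(axis=-1)
    return np.maximum(rho - d1, 0.0)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 2))
rho = 0.7 * X.shape[1]  # a rule of thumb tying rho to the input dimension
K = tl1_kernel(X, X, rho)
```

Since the Gram matrix is indefinite, solvers that assume a Mercer kernel may need the precomputed-kernel interface of a toolbox rather than a PSD-only code path, which is what "replacing the kernel evaluation" amounts to in practice.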

  12. Early Detection of Aspergillus parasiticus Infection in Maize Kernels Using Near-Infrared Hyperspectral Imaging and Multivariate Data Analysis

    Directory of Open Access Journals (Sweden)

    Xin Zhao

    2017-01-01

    Full Text Available Fungal infection in maize kernels is a major concern worldwide due to toxic fungal metabolites such as mycotoxins, so it is necessary to develop appropriate techniques for early detection of fungal infection in maize kernels. Thirty-six sterilised maize kernels were inoculated each day with Aspergillus parasiticus from one to seven days, and seven groups (D1, D2, D3, D4, D5, D6, D7) were then defined based on the incubation time. Another 36 sterilised kernels without fungal inoculation were taken as control (DC). Hyperspectral images of all kernels were acquired within the spectral range of 921–2529 nm. Background, labels and bad pixels were removed using principal component analysis (PCA) and masking. Separability computation for discrimination of fungal contamination levels indicated that a model based on data from the germ region of individual kernels performed more effectively than one based on the whole kernels. Moreover, samples with a two-day interval were separable. Thus, four groups, DC, D1–2 (consisting of D1 and D2), D3–4 (D3 and D4), and D5–7 (D5, D6, and D7), were defined for subsequent classification. Two separate sample sets were prepared to verify the influence of germ orientation on the classification model, that is, germ up versus a 1:1 mixture of germ up and down. Two smoothing preprocessing methods (Savitzky-Golay smoothing and moving average smoothing) and three scatter-correction methods (normalization, standard normal variate, and multiplicative scatter correction) were compared according to the performance of the classification model built by support vector machines (SVM). The best model for kernels with germ up showed promising results, with accuracies of 97.92% and 91.67% for the calibration and validation data sets, respectively, while accuracies of the best model for the mixed kernels were 95.83% and 84.38%. Moreover, five wavelengths (1145, 1408, 1935, 2103, and 2383 nm) were selected as the key wavelengths.

  13. Implementation of pencil kernel and depth penetration algorithms for treatment planning of proton beams

    International Nuclear Information System (INIS)

    Russell, K.R.; Saxner, M.; Ahnesjoe, A.; Montelius, A.; Grusell, E.; Dahlgren, C.V.

    2000-01-01

    The implementation of two algorithms for calculating dose distributions for radiation therapy treatment planning of intermediate energy proton beams is described. A pencil kernel algorithm and a depth penetration algorithm have been incorporated into a commercial three-dimensional treatment planning system (Helax-TMS, Helax AB, Sweden) to allow conformal planning techniques using irregularly shaped fields, proton range modulation, range modification and dose calculation for non-coplanar beams. The pencil kernel algorithm is developed from the Fermi-Eyges formalism and Moliere multiple-scattering theory with range straggling corrections applied. The depth penetration algorithm is based on the energy loss in the continuous slowing down approximation with simple correction factors applied to the beam penumbra region and has been implemented for fast, interactive treatment planning. Modelling of the effects of air gaps and range modifying device thickness and position are implicit to both algorithms. Measured and calculated dose values are compared for a therapeutic proton beam in both homogeneous and heterogeneous phantoms of varying complexity. Both algorithms model the beam penumbra as a function of depth in a homogeneous phantom with acceptable accuracy. Results show that the pencil kernel algorithm is required for modelling the dose perturbation effects from scattering in heterogeneous media. (author)

  14. Exact Heat Kernel on a Hypersphere and Its Applications in Kernel SVM

    Directory of Open Access Journals (Sweden)

    Chenchao Zhao

    2018-01-01

    Full Text Available Many contemporary statistical learning methods assume a Euclidean feature space. This paper presents a method for defining similarity based on hyperspherical geometry and shows that it often improves the performance of support vector machine compared to other competing similarity measures. Specifically, the idea of using heat diffusion on a hypersphere to measure similarity has been previously proposed and tested by Lafferty and Lebanon [1], demonstrating promising results based on a heuristic heat kernel obtained from the zeroth order parametrix expansion; however, how well this heuristic kernel agrees with the exact hyperspherical heat kernel remains unknown. This paper presents a higher order parametrix expansion of the heat kernel on a unit hypersphere and discusses several problems associated with this expansion method. We then compare the heuristic kernel with an exact form of the heat kernel expressed in terms of a uniformly and absolutely convergent series in high-dimensional angular momentum eigenmodes. Being a natural measure of similarity between sample points dwelling on a hypersphere, the exact kernel often shows superior performance in kernel SVM classifications applied to text mining, tumor somatic mutation imputation, and stock market analysis.
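For the special case of the ordinary 2-sphere, the exact heat kernel is an absolutely convergent Legendre (spherical-harmonic) series, which makes comparisons against truncated parametrix expansions easy to reproduce. A minimal numpy sketch (the truncation order lmax and diffusion time t below are illustrative):

```python
import numpy as np
from numpy.polynomial.legendre import legval

def heat_kernel_s2(cos_theta, t, lmax=100):
    """Exact heat kernel on the unit 2-sphere as a Legendre series:

        K(theta, t) = sum_{l>=0} (2l + 1)/(4 pi) exp(-l(l+1) t) P_l(cos theta),

    truncated at l = lmax; the exp(-l(l+1)t) factor makes the tail
    negligible for moderate diffusion times t.
    """
    l = np.arange(lmax + 1, dtype=float)
    coeffs = (2.0 * l + 1.0) / (4.0 * np.pi) * np.exp(-l * (l + 1.0) * t)
    return legval(cos_theta, coeffs)

# Sanity check: the kernel integrates to 1 over the sphere,
# i.e. 2 pi * int_{-1}^{1} K(mu, t) dmu = 1 (only the l = 0 term survives).
mu = np.linspace(-1.0, 1.0, 20001)
K_mu = heat_kernel_s2(mu, t=0.3)
total = 2.0 * np.pi * float(((K_mu[:-1] + K_mu[1:]) / 2.0 * np.diff(mu)).sum())
```

Used as a similarity measure in an SVM, K(theta, t) is evaluated at cos(theta) equal to the dot product of the two unit feature vectors, with t acting as the bandwidth parameter.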

  15. X-ray coherent scattering tomography of textured material (Conference Presentation)

    Science.gov (United States)

    Zhu, Zheyuan; Pang, Shuo

    2017-05-01

    Small-angle X-ray scattering (SAXS) measures the signature of angular-dependent coherently scattered X-rays, which contains richer information on material composition and structure than conventional absorption-based computed tomography. SAXS image reconstruction of a 2- or 3-dimensional object based on computed tomography, termed coherent scattering computed tomography (CSCT), enables the detection of spatially resolved, material-specific isotropic scattering signatures inside an extended object, and provides improved contrast for medical diagnosis, security screening, and material characterization applications. However, traditional CSCT methods assume that materials are fine powders or amorphous and possess isotropic scattering profiles, which is not generally true. Anisotropic scatter cannot be captured using the conventional CSCT method and results in reconstruction errors. To obtain correct information from the sample, we designed a new imaging strategy which incorporates an extra degree of detector motion into X-ray scattering tomography for the detection of anisotropically scattered photons from a series of two-dimensional intensity measurements. Using a table-top, narrow-band X-ray source and a panel detector, we demonstrate the anisotropic scattering profile captured from an extended object and the reconstruction of a three-dimensional object. For materials possessing a well-organized crystalline structure with certain symmetry, the scatter texture is more predictable. We will also discuss compressive schemes and data-acquisition implementations to improve collection efficiency and accelerate the imaging process.

  16. THERMAL: A routine designed to calculate neutron thermal scattering

    International Nuclear Information System (INIS)

    Cullen, D.E.

    1995-01-01

    THERMAL is designed to calculate neutron thermal scattering that is isotropic in the center-of-mass system. At low energies thermal motion is included; at high energies the target nuclei are assumed to be stationary. The point of transition between low and high energies has been defined to ensure a smooth transition. It is assumed that at low energy the elastic cross section is constant in the center-of-mass system; at high energy the cross section can be of any form. This routine can be used at all energies where the elastic scattering is isotropic in the center-of-mass system, which in most materials extends to a fairly high energy.
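The low-energy effect this routine models can be illustrated with the standard free-gas result: for a constant free-atom elastic cross section, thermal motion of the targets enhances the effective cross section seen by a slow neutron. This is the textbook closed form for a Maxwellian target distribution, not code taken from THERMAL itself:

```python
import math

def free_gas_elastic_ratio(E_eV, kT_eV, A):
    """Effective-to-free-atom elastic cross-section ratio for targets in
    thermal motion (Maxwellian at temperature kT), assuming a constant
    free-atom cross section:

        sigma_eff / sigma_fa = (1 + 1/(2 x^2)) erf(x) + exp(-x^2)/(sqrt(pi) x),

    with x^2 = A * E / kT, where A is the target-to-neutron mass ratio.
    The ratio tends to 1 at high energy (stationary-target limit) and
    grows like 1/v at low energy, which is the regime where THERMAL
    switches on thermal motion.
    """
    x = math.sqrt(A * E_eV / kT_eV)
    return ((1.0 + 1.0 / (2.0 * x * x)) * math.erf(x)
            + math.exp(-x * x) / (math.sqrt(math.pi) * x))

kT = 0.0253  # room temperature, in eV
high = free_gas_elastic_ratio(10.0, kT, A=1.0)  # near 1: stationary targets
low = free_gas_elastic_ratio(1e-4, kT, A=1.0)   # strongly enhanced, ~1/v
```

The smooth hand-off between these two limits is exactly the low/high energy transition point the abstract describes.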

  17. Concentric layered Hermite scatterers

    Science.gov (United States)

    Astheimer, Jeffrey P.; Parker, Kevin J.

    2018-05-01

    The long-wavelength limit of scattering from spheres has a rich history in optics, electromagnetics, and acoustics. Recently it was shown that a common integral kernel pertains to formulations of weak spherical scatterers in both the acoustic and electromagnetic regimes. Furthermore, the relationship between backscattered amplitude and wavenumber k was shown to follow power laws higher than the Rayleigh scattering k^2 power law when the inhomogeneity had a material composition that conformed to a Gaussian-weighted Hermite polynomial. Although this class of scatterers, called Hermite scatterers, is plausible, it may be simpler to manufacture scatterers with a core surrounded by one or more layers. In this case the inhomogeneous material property conforms to a piecewise-constant function. We demonstrate that the necessary and sufficient conditions for supra-Rayleigh scattering power laws in this case can be stated simply by considering moments of the inhomogeneity function and its spatial transform. This development opens an additional path for the construction and use of scatterers with unique power-law behavior.

  18. Neutron cross sections of cryogenic materials: a synthetic kernel for molecular solids

    International Nuclear Information System (INIS)

    Granada, J.R.; Gillette, V.H.; Petriw, S.; Cantargi, F.; Pepe, M.E.; Sbaffoni, M.M.

    2004-01-01

    A new synthetic scattering function aimed at describing the interaction of thermal neutrons with molecular solids has been developed. At low incident neutron energies, both lattice modes and molecular rotations are specifically accounted for, through an expansion of the scattering law in a few phonon terms. Simple representations of the molecular dynamical modes are used, in order to produce a fairly accurate description of neutron scattering kernels and cross sections with a minimum set of input data. As the neutron energy becomes much larger than the energies corresponding to the characteristic Debye temperature and to the rotations of the molecular solid, the 'phonon formulation' transforms into the traditional description for molecular gases. (orig.)

  19. Steady- and transient-state analysis of fully ceramic microencapsulated fuel with randomly dispersed tristructural isotropic particles via two-temperature homogenized model-I: Theory and method

    International Nuclear Information System (INIS)

    Lee, Yoon Hee; Cho, Bum Hee; Cho, Nam Zin

    2016-01-01

    As a type of accident-tolerant fuel, fully ceramic microencapsulated (FCM) fuel was proposed after the Fukushima accident in Japan. The FCM fuel consists of tristructural isotropic particles randomly dispersed in a silicon carbide (SiC) matrix. For a fuel element with such high heterogeneity, we have proposed a two-temperature homogenized model using the particle transport Monte Carlo method for the heat conduction problem. This model distinguishes between fuel-kernel and SiC matrix temperatures. Moreover, the obtained temperature profiles are more realistic than those of other models. In Part I of the paper, homogenized parameters for the FCM fuel in which tristructural isotropic particles are randomly dispersed in the fine lattice stochastic structure are obtained by (1) matching steady-state analytic solutions of the model with the results of particle transport Monte Carlo method for heat conduction problems, and (2) preserving total enthalpies in fuel kernels and SiC matrix. The homogenized parameters have two desirable properties: (1) they are insensitive to boundary conditions such as coolant bulk temperatures and thickness of cladding, and (2) they are independent of operating power density. By performing the Monte Carlo calculations with the temperature-dependent thermal properties of the constituent materials of the FCM fuel, temperature-dependent homogenized parameters are obtained

  20. Evaluation of sintering effects on SiC-incorporated UO₂ kernels under Ar and Ar–4%H₂ environments

    Energy Technology Data Exchange (ETDEWEB)

    Silva, Chinthaka M., E-mail: silvagw@ornl.gov [Oak Ridge National Laboratory, P.O. Box 2008, Oak Ridge, Tennessee TN 37831-6223 (United States); Materials Science and Engineering, The University of Tennessee Knoxville, TN 37996-2100, United States. (United States); Lindemer, Terrence B.; Hunt, Rodney D.; Collins, Jack L.; Terrani, Kurt A.; Snead, Lance L. [Oak Ridge National Laboratory, P.O. Box 2008, Oak Ridge, Tennessee TN 37831-6223 (United States)

    2013-11-15

    Silicon carbide (SiC) is suggested as an oxygen getter in UO₂ kernels used for tristructural isotropic (TRISO) particle fuels and to prevent kernel migration during irradiation. Scanning electron microscopy and X-ray diffractometry analyses performed on sintered kernels verified that an internal gelation process can be used to incorporate SiC in UO₂ fuel kernels. Even though the presence of UC in samples sintered in either argon (Ar) or Ar–4%H₂ suggested a loss of SiC of up to 3.5 and 1.4 mol%, respectively, the presence of other silicon-related chemical phases indicates the preservation of silicon in the kernels during the sintering process. UC formation was presumed to occur by two reactions. The first was the reaction of SiC with its protective SiO₂ oxide layer on SiC grains to produce volatile SiO and free carbon that subsequently reacted with UO₂ to form UC. The second was direct UO₂ reaction with SiC grains to form SiO, CO, and UC. A slightly higher density and UC content were observed in the sample sintered in Ar–4%H₂, but both atmospheres produced kernels with ∼95% of theoretical density. It is suggested that incorporating CO in the sintering gas could prevent UC formation and preserve the initial SiC content.

  1. Integral equations with difference kernels on finite intervals

    CERN Document Server

    Sakhnovich, Lev A

    2015-01-01

    This book focuses on solving integral equations with difference kernels on finite intervals. The corresponding problem on the semiaxis was previously solved by N. Wiener–E. Hopf and by M.G. Krein. The problem on finite intervals, though significantly more difficult, may be solved using our method of operator identities. This method is also actively employed in inverse spectral problems, operator factorization and nonlinear integral equations. Applications of the obtained results to optimal synthesis, light scattering, diffraction, and hydrodynamics problems are discussed in this book, which also describes how the theory of operators with difference kernels is applied to stable processes and used to solve the famous M. Kac problems on stable processes. In this second edition these results are extensively generalized and include the case of all Levy processes. We present the convolution expression for the well-known Ito formula of the generator operator, a convolution expression that has proven to be fruitful...

  2. Subsampling Realised Kernels

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Hansen, Peter Reinhard; Lunde, Asger

    2011-01-01

    In a recent paper we have introduced the class of realised kernel estimators of the increments of quadratic variation in the presence of noise. We showed that this estimator is consistent and derived its limit distribution under various assumptions on the kernel weights. In this paper we extend our...... that subsampling is impotent, in the sense that subsampling has no effect on the asymptotic distribution. Perhaps surprisingly, for the efficient smooth kernels, such as the Parzen kernel, we show that subsampling is harmful as it increases the asymptotic variance. We also study the performance of subsampled...

  3. Rayleigh scattering under light-atom coherent interaction

    OpenAIRE

    Takamizawa, Akifumi; Shimoda, Koichi

    2012-01-01

    Semi-classical calculation of an oscillating dipole induced in a two-level atom indicates that spherical radiation from the dipole under coherent interaction, i.e., Rayleigh scattering, has a power level comparable to that of spontaneous emission resulting from an incoherent process. Whereas spontaneous emission is nearly isotropic and has random polarization generally, Rayleigh scattering is strongly anisotropic and polarized in association with incident light. In the case where Rabi frequen...

  4. Fermionic NNLO contributions to Bhabha scattering

    International Nuclear Information System (INIS)

    Actis, S.; Riemann, T.; Czakon, M.; Uniwersytet Slaski, Katowice; Gluza, J.

    2007-10-01

    We derive the two-loop corrections to Bhabha scattering from heavy fermions using dispersion relations. The double-box contributions are expressed by three kernel functions. Convoluting the perturbative kernels with fermionic threshold functions or with hadronic data allows one to determine numerical results for small electron mass m_e, combined with arbitrary values of the fermion mass m_f in the loop, m_e^2 ≪ m_f^2, or with hadronic insertions. We present numerical results for m_f = m_μ, m_τ, m_top at typical small- and large-angle kinematics ranging from 1 GeV to 500 GeV. (orig.)

  5. Steady- and Transient-State Analyses of Fully Ceramic Microencapsulated Fuel with Randomly Dispersed Tristructural Isotropic Particles via Two-Temperature Homogenized Model—I: Theory and Method

    Directory of Open Access Journals (Sweden)

    Yoonhee Lee

    2016-06-01

    Full Text Available As a type of accident-tolerant fuel, fully ceramic microencapsulated (FCM) fuel was proposed after the Fukushima accident in Japan. The FCM fuel consists of tristructural isotropic particles randomly dispersed in a silicon carbide (SiC) matrix. For a fuel element with such high heterogeneity, we have proposed a two-temperature homogenized model using the particle transport Monte Carlo method for the heat conduction problem. This model distinguishes between fuel-kernel and SiC matrix temperatures. Moreover, the obtained temperature profiles are more realistic than those of other models. In Part I of the paper, homogenized parameters for the FCM fuel in which tristructural isotropic particles are randomly dispersed in the fine lattice stochastic structure are obtained by (1) matching steady-state analytic solutions of the model with the results of the particle transport Monte Carlo method for heat conduction problems, and (2) preserving total enthalpies in fuel kernels and SiC matrix. The homogenized parameters have two desirable properties: (1) they are insensitive to boundary conditions such as coolant bulk temperatures and thickness of cladding, and (2) they are independent of operating power density. By performing the Monte Carlo calculations with the temperature-dependent thermal properties of the constituent materials of the FCM fuel, temperature-dependent homogenized parameters are obtained.

  6. Kernel abortion in maize. II. Distribution of ¹⁴C among kernel carbohydrates

    International Nuclear Information System (INIS)

    Hanft, J.M.; Jones, R.J.

    1986-01-01

    This study was designed to compare the uptake and distribution of ¹⁴C among fructose, glucose, sucrose, and starch in the cob, pedicel, and endosperm tissues of maize (Zea mays L.) kernels induced to abort by high temperature with those that develop normally. Kernels cultured in vitro at 30°C and 35°C were transferred to [¹⁴C]sucrose media 10 days after pollination. Kernels cultured at 35°C aborted prior to the onset of linear dry matter accumulation. Significant uptake into the cob, pedicel, and endosperm of radioactivity associated with the soluble and starch fractions of the tissues was detected after 24 hours in culture on labeled media. After 8 days in culture on [¹⁴C]sucrose media, 48 and 40% of the radioactivity associated with the cob carbohydrates was found in the reducing sugars at 30°C and 35°C, respectively. Of the total carbohydrates, a higher percentage of label was associated with sucrose and a lower percentage with fructose and glucose in pedicel tissue of kernels cultured at 35°C compared to kernels cultured at 30°C. These results indicate that sucrose was not cleaved to fructose and glucose as rapidly during the unloading process in the pedicel of kernels induced to abort by high temperature. Kernels cultured at 35°C had a much lower proportion of label associated with endosperm starch (29%) than did kernels cultured at 30°C (89%). Kernels cultured at 35°C had a correspondingly higher proportion of ¹⁴C in endosperm fructose, glucose, and sucrose.

  7. UN Method For The Critical Slab Problem In One-Speed Neutron Transport Theory

    International Nuclear Information System (INIS)

    Oeztuerk, Hakan; Guengoer, Sueleyman

    2008-01-01

    The Chebyshev polynomial approximation (UN method) is used to solve the critical slab problem in one-speed neutron transport theory using the Marshak boundary condition. An isotropic scattering kernel with a combination of forward and backward scattering is chosen for the neutrons in a uniform finite slab. Numerical results obtained by the UN method are presented in tables together with results obtained by the well-known PN method for comparison. It is shown that the method converges rapidly with its easily executable equations.

  8. Optimized Kernel Entropy Components.

    Science.gov (United States)

    Izquierdo-Verdiguier, Emma; Laparra, Valero; Jenssen, Robert; Gomez-Chova, Luis; Camps-Valls, Gustau

    2017-06-01

    This brief addresses two main issues of the standard kernel entropy component analysis (KECA) algorithm: the optimization of the kernel decomposition and the optimization of the Gaussian kernel parameter. KECA roughly reduces to a sorting of kernel eigenvectors by importance measured by entropy instead of variance, as in kernel principal component analysis. In this brief, we propose an extension of the KECA method, named optimized KECA (OKECA), that directly extracts the optimal features retaining most of the data entropy by compacting the information into very few features (often just one or two). The proposed method produces features with higher expressive power. In particular, it is based on the independent component analysis framework and introduces an extra rotation to the eigendecomposition, which is optimized via gradient-ascent search. This maximum entropy preservation suggests that OKECA features are more efficient than KECA features for density estimation. In addition, a critical issue in both methods is the selection of the kernel parameter, since it critically affects the resulting performance. Here, we analyze the most common kernel length-scale selection criteria. The results of both methods are illustrated on different synthetic and real problems. Results show that OKECA returns projections with more expressive power than KECA, that the most successful rule for estimating the kernel parameter is based on maximum likelihood, and that OKECA is more robust to the selection of the length-scale parameter in kernel density estimation.
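The entropy-based ranking at the heart of KECA is compact to state: eigenpairs (λᵢ, eᵢ) of the Gram matrix are sorted by their contribution λᵢ(1ᵀeᵢ)² to the Rényi entropy estimate rather than by λᵢ alone. A minimal numpy sketch of that selection step (the gradient-ascent rotation that OKECA adds on top is omitted; sigma and the data sizes are illustrative):

```python
import numpy as np

def keca_features(X, sigma, n_components):
    """Kernel entropy component analysis: keep the eigen-directions of the
    RBF Gram matrix that contribute most to the Renyi entropy estimate,
    ranked by lam_i * (1^T e_i)^2 instead of by eigenvalue alone."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2.0 * sigma**2))            # Gaussian Gram matrix
    lam, E = np.linalg.eigh(K)                    # ascending eigenvalues
    contrib = lam * (E.sum(axis=0) ** 2)          # entropy contribution per pair
    idx = np.argsort(contrib)[::-1][:n_components]
    # Projected features: e_i scaled by sqrt(lam_i), as in kernel PCA
    return E[:, idx] * np.sqrt(np.clip(lam[idx], 0.0, None))

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
Z = keca_features(X, sigma=1.0, n_components=2)
```

The difference from kernel PCA is only the sorting criterion: a direction with a large eigenvalue but near-zero mean projection (1ᵀeᵢ ≈ 0) carries little of the entropy estimate and is skipped.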

  9. Steady- and transient-state analysis of fully ceramic microencapsulated fuel with randomly dispersed tristructural isotropic particles via two-temperature homogenized model-II: Applications by coupling with COREDAX

    International Nuclear Information System (INIS)

    Lee, Yoon Hee; Cho, Bum Hee; Cho, Nam Zin

    2016-01-01

    In Part I of this paper, the two-temperature homogenized model for the fully ceramic microencapsulated fuel, in which tristructural isotropic particles are randomly dispersed in a fine lattice stochastic structure, was discussed. In this model, the fuel-kernel and silicon carbide matrix temperatures are distinguished. Moreover, the obtained temperature profiles are more realistic than those obtained using other models. Using the temperature-dependent thermal conductivities of uranium nitride and the silicon carbide matrix, temperature-dependent homogenized parameters were obtained. In Part II of the paper, coupled with the COREDAX code, a reactor core loaded with fully ceramic microencapsulated fuel in which tristructural isotropic particles are randomly dispersed in the fine lattice stochastic structure is analyzed via the two-temperature homogenized model at steady and transient states. The results are compared with those from harmonic- and volumetric-average thermal conductivity models; i.e., we compare k_eff eigenvalues, power distributions, and temperature profiles in the hottest single channel at steady state. At transient states, we compare total power, average energy deposition, and maximum temperatures in the hottest single channel obtained by the different thermal analysis models. The different thermal analysis models and the availability of fuel-kernel temperatures in the two-temperature homogenized model for Doppler temperature feedback lead to significant differences.

  10. Isotropic oscillator: spheroidal wave functions

    International Nuclear Information System (INIS)

    Mardoyan, L.G.; Pogosyan, G.S.; Ter-Antonyan, V.M.; Sisakyan, A.N.

    1985-01-01

    Solutions of the Schroedinger equation are found for an isotropic oscillator (IO) in prolate and oblate spheroidal coordinates. It is shown that the obtained solutions turn into the spherical and cylindrical bases of the isotropic oscillator at R→0 and R→∞ (R is the dimensional parameter entering into the definition of prolate and oblate spheroidal coordinates). The explicit form is given for both the prolate and oblate bases of the isotropic oscillator for the lowest quantum states

  11. Electron Cooling and Isotropization during Magnetotail Current Sheet Thinning: Implications for Parallel Electric Fields

    Science.gov (United States)

    Lu, San; Artemyev, A. V.; Angelopoulos, V.

    2017-11-01

    Magnetotail current sheet thinning is a distinctive feature of substorm growth phase, during which magnetic energy is stored in the magnetospheric lobes. Investigation of charged particle dynamics in such thinning current sheets is believed to be important for understanding the substorm energy storage and the current sheet destabilization responsible for substorm expansion phase onset. We use Time History of Events and Macroscale Interactions during Substorms (THEMIS) B and C observations in 2008 and 2009 at 18 - 25 RE to show that during magnetotail current sheet thinning, the electron temperature decreases (cooling), and the parallel temperature decreases faster than the perpendicular temperature, leading to a decrease of the initially strong electron temperature anisotropy (isotropization). This isotropization cannot be explained by pure adiabatic cooling or by pitch angle scattering. We use test particle simulations to explore the mechanism responsible for the cooling and isotropization. We find that during the thinning, a fast decrease of a parallel electric field (directed toward the Earth) can speed up the electron parallel cooling, causing it to exceed the rate of perpendicular cooling, and thus lead to isotropization, consistent with observation. If the parallel electric field is too small or does not change fast enough, the electron parallel cooling is slower than the perpendicular cooling, so the parallel electron anisotropy grows, contrary to observation. The same isotropization can also be accomplished by an increasing parallel electric field directed toward the equatorial plane. Our study reveals the existence of a large-scale parallel electric field, which plays an important role in magnetotail particle dynamics during the current sheet thinning process.

  12. Formal solutions of inverse scattering problems. III

    International Nuclear Information System (INIS)

    Prosser, R.T.

    1980-01-01

    The formal solutions of certain three-dimensional inverse scattering problems presented in papers I and II of this series [J. Math. Phys. 10, 1819 (1969); 17 1175 (1976)] are obtained here as fixed points of a certain nonlinear mapping acting on a suitable Banach space of integral kernels. When the scattering data are sufficiently restricted, this mapping is shown to be a contraction, thereby establishing the existence, uniqueness, and continuous dependence on the data of these formal solutions

  13. A novel adaptive kernel method with kernel centers determined by a support vector regression approach

    NARCIS (Netherlands)

    Sun, L.G.; De Visser, C.C.; Chu, Q.P.; Mulder, J.A.

    2012-01-01

    The optimality of the kernel number and kernel centers plays a significant role in determining the approximation power of nearly all kernel methods. However, the process of choosing optimal kernels is always formulated as a global optimization task, which is hard to accomplish. Recently, an

  14. Isotropic and anisotropic surface wave cloaking techniques

    International Nuclear Information System (INIS)

    McManus, T M; Spada, L La; Hao, Y

    2016-01-01

    In this paper we compare two different approaches for surface waves cloaking. The first technique is a unique application of Fermat’s principle and requires isotropic material properties, but owing to its derivation is limited in its applicability. The second technique utilises a geometrical optics approximation for dealing with rays bound to a two dimensional surface and requires anisotropic material properties, though it can be used to cloak any smooth surface. We analytically derive the surface wave scattering behaviour for both cloak techniques when applied to a rotationally symmetric surface deformation. Furthermore, we simulate both using a commercially available full-wave electromagnetic solver and demonstrate a good level of agreement with their analytically derived solutions. Our analytical solutions and simulations provide a complete and concise overview of two different surface wave cloaking techniques. (paper)

  15. Isotropic and anisotropic surface wave cloaking techniques

    Science.gov (United States)

    McManus, T. M.; La Spada, L.; Hao, Y.

    2016-04-01

    In this paper we compare two different approaches for surface waves cloaking. The first technique is a unique application of Fermat’s principle and requires isotropic material properties, but owing to its derivation is limited in its applicability. The second technique utilises a geometrical optics approximation for dealing with rays bound to a two dimensional surface and requires anisotropic material properties, though it can be used to cloak any smooth surface. We analytically derive the surface wave scattering behaviour for both cloak techniques when applied to a rotationally symmetric surface deformation. Furthermore, we simulate both using a commercially available full-wave electromagnetic solver and demonstrate a good level of agreement with their analytically derived solutions. Our analytical solutions and simulations provide a complete and concise overview of two different surface wave cloaking techniques.

  16. Protein Subcellular Localization with Gaussian Kernel Discriminant Analysis and Its Kernel Parameter Selection.

    Science.gov (United States)

    Wang, Shunfang; Nie, Bing; Yue, Kun; Fei, Yu; Li, Wenjia; Xu, Dongshu

    2017-12-15

    Kernel discriminant analysis (KDA) is a dimension reduction and classification algorithm based on the nonlinear kernel trick, which can be used to treat high-dimensional and complex biological data before undergoing classification processes such as protein subcellular localization. Kernel parameters make a great impact on the performance of the KDA model. Specifically, for KDA with the popular Gaussian kernel, selecting the scale parameter is still a challenging problem. Thus, this paper introduces the KDA method and proposes a new method for Gaussian kernel parameter selection depending on the fact that the differences between reconstruction errors of edge normal samples and those of interior normal samples should be maximized for certain suitable kernel parameters. Experiments with various standard data sets of protein subcellular localization show that the overall accuracy of protein classification prediction with KDA is much higher than that without KDA. Meanwhile, the kernel parameter of KDA has a great impact on the efficiency, and the proposed method can produce an optimum parameter, which makes the new algorithm not only perform as effectively as the traditional ones, but also reduce the computational time and thus improve efficiency.
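As a point of comparison for the reconstruction-error criterion proposed here, a widely used baseline for choosing the Gaussian scale parameter is the median heuristic. A minimal sketch (NumPy assumed; this is a generic baseline, not the paper's method):

```python
import numpy as np

def gaussian_gram(X, sigma):
    # Gaussian (RBF) Gram matrix with scale parameter sigma
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def median_heuristic(X):
    # sigma = median pairwise distance; a common baseline choice,
    # not the edge/interior reconstruction-error criterion of the paper
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    d = np.sqrt(d2[np.triu_indices_from(d2, k=1)])
    return float(np.median(d))

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 5))
sigma = median_heuristic(X)
K = gaussian_gram(X, sigma)
print(round(sigma, 3), K.shape)
```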

  17. Neutron Scattering in Hydrogenous Moderators, Studied by Time Dependent Reaction Rate Method

    Energy Technology Data Exchange (ETDEWEB)

    Larsson, L G; Moeller, E; Purohit, S N

    1966-03-15

    The moderation and absorption of a neutron burst in water, poisoned with the non-1/v absorbers cadmium and gadolinium, has been followed on the time scale by multigroup calculations, using scattering kernels for the proton gas and the Nelkin model. The time dependent reaction rate curves for each absorber display clear differences for the two models, and the separation between the curves does not depend much on the absorber concentration. An experimental method for the measurement of infinite medium reaction rate curves in a limited geometry has been investigated. This method makes the measurement of the time dependent reaction rate generally useful for thermalization studies in a small geometry of a liquid hydrogenous moderator, provided that the experiment is coupled to programs for the calculation of scattering kernels and time dependent neutron spectra. Good agreement has been found between the reaction rate curve, measured with cadmium in water, and a calculated curve, where the Haywood kernel has been used.

  18. Unified connected theory of few-body reaction mechanisms in N-body scattering theory

    Science.gov (United States)

    Polyzou, W. N.; Redish, E. F.

    1978-01-01

    A unified treatment of different reaction mechanisms in nonrelativistic N-body scattering is presented. The theory is based on connected kernel integral equations that are expected to become compact for reasonable constraints on the potentials. The operators T±^ab(A) are approximate transition operators that describe the scattering proceeding through an arbitrary reaction mechanism A. These operators are uniquely determined by a connected kernel equation and satisfy an optical theorem consistent with the choice of reaction mechanism. Connected kernel equations relating T±^ab(A) to the full T±^ab allow correction of the approximate solutions for any ignored process to any order. This theory gives a unified treatment of all few-body reaction mechanisms with the same dynamic simplicity of a model calculation, but can include complicated reaction mechanisms involving overlapping configurations where it is difficult to formulate models.

  19. 7 CFR 981.7 - Edible kernel.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Edible kernel. 981.7 Section 981.7 Agriculture... Regulating Handling Definitions § 981.7 Edible kernel. Edible kernel means a kernel, piece, or particle of almond kernel that is not inedible. [41 FR 26852, June 30, 1976] ...

  20. In-Situ Characterization of Isotropic and Transversely Isotropic Elastic Properties Using Ultrasonic Wave Velocities

    NARCIS (Netherlands)

    Pant, S; Laliberte, J; Martinez, M.J.; Rocha, B.

    2016-01-01

    In this paper, a one-sided, in situ method based on the time of flight measurement of ultrasonic waves was described. The primary application of this technique was to non-destructively measure the stiffness properties of isotropic and transversely isotropic materials. The method consists of

  1. Kernel versions of some orthogonal transformations

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    Kernel versions of orthogonal transformations such as principal components are based on a dual formulation also termed Q-mode analysis in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products of the original data are replaced...... by inner products between nonlinear mappings into higher dimensional feature space. Via kernel substitution also known as the kernel trick these inner products between the mappings are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of this kernel...... function. This means that we need not know the nonlinear mappings explicitly. Kernel principal component analysis (PCA) and kernel minimum noise fraction (MNF) analyses handle nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function...
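The dual (Q-mode) route described above, in which all quantities are expressed through inner products in the Gram matrix, can be sketched as a minimal kernel PCA (NumPy and an RBF kernel assumed; the MNF variant is not reproduced):

```python
import numpy as np

def kpca_scores(K, n_components=2):
    # Dual (Q-mode) kernel PCA: everything is computed from the
    # centered Gram matrix, so the nonlinear mapping into feature
    # space is never needed explicitly (the "kernel trick").
    n = K.shape[0]
    J = np.eye(n) - np.full((n, n), 1.0 / n)
    Kc = J @ K @ J                          # double-centering
    lam, E = np.linalg.eigh(Kc)
    idx = np.argsort(lam)[::-1][:n_components]
    return E[:, idx] * np.sqrt(np.maximum(lam[idx], 0.0))

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 4))
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-0.5 * d2)                       # RBF Gram matrix
scores = kpca_scores(K)
print(scores.shape)  # (30, 2)
```

Because the data enter only through K, any other kernel function can be substituted without changing the algorithm.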

  2. Model Selection in Kernel Ridge Regression

    DEFF Research Database (Denmark)

    Exterkate, Peter

    Kernel ridge regression is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts. This paper investigates the influence of the choice of kernel and the setting of tuning parameters on forecast accuracy. We review several popular kernels......, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. We interpret the latter two kernels in terms of their smoothing properties, and we relate the tuning parameters associated to all these kernels to smoothness measures of the prediction function and to the signal-to-noise ratio. Based...... on these interpretations, we provide guidelines for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study confirms the practical usefulness of these rules of thumb. Finally, the flexible and smooth functional forms provided by the Gaussian and Sinc kernels makes them widely...

  3. Macroscopic simulation of isotropic permanent magnets

    International Nuclear Information System (INIS)

    Bruckner, Florian; Abert, Claas; Vogler, Christoph; Heinrichs, Frank; Satz, Armin; Ausserlechner, Udo; Binder, Gernot; Koeck, Helmut; Suess, Dieter

    2016-01-01

    Accurate simulations of isotropic permanent magnets require to take the magnetization process into account and consider the anisotropic, nonlinear, and hysteretic material behaviour near the saturation configuration. An efficient method for the solution of the magnetostatic Maxwell equations including the description of isotropic permanent magnets is presented. The algorithm can easily be implemented on top of existing finite element methods and does not require a full characterization of the hysteresis of the magnetic material. Strayfield measurements of an isotropic permanent magnet and simulation results are in good agreement and highlight the importance of a proper description of the isotropic material. - Highlights: • Simulations of isotropic permanent magnets. • Accurate calculation of remanence magnetization and strayfield. • Comparison with strayfield measurements and anisotropic magnet simulations. • Efficient 3D FEM–BEM coupling for solution of Maxwell equations.

  4. Determination of the Iodine Value of Hydrogenated Palm Kernel Oil (HPKO) and Refined Bleached Deodorized Palm Kernel Oil (RBDPKO)

    OpenAIRE

    Sitompul, Monica Angelina

    2015-01-01

    The iodine values of several samples of Hydrogenated Palm Kernel Oil (HPKO) and Refined Bleached Deodorized Palm Kernel Oil (RBDPKO) were determined by titration. The analysis gave iodine values of Hydrogenated Palm Kernel Oil (A) = 0.16 g I2/100 g, Hydrogenated Palm Kernel Oil (B) = 0.20 g I2/100 g, Hydrogenated Palm Kernel Oil (C) = 0.24 g I2/100 g, and Refined Bleached Deodorized Palm Kernel Oil (A) = 17.51 g I2/100 g, Refined Bleached Deodorized Palm Kernel ...

  5. Calculation of point isotropic buildup factors of gamma rays for water and lead

    Directory of Open Access Journals (Sweden)

    A. S. H.

    2001-12-01

    Full Text Available Exposure buildup factors for water and lead have been calculated by the Monte Carlo method for an isotropic point source in an infinite homogeneous medium, using the latest cross sections available on the Internet. The types of interactions considered are the photoelectric effect, incoherent (bound-electron) Compton scattering, coherent (Rayleigh) scattering, and pair production. Fluorescence radiation has also been taken into account for lead. For each material, calculations were made at 10 gamma-ray energies in the 40 keV to 10 MeV range and up to penetration depths of 10 mean free paths at each energy point. The results presented in this paper can be considered as modified gamma-ray exposure buildup factors and be used in radiation shielding designs.
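The tallying idea behind such a calculation can be illustrated with a deliberately simplified toy: a one-speed Monte Carlo with unit mean free path, isotropic scattering, and a fixed per-collision survival probability, which estimates a number buildup factor as (all photons escaping radius r) / (uncollided escapes). This is only a sketch of the bookkeeping, not the energy-dependent exposure buildup calculation of the paper:

```python
import numpy as np

def buildup_factor(r, c=0.8, n=5000, seed=0):
    # Toy number buildup factor at radius r (in mean free paths);
    # c is the per-collision survival (scattering) probability.
    rng = np.random.default_rng(seed)
    hits = unc = 0
    for _ in range(n):
        pos = np.zeros(3)
        ncoll = 0
        while True:
            # isotropic direction (source emission or isotropic scatter)
            mu = 2.0 * rng.random() - 1.0
            phi = 2.0 * np.pi * rng.random()
            s = np.sqrt(1.0 - mu * mu)
            d = np.array([s * np.cos(phi), s * np.sin(phi), mu])
            step = rng.exponential(1.0)     # free path in mfp units
            # distance along d to the tally sphere |x| = r (pos is inside)
            b = pos @ d
            t = -b + np.sqrt(b * b - (pos @ pos - r * r))
            if t <= step:                   # photon crosses the sphere
                hits += 1
                unc += (ncoll == 0)
                break
            pos = pos + step * d            # collision inside the sphere
            ncoll += 1
            if rng.random() > c:            # absorbed
                break
    return hits / unc

B = buildup_factor(2.0)
print(B)  # > 1: scattered photons add to the uncollided flux
```

A real calculation replaces the fixed survival probability with energy-dependent photoelectric, Compton, Rayleigh, and pair-production cross sections and tallies exposure rather than photon number.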

  6. A new treatment of nonlocality in scattering process

    Science.gov (United States)

    Upadhyay, N. J.; Bhagwat, A.; Jain, B. K.

    2018-01-01

    Nonlocality in the scattering potential leads to an integro-differential equation. In this equation nonlocality enters through an integral over the nonlocal potential kernel. The resulting Schrödinger equation is usually handled by approximating the (r, r′)-dependence of the nonlocal kernel. The present work proposes a novel method to solve the integro-differential equation. The method, using the mean value theorem of integral calculus, converts the nonhomogeneous term into a homogeneous term. The effective local potential in this equation turns out to be energy independent, but depends on the relative angular momentum. This method is accurate and valid for any form of nonlocality. As illustrative examples, the total and differential cross sections for neutron scattering off 12C, 56Fe and 100Mo nuclei are calculated with this method in the low energy region (up to 10 MeV) and are found to be in reasonable accord with the experiments.

  7. 7 CFR 981.8 - Inedible kernel.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.8 Section 981.8 Agriculture... Regulating Handling Definitions § 981.8 Inedible kernel. Inedible kernel means a kernel, piece, or particle of almond kernel with any defect scored as serious damage, or damage due to mold, gum, shrivel, or...

  8. The construction of a two-dimensional reproducing kernel function and its application in a biomedical model.

    Science.gov (United States)

    Guo, Qi; Shen, Shu-Ting

    2016-04-29

    There are two major classes of cardiac tissue models: the ionic model and the FitzHugh-Nagumo model. During computer simulation, each model entails solving a system of complex ordinary differential equations and a partial differential equation with non-flux boundary conditions. The reproducing kernel method possesses significant applications in solving partial differential equations. The derivative of the reproducing kernel function is a wavelet function, which has local properties and sensitivities to singularity. Therefore, study on the application of reproducing kernels would be advantageous. The aim is to apply new mathematical theory to the numerical solution of the ventricular muscle model so as to improve its precision in comparison with other methods at present. A two-dimensional reproducing kernel function in space is constructed and applied in computing the solution of the two-dimensional cardiac tissue model by means of the difference method through time and the reproducing kernel method through space. Compared with other methods, this method holds several advantages such as high accuracy in computing solutions, insensitivity to different time steps and a slow propagation speed of error. It is suitable for disorderly scattered node systems without meshing, and can arbitrarily change the location and density of the solution on different time layers. The reproducing kernel method has higher solution accuracy and stability in the solutions of the two-dimensional cardiac tissue model.
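The flavor of a reproducing kernel expansion on scattered nodes can be conveyed with a one-dimensional sketch, using the known kernel K(x, y) = 1 + min(x, y) of a Sobolev space on [0, 1] (NumPy assumed; the paper's two-dimensional kernel is not reproduced here):

```python
import numpy as np

def rk(x, y):
    # Reproducing kernel of a Sobolev space W^{1,2}[0,1]:
    # K(x, y) = 1 + min(x, y)
    return 1.0 + np.minimum(x, y)

# Interpolate f on arbitrarily scattered nodes by sum_j c_j K(., x_j);
# no meshing is required, only the Gram system at the nodes.
x = np.array([0.0, 0.17, 0.4, 0.63, 0.81, 1.0])
f = np.sin(2.0 * x)
G = rk(x[:, None], x[None, :])
c = np.linalg.solve(G, f)

t = np.linspace(0.0, 1.0, 101)
u = rk(t[:, None], x[None, :]) @ c   # kernel interpolant on a fine grid
print(float(np.max(np.abs(G @ c - f))))  # residual at the nodes
```

Because the expansion is written against the node set directly, the node locations and density can be changed freely, which is the property the abstract highlights for different time layers.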

  9. Irreducible kernels and nonperturbative expansions in a theory with pure m -> m interaction

    International Nuclear Information System (INIS)

    Iagolnitzer, D.

    1983-01-01

    Recent results on the structure of the S matrix at the m-particle threshold (m ≥ 2) in a simplified m->m scattering theory with no subchannel interaction are extended to the Green function F on the basis of off-shell unitarity, through an adequate mathematical extension of some results of Fredholm theory: local two-sheeted or infinite-sheeted structure of F around s = (mμ)² depending on the parity of (m-1)(ν-1) (where μ > 0 is the mass and ν is the dimension of space-time), off-shell definition of the irreducible kernel U [which is the analogue of the K matrix in the two different parity cases (m-1)(ν-1) odd or even] and related local expansion of F, for (m-1)(ν-1) even, in powers of σ^β ln σ, where σ = (mμ)² - s. It is shown that each term in this expansion is the dominant contribution to a Feynman-type integral in which each vertex is a kernel U. The links between kernel U and Bethe-Salpeter type kernels G of the theory are exhibited in both parity cases, as also the links between the above expansion of F and local expansions, in the Bethe-Salpeter type framework, of F_λ in terms of Feynman-type integrals in which each vertex is a kernel G and which include both dominant and subdominant contributions. (orig.)

  10. Mapping of moveout in tilted transversely isotropic media

    KAUST Repository

    Stovas, A.; Alkhalifah, Tariq Ali

    2013-01-01

    The computation of traveltimes in a transversely isotropic medium with a tilted symmetry axis (tilted transversely isotropic) is very important both for modelling and inversion. We develop a simple analytical procedure to map the traveltime function from a transversely isotropic medium with a vertical symmetry axis (vertical transversely isotropic) to a tilted transversely isotropic medium by applying point-by-point mapping of the traveltime function. This approach can be used for kinematic modelling and inversion in layered tilted transversely isotropic media. © 2013 European Association of Geoscientists & Engineers.

  11. Mapping of moveout in tilted transversely isotropic media

    KAUST Repository

    Stovas, A.

    2013-09-09

    The computation of traveltimes in a transversely isotropic medium with a tilted symmetry axis (tilted transversely isotropic) is very important both for modelling and inversion. We develop a simple analytical procedure to map the traveltime function from a transversely isotropic medium with a vertical symmetry axis (vertical transversely isotropic) to a tilted transversely isotropic medium by applying point-by-point mapping of the traveltime function. This approach can be used for kinematic modelling and inversion in layered tilted transversely isotropic media. © 2013 European Association of Geoscientists & Engineers.

  12. 7 CFR 981.408 - Inedible kernel.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Inedible kernel. 981.408 Section 981.408 Agriculture... Administrative Rules and Regulations § 981.408 Inedible kernel. Pursuant to § 981.8, the definition of inedible kernel is modified to mean a kernel, piece, or particle of almond kernel with any defect scored as...

  13. About statistical process contribution to elastic diffraction scattering

    International Nuclear Information System (INIS)

    Ismanov, E.I.; Dzhuraev, Sh. Kh.; Paluanov, B.K.

    1999-01-01

    The experimental data on angular distributions show two basic properties. The first is the presence of backward and forward peaks. The second is a nearly isotropic angular distribution near 90 degrees with a strong energy dependence. Different models for the partial amplitudes a_dl of diffraction statistical scattering, in particular models with Gaussian and exponential density distributions, were considered. The experimental data on pp scattering were analyzed using the examined models

  14. The Galactic Isotropic γ-ray Background and Implications for Dark Matter

    Science.gov (United States)

    Campbell, Sheldon S.; Kwa, Anna; Kaplinghat, Manoj

    2018-06-01

    We present an analysis of the radial angular profile of the galacto-isotropic (GI) γ-ray flux: the statistically uniform flux in angular annuli centred on the Galactic centre. Two different approaches are used to measure the GI flux profile in 85 months of Fermi-LAT data: the BDS statistical method which identifies spatial correlations, and a new Poisson ordered-pixel method which identifies non-Poisson contributions. Both methods produce similar GI flux profiles. The GI flux profile is well-described by an existing model of bremsstrahlung, π⁰ production, inverse Compton scattering, and the isotropic background. Discrepancies with data in our full-sky model are not present in the GI component, and are therefore due to mis-modelling of the non-GI emission. Dark matter annihilation constraints based solely on the observed GI profile are close to the thermal WIMP cross section below 100 GeV, for fixed models of the dark matter density profile and astrophysical γ-ray foregrounds. Refined measurements of the GI profile are expected to improve these constraints by a factor of a few.

  15. Surface-enhanced Raman imaging of cell membrane by a highly homogeneous and isotropic silver nanostructure

    Science.gov (United States)

    Zito, Gianluigi; Rusciano, Giulia; Pesce, Giuseppe; Dochshanov, Alden; Sasso, Antonio

    2015-04-01

    Label-free chemical imaging of live cell membranes can shed light on the molecular basis of cell membrane functionalities and their alterations under membrane-related diseases. In principle, this can be done by surface-enhanced Raman scattering (SERS) in confocal microscopy, but requires engineering plasmonic architectures with a spatially invariant SERS enhancement factor G(x, y) = G. To this end, we exploit a self-assembled isotropic nanostructure with characteristics of homogeneity typical of the so-called near-hyperuniform disorder. The resulting highly dense, homogeneous and isotropic random pattern consists of clusters of silver nanoparticles with limited size dispersion. This nanostructure brings together several advantages: very large hot spot density (~10⁴ μm⁻²), superior spatial reproducibility (SD nanotoxicity issues. See DOI: 10.1039/c5nr01341k

  16. Model selection in kernel ridge regression

    DEFF Research Database (Denmark)

    Exterkate, Peter

    2013-01-01

    Kernel ridge regression is a technique to perform ridge regression with a potentially infinite number of nonlinear transformations of the independent variables as regressors. This method is gaining popularity as a data-rich nonlinear forecasting tool, which is applicable in many different contexts....... The influence of the choice of kernel and the setting of tuning parameters on forecast accuracy is investigated. Several popular kernels are reviewed, including polynomial kernels, the Gaussian kernel, and the Sinc kernel. The latter two kernels are interpreted in terms of their smoothing properties......, and the tuning parameters associated to all these kernels are related to smoothness measures of the prediction function and to the signal-to-noise ratio. Based on these interpretations, guidelines are provided for selecting the tuning parameters from small grids using cross-validation. A Monte Carlo study...
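The selection procedure described above, a small tuning grid scored by cross-validation, can be sketched for Gaussian-kernel ridge regression. This is a minimal illustration (NumPy assumed; the grid values are arbitrary, not the paper's recommended ones):

```python
import numpy as np

def krr_fit_predict(Xtr, ytr, Xte, sigma, lam):
    # Kernel ridge regression with a Gaussian kernel:
    # alpha = (K + lam I)^{-1} y, prediction = K(test, train) @ alpha
    d2 = ((Xtr[:, None] - Xtr[None]) ** 2).sum(-1)
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    alpha = np.linalg.solve(K + lam * np.eye(len(Xtr)), ytr)
    d2te = ((Xte[:, None] - Xtr[None]) ** 2).sum(-1)
    return np.exp(-d2te / (2.0 * sigma ** 2)) @ alpha

rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(80, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=80)

# small tuning grid scored by 4-fold cross-validation
folds = np.array_split(rng.permutation(80), 4)
best = None
for sigma in (0.3, 1.0, 3.0):
    for lam in (1e-3, 1e-1, 1.0):
        mse = 0.0
        for te in folds:
            tr = np.setdiff1d(np.arange(80), te)
            pred = krr_fit_predict(X[tr], y[tr], X[te], sigma, lam)
            mse += ((pred - y[te]) ** 2).mean() / len(folds)
        if best is None or mse < best[0]:
            best = (mse, sigma, lam)
print(best)  # (cv mse, selected sigma, selected lambda)
```

The kernel scale sigma controls the smoothness of the prediction function and lambda the effective signal-to-noise trade-off, which is why both are tuned jointly.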

  17. LZW-Kernel: fast kernel utilizing variable length code blocks from LZW compressors for protein sequence classification.

    Science.gov (United States)

    Filatov, Gleb; Bauwens, Bruno; Kertész-Farkas, Attila

    2018-05-07

    Bioinformatics studies often rely on similarity measures between sequence pairs, which often pose a bottleneck in large-scale sequence analysis. Here, we present a new convolutional kernel function for protein sequences called the LZW-Kernel. It is based on code words identified with the Lempel-Ziv-Welch (LZW) universal text compressor. The LZW-Kernel is an alignment-free method; it is always symmetric and positive, provides 1.0 for self-similarity, and can be used directly with Support Vector Machines (SVMs) in classification problems, contrary to normalized compression distance (NCD), which often violates the distance metric properties in practice and requires further techniques to be used with SVMs. The LZW-Kernel is a one-pass algorithm, which makes it particularly suitable for big data applications. Our experimental studies on remote protein homology detection and protein classification tasks reveal that the LZW-Kernel closely approaches the performance of the Local Alignment Kernel (LAK) and the SVM-pairwise method combined with Smith-Waterman (SW) scoring at a fraction of the time. Moreover, the LZW-Kernel outperforms the SVM-pairwise method when combined with BLAST scores, which indicates that the LZW code words might be a better basis for similarity measures than local alignment approximations found with BLAST. In addition, the LZW-Kernel outperforms n-gram based mismatch kernels, hidden Markov model based SAM and Fisher kernel, and protein family based PSI-BLAST, among others. Further advantages include the LZW-Kernel's reliance on a simple idea, its ease of implementation, and its high speed, three times faster than BLAST and several orders of magnitude faster than SW or LAK in our tests. LZW-Kernel is implemented as a standalone C code and is a free open-source program distributed under GPLv3 license and can be downloaded from https://github.com/kfattila/LZW-Kernel. akerteszfarkas@hse.ru. Supplementary data are available at Bioinformatics Online.
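The core idea, comparing the phrase dictionaries that LZW builds while parsing two sequences, can be sketched as follows. This is a simplified set-overlap illustration, not the convolutional LZW-Kernel of the paper; the normalization is chosen so that self-similarity is exactly 1.0:

```python
def lzw_phrases(s):
    # Dictionary phrases produced by an LZW-style parse of s
    d = set(s)            # single symbols seed the dictionary
    w = ""
    for c in s:
        if w + c in d:
            w += c        # extend the current phrase
        else:
            d.add(w + c)  # new code word
            w = c
    return d

def lzw_kernel(a, b):
    # Cosine-style overlap of the two phrase sets: symmetric,
    # nonnegative, and 1.0 for identical sequences
    pa, pb = lzw_phrases(a), lzw_phrases(b)
    return len(pa & pb) / (len(pa) * len(pb)) ** 0.5

print(lzw_kernel("GATTACA", "GATTACA"))  # 1.0
print(round(lzw_kernel("GATTACA", "CATGATT"), 3))
```

Like the real LZW-Kernel, the parse is one pass over each sequence, which is what makes this family of measures attractive for large-scale comparisons.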

  18. Small-angle neutron scattering in materials science

    International Nuclear Information System (INIS)

    Fratzl, P.

    1999-01-01

    Small-angle scattering (SAS) in an ideal tool for studying the structure of materials in the mesoscopic size range between 1 and about 100 nanometers. The basic principles of the method are reviewed, with particular emphasis on data evaluation and interpretation for isotropic as well as oriented or single-crystalline materials. Examples include metal alloys, composites and porous materials. The last section gives a comparison between the use of neutrons and (synchrotron) x-rays for small-angle scattering in materials physics. (author)

  19. Viscosity kernel of molecular fluids

    DEFF Research Database (Denmark)

    Puscasu, Ruslan; Todd, Billy; Daivis, Peter

    2010-01-01

    , temperature, and chain length dependencies of the reciprocal and real-space viscosity kernels are presented. We find that the density has a major effect on the shape of the kernel. The temperature range and chain lengths considered here have by contrast less impact on the overall normalized shape. Functional...... forms that fit the wave-vector-dependent kernel data over a large density and wave-vector range have also been tested. Finally, a structural normalization of the kernels in physical space is considered. Overall, the real-space viscosity kernel has a width of roughly 3–6 atomic diameters, which means...

  20. Kernel learning algorithms for face recognition

    CERN Document Server

    Li, Jun-Bao; Pan, Jeng-Shyang

    2013-01-01

    Kernel Learning Algorithms for Face Recognition covers the framework of kernel based face recognition. This book discusses the advanced kernel learning algorithms and its application on face recognition. This book also focuses on the theoretical deviation, the system framework and experiments involving kernel based face recognition. Included within are algorithms of kernel based face recognition, and also the feasibility of the kernel based face recognition method. This book provides researchers in pattern recognition and machine learning area with advanced face recognition methods and its new

  1. Sensitivity analysis for elastic full-waveform inversion in VTI media

    KAUST Repository

    Kamath, Nishant

    2014-08-05

    Multiparameter full-waveform inversion (FWI) is generally nonunique, and the results are strongly influenced by the geometry of the experiment and the type of recorded data. Studying the sensitivity of different subsets of data to the model parameters may help in choosing an optimal acquisition design, inversion workflow, and parameterization. Here, we derive the Fréchet kernel for FWI of multicomponent data from a 2D VTI (transversely isotropic with a vertical symmetry axis) medium. The kernel is obtained by linearizing the elastic wave equation using the Born approximation and employing the asymptotic Green's function. The amplitude of the kernel (‘radiation pattern’) yields the angle-dependent energy scattered by a perturbation in a certain model parameter. The perturbations are described in terms of the P- and S-wave vertical velocities and the P-wave normal-moveout and horizontal velocities. The background medium is assumed to be homogeneous and isotropic, which allows us to obtain simple expressions for the radiation patterns corresponding to all four velocities. These patterns help explain the FWI results for multicomponent transmission data generated for Gaussian anomalies in the Thomsen parameters inserted into a homogeneous VTI medium.

  2. Sensitivity analysis for elastic full-waveform inversion in VTI media

    KAUST Repository

    Kamath, Nishant; Tsvankin, Ilya

    2014-01-01

    Multiparameter full-waveform inversion (FWI) is generally nonunique, and the results are strongly influenced by the geometry of the experiment and the type of recorded data. Studying the sensitivity of different subsets of data to the model parameters may help in choosing an optimal acquisition design, inversion workflow, and parameterization. Here, we derive the Fréchet kernel for FWI of multicomponent data from a 2D VTI (transversely isotropic with a vertical symmetry axis) medium. The kernel is obtained by linearizing the elastic wave equation using the Born approximation and employing the asymptotic Green's function. The amplitude of the kernel (‘radiation pattern’) yields the angle-dependent energy scattered by a perturbation in a certain model parameter. The perturbations are described in terms of the P- and S-wave vertical velocities and the P-wave normal-moveout and horizontal velocities. The background medium is assumed to be homogeneous and isotropic, which allows us to obtain simple expressions for the radiation patterns corresponding to all four velocities. These patterns help explain the FWI results for multicomponent transmission data generated for Gaussian anomalies in the Thomsen parameters inserted into a homogeneous VTI medium.

  3. Scattering Mechanism Extraction by a Modified Cloude-Pottier Decomposition for Dual Polarization SAR

    Directory of Open Access Journals (Sweden)

    Kefeng Ji

    2015-06-01

    Full Text Available Dual polarization is a typical operational mode of polarimetric synthetic aperture radar (SAR). However, few studies have considered scattering mechanism extraction for dual-polarization SARs. A modified Cloude-Pottier decomposition is proposed to investigate the performance of scattering mechanism extraction with dual-polarization SARs. It is theoretically demonstrated that only HH-VV SAR can discriminate the three canonical scattering mechanisms of an isotropic surface, a horizontal dipole, and an isotropic dihedral. Various experiments are conducted using 21 scenes from real datasets acquired by AIRSAR, Convair-580 SAR, EMISAR, E-SAR, Pi-SAR, and RADARSAT-2. Division of the dual-polarization H-α plane is experimentally obtained. The lack of cross-polarization induces the diffusion of scattering mechanisms and their overlap in the HH-VV H-α plane. However, the performance of HH-VV SAR for extracting scattering mechanisms is acceptable. Thus, HH-VV SAR is a suitable alternative to full-polarization SAR in certain cases. Meanwhile, the extraction performance of the other two dual-polarization modes is badly degraded due to the lack of co-polarization. Therefore, HH-HV and HV-VV SARs cannot effectively extract the scattering mechanisms in the H-α plane.
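The dual-polarization H-α extraction that this decomposition rests on can be sketched as an eigendecomposition of a 2x2 coherency matrix. This is an illustrative analogue of the Cloude-Pottier procedure, not the authors' implementation; the base-2 entropy normalisation and the alpha-from-first-eigenvector-component convention are assumptions made here.

```python
import numpy as np

def dual_pol_h_alpha(T):
    """Entropy and mean alpha angle from a 2x2 Hermitian coherency
    matrix T (dual-pol analogue of the Cloude-Pottier decomposition).
    Base-2 logarithm so that H is normalised to [0, 1] for two
    eigenvalues."""
    w, V = np.linalg.eigh(T)
    p = w / w.sum()                                   # pseudo-probabilities
    nz = p > 0
    H = -np.sum(p[nz] * np.log2(p[nz]))               # scattering entropy
    alphas = np.degrees(np.arccos(np.abs(V[0, :])))   # per-mechanism alpha
    return H, float(np.sum(p * alphas))
```

For the identity coherency matrix the two mechanisms are equally mixed, giving entropy 1 and a mean alpha of 45 degrees.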

  4. Angle gathers in wave-equation imaging for transversely isotropic media

    KAUST Repository

    Alkhalifah, Tariq Ali; Fomel, Sergey B.

    2010-01-01

    In recent years, wave-equation imaged data are often presented in common-image angle-domain gathers as a decomposition in the scattering angle at the reflector, which provide a natural access to analysing migration velocities and amplitudes. In the case of anisotropic media, the importance of angle gathers is enhanced by the need to properly estimate multiple anisotropic parameters for a proper representation of the medium. We extract angle gathers for each downward-continuation step from converting offset-frequency planes into angle-frequency planes simultaneously with applying the imaging condition in a transversely isotropic with a vertical symmetry axis (VTI) medium. The analytic equations, though cumbersome, are exact within the framework of the acoustic approximation. They are also easily programmable and show that angle gather mapping in the case of anisotropic media differs from its isotropic counterpart, with the difference depending mainly on the strength of anisotropy. Synthetic examples demonstrate the importance of including anisotropy in the angle gather generation as mapping of the energy is negatively altered otherwise. In the case of a tilted axis of symmetry (TTI), the same VTI formulation is applicable but requires a rotation of the wavenumbers. © 2010 European Association of Geoscientists & Engineers.

  5. Angle gathers in wave-equation imaging for transversely isotropic media

    KAUST Repository

    Alkhalifah, Tariq Ali

    2010-11-12

    In recent years, wave-equation imaged data are often presented in common-image angle-domain gathers as a decomposition in the scattering angle at the reflector, which provide a natural access to analysing migration velocities and amplitudes. In the case of anisotropic media, the importance of angle gathers is enhanced by the need to properly estimate multiple anisotropic parameters for a proper representation of the medium. We extract angle gathers for each downward-continuation step from converting offset-frequency planes into angle-frequency planes simultaneously with applying the imaging condition in a transversely isotropic with a vertical symmetry axis (VTI) medium. The analytic equations, though cumbersome, are exact within the framework of the acoustic approximation. They are also easily programmable and show that angle gather mapping in the case of anisotropic media differs from its isotropic counterpart, with the difference depending mainly on the strength of anisotropy. Synthetic examples demonstrate the importance of including anisotropy in the angle gather generation as mapping of the energy is negatively altered otherwise. In the case of a tilted axis of symmetry (TTI), the same VTI formulation is applicable but requires a rotation of the wavenumbers. © 2010 European Association of Geoscientists & Engineers.

  6. Geometric Models for Isotropic Random Porous Media: A Review

    Directory of Open Access Journals (Sweden)

    Helmut Hermann

    2014-01-01

    Full Text Available Models for random porous media are considered. The models are isotropic both from the local and the macroscopic point of view; that is, the pores have spherical shape or their surface shows piecewise spherical curvature, and there is no macroscopic gradient of any geometrical feature. Both closed-pore and open-pore systems are discussed. The Poisson grain model, the model of hard spheres packing, and the penetrable sphere model are used; variable size distribution of the pores is included. A parameter is introduced which controls the degree of open-porosity. Besides systems built up by a single solid phase, models for porous media with the internal surface coated by a second phase are treated. Volume fraction, surface area, and correlation functions are given explicitly where applicable; otherwise numerical methods for determination are described. Effective medium theory is applied to calculate physical properties for the models such as isotropic elastic moduli, thermal and electrical conductivity, and static dielectric constant. The methods presented are exemplified by applications: small-angle scattering of systems showing fractal-like behavior in limited ranges of linear dimension, optimization of nanoporous insulating materials, and improvement of properties of open-pore systems by atomic layer deposition of a second phase on the internal surface.

  7. Partial Deconvolution with Inaccurate Blur Kernel.

    Science.gov (United States)

    Ren, Dongwei; Zuo, Wangmeng; Zhang, David; Xu, Jun; Zhang, Lei

    2017-10-17

    Most non-blind deconvolution methods are developed under the error-free kernel assumption, and are not robust to inaccurate blur kernels. Unfortunately, despite the great progress in blind deconvolution, estimation error remains inevitable during blur kernel estimation. Consequently, severe artifacts such as ringing effects and distortions are likely to be introduced in the non-blind deconvolution stage. In this paper, we tackle this issue by suggesting: (i) a partial map in the Fourier domain for modeling kernel estimation error, and (ii) a partial deconvolution model for robust deblurring with an inaccurate blur kernel. The partial map is constructed by detecting the reliable Fourier entries of the estimated blur kernel. Partial deconvolution is then applied to wavelet-based and learning-based models to suppress the adverse effect of kernel estimation error. Furthermore, an EM algorithm is developed for estimating the partial map and recovering the latent sharp image alternately. Experimental results show that our partial deconvolution model is effective in relieving artifacts caused by inaccurate blur kernels, and can achieve favorable deblurring quality on synthetic and real blurry images.
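The core idea, deconvolving only where the estimated kernel's Fourier entries are reliable, can be sketched with a simple Wiener-style stand-in (not the paper's wavelet- or learning-based model; `thresh` and `eps` are hypothetical parameters):

```python
import numpy as np

def partial_wiener_deconv(blurred, kernel_est, thresh=0.1, eps=1e-2):
    """Deconvolve only at Fourier entries where the estimated kernel
    magnitude is reliable (the 'partial map' idea); elsewhere the
    blurred data are passed through unchanged."""
    H = np.fft.fft2(kernel_est, s=blurred.shape)      # padded kernel spectrum
    B = np.fft.fft2(blurred)
    mask = np.abs(H) > thresh * np.abs(H).max()       # reliable entries
    X = np.where(mask, np.conj(H) * B / (np.abs(H) ** 2 + eps), B)
    return np.real(np.fft.ifft2(X))
```

The sketch assumes circular blur; outside the mask, where the kernel estimate carries no reliable information, the data are left untouched rather than amplified.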

  8. Software correction of scatter coincidence in positron CT

    International Nuclear Information System (INIS)

    Endo, M.; Iinuma, T.A.

    1984-01-01

    This paper describes a software correction of scatter coincidence in positron CT which is based on an estimation of scatter projections from true projections by an integral transform. Kernels for the integral transform are projected distributions of scatter coincidences for a line source at different positions in a water phantom and are calculated by Klein-Nishina's formula. True projections of any composite object can be determined from measured projections by iterative applications of the integral transform. The correction method was tested in computer simulations and phantom experiments with Positologica. The results showed that effects of scatter coincidence are not negligible in the quantitation of images, but the correction reduces them significantly. (orig.)
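The estimation loop can be sketched in one dimension, with a shift-invariant kernel standing in for the position-dependent Klein-Nishina kernels used in the paper (an assumption made purely for illustration):

```python
import numpy as np

def estimate_true_projection(measured, scatter_kernel, n_iter=20):
    """Fixed-point iteration: true ~ measured - K(true), starting from
    true = measured; here K is applied as a 1-D convolution."""
    true = measured.astype(float).copy()
    for _ in range(n_iter):
        scatter = np.convolve(true, scatter_kernel, mode='same')
        true = np.clip(measured - scatter, 0.0, None)   # counts stay non-negative
    return true
```

If the measured projection is m = t + K(t), the iteration converges to t whenever the scatter fraction (the norm of K) is below one.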

  9. Kernel methods for deep learning

    OpenAIRE

    Cho, Youngmin

    2012-01-01

    We introduce a new family of positive-definite kernels that mimic the computation in large neural networks. We derive the different members of this family by considering neural networks with different activation functions. Using these kernels as building blocks, we also show how to construct other positive-definite kernels by operations such as composition, multiplication, and averaging. We explore the use of these kernels in standard models of supervised learning, such as support vector mach...
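The closure operations mentioned in the abstract can be illustrated directly: sums, pointwise products, and averages of positive-definite kernels are again positive definite (the product case by the Schur product theorem). The specific kernels and weights below are arbitrary choices for illustration, not those of the paper:

```python
import numpy as np

def rbf(X, Y, gamma=1.0):
    """Gaussian RBF kernel matrix between rows of X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def linear(X, Y):
    """Linear kernel matrix."""
    return X @ Y.T

def combined(X, Y):
    # Average of an RBF kernel and a product of linear and RBF kernels;
    # each operation preserves positive definiteness.
    return 0.5 * (rbf(X, Y) + linear(X, Y) * rbf(X, Y, gamma=0.1))
```

The resulting Gram matrix is symmetric with non-negative eigenvalues, so it can be dropped into any kernel machine, e.g. a support vector classifier.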

  10. Percolation-enhanced nonlinear scattering from semicontinuous metal films

    Science.gov (United States)

    Breit, M.; von Plessen, G.; Feldmann, J.; Podolskiy, V. A.; Sarychev, A. K.; Shalaev, V. M.; Gresillon, S.; Rivoal, J. C.; Gadenne, P.

    2001-03-01

    Strongly enhanced second-harmonic generation (SHG), which is characterized by nearly isotropic distribution, is observed for gold-glass films near the percolation threshold. The diffuse-like SHG scattering, which can be thought of as nonlinear critical opalescence, is in sharp contrast with highly collimated linear reflection and transmission from these nanostructured semicontinuous metal films. Our observations, which can be explained by giant fluctuations of local nonlinear sources for SHG, verify recent predictions of percolation-enhanced nonlinear scattering.

  11. Numerical modelling of multiple scattering between two elastical particles

    DEFF Research Database (Denmark)

    Bjørnø, Irina; Jensen, Leif Bjørnø

    1998-01-01

    Multiple acoustical signal interactions with sediment particles in the vicinity of the seabed may significantly change the course of sediment concentration profiles determined by inversion from acoustical backscattering measurements. The scattering properties of high concentrations of sediments in suspension have been studied extensively since Foldy's formulation of his theory for isotropic scattering by randomly distributed scatterers. However, a number of important problems related to multiple scattering are still far from finding their solutions. A particular, but still unsolved, problem is the question of proximity thresholds for influence of multiple scattering in terms of particle properties like volume fraction, average distance between particles or other related parameters. A few available experimental data indicate a significance of multiple scattering in suspensions where the concentration...

  12. Calculation of electron and isotopes dose point kernels with FLUKA Monte Carlo code for dosimetry in nuclear medicine therapy

    CERN Document Server

    Mairani, A; Valente, M; Battistoni, G; Botta, F; Pedroli, G; Ferrari, A; Cremonesi, M; Di Dia, A; Ferrari, M; Fasso, A

    2011-01-01

    Purpose: The calculation of patient-specific dose distributions can be achieved by Monte Carlo simulations or by analytical methods. In this study, the FLUKA Monte Carlo code has been considered for use in nuclear medicine dosimetry. Up to now, FLUKA has mainly been dedicated to other fields, namely high energy physics, radiation protection, and hadrontherapy. When first employing a Monte Carlo code for nuclear medicine dosimetry, its results concerning electron transport at energies typical of nuclear medicine applications need to be verified. This is commonly achieved by means of calculation of a representative parameter and comparison with reference data. The dose point kernel (DPK), quantifying the energy deposition all around a point isotropic source, is often chosen for this purpose. Methods: FLUKA DPKs have been calculated in both water and compact bone for monoenergetic electrons (10-3 MeV) and for beta emitting isotopes commonly used for therapy ((89)Sr, (90)Y, (131)I, (153)Sm, (177)Lu, (186)Re, and (188)Re). Point isotropic...

  13. Scattering and absorption of particles emitted by a point source in a cluster of point scatterers

    International Nuclear Information System (INIS)

    Liljequist, D.

    2012-01-01

    A theory for the scattering and absorption of particles isotropically emitted by a point source in a cluster of point scatterers is described and related to the theory for the scattering of an incident particle beam. The quantum mechanical probability of escape from the cluster in different directions is calculated, as well as the spatial distribution of absorption events within the cluster. A source strength renormalization procedure is required. The average quantum scattering in clusters with randomly shifting scatterer positions is compared to trajectory simulation with the aim of studying the validity of the trajectory method. Differences between the results of the quantum and trajectory methods are found primarily for wavelengths larger than the average distance between nearest neighbour scatterers. The average quantum results include, for example, a local minimum in the number of absorption events at the location of the point source and interference patterns in the angle-dependent escape probability as well as in the distribution of absorption events. The relative error of the trajectory method is in general, though not always, of similar magnitude as that obtained for beam scattering.

  14. Improved method for estimating particle scattering probabilities to finite detectors for Monte Carlo simulation

    International Nuclear Information System (INIS)

    Mickael, M.; Gardner, R.P.; Verghese, K.

    1988-01-01

    An improved method for calculating the total probability of particle scattering within the solid angle subtended by finite detectors is developed, presented, and tested. The limiting polar and azimuthal angles subtended by the detector are measured from the direction that most simplifies their calculation rather than from the incident particle direction. A transformation of the particle scattering probability distribution function (pdf) is made to match the transformation of the direction from which the limiting angles are measured. The particle scattering probability to the detector is estimated by evaluating the integral of the transformed pdf over the range of the limiting angles measured from the preferred direction. A general formula for transforming the particle scattering pdf is derived from basic principles and applied to four important scattering pdf's; namely, isotropic scattering in the Lab system, isotropic neutron scattering in the center-of-mass system, thermal neutron scattering by the free gas model, and gamma-ray Klein-Nishina scattering. Some approximations have been made to these pdf's to enable analytical evaluations of the final integrals. These approximations are shown to be valid over a wide range of energies and for most elements. The particle scattering probability to spherical, planar circular, and right circular cylindrical detectors has been calculated using the new and previously reported direct approach. Results indicate that the new approach is valid and is computationally faster by orders of magnitude

  15. 7 CFR 981.9 - Kernel weight.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Kernel weight. 981.9 Section 981.9 Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements... Regulating Handling Definitions § 981.9 Kernel weight. Kernel weight means the weight of kernels, including...

  16. Raman scattering in a Heisenberg S = 1/2 antiferromagnet on the anisotropic triangular lattice

    International Nuclear Information System (INIS)

    Perkins, Natalia; Brenig, Wolfram

    2009-01-01

    We investigate two-magnon Raman scattering from the S = 1/2 Heisenberg antiferromagnet on the triangular lattice (THAF), considering both isotropic and anisotropic exchange interactions. We find that the Raman intensity for the isotropic THAF is insensitive to the scattering geometry, while both the line profile and the intensity of the Raman response for the anisotropic THAF show a strong dependence on the scattering geometry. For the isotropic case we present an analytical and numerical study of the Raman intensity including both the effect of renormalization of the one-magnon spectrum by 1/S corrections and final-state magnon-magnon interactions. The bare Raman intensity displays two peaks related to one-magnon van Hove singularities. We find that 1/S self-energy corrections to the one-magnon spectrum strongly modify this intensity profile. The central Raman peak is significantly enhanced due to plateaus in the magnon dispersion, the high frequency peak is suppressed due to magnon damping, and the overall spectral support narrows considerably. Additionally we investigate final-state interactions by solving the Bethe-Salpeter equation to O(1/S). In contrast to collinear antiferromagnets, the non-collinear nature of the magnetic ground state leads to an irreducible magnon scattering which is retarded and non-separable already to lowest order. We show that final-state interactions lead to a rather broad Raman continuum centered around approximately twice the 'roton' energy.

  17. Veto-Consensus Multiple Kernel Learning

    NARCIS (Netherlands)

    Zhou, Y.; Hu, N.; Spanos, C.J.

    2016-01-01

    We propose Veto-Consensus Multiple Kernel Learning (VCMKL), a novel way of combining multiple kernels such that one class of samples is described by the logical intersection (consensus) of base kernelized decision rules, whereas the other classes by the union (veto) of their complements. The

  18. 7 CFR 51.2295 - Half kernel.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Half kernel. 51.2295 Section 51.2295 Agriculture... Standards for Shelled English Walnuts (Juglans Regia) Definitions § 51.2295 Half kernel. Half kernel means the separated half of a kernel with not more than one-eighth broken off. ...

  19. An Approximate Approach to Automatic Kernel Selection.

    Science.gov (United States)

    Ding, Lizhong; Liao, Shizhong

    2016-02-02

    Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.

  20. Iterative software kernels

    Energy Technology Data Exchange (ETDEWEB)

    Duff, I.

    1994-12-31

    This workshop focuses on kernels for iterative software packages. Specifically, the three speakers discuss various aspects of sparse BLAS kernels. Their topics are: 'Current status of user-level sparse BLAS'; 'Current status of the sparse BLAS toolkit'; and 'Adding matrix-matrix and matrix-matrix-matrix multiply to the sparse BLAS toolkit'.

  1. Viscozyme L pretreatment on palm kernels improved the aroma of palm kernel oil after kernel roasting.

    Science.gov (United States)

    Zhang, Wencan; Leong, Siew Mun; Zhao, Feifei; Zhao, Fangju; Yang, Tiankui; Liu, Shaoquan

    2018-05-01

    With an interest to enhance the aroma of palm kernel oil (PKO), Viscozyme L, an enzyme complex containing a wide range of carbohydrases, was applied to alter the carbohydrates in palm kernels (PK) to modulate the formation of volatiles upon kernel roasting. After Viscozyme treatment, the content of simple sugars and free amino acids in PK increased by 4.4-fold and 4.5-fold, respectively. After kernel roasting and oil extraction, significantly more 2,5-dimethylfuran, 2-[(methylthio)methyl]-furan, 1-(2-furanyl)-ethanone, 1-(2-furyl)-2-propanone, 5-methyl-2-furancarboxaldehyde and 2-acetyl-5-methylfuran but less 2-furanmethanol and 2-furanmethanol acetate were found in treated PKO; the correlation between their formation and simple sugar profile was estimated by using partial least square regression (PLS1). Obvious differences in pyrroles and Strecker aldehydes were also found between the control and treated PKOs. Principal component analysis (PCA) clearly discriminated the treated PKOs from that of control PKOs on the basis of all volatile compounds. Such changes in volatiles translated into distinct sensory attributes, whereby treated PKO was more caramelic and burnt after aqueous extraction and more nutty, roasty, caramelic and smoky after solvent extraction. Copyright © 2018 Elsevier Ltd. All rights reserved.

  2. A kernel version of spatial factor analysis

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2009-01-01

    Schölkopf et al. introduce kernel PCA. Shawe-Taylor and Cristianini is an excellent reference for kernel methods in general. Bishop and Press et al. describe kernel methods among many other subjects. Nielsen and Canty use kernel PCA to detect change in univariate airborne digital camera images. The kernel version of PCA handles nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via the kernel function and then performing a linear analysis in that space. In this paper we shall apply kernel versions of PCA, maximum autocorrelation factor (MAF) analysis...
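A minimal sketch of the kernel PCA step described above, assuming an RBF kernel (the choice of kernel, data, and components here are illustrative, not the paper's):

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """Kernel PCA with an RBF kernel: centre the Gram matrix in
    feature space, then project onto the leading eigenvectors."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * d2)                    # RBF Gram matrix
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n        # centring matrix
    Kc = J @ K @ J                             # double-centred Gram matrix
    w, V = np.linalg.eigh(Kc)
    idx = np.argsort(w)[::-1][:n_components]   # leading eigenpairs
    w, V = w[idx], V[:, idx]
    return Kc @ V / np.sqrt(np.maximum(w, 1e-12))   # training-point scores
```

Because the Gram matrix is centred in feature space, each column of the returned score matrix sums to zero, just as ordinary PCA scores do.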

  3. How to simplify transmission-based scatter correction for clinical application

    International Nuclear Information System (INIS)

    Baccarne, V.; Hutton, B.F.

    1998-01-01

    Full text: The performance of ordered-subsets (OS) EM reconstruction including attenuation, scatter and spatial resolution correction is evaluated using cardiac Monte Carlo data. We demonstrate how simplifications in the scatter model allow one to correct SPECT data for scatter, in terms of quantitation and quality, in a reasonable time. Initial reconstruction of the 20% window is performed including attenuation correction (broad-beam μ values) to estimate the activity quantitatively (accuracy 3%), but not spatially. A rough reconstruction with 2 iterations (subset size: 8) is sufficient for subsequent scatter correction. An estimate of the primary photons is obtained by projecting the previous distribution including attenuation (narrow-beam μ values). An estimate of the scatter is obtained by convolving the primary estimates with a depth-dependent scatter kernel and scaling the result by a factor calculated from the attenuation map. The correction can be accelerated by convolving several adjacent planes with the same kernel and using an average scaling factor. Simulating the effects of the collimator during the scatter correction was demonstrated to be unnecessary. Final reconstruction is performed using 6 iterations of OSEM, including attenuation (narrow-beam μ values) and spatial resolution correction. Scatter correction is implemented by incorporating the estimated scatter as a constant offset in the forward projection step. The total correction + reconstruction (64 proj., 40x128 pixels) takes 38 minutes on a Sun Sparc 20. Quantitatively, the accuracy is 7% in a reconstructed slice. The SNR inside the whole myocardium (defined from the original object) is equal to 2.1 and 2.3 in the corrected and the primary slices respectively. The scatter correction preserves the myocardium-to-ventricle contrast (primary: 0.79, corrected: 0.82). These simplifications allow acceleration of the correction without influencing the quality of the result.
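The per-plane convolution-and-scale step can be sketched as follows; the kernels and scaling factors are placeholders, since the actual depth-dependent kernels and attenuation-derived factors come from the measured data:

```python
import numpy as np

def scatter_estimate(primary, kernels, scale):
    """Estimate scatter plane by plane: convolve each depth plane of
    the primary estimate with its depth-dependent kernel and apply a
    per-plane scaling factor derived from the attenuation map."""
    scatter = np.zeros_like(primary, dtype=float)
    for z, (row, k, s) in enumerate(zip(primary, kernels, scale)):
        scatter[z] = s * np.convolve(row, k, mode='same')
    return scatter
```

Grouping adjacent planes under one kernel and an average scaling factor, as the abstract suggests, simply means repeating entries in the `kernels` and `scale` lists.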

  4. Thermalization vs. isotropization and azimuthal fluctuations

    International Nuclear Information System (INIS)

    Mrowczynski, Stanislaw

    2005-01-01

    Hydrodynamic description requires a local thermodynamic equilibrium of the system under study but an approximate hydrodynamic behaviour is already manifested when a momentum distribution of liquid components is not of equilibrium form but merely isotropic. While the process of equilibration is relatively slow, the parton system becomes isotropic rather fast due to the plasma instabilities. Azimuthal fluctuations observed in relativistic heavy-ion collisions are argued to distinguish between a fully equilibrated and only isotropic parton system produced in the collision early stage

  5. 7 CFR 51.1441 - Half-kernel.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Half-kernel. 51.1441 Section 51.1441 Agriculture... Standards for Grades of Shelled Pecans Definitions § 51.1441 Half-kernel. Half-kernel means one of the separated halves of an entire pecan kernel with not more than one-eighth of its original volume missing...

  6. THERMAL: A routine designed to calculate neutron thermal scattering. Revision 1

    International Nuclear Information System (INIS)

    Cullen, D.E.

    1995-01-01

    THERMAL is designed to calculate neutron thermal scattering that is elastic and isotropic in the center of mass system. At low energy thermal motion will be included. At high energies the target nuclei are assumed to be stationary. The point of transition between low and high energies has been defined to ensure a smooth transition. It is assumed that at low energy the elastic cross section is constant in the relative system. At high energy the cross section can be of any form. You can use this routine for all energies where the elastic scattering is isotropic in the center of mass system. In most materials this will be a fairly high energy, e.g., the keV energy range. The THERMAL method is simple, clean, easy to understand, and, most importantly, very efficient; on a SUN SPARC-10 workstation, at low energies with thermal scattering it can do almost 6 million scatters a minute and at high energy over 13 million. Warning: This version of THERMAL completely supersedes the original version described in the same report number, dated February 24, 1995. The method used in the original code is incorrect, as explained in this report
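In the high-energy limit described above (stationary target, elastic scattering isotropic in the center of mass), the outgoing energy follows the standard slowing-down relation E' = E[(1 + α) + (1 − α)μ_cm]/2 with α = ((A − 1)/(A + 1))². A sketch of that relation, not the routine's actual implementation:

```python
import numpy as np

def elastic_scatter_energy(E, A, rng):
    """Outgoing neutron energy after an elastic collision that is
    isotropic in the centre-of-mass system, with a stationary target
    of mass ratio A (the high-energy limit of the model)."""
    alpha = ((A - 1.0) / (A + 1.0)) ** 2        # minimum energy fraction
    mu_cm = rng.uniform(-1.0, 1.0)              # isotropic CM scattering cosine
    return E * ((1.0 + alpha) + (1.0 - alpha) * mu_cm) / 2.0
```

For hydrogen (A = 1), α = 0 and the outgoing energy is uniformly distributed between 0 and E, with mean E/2.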

  7. Scaling limit of deeply virtual Compton scattering

    Energy Technology Data Exchange (ETDEWEB)

    A. Radyushkin

    2000-07-01

    The author outlines a perturbative QCD approach to the analysis of the deeply virtual Compton scattering process γ*p → γp′ in the limit of vanishing momentum transfer t = (p′ − p)². The DVCS amplitude in this limit exhibits a scaling behavior described by two-argument distributions F(x, y) which specify the fractions of the initial momentum p and the momentum transfer r ≡ p′ − p carried by the constituents of the nucleon. The kernel R(x, y; ξ, η) governing the evolution of the non-forward distributions F(x, y) has a remarkable property: it produces the GLAPD evolution kernel P(x/ξ) when integrated over y and reduces to the Brodsky-Lepage evolution kernel V(y, η) after the x-integration. This property is used to construct the solution of the one-loop evolution equation for the flavor non-singlet part of the non-forward quark distribution.

  8. Time-dependent scattering in resonance lines

    International Nuclear Information System (INIS)

    Kunasz, P.B.

    1983-01-01

    A numerical finite-difference method is presented for the problem of time-dependent line transfer in a finite slab in which material density is sufficiently low that the time of flight between scatterings greatly exceeds the relaxation time of the upper state of the scattering transition. The medium is assumed to scatter photons isotropically, with complete frequency redistribution. Numerical solutions are presented for a homogeneous, time-independent slab illuminated by an externally imposed radiation field which enters the slab at t = 0. Graphical results illustrate relaxation to steady state of trapped internal radiation, emergent energy, and emergent profiles. A review of the literature is also given in which the time-dependent line transfer problem is discussed in the context of recent analytical work

  9. Criticality problems in energy dependent neutron transport theory

    International Nuclear Information System (INIS)

    Victory, H.D. Jr.

    1979-01-01

    The criticality problem is considered for energy dependent neutron transport in an isotropically scattering, homogeneous slab. Under a positivity assumption on the scattering kernel, an expression can be found relating the thickness of the slab to a parameter characterizing production by fission. This is accomplished by exploiting the Perron-Frobenius-Jentsch characterization of positive operators (i.e. those leaving invariant a normal, reproducing cone in a Banach space). It is pointed out that those techniques work for classes of multigroup problems where the Case singular eigenfunction approach is not as feasible as in the one-group theory, which is also analyzed

  10. The albedo problem in the case of multiple synthetic scattering taking place in a plane-symmetric slab

    International Nuclear Information System (INIS)

    Shafiq, A.; Meyer, H.E. de; Grosjean, C.C.

    1985-01-01

    An approximate model based on an improved diffusion-type theory is established for treating multiple synthetic scattering in a homogeneous slab of finite thickness. As in the case of the exact treatment given in the preceding paper (Part I), it appears possible to transform the considered transport problem into an equivalent fictitious one involving multiple isotropic scattering, therefore permitting the application of an established corrected diffusion theory for treating isotropic scattering taking place in a convex homogeneous medium bounded by a vacuum in the presence of various types of sources. The approximate values of the reflection and transmission coefficients are compared with the rigorous values listed in Part I. In this way, the high accuracy of the approximation is clearly demonstrated. (author)

  11. Local Observed-Score Kernel Equating

    Science.gov (United States)

    Wiberg, Marie; van der Linden, Wim J.; von Davier, Alina A.

    2014-01-01

    Three local observed-score kernel equating methods that integrate methods from the local equating and kernel equating frameworks are proposed. The new methods were compared with their earlier counterparts with respect to such measures as bias--as defined by Lord's criterion of equity--and percent relative error. The local kernel item response…

  12. Credit scoring analysis using kernel discriminant

    Science.gov (United States)

    Widiharih, T.; Mukid, M. A.; Mustafid

    2018-05-01

    Credit scoring models are an important tool for reducing the risk of wrong decisions when granting credit facilities to applicants. This paper investigates the performance of the kernel discriminant model in assessing customer credit risk. Kernel discriminant analysis is a non-parametric method, which means that it does not require any assumptions about the probability distribution of the input. The main ingredient is a kernel that allows an efficient computation of the Fisher discriminant. We use several kernels, such as the normal, Epanechnikov, biweight, and triweight kernels. The models' accuracies were compared using data from a financial institution in Indonesia. The results show that kernel discriminant analysis can be an alternative method for determining who is eligible for a credit loan. For the data we use, the normal kernel is the relevant choice for credit scoring with the kernel discriminant model. Sensitivity and specificity reach 0.5556 and 0.5488, respectively.
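A minimal sketch of the idea behind a nonparametric kernel discriminant: compare class-conditional kernel density estimates under a chosen kernel (here the normal kernel, one of those named in the abstract). This is an illustrative implementation on synthetic data, not the authors' model or their Indonesian data set; the function names, scores, and bandwidth are assumptions.

```python
import numpy as np

def gaussian_kernel(u):
    # Standard normal kernel, one of the kernels compared in the paper
    return np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)

def kde(x, sample, h):
    # Univariate kernel density estimate at the points x, bandwidth h
    u = (np.atleast_1d(x)[:, None] - sample[None, :]) / h
    return gaussian_kernel(u).mean(axis=1) / h

def kernel_discriminant(x, good, bad, h=0.5):
    # Classify by comparing class-conditional density estimates
    # (equal priors assumed; a real scorer would weight by class frequency)
    return np.where(kde(x, good, h) >= kde(x, bad, h), "good", "bad")

# Toy example with hypothetical credit scores
rng = np.random.default_rng(0)
good = rng.normal(650, 30, 200)  # synthetic scores of repaid loans
bad = rng.normal(560, 30, 200)   # synthetic scores of defaulted loans
print(kernel_discriminant(np.array([670.0, 540.0]), good, bad))
```

Swapping `gaussian_kernel` for an Epanechnikov or biweight kernel changes only the density estimator, not the decision rule.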

  13. Influence of orientation averaging on the anisotropy of thermal neutrons scattering on water molecules

    International Nuclear Information System (INIS)

    Markovic, M. I.; Radunovic, J. B.

    1976-01-01

    Determination of the spatial distribution of neutron flux in water, the moderator most frequently used in thermal reactors, requires the microscopic scattering kernels' dependence on the cosine of the thermal-neutron scattering angle when solving the Boltzmann equation. Since the spatial orientation of water molecules influences this dependence, it is necessary to perform orientation averaging of the rotation-vibrational intermediate scattering function for water molecules. The calculations described in this paper and the obtained results show that the methods of orientation averaging do not influence the anisotropy of thermal-neutron scattering on water molecules, but do influence the inelastic scattering.

  14. Canonical Quantization of Crystal Dislocation and Electron-Dislocation Scattering in an Isotropic Media

    Science.gov (United States)

    Li, Mingda; Cui, Wenping; Dresselhaus, M. S.; Chen, Gang; MIT Team; Boston College Team

    Crystal dislocations govern the plastic mechanical properties of materials but also affect their electrical and optical properties. However, a fundamental quantum-mechanical theory of dislocation has remained elusive for decades. Here we present an exact and manageable Hamiltonian theory for both edge and screw dislocation lines in an isotropic medium, where the effective Hamiltonian of a single dislocation line can be written in a harmonic-oscillator-like form, with a closed-form quantized 1D phonon-like excitation. Moreover, a closed-form, position-dependent electron-dislocation coupling strength is obtained, from which we obtain relaxation times in good agreement with classical results. This Hamiltonian provides a platform for studying the effect of dislocations on materials' non-mechanical properties at a fundamental Hamiltonian level.

  15. Wave functions, evolution equations and evolution kernels form light-ray operators of QCD

    International Nuclear Information System (INIS)

    Mueller, D.; Robaschik, D.; Geyer, B.; Dittes, F.M.; Horejsi, J.

    1994-01-01

    The widely used nonperturbative wave functions and distribution functions of QCD are determined as matrix elements of light-ray operators. These operators appear as the large-momentum limit of non-local hadron operators or as summed-up local operators in light-cone expansions. Nonforward one-particle matrix elements of such operators lead to new distribution amplitudes describing both hadrons simultaneously. These distribution functions depend, besides other variables, on two scaling variables. They are applied to the description of exclusive virtual Compton scattering in the Bjorken region near the forward direction and to the two-meson production process. The evolution equations for these distribution amplitudes are derived on the basis of the renormalization group equation of the considered operators. In particular, the evolution kernels follow from the anomalous dimensions of these operators. Relations between different evolution kernels (especially the Altarelli-Parisi and the Brodsky-Lepage kernels) are derived and explicitly checked against the existing two-loop calculations of QCD. The technical basis of these results is the support and analyticity properties of the anomalous dimensions of light-ray operators, obtained with the help of the α-representation of Green's functions. (orig.)

  16. Higher-order predictions for splitting functions and coefficient functions from physical evolution kernels

    International Nuclear Information System (INIS)

    Vogt, A; Soar, G.; Vermaseren, J.A.M.

    2010-01-01

    We have studied the physical evolution kernels for nine non-singlet observables in deep-inelastic scattering (DIS), semi-inclusive e+e− annihilation and the Drell-Yan (DY) process, and for the flavour-singlet case of the photon- and heavy-top Higgs-exchange structure functions (F2, Fφ) in DIS. All known contributions to these kernels show only a single-logarithmic large-x enhancement at all powers of (1-x). Conjecturing that this behaviour persists to (all) higher orders, we have predicted the highest three (DY: two) double logarithms of the higher-order non-singlet coefficient functions and of the four-loop singlet splitting functions. The coefficient-function predictions can be written as exponentiations of 1/N-suppressed contributions in Mellin-N space which, however, are less predictive than the well-known exponentiation of the ln^k N terms. (orig.)

  17. Kernel parameter dependence in spatial factor analysis

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2010-01-01

    kernel PCA. Shawe-Taylor and Cristianini [4] is an excellent reference for kernel methods in general. Bishop [5] and Press et al. [6] describe kernel methods among many other subjects. The kernel version of PCA handles nonlinearities by implicitly transforming data into high (even infinite) dimensional...... feature space via the kernel function and then performing a linear analysis in that space. In this paper we shall apply a kernel version of maximum autocorrelation factor (MAF) [7, 8] analysis to irregularly sampled stream sediment geochemistry data from South Greenland and illustrate the dependence...... of the kernel width. The 2,097 samples each covering on average 5 km2 are analyzed chemically for the content of 41 elements....
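The kernel-PCA machinery underlying the kernel MAF analysis described above can be sketched generically: build a Gaussian kernel matrix, centre it in feature space, and project onto its leading eigenvectors. This is a minimal illustration on synthetic data, not the authors' MAF implementation or their South Greenland geochemistry data; the kernel width and data are assumptions.

```python
import numpy as np

def gaussian_kernel_matrix(X, sigma):
    # Pairwise Gaussian (RBF) kernel: k(x, y) = exp(-||x - y||^2 / (2 sigma^2))
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma**2))

def kernel_pca(X, sigma, n_components=2):
    n = len(X)
    K = gaussian_kernel_matrix(X, sigma)
    # Centre the kernel matrix in feature space
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J
    vals, vecs = np.linalg.eigh(Kc)
    order = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[order], vecs[:, order]
    # Projections of the training points onto the leading components
    return vecs * np.sqrt(np.abs(vals))

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 3))       # stand-in for multi-element sample data
scores = kernel_pca(X, sigma=1.0)
print(scores.shape)                # one row per sample, one column per component
```

The choice of `sigma` (the kernel width) is exactly the dependence the paper investigates: small widths emphasize local structure, large widths approach linear PCA.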

  18. Photon beam convolution using polyenergetic energy deposition kernels

    International Nuclear Information System (INIS)

    Hoban, P.W.; Murray, D.C.; Round, W.H.

    1994-01-01

    In photon beam convolution calculations where polyenergetic energy deposition kernels (EDKs) are used, the primary photon energy spectrum should be correctly accounted for in Monte Carlo generation of EDKs. This requires the probability of interaction, determined by the linear attenuation coefficient, μ, to be taken into account when primary photon interactions are forced to occur at the EDK origin. The use of primary and scattered EDKs generated with a fixed photon spectrum can give rise to an error in the dose calculation due to neglecting the effects of beam hardening with depth. The proportion of primary photon energy that is transferred to secondary electrons increases with depth of interaction, due to the increase in the ratio μ_ab/μ as the beam hardens. Convolution depth-dose curves calculated using polyenergetic EDKs generated for the primary photon spectra which exist at depths of 0, 20 and 40 cm in water, show a fall-off which is too steep when compared with EGS4 Monte Carlo results. A beam hardening correction factor applied to primary and scattered 0 cm EDKs, based on the ratio of kerma to terma at each depth, gives primary, scattered and total dose in good agreement with Monte Carlo results. (Author)
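A correction factor "based on the ratio of kerma to terma at each depth" could be sketched as below. The abstract does not give the exact definition, so the normalization to unity at the surface and the depth profiles are illustrative assumptions only.

```python
import numpy as np

def hardening_correction(kerma, terma):
    # Depth-dependent correction factor proportional to kerma/terma,
    # normalized to unity at the surface (depth index 0).
    # The exact normalization used by the authors is not stated in the
    # abstract; this is an illustrative assumption.
    ratio = np.asarray(kerma) / np.asarray(terma)
    return ratio / ratio[0]

# Hypothetical depth profiles (arbitrary units): as the beam hardens with
# depth, kerma/terma rises, so the correction grows away from the surface.
terma = np.array([1.00, 0.70, 0.50, 0.36])
kerma = np.array([0.95, 0.68, 0.495, 0.36])
print(hardening_correction(kerma, terma))
```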

  19. Ultrasound scatter in heterogeneous 3D microstructures: Parameters affecting multiple scattering

    Science.gov (United States)

    Engle, B. J.; Roberts, R. A.; Grandin, R. J.

    2018-04-01

    This paper reports on a computational study of ultrasound propagation in heterogeneous metal microstructures. Random spatial fluctuations in elastic properties over a range of length scales relative to ultrasound wavelength can give rise to scatter-induced attenuation, backscatter noise, and phase front aberration. It is of interest to quantify the dependence of these phenomena on the microstructure parameters, for the purpose of quantifying deleterious consequences on flaw detectability, and for the purpose of material characterization. Valuable tools for estimation of microstructure parameters (e.g. grain size) through analysis of ultrasound backscatter have been developed based on approximate weak-scattering models. While useful, it is understood that these tools display inherent inaccuracy when multiple scattering phenomena significantly contribute to the measurement. It is the goal of this work to supplement weak scattering model predictions with corrections derived through application of an exact computational scattering model to explicitly prescribed microstructures. The scattering problem is formulated as a volume integral equation (VIE) displaying a convolutional Green-function-derived kernel. The VIE is solved iteratively employing FFT-based convolution. Realizations of random microstructures are specified on the micron scale using statistical property descriptions (e.g. grain size and orientation distributions), which are then spatially filtered to provide rigorously equivalent scattering media on a length scale relevant to ultrasound propagation. Scattering responses from ensembles of media representations are averaged to obtain mean and variance of quantities such as attenuation and backscatter noise levels, as a function of microstructure descriptors. The computational approach will be summarized, and examples of application will be presented.

  20. Quasi-elastic neutron scattering study of a re-entrant side-chain liquid-crystal polyacrylate

    Science.gov (United States)

    Benguigui, L.; Noirez, L.; Kahn, R.; Keller, P.; Lambert, M.; Cohen de Lara, E.

    1991-04-01

    We present a first investigation of the dynamics of a side chain liquid crystal polyacrylate in the isotropic (I), nematic (N), smectic A (SA), and re-entrant nematic (NRe) phases by means of quasi-elastic neutron scattering. The motion and/or the mobility of the mesogen protons decreases as soon as the temperature decreases after the isotropic-nematic transition. The I-N and SA-NRe transitions correspond to a jump in the curve of the Elastic Incoherent Structure Factor (ratio: elastic scattering/total scattering) versus temperature; on the other hand, the transition N-SA occurs without any change of slope. We conclude that the local order is very similar in the nematic and the smectic A phases.

  1. Multiple Kernel Learning with Data Augmentation

    Science.gov (United States)

    2016-11-22

    JMLR: Workshop and Conference Proceedings 63:49–64, 2016 ACML 2016 Multiple Kernel Learning with Data Augmentation Khanh Nguyen nkhanh@deakin.edu.au...University, Australia Editors: Robert J. Durrant and Kee-Eung Kim Abstract The motivations of multiple kernel learning (MKL) approach are to increase... kernel expressiveness capacity and to avoid the expensive grid search over a wide spectrum of kernels. A large amount of work has been proposed to

  2. OS X and iOS Kernel Programming

    CERN Document Server

    Halvorsen, Ole Henry

    2011-01-01

    OS X and iOS Kernel Programming combines essential operating system and kernel architecture knowledge with a highly practical approach that will help you write effective kernel-level code. You'll learn fundamental concepts such as memory management and thread synchronization, as well as the I/O Kit framework. You'll also learn how to write your own kernel-level extensions, such as device drivers for USB and Thunderbolt devices, including networking, storage and audio drivers. OS X and iOS Kernel Programming provides an incisive and complete introduction to the XNU kernel, which runs iPhones, i

  3. Model selection for Gaussian kernel PCA denoising

    DEFF Research Database (Denmark)

    Jørgensen, Kasper Winther; Hansen, Lars Kai

    2012-01-01

    We propose kernel Parallel Analysis (kPA) for automatic kernel scale and model order selection in Gaussian kernel PCA. Parallel Analysis [1] is based on a permutation test for covariance and has previously been applied for model order selection in linear PCA, we here augment the procedure to also...... tune the Gaussian kernel scale of radial basis function based kernel PCA.We evaluate kPA for denoising of simulated data and the US Postal data set of handwritten digits. We find that kPA outperforms other heuristics to choose the model order and kernel scale in terms of signal-to-noise ratio (SNR...
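The permutation test behind Parallel Analysis can be sketched for the kernel setting: compare the eigenvalues of the centred kernel matrix of the data against those obtained after independently permuting each variable, which destroys inter-variable structure. This is a simplified illustration of the idea, not the kPA procedure of the paper; the retention rule (mean of the permuted spectra), kernel width, and data are assumptions.

```python
import numpy as np

def rbf_eigvals(X, sigma):
    # Eigenvalues of the centred Gaussian kernel matrix, descending
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2 * sigma**2))
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    return np.sort(np.linalg.eigvalsh(J @ K @ J))[::-1]

def parallel_analysis_order(X, sigma, n_perm=20, seed=0):
    # Retain components whose eigenvalue exceeds the mean eigenvalue
    # from column-wise permuted data -- a sketch of the kPA idea.
    rng = np.random.default_rng(seed)
    ev = rbf_eigvals(X, sigma)
    null = np.zeros_like(ev)
    for _ in range(n_perm):
        Xp = np.column_stack([rng.permutation(c) for c in X.T])
        null += rbf_eigvals(Xp, sigma)
    null /= n_perm
    return int(np.sum(ev > null))

rng = np.random.default_rng(2)
t = rng.normal(size=100)
# Two correlated variables plus an independent noise variable
X = np.column_stack([t, t + 0.1 * rng.normal(size=100), rng.normal(size=100)])
print(parallel_analysis_order(X, sigma=1.0))
```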

  4. Obtaining the Bidirectional Transfer Distribution Function of Isotropically Scattering Materials Using an Integrating Sphere

    Energy Technology Data Exchange (ETDEWEB)

    Jonsson, Jacob C.; Branden, Henrik

    2006-10-19

    This paper demonstrates a method to determine the bidirectional transfer distribution function (BTDF) using an integrating sphere. Information about the sample's angle-dependent scattering is obtained by making transmittance measurements with the sample at different distances from the integrating sphere. Knowledge of the illuminated area of the sample and the geometry of the sphere port, in combination with the measured data, yields a system of equations that includes the angle-dependent transmittance. The resulting system of equations is an ill-posed problem which rarely gives a physical solution. A solvable system is obtained by applying Tikhonov regularization to the ill-posed problem. The solution to this system can then be used to obtain the BTDF. Four bulk-scattering samples were characterised using both two goniophotometers and the described method to verify the validity of the new method. The agreement is excellent for the more diffuse samples. The solutions for the low-scattering samples contain unphysical oscillations, but still give the correct shape. The origin of the oscillations and why they are more prominent in low-scattering samples are discussed.
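Tikhonov regularization of an ill-posed linear system, the stabilization step the abstract describes, can be sketched in a few lines. The matrix below is a generic ill-conditioned stand-in (the paper's actual sphere-geometry operator is not given in the abstract), so all names and values here are illustrative assumptions.

```python
import numpy as np

def tikhonov_solve(A, b, alpha):
    # Minimize ||A x - b||^2 + alpha * ||x||^2 via the normal equations:
    #   (A^T A + alpha I) x = A^T b
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

# Hypothetical ill-conditioned system standing in for the measurement
# equations: a Vandermonde matrix has nearly dependent columns.
A = np.vander(np.linspace(0, 1, 8), 6, increasing=True)
x_true = np.array([1.0, -2.0, 0.5, 0.0, 0.0, 0.0])
b = A @ x_true + 1e-3 * np.random.default_rng(3).normal(size=8)

x_reg = tikhonov_solve(A, b, alpha=1e-4)
print(np.round(x_reg, 2))
```

The regularization parameter `alpha` trades data fidelity against solution smoothness; choosing it (e.g. by the L-curve or discrepancy principle) is the practical crux of such methods.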

  5. Double scattering of light from Biophotonic Nanostructures with short-range order

    Energy Technology Data Exchange (ETDEWEB)

    Noh, Heeso; Liew, Seng Fatt; Saranathan, Vinodkumar; Prum, Richard O.; Mochrie, Simon G.J.; Dufresne, Eric R.; Cao, Hui (Yale)

    2010-07-28

    We investigate the physical mechanism for color production by isotropic nanostructures with short-range order in bird feather barbs. While the primary peak in optical scattering spectra results from constructive interference of singly scattered light, many species exhibit secondary peaks with distinct characteristics. Our experimental and numerical studies show that these secondary peaks result from double scattering of light by the correlated structures. Without an analog in periodic or random structures, such a phenomenon is unique to short-range ordered structures, and has been widely used by nature for non-iridescent structural coloration.

  6. Synthetic acceleration methods for linear transport problems with highly anisotropic scattering

    International Nuclear Information System (INIS)

    Khattab, K.M.; Larsen, E.W.

    1992-01-01

    The diffusion synthetic acceleration (DSA) algorithm effectively accelerates the iterative solution of transport problems with isotropic or mildly anisotropic scattering. However, DSA loses its effectiveness for transport problems that have strongly anisotropic scattering. Two generalizations of DSA are proposed, which, for highly anisotropic scattering problems, converge at least an order of magnitude (clock time) faster than the DSA method. These two methods are developed, the results of Fourier analysis that theoretically predict their efficiency are described, and numerical results that verify the theoretical predictions are presented. (author). 10 refs., 7 figs., 5 tabs

  7. Synthetic acceleration methods for linear transport problems with highly anisotropic scattering

    International Nuclear Information System (INIS)

    Khattab, K.M.; Larsen, E.W.

    1991-01-01

    This paper reports on the diffusion synthetic acceleration (DSA) algorithm that effectively accelerates the iterative solution of transport problems with isotropic or mildly anisotropic scattering. However, DSA loses its effectiveness for transport problems that have strongly anisotropic scattering. Two generalizations of DSA are proposed, which, for highly anisotropic scattering problems, converge at least an order of magnitude (clock time) faster than the DSA method. These two methods are developed, the results of Fourier analyses that theoretically predict their efficiency are described, and numerical results that verify the theoretical predictions are presented

  8. Paramecium: An Extensible Object-Based Kernel

    NARCIS (Netherlands)

    van Doorn, L.; Homburg, P.; Tanenbaum, A.S.

    1995-01-01

    In this paper we describe the design of an extensible kernel, called Paramecium. This kernel uses an object-based software architecture which together with instance naming, late binding and explicit overrides enables easy reconfiguration. Determining which components reside in the kernel protection

  9. Modeling of high‐frequency seismic‐wave scattering and propagation using radiative transfer theory

    Science.gov (United States)

    Zeng, Yuehua

    2017-01-01

    This is a study of the nonisotropic scattering process based on radiative transfer theory and its application to the observation of the M 4.3 aftershock recording of the 2008 Wells earthquake sequence in Nevada. Given a wide range of recording distances from 29 to 320 km, the data provide a unique opportunity to discriminate scattering models based on their distance‐dependent behaviors. First, we develop a stable numerical procedure to simulate nonisotropic scattering waves based on the 3D nonisotropic scattering theory proposed by Sato (1995). By applying the simulation method to the inversion of M 4.3 Wells aftershock recordings, we find that a nonisotropic scattering model, dominated by forward scattering, provides the best fit to the observed high‐frequency direct S waves and S‐wave coda velocity envelopes. The scattering process is governed by a Gaussian autocorrelation function, suggesting a Gaussian random heterogeneous structure for the Nevada crust. The model successfully explains the common decay of seismic coda independent of source–station locations as a result of energy leaking from multiple strong forward scattering, instead of backscattering governed by the diffusion solution at large lapse times. The model also explains the pulse‐broadening effect in the high‐frequency direct and early arriving S waves, as other studies have found, and could be very important to applications of high‐frequency wave simulation in which scattering has a strong effect. We also find that regardless of its physical implications, the isotropic scattering model provides the same effective scattering coefficient and intrinsic attenuation estimates as the forward scattering model, suggesting that the isotropic scattering model is still a viable tool for the study of seismic scattering and intrinsic attenuation coefficients in the Earth.

  10. Theory of reproducing kernels and applications

    CERN Document Server

    Saitoh, Saburou

    2016-01-01

    This book provides a large extension of the general theory of reproducing kernels published by N. Aronszajn in 1950, with many concrete applications. In Chapter 1, many concrete reproducing kernels are first introduced with detailed information. Chapter 2 presents a general and global theory of reproducing kernels with basic applications in a self-contained way. Many fundamental operations among reproducing kernel Hilbert spaces are dealt with. Chapter 2 is the heart of this book. Chapter 3 is devoted to the Tikhonov regularization using the theory of reproducing kernels with applications to numerical and practical solutions of bounded linear operator equations. In Chapter 4, the numerical real inversion formulas of the Laplace transform are presented by applying the Tikhonov regularization, where the reproducing kernels play a key role in the results. Chapter 5 deals with ordinary differential equations; Chapter 6 includes many concrete results for various fundamental partial differential equations. In Chapt...

  11. Kernels for structured data

    CERN Document Server

    Gärtner, Thomas

    2009-01-01

    This book provides a unique treatment of an important area of machine learning and answers the question of how kernel methods can be applied to structured data. Kernel methods are a class of state-of-the-art learning algorithms that exhibit excellent learning results in several application domains. Originally, kernel methods were developed with data in mind that can easily be embedded in a Euclidean vector space. Much real-world data does not have this property but is inherently structured. An example of such data, often consulted in the book, is the (2D) graph structure of molecules formed by

  12. Neutron Scattering from Heisenberg Ferromagnets EuO and EuS

    DEFF Research Database (Denmark)

    Als-Nielsen, Jens Aage; Dietrich, O. W.; Passell, L.

    1976-01-01

    Neutron scattering has been used to study the magnetic ordering process in the isotropic exchange coupled ferromagnets EuO and EuS. Quantities investigated include the critical coefficients B and F+ and the critical exponents β, ν, and γ describing respectively the temperature dependence...

  13. 7 CFR 981.401 - Adjusted kernel weight.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Adjusted kernel weight. 981.401 Section 981.401... Administrative Rules and Regulations § 981.401 Adjusted kernel weight. (a) Definition. Adjusted kernel weight... kernels in excess of five percent; less shells, if applicable; less processing loss of one percent for...

  14. Testing Infrastructure for Operating System Kernel Development

    DEFF Research Database (Denmark)

    Walter, Maxwell; Karlsson, Sven

    2014-01-01

    Testing is an important part of system development, and to test effectively we require knowledge of the internal state of the system under test. Testing an operating system kernel is a challenge as it is the operating system that typically provides access to this internal state information. Multi......-core kernels pose an even greater challenge due to concurrency and their shared kernel state. In this paper, we present a testing framework that addresses these challenges by running the operating system in a virtual machine, and using virtual machine introspection to both communicate with the kernel...... and obtain information about the system. We have also developed an in-kernel testing API that we can use to develop a suite of unit tests in the kernel. We are using our framework for the development of our own multi-core research kernel....

  15. Resonance scattering of Rayleigh waves by a mass defect

    International Nuclear Information System (INIS)

    Croitoru, M.; Grecu, D.

    1978-06-01

    The resonance scattering of an incident Rayleigh wave by a mass defect extending over a small cylindrical region situated in the surface of a semi-infinite isotropic, elastic medium is investigated by means of the Green's function method. The form of the differential cross-section for the scattering into different channels exhibits a strong resonance phenomenon at two frequencies. The expression of the resonance frequencies as well as of the corresponding widths depends on the relative change in mass density. The main assumption, that the wavelengths of the incoming and scattered waves are large compared to the defect dimension, implies a large relative mass-density change. (author)

  16. 7 CFR 51.1403 - Kernel color classification.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 2 2010-01-01 2010-01-01 false Kernel color classification. 51.1403 Section 51.1403... STANDARDS) United States Standards for Grades of Pecans in the Shell 1 Kernel Color Classification § 51.1403 Kernel color classification. (a) The skin color of pecan kernels may be described in terms of the color...

  17. Platonic scattering cancellation for bending waves in a thin plate

    KAUST Repository

    Farhat, Mohamed

    2014-04-10

    We propose an ultra-thin elastic cloak to control the scattering of bending waves in isotropic heterogeneous thin plates. The cloak design makes use of the scattering cancellation technique applied, for the first time, to the biharmonic operator describing the propagation of bending waves in thin plates. We first analyze scattering from hard and soft cylindrical objects in the quasistatic limit, then we prove that the scattering of bending waves from an object in the near and far-field regions can be suppressed significantly by covering it with a suitably designed coating. Beyond camouflaging, these findings may have potential applications in protection of buildings from earthquakes and isolating structures from vibrations in the motor vehicle industry.

  18. Platonic scattering cancellation for bending waves in a thin plate

    KAUST Repository

    Farhat, Mohamed; Chen, P.-Y.; Bagci, Hakan; Enoch, S.; Guenneau, S.; Alù, A.

    2014-01-01

    We propose an ultra-thin elastic cloak to control the scattering of bending waves in isotropic heterogeneous thin plates. The cloak design makes use of the scattering cancellation technique applied, for the first time, to the biharmonic operator describing the propagation of bending waves in thin plates. We first analyze scattering from hard and soft cylindrical objects in the quasistatic limit, then we prove that the scattering of bending waves from an object in the near and far-field regions can be suppressed significantly by covering it with a suitably designed coating. Beyond camouflaging, these findings may have potential applications in protection of buildings from earthquakes and isolating structures from vibrations in the motor vehicle industry.

  19. Optimized cylindrical invisibility cloak with minimum layers of non-magnetic isotropic materials

    International Nuclear Information System (INIS)

    Yu Zhenzhong; Feng Yijun; Xu Xiaofei; Zhao Junming; Jiang Tian

    2011-01-01

    We present an optimized design of a cylindrical invisibility cloak with a minimum number of layers of non-magnetic isotropic materials. Through an optimization procedure based on a genetic algorithm, a simpler cloak structure and more realizable material parameters can be achieved, with better cloaking performance than that of an ideal non-magnetic cloak with a reduced set of parameters. We demonstrate that a cloak shell with only five layers of two normal materials can result in an average 20 dB reduction in the scattering width in all directions when the inner conducting cylinder is covered with the cloak. The optimized design can substantially simplify the realization of the invisibility cloak, especially in the optical range.

  20. Influence of stacking sequence on scattering characteristics of the fundamental anti-symmetric Lamb wave at through holes in composite laminates.

    Science.gov (United States)

    Veidt, Martin; Ng, Ching-Tai

    2011-03-01

    This paper investigates the scattering characteristics of the fundamental anti-symmetric (A(0)) Lamb wave at through holes in composite laminates. Three-dimensional (3D) finite element (FE) simulations and experimental measurements are used to study the physical phenomenon. Unidirectional, bidirectional, and quasi-isotropic composite laminates are considered in the study. The influence of different hole diameter to wavelength aspect ratios and different stacking sequences on wave scattering characteristics are investigated. The results show that amplitudes and directivity distribution of the scattered Lamb wave depend on these parameters. In the case of quasi-isotropic composite laminates, the scattering directivity patterns are dominated by the fiber orientation of the outer layers and are quite different for composite laminates with the same number of laminae but different stacking sequence. The study provides improved physical insight into the scattering phenomena at through holes in composite laminates, which is essential to develop, validate, and optimize guided wave damage detection and characterization techniques. © 2011 Acoustical Society of America

  1. The definition of kernel Oz

    OpenAIRE

    Smolka, Gert

    1994-01-01

    Oz is a concurrent language providing for functional, object-oriented, and constraint programming. This paper defines Kernel Oz, a semantically complete sublanguage of Oz. It was an important design requirement that Oz be definable by reduction to a lean kernel language. The definition of Kernel Oz introduces three essential abstractions: the Oz universe, the Oz calculus, and the actor model. The Oz universe is a first-order structure defining the values and constraints Oz computes with. The ...

  2. Fabrication of Uranium Oxycarbide Kernels for HTR Fuel

    International Nuclear Information System (INIS)

    Barnes, Charles; Richardson, Clay; Nagley, Scott; Hunn, John; Shaber, Eric

    2010-01-01

Babcock and Wilcox (B&W) has been producing high-quality uranium oxycarbide (UCO) kernels for Advanced Gas Reactor (AGR) fuel tests at the Idaho National Laboratory. In 2005, 350-μm, 19.7% 235U-enriched UCO kernels were produced for the AGR-1 test fuel. Following coating of these kernels and forming of the coated particles into compacts, this fuel was irradiated in the Advanced Test Reactor (ATR) from December 2006 until November 2009. B&W produced 425-μm, 14%-enriched UCO kernels in 2008, and these kernels were used to produce fuel for the AGR-2 experiment that was inserted in ATR in 2010. B&W also produced 500-μm, 9.6%-enriched UO2 kernels for the AGR-2 experiments. Kernels of the same size and enrichment as AGR-1 were also produced for the AGR-3/4 experiment. In addition to fabricating enriched UCO and UO2 kernels, B&W has produced more than 100 kg of natural-uranium UCO kernels, which are being used in coating development tests. Successive lots of kernels have demonstrated consistently high quality and have also allowed for fabrication process improvements. Improvements in kernel forming were made subsequent to AGR-1 kernel production. Following fabrication of AGR-2 kernels, incremental increases in sintering furnace charge size have been demonstrated. Recently, small-scale sintering tests using a small development furnace equipped with a residual gas analyzer (RGA) have increased understanding of how kernel sintering parameters affect sintered kernel properties. The steps taken to increase throughput and process knowledge have reduced kernel production costs. Additional modifications have been studied with the goal of increasing the capacity of the current fabrication line for production of first-core fuel for the Next Generation Nuclear Plant (NGNP) and providing a basis for the design of a full-scale fuel fabrication facility.

  3. Object classification and detection with context kernel descriptors

    DEFF Research Database (Denmark)

    Pan, Hong; Olsen, Søren Ingvor; Zhu, Yaping

    2014-01-01

Context information is important in object representation. By embedding context cues of image attributes into kernel descriptors, we propose a set of novel kernel descriptors called Context Kernel Descriptors (CKD) for object classification and detection. The motivation of CKD is to use spatial consistency of image attributes or features defined within a neighboring region to improve the robustness of descriptor matching in kernel space. For feature selection, Kernel Entropy Component Analysis (KECA) is exploited to learn a subset of discriminative CKD. Different from Kernel Principal Component...

  4. Ranking Support Vector Machine with Kernel Approximation.

    Science.gov (United States)

    Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi

    2017-01-01

Learning to rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, computational biology, and other areas. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been widely used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, learning methods for nonlinear RankSVM remain time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation that avoids computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that the proposed method trains much faster than kernel RankSVM and achieves comparable or better performance than state-of-the-art ranking algorithms.
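
    As a sketch of the kind of approximation the abstract describes (not the authors' implementation; the RBF kernel, landmark count, and data below are illustrative assumptions), the Nyström method replaces the full n × n kernel matrix with features built from a small set of landmark rows:

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(A, B, gamma=0.1):
    """Gaussian RBF kernel matrix between the rows of A and the rows of B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

X = rng.normal(size=(200, 5))                 # n = 200 toy training points
m = 40                                        # number of Nystroem landmarks
idx = rng.choice(len(X), size=m, replace=False)
landmarks = X[idx]

K_mm = rbf_kernel(landmarks, landmarks)       # m x m landmark block
K_nm = rbf_kernel(X, landmarks)               # n x m cross block

# Feature map Z with Z @ Z.T ~= K, built from the pseudo-inverse
# square root of the landmark block K_mm.
w, V = np.linalg.eigh(K_mm)
w = np.maximum(w, 1e-10)                      # guard against round-off
Z = K_nm @ (V / np.sqrt(w)) @ V.T             # n x m approximate features

rel_err = (np.linalg.norm(rbf_kernel(X, X) - Z @ Z.T)
           / np.linalg.norm(rbf_kernel(X, X)))
```

    A linear ranker trained on the rows of `Z` then behaves approximately like a kernel ranker, at a fraction of the cost of forming the full kernel matrix.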

  6. Dose point kernels for beta-emitting radioisotopes

    International Nuclear Information System (INIS)

    Prestwich, W.V.; Chan, L.B.; Kwok, C.S.; Wilson, B.

    1986-01-01

Knowledge of the dose point kernel corresponding to a specific radionuclide is required to calculate the spatial dose distribution produced in a homogeneous medium by a distributed source. Dose point kernels for commonly used radionuclides have been calculated previously using as a basis monoenergetic dose point kernels derived by numerical integration of a model transport equation. That treatment neglects fluctuations in energy deposition, an effect which was later incorporated in dose point kernels calculated using Monte Carlo methods. This work describes new calculations of dose point kernels using the Monte Carlo results as a basis. An analytic representation of the monoenergetic dose point kernels has been developed. This provides a convenient method both for calculating the dose point kernel associated with a given beta spectrum and for incorporating the effect of internal conversion. An algebraic expression for allowed beta spectra has been obtained through an extension of the Bethe-Bacher approximation and tested against the exact expression. Simplified expressions for first-forbidden shape factors have also been developed. A comparison of the calculated dose point kernel for 32P with experimental data indicates good agreement, with a significant improvement over the earlier results in this respect. An analytic representation of the dose point kernel associated with the spectrum of a single beta group has been formulated. 9 references, 16 figures, 3 tables

  7. Rare variant testing across methods and thresholds using the multi-kernel sequence kernel association test (MK-SKAT).

    Science.gov (United States)

    Urrutia, Eugene; Lee, Seunggeun; Maity, Arnab; Zhao, Ni; Shen, Judong; Li, Yun; Wu, Michael C

Analysis of rare genetic variants has focused on region-based analysis, wherein a subset of the variants within a genomic region is tested for association with a complex trait. Two important practical challenges have emerged. First, it is difficult to choose which test to use. Second, it is unclear which group of variants within a region should be tested. Both choices depend on the unknown true state of nature. Therefore, we develop the Multi-Kernel SKAT (MK-SKAT), which tests across a range of rare variant tests and groupings. Specifically, we demonstrate that several popular rare variant tests are special cases of the sequence kernel association test, which compares pairwise similarity in trait value to similarity in the rare variant genotypes between subjects as measured through a kernel function. Choosing a particular test is equivalent to choosing a kernel. Similarly, choosing which group of variants to test also reduces to choosing a kernel. Thus, MK-SKAT uses perturbation to test across a range of kernels. Simulations and real data analyses show that our framework controls type I error while maintaining high power across settings: MK-SKAT sacrifices some power relative to the single best kernel for a particular scenario but has much greater power than poor kernel choices.
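
    The kernel-as-test-choice idea above can be illustrated with a minimal toy sketch (not the MK-SKAT code; the weighted linear kernel, the intercept-only null model, and all data below are illustrative textbook assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

n, p = 100, 8                       # subjects, rare variants in the region
G = rng.binomial(2, 0.03, size=(n, p)).astype(float)   # toy genotype matrix
y = rng.normal(size=n)              # continuous trait
maf = G.mean(axis=0) / 2 + 1e-6     # minor allele frequencies

def weighted_linear_kernel(G, w):
    """K[i, j] = similarity of subjects i and j in weighted genotype space."""
    Gw = G * w                      # scale each variant by its weight
    return Gw @ Gw.T

# Two kernel choices = two different rare-variant tests
w_flat = np.ones(p)                           # unweighted kernel
w_maf = 1.0 / np.sqrt(maf * (1 - maf))        # upweight rarer variants

def score_statistic(y, K):
    """SKAT-style quadratic form Q = r' K r, residuals from the null model."""
    r = y - y.mean()                # null model here: intercept only
    return float(r @ K @ r)

Q_flat = score_statistic(y, weighted_linear_kernel(G, w_flat))
Q_maf = score_statistic(y, weighted_linear_kernel(G, w_maf))
```

    Each weighting (and each grouping of variants) yields a different kernel and hence a different test statistic; MK-SKAT's contribution is a principled way to test across such a family at once.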

  8. Characterization of the angular memory effect of scattered light in biological tissues.

    Science.gov (United States)

    Schott, Sam; Bertolotti, Jacopo; Léger, Jean-Francois; Bourdieu, Laurent; Gigan, Sylvain

    2015-05-18

    High resolution optical microscopy is essential in neuroscience but suffers from scattering in biological tissues and therefore grants access to superficial brain layers only. Recently developed techniques use scattered photons for imaging by exploiting angular correlations in transmitted light and could potentially increase imaging depths. But those correlations ('angular memory effect') are of a very short range and should theoretically be only present behind and not inside scattering media. From measurements on neural tissues and complementary simulations, we find that strong forward scattering in biological tissues can enhance the memory effect range and thus the possible field-of-view by more than an order of magnitude compared to isotropic scattering for ∼1 mm thick tissue layers.

  9. Wigner functions defined with Laplace transform kernels.

    Science.gov (United States)

    Oh, Se Baek; Petruccelli, Jonathan C; Tian, Lei; Barbastathis, George

    2011-10-24

We propose a new Wigner-type phase-space function using Laplace transform kernels: the Laplace kernel Wigner function. Whereas momentum variables are real in the traditional Wigner function, the Laplace kernel Wigner function may have complex momentum variables. Due to the properties of the Laplace transform, a broader range of signals can be represented in complex phase space. We show that the Laplace kernel Wigner function exhibits marginal properties similar to those of the traditional Wigner function. As an example, we use the Laplace kernel Wigner function to analyze evanescent waves supported by surface plasmon polaritons. © 2011 Optical Society of America

  10. Metabolic network prediction through pairwise rational kernels.

    Science.gov (United States)

    Roche-Lima, Abiel; Domaratzki, Michael; Fristensky, Brian

    2014-09-26

Metabolic networks are represented by the set of metabolic pathways. Metabolic pathways are series of biochemical reactions in which the product (output) of one reaction serves as the substrate (input) to another. Many pathways remain incompletely characterized, and one of the major challenges of computational biology is to obtain better models of metabolic pathways. Existing models depend on the annotation of the genes, which propagates errors when pathways are predicted from incorrectly annotated genes. Pairwise classification methods are supervised learning methods used to classify new pairs of entities. Some of these classification methods, e.g., pairwise support vector machines (SVMs), use pairwise kernels, which describe similarity measures between two pairs of entities. Using pairwise kernels to handle sequence data requires long processing times and large storage. Rational kernels are kernels based on weighted finite-state transducers that represent similarity measures between sequences or automata. They have been used effectively in problems that handle large amounts of sequence information, such as protein essentiality, natural language processing, and machine translation. We create a new family of pairwise kernels using weighted finite-state transducers, called Pairwise Rational Kernels (PRK), to predict metabolic pathways from a variety of biological data. PRKs take advantage of the simpler representations and faster algorithms of transducers. Because raw sequence data can be used, the predictor model avoids the errors introduced by incorrect gene annotations. We then developed several experiments with PRKs and pairwise SVMs to validate our methods using the metabolic network of Saccharomyces cerevisiae. As a result, when PRKs are used, our method executes faster in comparison with other pairwise kernels. Also, when we use PRKs combined with other simple kernels that include evolutionary information, the accuracy
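
    A minimal sketch of the pairwise-kernel construction the abstract builds on (illustrative only: a toy k-mer spectrum kernel stands in for a true transducer-based rational kernel, and all sequences are made up):

```python
from collections import Counter

def spectrum_kernel(s, t, k=2):
    """Toy stand-in for a rational kernel: dot product of k-mer counts."""
    cs = Counter(s[i:i + k] for i in range(len(s) - k + 1))
    ct = Counter(t[i:i + k] for i in range(len(t) - k + 1))
    return sum(cs[m] * ct[m] for m in cs)

def pairwise_kernel(pair1, pair2, base=spectrum_kernel):
    """Symmetrized tensor-product pairwise kernel over pairs of sequences."""
    (a, b), (c, d) = pair1, pair2
    return base(a, c) * base(b, d) + base(a, d) * base(b, c)

# Similarity between two candidate entity pairs (toy sequences)
k_val = pairwise_kernel(("ACGTAC", "GGTACC"), ("ACGTTC", "GGTTCC"))
```

    A pairwise SVM then classifies candidate pairs (here, pathway links) using `pairwise_kernel` as its kernel; the paper's contribution is replacing the base kernel with efficient transducer-based rational kernels over raw sequences.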

  11. UAV remote sensing atmospheric degradation image restoration based on multiple scattering APSF estimation

    Science.gov (United States)

    Qiu, Xiang; Dai, Ming; Yin, Chuan-li

    2017-09-01

Unmanned aerial vehicle (UAV) remote imaging is affected by bad weather, and the obtained images suffer from low contrast, complex texture, and blurring. In this paper, we propose a blind deconvolution model based on multiple-scattering atmosphere point spread function (APSF) estimation to recover the remote sensing image. Following Narasimhan's analytical theory, a new multiple-scattering restoration model is established based on the improved dichromatic model. The APSF blur kernel is then estimated using L0-norm sparse priors on the gradient and the dark channel, and the fast Fourier transform is used to recover the original clear image by Wiener filtering. Compared with other state-of-the-art methods, the proposed method correctly estimates the blur kernel, effectively removes atmospheric degradation, preserves image detail, and improves the quality evaluation indexes.
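
    The final restoration step, Wiener filtering in the frequency domain, can be sketched as follows (a toy that assumes a known Gaussian PSF rather than the paper's estimated APSF; the scene, PSF width, and noise-to-signal ratio are illustrative):

```python
import numpy as np

def gaussian_psf(shape, sigma):
    """Normalized Gaussian PSF, shifted so its peak sits at pixel (0, 0)."""
    n, m = shape
    y, x = np.mgrid[:n, :m]
    g = np.exp(-((y - n // 2) ** 2 + (x - m // 2) ** 2) / (2 * sigma ** 2))
    g /= g.sum()
    return np.roll(g, (-(n // 2), -(m // 2)), axis=(0, 1))

def wiener_deconvolve(blurred, psf, nsr=1e-3):
    """Frequency-domain Wiener filter H* / (|H|^2 + NSR) applied to the image."""
    H = np.fft.fft2(psf)
    G = np.fft.fft2(blurred)
    return np.real(np.fft.ifft2(np.conj(H) / (np.abs(H) ** 2 + nsr) * G))

# Toy scene: smooth bright blob, blurred by the (assumed known) PSF
n = 64
y, x = np.mgrid[:n, :n]
scene = np.exp(-((y - 40) ** 2 + (x - 24) ** 2) / (2 * 5.0 ** 2))

psf = gaussian_psf((n, n), sigma=2.0)
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(psf)))
restored = wiener_deconvolve(blurred, psf)

err = np.abs(restored - scene).max()
```

    In the paper's pipeline the PSF is not known but estimated blindly from the degraded image via the sparse priors; the Wiener step itself is the same ratio in the Fourier domain.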

  12. Absorbed dose kernel and self-shielding calculations for a novel radiopaque glass microsphere for transarterial radioembolization.

    Science.gov (United States)

    Church, Cody; Mawko, George; Archambault, John Paul; Lewandowski, Robert; Liu, David; Kehoe, Sharon; Boyd, Daniel; Abraham, Robert; Syme, Alasdair

    2018-02-01

Radiopaque microspheres may provide intraprocedural and postprocedural feedback during transarterial radioembolization (TARE). Furthermore, the potential to use higher-resolution x-ray imaging techniques as opposed to nuclear medicine imaging suggests that significant improvements in the accuracy and precision of radiation dosimetry calculations could be realized for this type of therapy. This study investigates the absorbed dose kernel for novel radiopaque microspheres, including contributions of both short- and long-lived contaminant radionuclides, while concurrently quantifying the self-shielding of the glass network. Monte Carlo simulations using EGSnrc were performed to determine the dose kernels for all monoenergetic electron emissions and all beta spectra for radionuclides reported in a neutron activation study of the microspheres. Simulations were benchmarked against an accepted 90Y dose point kernel. Self-shielding was quantified for the microspheres by simulating an isotropically emitting, uniformly distributed source in glass and in water; the ratio of the absorbed doses was scored as a function of distance from a microsphere. The absorbed dose kernel for the microspheres was calculated for (a) two bead formulations following (b) two different durations of neutron activation, at (c) various time points following activation. Self-shielding varies with time post-removal from the reactor. At early time points it is less pronounced due to the higher energies of the emissions: on the order of 0.4-2.8% at a radial distance of 5.43 mm as the diameter increases from 10 to 50 μm, during the time that the microspheres would be administered to a patient. At long time points, self-shielding is more pronounced and can reach values in excess of 20% near the end of the range of the emissions. Absorbed dose kernels for 90Y, 90mY, 85mSr, 85Sr, 87mSr, 89Sr, 70Ga, 72Ga, and 31Si are presented and used to determine an overall kernel for the

  13. Influence Function and Robust Variant of Kernel Canonical Correlation Analysis

    OpenAIRE

    Alam, Md. Ashad; Fukumizu, Kenji; Wang, Yu-Ping

    2017-01-01

    Many unsupervised kernel methods rely on the estimation of the kernel covariance operator (kernel CO) or kernel cross-covariance operator (kernel CCO). Both kernel CO and kernel CCO are sensitive to contaminated data, even when bounded positive definite kernels are used. To the best of our knowledge, there are few well-founded robust kernel methods for statistical unsupervised learning. In addition, while the influence function (IF) of an estimator can characterize its robustness, asymptotic ...

  14. Small-angle neutron scattering in materials science - an introduction

    International Nuclear Information System (INIS)

    Fratzl, P.

    1996-01-01

The basic principles of the application of small-angle neutron scattering to materials research are summarized. The text focuses on the classical methods of data evaluation for isotropic and for anisotropic materials. Some examples of applications to the study of alloys, porous materials, composites and other complex materials are given. (author) 9 figs., 38 refs
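
    One of the classical evaluation methods alluded to above is Guinier analysis: at small q, ln I(q) is linear in q² with slope -Rg²/3, giving the radius of gyration. A toy sketch on simulated data (the intensity model, noise level, and the usual qRg < 1.3 validity cutoff are textbook assumptions, not this article's data):

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated small-angle scattering curve I(q) = I0 * exp(-q^2 Rg^2 / 3)
Rg_true, I0 = 30.0, 1000.0            # toy radius of gyration (angstrom)
q = np.linspace(0.005, 0.04, 40)      # scattering vector (1/angstrom)
I = I0 * np.exp(-(q * Rg_true) ** 2 / 3) * rng.normal(1.0, 0.01, size=q.size)

# Guinier analysis: fit ln I(q) vs q^2, slope = -Rg^2 / 3
mask = q * Rg_true < 1.3              # classical validity range (toy Rg known)
slope, intercept = np.polyfit(q[mask] ** 2, np.log(I[mask]), 1)
Rg_fit = np.sqrt(-3 * slope)
```

    In practice the validity range must be chosen iteratively, since Rg is not known in advance.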

  15. Solution of the radiative heat transfer equation with internal energy sources in a slab by the GFD method for anisotropic albedo

    Energy Technology Data Exchange (ETDEWEB)

    Azevedo, Fabio Souto de, E-mail: fabio.azevedo@ufrgs.b [Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre, RS (Brazil). Inst. de Matematica; Sauter, Esequia, E-mail: esequia.sauter@canoas.ifrs.edu.b [Instituto Federal do Rio Grande do Sul (IFRS), Canoas, RS (Brazil); Thompson, Mark; Vilhena, Marco Tulio B., E-mail: mark.thompson@mat.ufrgs.b, E-mail: vilhena@mat.ufrgs.b [Universidade Federal do Rio Grande do Sul (UFRGS), Porto Alegre, RS (Brazil). Programa de Pos-Graduacao em Matematica Aplicada

    2011-07-01

In this work we apply the Green Function Decomposition method to the radiative transport equation in a slab. The method consists in converting the radiative transport equation into an integral equation and projecting the integral operators involved onto a finite-dimensional space. This methodology does not involve an a priori discretization of the angular variable μ, requiring only that the kernel be numerically integrated in μ. Numerical results are provided for isotropic, linearly anisotropic, and Rayleigh scattering near unitary albedo. (author)

  16. The Linux kernel as flexible product-line architecture

    NARCIS (Netherlands)

    M. de Jonge (Merijn)

    2002-01-01

The Linux kernel source tree is huge (> 125 MB) and inflexible (because it is difficult to add new kernel components). We propose to make this architecture more flexible by assembling kernel source trees dynamically from individual kernel components. Users can then select what

  17. Exploiting graph kernels for high performance biomedical relation extraction.

    Science.gov (United States)

    Panyam, Nagesh C; Verspoor, Karin; Cohn, Trevor; Ramamohanarao, Kotagiri

    2018-01-30

Relation extraction from biomedical publications is an important task in the area of semantic mining of text. Kernel methods for supervised relation extraction are often preferred over manual feature engineering methods when classifying highly ordered structures such as trees and graphs obtained from syntactic parsing of a sentence. Tree kernels such as the Subset Tree Kernel and Partial Tree Kernel have been shown to be effective for classifying constituency parse trees and basic dependency parse graphs of a sentence. Graph kernels such as the All Path Graph (APG) kernel and the Approximate Subgraph Matching (ASM) kernel have been shown to be suitable for classifying general graphs with cycles, such as the enhanced dependency parse graph of a sentence. In this work, we present a high performance Chemical-Induced Disease (CID) relation extraction system. We present a comparative study of kernel methods for the CID task and also extend our study to the Protein-Protein Interaction (PPI) extraction task, an important biomedical relation extraction task. We discuss novel modifications to the ASM kernel to boost its performance and a method to apply graph kernels for extracting relations expressed in multiple sentences. Our system for CID relation extraction attains an F-score of 60% without using external knowledge sources or task-specific heuristics or rules. In comparison, the state-of-the-art Chemical-Disease Relation Extraction system achieves an F-score of 56% using an ensemble of multiple machine learning methods, which is then boosted to 61% with a rule-based system employing task-specific post-processing rules. For the CID task, graph kernels outperform tree kernels substantially, and the best performance is obtained with the APG kernel, which attains an F-score of 60%, followed by the ASM kernel at 57%. The performance difference between the ASM and APG kernels for CID sentence-level relation extraction is not significant. In our evaluation of ASM for the PPI task, ASM

  18. Uniqueness in inverse elastic scattering with finitely many incident waves

    International Nuclear Information System (INIS)

    Elschner, Johannes; Yamamoto, Masahiro

    2009-01-01

    We consider the third and fourth exterior boundary value problems of linear isotropic elasticity and present uniqueness results for the corresponding inverse scattering problems with polyhedral-type obstacles and a finite number of incident plane elastic waves. Our approach is based on a reflection principle for the Navier equation. (orig.)

  19. Fracture analysis of a transversely isotropic high temperature superconductor strip based on real fundamental solutions

    International Nuclear Information System (INIS)

    Gao, Zhiwen; Zhou, Youhe

    2015-01-01

Highlights: • We studied the fracture problem in HTS based on real fundamental solutions. • When the thickness of the HTS strip increases, the SIF decreases. • A higher applied field leads to a larger stress intensity factor. • The greater the critical current density, the smaller the SIF. - Abstract: A real fundamental solution for the fracture problem of a transversely isotropic high temperature superconductor (HTS) strip is obtained. The superconductor E–J constitutive law is characterized by the Bean model, in which the critical current density is independent of the flux density. Fracture analysis is performed by the method of singular integral equations, which are solved numerically by the Gauss–Lobatto–Chebyshev (GSL) collocation method. To guarantee satisfactory accuracy, the convergence behavior of the kernel function is investigated. Numerical results for the fracture parameters are obtained, and the effects of the geometric characteristics, applied magnetic field, and critical current density on the stress intensity factors (SIF) are discussed

  20. TEMPS, 1-Group Time-Dependent Pulsed Source Neutron Transport

    International Nuclear Information System (INIS)

    Ganapol, B.D.

    1988-01-01

1 - Description of program or function: TEMPS numerically determines the scalar flux as given by the one-group neutron transport equation with a pulsed source in an infinite medium. Standard plane, point, and line sources are considered, as well as a volume source in the negative half-space in plane geometry. The angular distribution of emitted neutrons can be either isotropic or mono-directional (beam) in plane geometry, and isotropic in spherical and cylindrical geometry. A general anisotropic scattering kernel represented in terms of Legendre polynomials can be accommodated, with a time-dependent number of secondaries given by c(t) = c_0(t/t_0)^β, where -1 < β < ∞. TEMPS is designed to provide the flux to a high degree of accuracy (4-5 digits) for use as a benchmark against which results from other numerical solutions or approximations can be compared. 2 - Method of solution: A semi-analytic method of solution is followed. The main feature of this approach is that no discretization of the transport or scattering operators is employed. The numerical solution involves the evaluation of an analytical representation of the solution by standard numerical techniques. The transport equation is first reformulated in terms of multiple collisions, with the flux represented by an infinite series of collisional components. Each component is then represented by an orthogonal Legendre series expansion in the variable x/t, where the distance x and time t are measured in mean free paths and mean free times, respectively. The moments in the Legendre reconstruction are found from an algebraic recursion relation obtained from a Legendre expansion in the direction variable μ. The multiple collision series is evaluated first to a prescribed relative error determined by the number of digits desired in the scalar flux. If the Legendre series fails to converge in the plane or point source case, an accelerative transformation, based on removing the

  1. GRIM : Leveraging GPUs for Kernel integrity monitoring

    NARCIS (Netherlands)

    Koromilas, Lazaros; Vasiliadis, Giorgos; Athanasopoulos, Ilias; Ioannidis, Sotiris

    2016-01-01

Kernel rootkits can exploit an operating system and enable future accessibility and control, despite all recent advances in software protection. A promising defense mechanism against rootkits are Kernel Integrity Monitor (KIM) systems, which inspect the kernel text and data to discover any malicious

  2. 7 CFR 51.2296 - Three-fourths half kernel.

    Science.gov (United States)

    2010-01-01

Section 51.2296, Agriculture Regulations of the Department of Agriculture, AGRICULTURAL MARKETING SERVICE (Standards...): Three-fourths half kernel means a portion of a half of a kernel which has more than...

  3. Examining Potential Boundary Bias Effects in Kernel Smoothing on Equating: An Introduction for the Adaptive and Epanechnikov Kernels.

    Science.gov (United States)

    Cid, Jaime A; von Davier, Alina A

    2015-05-01

    Test equating is a method of making the test scores from different test forms of the same assessment comparable. In the equating process, an important step involves continuizing the discrete score distributions. In traditional observed-score equating, this step is achieved using linear interpolation (or an unscaled uniform kernel). In the kernel equating (KE) process, this continuization process involves Gaussian kernel smoothing. It has been suggested that the choice of bandwidth in kernel smoothing controls the trade-off between variance and bias. In the literature on estimating density functions using kernels, it has also been suggested that the weight of the kernel depends on the sample size, and therefore, the resulting continuous distribution exhibits bias at the endpoints, where the samples are usually smaller. The purpose of this article is (a) to explore the potential effects of atypical scores (spikes) at the extreme ends (high and low) on the KE method in distributions with different degrees of asymmetry using the randomly equivalent groups equating design (Study I), and (b) to introduce the Epanechnikov and adaptive kernels as potential alternative approaches to reducing boundary bias in smoothing (Study II). The beta-binomial model is used to simulate observed scores reflecting a range of different skewed shapes.
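
    The continuization step described above amounts to kernel smoothing of a score distribution. A minimal sketch contrasting the Gaussian kernel with the compact-support Epanechnikov kernel discussed in the abstract (toy data, not the KE software; the bandwidth is an arbitrary illustrative choice):

```python
import numpy as np

def epanechnikov(u):
    """Epanechnikov kernel: 0.75 * (1 - u^2) on |u| <= 1, zero outside."""
    return np.where(np.abs(u) <= 1, 0.75 * (1 - u ** 2), 0.0)

def gaussian(u):
    """Standard Gaussian kernel."""
    return np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)

def kde(grid, samples, h, kernel):
    """Kernel density estimate on `grid` with bandwidth h."""
    u = (grid[:, None] - samples[None, :]) / h
    return kernel(u).mean(axis=1) / h

rng = np.random.default_rng(2)
scores = rng.normal(loc=20, scale=4, size=500)    # mock test-score sample
grid = np.linspace(0, 40, 801)

f_epa = kde(grid, scores, h=1.5, kernel=epanechnikov)
f_gau = kde(grid, scores, h=1.5, kernel=gaussian)

# Both estimates integrate to ~1; the Epanechnikov estimate places no mass
# further than h from any observed score, which limits boundary leakage.
dx = grid[1] - grid[0]
area_epa = f_epa.sum() * dx
area_gau = f_gau.sum() * dx
```

    The adaptive kernel of Study II would additionally let h vary with the local sample density near the score-scale endpoints.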

  4. Adaptive Kernel in Meshsize Boosting Algorithm in KDE ...

    African Journals Online (AJOL)

This paper proposes the use of an adaptive kernel in a meshsize boosting algorithm for kernel density estimation. The algorithm is a bias reduction scheme, like other existing schemes, but uses an adaptive kernel instead of the regular fixed kernels. An empirical study of this scheme is conducted and the findings are comparatively ...

  5. A kernel version of multivariate alteration detection

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Vestergaard, Jacob Schack

    2013-01-01

Based on the established methods of kernel canonical correlation analysis and multivariate alteration detection, we introduce a kernel version of multivariate alteration detection (kMAD). A case study with SPOT HRV data shows that the kMAD variates focus on extreme change observations.

  6. Intra-connected three-dimensionally isotropic bulk negative index photonic metamaterial

    International Nuclear Information System (INIS)

    Guney, Durdu; Koschny, Thomas; Soukoulis, Costas

    2010-01-01

Isotropic negative index metamaterials (NIMs) are highly desired, particularly for the realization of ultra-high-resolution lenses. However, existing isotropic NIMs function only two-dimensionally and cannot be miniaturized beyond microwaves. Direct laser writing processes can be a paradigm shift toward the fabrication of three-dimensionally (3D) isotropic bulk optical metamaterials, but only at the expense of an additional design constraint, namely connectivity. Here, we demonstrate with a proof-of-principle design that the connectivity requirement does not preclude fully isotropic left-handed behavior. This is an important step towards the realization of bulk 3D isotropic NIMs at optical wavelengths.

  7. Implementing Kernel Methods Incrementally by Incremental Nonlinear Projection Trick.

    Science.gov (United States)

    Kwak, Nojun

    2016-05-20

Recently, the nonlinear projection trick (NPT) was introduced, enabling direct computation of the coordinates of samples in a reproducing kernel Hilbert space. With NPT, any machine learning algorithm can be extended to a kernel version without relying on the so-called kernel trick. However, NPT is inherently difficult to implement incrementally, because an ever-growing kernel matrix must be treated as additional training samples are introduced. In this paper, an incremental version of the NPT (INPT) is proposed based on the observation that the centerization step in NPT is unnecessary. Because the proposed INPT does not change the coordinates of the old data, the coordinates obtained by INPT can directly be used in any incremental method to implement a kernel version of that method. The effectiveness of the INPT is shown by applying it to implement incremental versions of kernel methods such as kernel singular value decomposition, kernel principal component analysis, and kernel discriminant analysis, which are utilized for problems of kernel matrix reconstruction, letter classification, and face image retrieval, respectively.
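
    The (non-incremental) projection idea itself can be sketched in a few lines: eigendecompose the kernel matrix and read off explicit sample coordinates whose inner products reproduce the kernel, so any linear algorithm run on those coordinates becomes a kernel method. This is a simplified sketch without the centering step (which, per the abstract, is unnecessary); the RBF kernel and data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

def rbf(A, B, gamma=0.2):
    """Gaussian RBF kernel matrix between the rows of A and the rows of B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

X = rng.normal(size=(50, 3))
K = rbf(X, X)

# Projection trick: K = U diag(w) U', take Phi = U diag(sqrt(w)),
# so Phi @ Phi.T recovers K and row i of Phi gives explicit coordinates
# of sample i in the kernel-induced space.
w, U = np.linalg.eigh(K)
w = np.maximum(w, 0.0)             # clip tiny negative eigenvalues (round-off)
Phi = U * np.sqrt(w)

K_rebuilt = Phi @ Phi.T
```

    The incremental difficulty the paper addresses is that, naively, `K` (and hence `Phi`) must be recomputed whenever new samples arrive; INPT avoids changing the coordinates of old data.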

  8. Kernel learning at the first level of inference.

    Science.gov (United States)

    Cawley, Gavin C; Talbot, Nicola L C

    2014-05-01

    Kernel learning methods, whether Bayesian or frequentist, typically involve multiple levels of inference, with the coefficients of the kernel expansion being determined at the first level and the kernel and regularisation parameters carefully tuned at the second level, a process known as model selection. Model selection for kernel machines is commonly performed via optimisation of a suitable model selection criterion, often based on cross-validation or theoretical performance bounds. However, if there are a large number of kernel parameters, as for instance in the case of automatic relevance determination (ARD), there is a substantial risk of over-fitting the model selection criterion, resulting in poor generalisation performance. In this paper we investigate the possibility of learning the kernel, for the Least-Squares Support Vector Machine (LS-SVM) classifier, at the first level of inference, i.e. parameter optimisation. The kernel parameters and the coefficients of the kernel expansion are jointly optimised at the first level of inference, minimising a training criterion with an additional regularisation term acting on the kernel parameters. The key advantage of this approach is that the values of only two regularisation parameters need be determined in model selection, substantially alleviating the problem of over-fitting the model selection criterion. The benefits of this approach are demonstrated using a suite of synthetic and real-world binary classification benchmark problems, where kernel learning at the first level of inference is shown to be statistically superior to the conventional approach, improves on our previous work (Cawley and Talbot, 2007) and is competitive with Multiple Kernel Learning approaches, but with reduced computational expense. Copyright © 2014 Elsevier Ltd. All rights reserved.
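
    For context, the LS-SVM base classifier discussed above can be sketched in its simplest bias-free form (an assumption: the paper's formulation includes a bias term and jointly optimises the kernel parameters by gradient methods, which this toy omits; data and hyperparameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

def rbf(A, B, gamma):
    """Gaussian RBF kernel matrix between the rows of A and the rows of B."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

# Toy binary classification data with XOR-like labels
X = rng.normal(size=(80, 2))
y = np.where(X[:, 0] * X[:, 1] > 0, 1.0, -1.0)

def lssvm_fit(X, y, gamma, reg):
    """Bias-free LS-SVM: solve the linear system (K + I/reg) alpha = y."""
    K = rbf(X, X, gamma)
    return np.linalg.solve(K + np.eye(len(X)) / reg, y)

def lssvm_predict(X_train, alpha, X_new, gamma):
    """Sign of the kernel expansion at the new points."""
    return np.sign(rbf(X_new, X_train, gamma) @ alpha)

alpha = lssvm_fit(X, y, gamma=1.0, reg=100.0)
train_acc = (lssvm_predict(X, alpha, X, gamma=1.0) == y).mean()
```

    In the paper's scheme, `gamma` (and per-dimension scalings, as in ARD) would be optimised jointly with `alpha` at this first level, with an extra regularisation term on the kernel parameters replacing a separate model-selection loop.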

  9. Global Polynomial Kernel Hazard Estimation

    DEFF Research Database (Denmark)

    Hiabu, Munir; Miranda, Maria Dolores Martínez; Nielsen, Jens Perch

    2015-01-01

    This paper introduces a new bias reducing method for kernel hazard estimation. The method is called global polynomial adjustment (GPA). It is a global correction which is applicable to any kernel hazard estimator. The estimator works well from a theoretical point of view as it asymptotically redu...

  10. Scattering of linearly polarized Bessel beams by dielectric spheres

    Science.gov (United States)

    Shoorian, Hamed

    2017-09-01

    The scattering of a linearly polarized Bessel beam (LPBB) by an isotropic and homogeneous dielectric sphere is investigated. Using the analytical relation between the cylindrical and the spherical vector wave functions, closed-form analytical expressions for the scattered field are derived in terms of spherical wave-function expansions. It is shown that when the conical angle of the incident Bessel beam is zero, the linearly polarized Bessel beam reduces to a plane wave and its scattering coefficients become identical to the plane-wave expansion coefficients of Mie theory. The transverse Cartesian and spherical components of the electric field scattered by a sphere are shown in the z-plane for different cases; moreover, the intensity of the incident Bessel beam and the effects of its conical angle on the scattered field and the field inside the sphere are investigated. To quantitatively study the scattering phenomenon and the variations of the fields inside and outside the sphere, the scattering and absorption efficiencies are obtained for the linearly polarized Bessel beam and compared with those for plane-wave scattering.

  11. Quantum tomography, phase-space observables and generalized Markov kernels

    International Nuclear Information System (INIS)

    Pellonpää, Juha-Pekka

    2009-01-01

    We construct a generalized Markov kernel which transforms the observable associated with the homodyne tomography into a covariant phase-space observable with a regular kernel state. Illustrative examples are given in the cases of a 'Schroedinger cat' kernel state and the Cahill-Glauber s-parametrized distributions. Also we consider an example of a kernel state when the generalized Markov kernel cannot be constructed.

  12. Single pass kernel k-means clustering method

    Indian Academy of Sciences (India)

    paper proposes a simple and faster version of the kernel k-means clustering ... It has been considered as an important tool ... On the other hand, kernel-based clustering methods, like kernel k-means clus- ..... able at the UCI machine learning repository (Murphy 1994). ... All the data sets have only numeric valued features.

  13. Relationship between attenuation coefficients and dose-spread kernels

    International Nuclear Information System (INIS)

    Boyer, A.L.

    1988-01-01

    Dose-spread kernels can be used to calculate the dose distribution in a photon beam by convolving the kernel with the primary fluence distribution. The theoretical relationships between various types and components of dose-spread kernels relative to photon attenuation coefficients are explored. These relations can be valuable as checks on the conservation of energy by dose-spread kernels calculated by analytic or Monte Carlo methods
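
    The convolution relationship described above can be illustrated in one dimension. The exponential kernel below is an ad-hoc stand-in, not a physically derived dose-spread kernel; the sketch only shows that a normalized kernel conserves the deposited energy, the check the abstract alludes to.

```python
import numpy as np

# 1-D illustration: dose as the convolution of a primary-fluence profile
# with a normalized dose-spread kernel (an exponential stand-in)
x = np.linspace(-10, 10, 401)                   # position, cm
dx = x[1] - x[0]
fluence = np.where(np.abs(x) <= 2, 1.0, 0.0)    # 4 cm wide "beam"
kernel = np.exp(-np.abs(x) / 1.5)
kernel /= kernel.sum() * dx                     # unit integral: conserves energy
dose = np.convolve(fluence, kernel, mode="same") * dx

# energy conservation check: integral of dose equals integral of fluence
assert np.isclose(dose.sum() * dx, fluence.sum() * dx, rtol=1e-2)
```

    The small tolerance absorbs the kernel tails truncated at the edges of the finite grid.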

  14. Crack Tip Creep Deformation Behavior in Transversely Isotropic Materials

    International Nuclear Information System (INIS)

    Ma, Young Wha; Yoon, Kee Bong

    2009-01-01

    Theoretical mechanics analysis and finite element simulation were performed to investigate creep deformation behavior at the crack tip of transversely isotropic materials under small-scale creep (SSC) conditions. The mechanical behavior of the material was assumed to be elastic with secondary (2nd-stage) creep, in which the elastic modulus (E), Poisson's ratio (ν) and creep stress exponent (n) were isotropic and only the creep coefficient was transversely isotropic. Based on the mechanics analysis of the material behavior, a constitutive equation for transversely isotropic creep behavior was formulated and an equivalent creep coefficient was proposed under plane strain conditions. Creep deformation behavior at the crack tip was investigated through finite element analysis. The results showed that creep deformation in transversely isotropic materials is dominant at the rear of the crack tip. This result was more pronounced when the load was applied along a principal axis of anisotropy. Based on the results of the mechanics analysis and the finite element simulation, a corrected estimation scheme for the creep zone size was proposed in order to evaluate the creep deformation behavior at the crack tip of transversely isotropic creeping materials.

  15. Lattice Boltzmann model for three-dimensional decaying homogeneous isotropic turbulence

    International Nuclear Information System (INIS)

    Xu Hui; Tao Wenquan; Zhang Yan

    2009-01-01

    We implement a lattice Boltzmann method (LBM) for decaying homogeneous isotropic turbulence based on an analogous Galerkin filter and focus on the fundamental statistical isotropic property. This regularized method is constructed based on orthogonal Hermite polynomial space. For decaying homogeneous isotropic turbulence, this regularized method can simulate the isotropic property very well. Numerical studies demonstrate that the novel regularized LBM is a promising approximation of turbulent fluid flows, which paves the way for coupling various turbulent models with LBM

  16. Mixture Density Mercer Kernels: A Method to Learn Kernels

    Data.gov (United States)

    National Aeronautics and Space Administration — This paper presents a method of generating Mercer Kernels from an ensemble of probabilistic mixture models, where each mixture model is generated from a Bayesian...

  17. Local normal vector field formulation for periodic scattering problems formulated in the spectral domain

    NARCIS (Netherlands)

    van Beurden, M.C.; Setija, Irwan

    2017-01-01

    We present two adapted formulations, one tailored to isotropic media and one for general anisotropic media, of the normal vector field framework previously introduced to improve convergence near arbitrarily shaped material interfaces in spectral simulation methods for periodic scattering geometries.

  18. Integral equations with contrasting kernels

    Directory of Open Access Journals (Sweden)

    Theodore Burton

    2008-01-01

    Full Text Available In this paper we study integral equations of the form $x(t)=a(t)-\int^t_0 C(t,s)x(s)\,ds$ with sharply contrasting kernels typified by $C^*(t,s)=\ln(e+(t-s))$ and $D^*(t,s)=[1+(t-s)]^{-1}$. The kernel assigns a weight to $x(s)$ and these kernels have exactly opposite effects of weighting. Each type is well represented in the literature. Our first project is to show that for $a\in L^2[0,\infty)$, solutions are largely indistinguishable regardless of which kernel is used. This is a surprise and it leads us to study the essential differences. In fact, those differences become large as the magnitude of $a(t)$ increases. The form of the kernel alone dictates necessary conditions concerning the magnitude of $a(t)$ which could result in bounded solutions. Thus, the next project is to determine how close we can come to proving that the necessary conditions are also sufficient. The third project is to show that solutions will be bounded for given conditions on $C$ regardless of whether $a$ is chosen large or small; this is important in real-world problems since we would like to have $a(t)$ as the sum of a bounded, but badly behaved function, and a large well-behaved function.
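
    As a numerical illustration, the sketch below solves the Volterra equation for both kernels with a simple trapezoidal scheme. The forcing term a(t) = e^{-t} is our own choice of a small L² function, not one taken from the paper; the closeness of the two solutions mirrors the paper's qualitative claim.

```python
import numpy as np

def solve_volterra(a, C, T=5.0, n=500):
    # step-by-step trapezoidal solution of x(t) = a(t) - int_0^t C(t,s) x(s) ds
    t = np.linspace(0.0, T, n + 1)
    h = t[1] - t[0]
    x = np.empty(n + 1)
    x[0] = a(t[0])
    for i in range(1, n + 1):
        w = np.full(i + 1, h)
        w[0] = w[-1] = h / 2                    # trapezoid weights on nodes 0..i
        past = np.dot(w[:-1], C(t[i], t[:i]) * x[:i])
        x[i] = (a(t[i]) - past) / (1.0 + w[-1] * C(t[i], t[i]))
    return t, x

Cstar = lambda t, s: np.log(np.e + (t - s))     # C*(t,s) = ln(e + (t-s))
Dstar = lambda t, s: 1.0 / (1.0 + (t - s))      # D*(t,s) = [1 + (t-s)]^{-1}

a = lambda t: np.exp(-t)                        # a small, smooth L^2 forcing
t, xC = solve_volterra(a, Cstar)
_, xD = solve_volterra(a, Dstar)
# for small forcing, the two solutions stay bounded and close together
assert np.max(np.abs(xC - xD)) < 0.5
```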

  19. Kernel methods in orthogonalization of multi- and hypervariate data

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2009-01-01

    A kernel version of maximum autocorrelation factor (MAF) analysis is described very briefly and applied to change detection in remotely sensed hyperspectral image (HyMap) data. The kernel version is based on a dual formulation also termed Q-mode analysis in which the data enter into the analysis...... via inner products in the Gram matrix only. In the kernel version the inner products are replaced by inner products between nonlinear mappings into higher dimensional feature space of the original data. Via kernel substitution also known as the kernel trick these inner products between the mappings...... are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of this kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel PCA and MAF analysis handle nonlinearities by implicitly transforming data into high (even infinite...
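
    The kernel-substitution recipe the abstract describes — let the data enter only through a Gram matrix, centre the implicit mappings, and eigendecompose — is the same machinery as kernel PCA. A compact numpy sketch (the RBF kernel and random data are illustrative, not the HyMap imagery):

```python
import numpy as np

# kernel substitution: inner products of mapped data become k(x, y) (RBF here)
rng = np.random.default_rng(1)
X = rng.normal(size=(30, 4))
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-0.5 * d2)                       # Gram matrix of the mapped data

n = K.shape[0]
J = np.eye(n) - np.ones((n, n)) / n
Kc = J @ K @ J                              # centre the mappings in feature space

vals, vecs = np.linalg.eigh(Kc)             # ascending eigenvalues
vals, vecs = vals[::-1], vecs[:, ::-1]
scores = vecs[:, :2] * np.sqrt(np.clip(vals[:2], 0, None))  # leading kernel PCs
assert scores.shape == (30, 2)
```

    The nonlinear mapping itself is never formed; everything is expressed through `K`, which is exactly what makes the kernel MAF/MNF dual formulation work.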

  20. Experimental observation of percolation-enhanced nonlinear light scattering from semicontinuous metal films

    Science.gov (United States)

    Breit, M.; Podolskiy, V. A.; Grésillon, S.; von Plessen, G.; Feldmann, J.; Rivoal, J. C.; Gadenne, P.; Sarychev, Andrey K.; Shalaev, Vladimir M.

    2001-09-01

    Strongly enhanced second-harmonic generation (SHG), which is characterized by a nearly isotropic intensity distribution, is observed for gold-glass films near the percolation threshold. The diffuselike SHG scattering, which can be thought of as nonlinear critical opalescence, is in sharp contrast with highly collimated linear reflection and transmission from these nanostructured semicontinuous metal films. Our observations, which can be explained by giant fluctuations of local nonlinear sources for SHG due to plasmon localization, verify recent predictions of percolation-enhanced nonlinear scattering.

  1. Transient electromagnetic scattering on anisotropic media

    International Nuclear Information System (INIS)

    Stewart, R.D.

    1990-01-01

    This dissertation treats the problem of transient scattering of obliquely incident electromagnetic plane waves on a stratified anisotropic dielectric slab. Scattering operators are derived for the reflective response of the medium. The internal fields are calculated. Wave splitting and invariant imbedding techniques are used. These techniques are first presented for fields normally incident on a stratified, isotropic dielectric medium. The techniques of wave splitting and invariant imbedding are applied to normally incident plane waves on an anisotropic medium. An integro-differential equation is derived for the reflective response and the direct and inverse scattering problems are discussed. These techniques are applied to the case of obliquely incident plane waves. The reflective response is derived and the direct and inverse problems discussed and compared to those for the normal incidence case. The internal fields are investigated for the oblique incidence via a Green's function approach. A numerical scheme is presented to calculate the Green's function. Finally, symmetry relations of the reflective response are discussed

  2. Sudden Relaminarization and Lifetimes in Forced Isotropic Turbulence.

    Science.gov (United States)

    Linkmann, Moritz F; Morozov, Alexander

    2015-09-25

    We demonstrate an unexpected connection between isotropic turbulence and wall-bounded shear flows. We perform direct numerical simulations of isotropic turbulence forced at large scales at moderate Reynolds numbers and observe sudden transitions from a chaotic dynamics to a spatially simple flow, analogous to the laminar state in wall-bounded shear flows. We find that the survival probabilities of turbulence are exponential and the typical lifetimes increase superexponentially with the Reynolds number. Our results suggest that both isotropic turbulence and wall-bounded shear flows qualitatively share the same phase-space dynamics.

  3. Kernel based subspace projection of near infrared hyperspectral images of maize kernels

    DEFF Research Database (Denmark)

    Larsen, Rasmus; Arngren, Morten; Hansen, Per Waaben

    2009-01-01

    In this paper we present an exploratory analysis of hyperspectral 900-1700 nm images of maize kernels. The imaging device is a line scanning hyperspectral camera using a broadband NIR illumination. In order to explore the hyperspectral data we compare a series of subspace projection methods, including principal component analysis and maximum autocorrelation factor analysis. The latter utilizes the fact that interesting phenomena in images exhibit spatial autocorrelation. However, linear projections often fail to grasp the underlying variability in the data. Therefore we propose to use so...... -tor transform outperform the linear methods as well as kernel principal components in producing interesting projections of the data.

  4. Sparse Event Modeling with Hierarchical Bayesian Kernel Methods

    Science.gov (United States)

    2016-01-05

    The research objective of this proposal was to develop a predictive Bayesian kernel approach to model count data based on several predictive variables. Such an approach, which we refer to as the Poisson Bayesian kernel model, is able to model the rate of occurrence of... kernel methods made use of: (i) the Bayesian property of improving predictive accuracy as data are dynamically obtained, and (ii) the kernel function

  5. Quantum scattering at low energies

    DEFF Research Database (Denmark)

    Derezinski, Jan; Skibsted, Erik

    For a class of negative slowly decaying potentials, including with , we study the quantum mechanical scattering theory in the low-energy regime. Using modifiers of the Isozaki--Kitada type we show that scattering theory is well behaved on the {\\it whole} continuous spectrum of the Hamiltonian......, including the energy . We show that the --matrices are well-defined and strongly continuous down to the zero energy threshold. Similarly, we prove that the wave matrices and generalized eigenfunctions are norm continuous down to the zero energy if we use appropriate weighted spaces. These results are used...... from positive energies to the limiting energy . This change corresponds to the behaviour of the classical orbits. Under stronger conditions we extract the leading term of the asymptotics of the kernel of at its singularities; this leading term defines a Fourier integral operator in the sense...

  6. The Classification of Diabetes Mellitus Using Kernel k-means

    Science.gov (United States)

    Alamsyah, M.; Nafisah, Z.; Prayitno, E.; Afida, A. M.; Imah, E. M.

    2018-01-01

    Diabetes mellitus is a metabolic disorder characterized by chronic hyperglycemia. Automatic detection of diabetes mellitus is still challenging. This study detected diabetes mellitus using the kernel k-means algorithm, which was developed from the k-means algorithm. Kernel k-means uses kernel learning and is therefore able to handle data that are not linearly separable, which distinguishes it from common k-means. The performance of kernel k-means in detecting diabetes mellitus is also compared with that of the SOM algorithm. The experimental results show that kernel k-means performs well, and considerably better than SOM.
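
    A minimal numpy sketch of kernel k-means — Lloyd iterations expressed purely through the Gram matrix — on synthetic two-blob data. The data, the RBF kernel, and the deterministic farthest-pair initialisation are our illustrative choices, not details from the study.

```python
import numpy as np

def kernel_kmeans(K, k=2, iters=50):
    # Lloyd iterations via the Gram matrix only:
    # ||phi(x_i) - m_c||^2 = K_ii - 2*mean_{j in c} K_ij + mean_{j,l in c} K_jl
    n = K.shape[0]
    # deterministic init: the two feature-space-farthest points seed the clusters
    d2 = np.add.outer(np.diag(K), np.diag(K)) - 2 * K
    i0, j0 = np.unravel_index(d2.argmax(), d2.shape)
    labels = np.where(d2[:, i0] <= d2[:, j0], 0, 1)
    for _ in range(iters):
        dist = np.full((n, k), np.inf)
        for c in range(k):
            idx = np.flatnonzero(labels == c)
            if idx.size:
                dist[:, c] = (np.diag(K) - 2 * K[:, idx].mean(axis=1)
                              + K[np.ix_(idx, idx)].mean())
        new = dist.argmin(axis=1)
        if np.array_equal(new, labels):
            break
        labels = new
    return labels

# two well-separated blobs, recovered through an RBF Gram matrix
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(5, 0.3, (30, 2))])
sq = ((X[:, None] - X[None, :]) ** 2).sum(-1)
labels = kernel_kmeans(np.exp(-0.5 * sq))
assert len(set(labels[:30])) == 1 and len(set(labels[30:])) == 1
```

    No explicit cluster centers are ever formed; all distances are read off the kernel matrix, which is what lets the method handle nonlinearly separable data.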

  7. Isotropic Growth of Graphene toward Smoothing Stitching.

    Science.gov (United States)

    Zeng, Mengqi; Tan, Lifang; Wang, Lingxiang; Mendes, Rafael G; Qin, Zhihui; Huang, Yaxin; Zhang, Tao; Fang, Liwen; Zhang, Yanfeng; Yue, Shuanglin; Rümmeli, Mark H; Peng, Lianmao; Liu, Zhongfan; Chen, Shengli; Fu, Lei

    2016-07-26

    The quality of graphene grown via chemical vapor deposition still falls far short of its theoretical potential due to the inevitable formation of grain boundaries. Designing a single-crystal substrate with an anisotropic twofold symmetry for the unidirectional alignment of graphene seeds would be a promising way to eliminate grain boundaries at the wafer scale. However, such a delicate process is easily disrupted by defects or impurities. Here we investigated the isotropic growth behavior of graphene single crystals by melting the growth substrate to obtain an amorphous, isotropic surface, which offers neither grain-orientation induction nor a preponderant growth rate in any particular direction during graphene growth. The as-obtained graphene grains are isotropically round, with mixed edges that exhibit high activity. The orientation of adjacent grains can easily self-adjust to match each other smoothly over a liquid catalyst with facile atom delocalization, owing to the low rotational steric hindrance of the isotropic grains, thus achieving smooth stitching of the adjacent graphene. The adverse effects of grain boundaries are thereby eliminated and the excellent transport performance of graphene is better guaranteed. Moreover, this isotropic growth mode can be extended to other layered nanomaterials, such as hexagonal boron nitride and transition-metal chalcogenides, to obtain large-size intrinsic films with few defects.

  8. Evaluating the Application of Tissue-Specific Dose Kernels Instead of Water Dose Kernels in Internal Dosimetry : A Monte Carlo Study

    NARCIS (Netherlands)

    Moghadam, Maryam Khazaee; Asl, Alireza Kamali; Geramifar, Parham; Zaidi, Habib

    2016-01-01

    Purpose: The aim of this work is to evaluate the application of tissue-specific dose kernels instead of water dose kernels to improve the accuracy of patient-specific dosimetry by taking tissue heterogeneities into consideration. Materials and Methods: Tissue-specific dose point kernels (DPKs) and

  9. Parsimonious Wavelet Kernel Extreme Learning Machine

    Directory of Open Access Journals (Sweden)

    Wang Qin

    2015-11-01

    Full Text Available In this study, a parsimonious scheme for a wavelet kernel extreme learning machine (named PWKELM) was introduced by combining wavelet theory and a parsimonious algorithm into the kernel extreme learning machine (KELM). The wavelet analysis uses bases that are localized in time and frequency to represent various signals effectively. The wavelet kernel extreme learning machine (WELM) maximizes the capability to capture the essential features in “frequency-rich” signals. The proposed parsimonious algorithm also incorporates significant wavelet kernel functions via iteration by virtue of the Householder matrix, thus producing a sparse solution that eases the computational burden and improves numerical stability. The experimental results achieved on a synthetic dataset and a gas furnace instance demonstrate that the proposed PWKELM is efficient and feasible in terms of improving generalization accuracy and real-time performance.
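
    The KELM backbone of such a scheme — solve one regularised linear system in the kernel matrix — can be sketched with a Morlet-type wavelet kernel. The parsimonious Householder selection step is omitted, and the data, bandwidth, and regularisation value are illustrative.

```python
import numpy as np

def wavelet_kernel(X, Z, a=1.0):
    # Morlet-type translation-invariant wavelet kernel: product over dimensions
    # of cos(1.75*u/a) * exp(-u^2 / (2 a^2)) with u = x_d - z_d
    U = X[:, None, :] - Z[None, :, :]
    return np.prod(np.cos(1.75 * U / a) * np.exp(-U**2 / (2 * a**2)), axis=-1)

# KELM fit: beta = (K + I/C)^{-1} y, prediction f(x) = k(x, X_train) @ beta
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, (80, 1))
y = np.sinc(X).ravel() + rng.normal(0, 0.01, 80)

C = 1e4                                     # regularisation constant
K = wavelet_kernel(X, X)
beta = np.linalg.solve(K + np.eye(80) / C, y)

Xt = np.linspace(-3, 3, 50)[:, None]
pred = wavelet_kernel(Xt, X) @ beta
assert np.mean((pred - np.sinc(Xt).ravel()) ** 2) < 1e-2
```

    Training reduces to a single linear solve, which is the main appeal of the ELM formulation; the parsimonious variant would additionally prune the columns of `K`.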

  10. Difference between standard and quasi-conformal BFKL kernels

    International Nuclear Information System (INIS)

    Fadin, V.S.; Fiore, R.; Papa, A.

    2012-01-01

    As it was recently shown, the colour singlet BFKL kernel, taken in Möbius representation in the space of impact parameters, can be written in quasi-conformal shape, which is unbelievably simple compared with the conventional form of the BFKL kernel in momentum space. It was also proved that the total kernel is completely defined by its Möbius representation. In this paper we calculated the difference between standard and quasi-conformal BFKL kernels in momentum space and discovered that it is rather simple. Therefore we come to the conclusion that the simplicity of the quasi-conformal kernel is caused mainly by using the impact parameter space.

  11. A laser optical method for detecting corn kernel defects

    Energy Technology Data Exchange (ETDEWEB)

    Gunasekaran, S.; Paulsen, M. R.; Shove, G. C.

    1984-01-01

    An opto-electronic instrument was developed to examine individual corn kernels and detect various kernel defects according to reflectance differences. A low power helium-neon (He-Ne) laser (632.8 nm, red light) was used as the light source in the instrument. Reflectance from good and defective parts of corn kernel surfaces differed by approximately 40%. Broken, chipped, and starch-cracked kernels were detected with nearly 100% accuracy; while surface-split kernels were detected with about 80% accuracy. (author)

  12. Kernel maximum autocorrelation factor and minimum noise fraction transformations

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2010-01-01

    in hyperspectral HyMap scanner data covering a small agricultural area, and 3) maize kernel inspection. In the cases shown, the kernel MAF/MNF transformation performs better than its linear counterpart as well as linear and kernel PCA. The leading kernel MAF/MNF variates seem to possess the ability to adapt...

  13. Calculation of the radiance distribution at the boundary of an isotropically scattering slab

    NARCIS (Netherlands)

    Doosje, M; Hoenders, B.J; Rinzema, K.

    The radiance arising from an illuminated stack of n anisotropically scattering slabs is calculated using the equation of radiative transfer. It appears to be unnecessary to calculate the radiance inside the material; including only the radiance at the boundary surfaces is sufficient to obtain the

  14. Monoenergetic neutral particle transport in an anisotropically scattering half-space

    International Nuclear Information System (INIS)

    Ganapol, B.D.; Garth, J.C.; Woolf, S.

    1995-01-01

    During the past several years, a considerable effort has been underway to develop reliable analytical benchmark solutions to the one-speed transport equation in various geometries. The reader may well ask “why” such a task has been undertaken, given the recent rapid advances in numerical transport theory. The simple answer is that reliable numerical solutions do not yet exist, and their development still represents a mathematical challenge. However, regardless of how mathematically challenging the development is, there are more compelling reasons for this effort which are rooted in the very fundamentals of science and technology. In particular, these solutions, which are highly accurate numerical evaluations of analytical solution representations, serve as “industry standards” to which other more approximate methods or approximations can be compared. Thus analytical benchmarks are part of the process control and continuous improvement of numerical transport methods and are therefore integral components in TQM (Total Quality Management) as applied to transport methods development. With the above reasoning in mind, the authors begin the development and application of a new analytical solution and evaluation for a half-space featuring anisotropic scattering. This work is an extension of previous efforts in which isotropically scattering half-spaces were treated. The previous benchmarks were obtained most conveniently via a numerical Laplace transform inversion which could be applied in a straightforward manner to the case of isotropic scattering. The application of the Laplace transform method is problematical for anisotropic scattering and does not admit to the direct identification of the scalar flux from integral transport theory

  15. Identification of Fusarium damaged wheat kernels using image analysis

    Directory of Open Access Journals (Sweden)

    Ondřej Jirsa

    2011-01-01

    Full Text Available Visual evaluation of kernels damaged by Fusarium spp. pathogens is labour intensive and, owing to its subjective nature, can lead to inconsistencies. Digital imaging technology combined with appropriate statistical methods can provide a much faster and more accurate evaluation of the proportion of visually scabby kernels. The aim of the present study was to develop a discrimination model to identify wheat kernels infected by Fusarium spp. using digital image analysis and statistical methods. Winter wheat kernels from field experiments were evaluated visually as healthy or damaged. Deoxynivalenol (DON) content was determined in individual kernels using an ELISA method. Images of individual kernels were produced using a digital camera on a dark background. Colour and shape descriptors were obtained by image analysis from the area representing the kernel. Healthy and damaged kernels differed significantly in DON content and kernel weight. Various combinations of individual shape and colour descriptors were examined during the development of the model using linear discriminant analysis. In addition to the basic descriptors of the RGB colour model (red, green, blue), very good classification was also obtained using hue from the HSL colour model (hue, saturation, luminance). The accuracy of classification using the developed discrimination model based on RGBH descriptors was 85 %. The shape descriptors by themselves were not specific enough to distinguish individual kernels.
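
    A sketch of the kind of two-class linear discriminant model described, on synthetic four-dimensional RGBH descriptors. The descriptor means and spreads are invented for illustration and do not reproduce the study's data or its 85 % figure.

```python
import numpy as np

def fisher_lda(X0, X1):
    # two-class Fisher discriminant: w maximizes between-class over
    # within-class scatter; classify by projecting onto w with a midpoint cut
    m0, m1 = X0.mean(0), X1.mean(0)
    Sw = np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)
    w = np.linalg.solve(Sw + 1e-6 * np.eye(X0.shape[1]), m1 - m0)
    thr = 0.5 * (X0 @ w).mean() + 0.5 * (X1 @ w).mean()
    return w, thr

# synthetic "healthy" vs "damaged" kernels in (R, G, B, hue) descriptor space
rng = np.random.default_rng(2)
healthy = rng.normal([150, 120, 60, 30], 10, (200, 4))
damaged = rng.normal([180, 150, 110, 15], 10, (200, 4))
w, thr = fisher_lda(healthy, damaged)
acc = (np.sum(healthy @ w <= thr) + np.sum(damaged @ w > thr)) / 400
assert acc > 0.9
```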

  16. Digital signal processing with kernel methods

    CERN Document Server

    Rojo-Alvarez, José Luis; Muñoz-Marí, Jordi; Camps-Valls, Gustavo

    2018-01-01

    A realistic and comprehensive review of joint approaches to machine learning and signal processing algorithms, with application to communications, multimedia, and biomedical engineering systems Digital Signal Processing with Kernel Methods reviews the milestones in the mixing of classical digital signal processing models and advanced kernel machines statistical learning tools. It explains the fundamental concepts from both fields of machine learning and signal processing so that readers can quickly get up to speed in order to begin developing the concepts and application software in their own research. Digital Signal Processing with Kernel Methods provides a comprehensive overview of kernel methods in signal processing, without restriction to any application field. It also offers example applications and detailed benchmarking experiments with real and synthetic datasets throughout. Readers can find further worked examples with Matlab source code on a website developed by the authors. * Presents the necess...

  17. Higher-Order Hybrid Gaussian Kernel in Meshsize Boosting Algorithm

    African Journals Online (AJOL)

    In this paper, we shall use higher-order hybrid Gaussian kernel in a meshsize boosting algorithm in kernel density estimation. Bias reduction is guaranteed in this scheme like other existing schemes but uses the higher-order hybrid Gaussian kernel instead of the regular fixed kernels. A numerical verification of this scheme ...

  18. Adaptive Kernel In The Bootstrap Boosting Algorithm In KDE ...

    African Journals Online (AJOL)

    This paper proposes the use of adaptive kernel in a bootstrap boosting algorithm in kernel density estimation. The algorithm is a bias reduction scheme like other existing schemes but uses adaptive kernel instead of the regular fixed kernels. An empirical study for this scheme is conducted and the findings are comparatively ...
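
    An adaptive-bandwidth kernel density estimator of the general kind these schemes build on can be sketched as follows. An Abramson-style local-bandwidth rule is used as a stand-in, and the bootstrap boosting wrapper itself is omitted.

```python
import numpy as np

def kde(x, data, h):
    # fixed-bandwidth Gaussian kernel density estimate at points x
    u = (x[:, None] - data[None, :]) / h
    return np.mean(np.exp(-0.5 * u**2) / (np.sqrt(2 * np.pi) * h), axis=1)

def adaptive_kde(x, data, h0):
    # Abramson-style adaptive kernel: per-sample bandwidths h_i = h0 * lam_i,
    # lam_i = (pilot(x_i) / geometric-mean pilot)^(-1/2)
    pilot = kde(data, data, h0)
    lam = (pilot / np.exp(np.mean(np.log(pilot)))) ** -0.5
    h = h0 * lam                                    # one bandwidth per sample
    u = (x[:, None] - data[None, :]) / h[None, :]
    return np.mean(np.exp(-0.5 * u**2) / (np.sqrt(2 * np.pi) * h[None, :]), axis=1)

rng = np.random.default_rng(0)
data = rng.normal(0, 1, 500)
x = np.linspace(-4, 4, 81)
est = adaptive_kde(x, data, 0.4)
true = np.exp(-0.5 * x**2) / np.sqrt(2 * np.pi)
assert np.mean((est - true) ** 2) < 5e-3
```

    Wider kernels in sparse regions and narrower kernels in dense regions is the bias-reduction mechanism that fixed-bandwidth estimators lack.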

  19. Isotropic nuclear graphites; the effect of neutron irradiation

    International Nuclear Information System (INIS)

    Lore, J.; Buscaillon, A.; Mottet, P.; Micaud, G.

    1977-01-01

    Several isotropic graphites have been manufactured using different forming processes and fillers such as needle coke, regular coke, or pitch coke. Their properties are described in this paper. Specimens of these products have been irradiated in the fast reactor Rapsodie between 400 and 1400 °C, at fluences up to 1.7×10²¹ n·cm⁻² PHI.FG. The results show an isotropic behavior under neutron irradiation, but the induced dimensional changes are higher than those of isotropic coke graphites although they are lower than those of conventional extruded graphites made with the same coke

  20. Process for the preparation of isotropic petroleum coke

    International Nuclear Information System (INIS)

    Kegler, W.H.; Huyser, M.E.

    1975-01-01

    A description is given of a process for preparing isotropic coke from an oil residue charge. It includes blowing air into the residue until it reaches a softening temperature of around 49 to 116 °C, delayed coking of the blown residue at a temperature of around 247 to 640 °C and a pressure between around 1.38×10⁵ and 1.72×10⁶ Pa, and recovery of an isotropic coke with a thermal-expansion-coefficient ratio below approximately 1.5. The isotropic coke is used for preparing hexagonal graphite bars for nuclear reactor moderators [fr

  1. Emittance of a finite scattering medium with refractive index greater than unity

    International Nuclear Information System (INIS)

    Crosbie, A.L.

    1980-01-01

    Refractive index and scattering can significantly influence the transfer of radiation in a semitransparent medium such as water, glass, plastics, or ceramics. In a recent article (1979), the author presented exact numerical results for the emittance of a semi-infinite scattering medium with a refractive index greater than unity. The present investigation extends the analysis to a finite medium. The physical situation consists of a finite planar layer. The isothermal layer emits, absorbs, and isotropically scatters thermal radiation. It is characterized by its single-scattering albedo, optical thickness, refractive index, and temperature. A formula for the directional emittance is derived, the directional emittance being the emittance of the medium multiplied by the interface transmittance. The ratio of hemispherical to normal emittance is tabulated and discussed

  2. Windows Vista Kernel-Mode: Functions, Security Enhancements and Flaws

    Directory of Open Access Journals (Sweden)

    Mohammed D. ABDULMALIK

    2008-06-01

    Full Text Available Microsoft has made substantial enhancements to the kernel of the Microsoft Windows Vista operating system. Kernel improvements are significant because the kernel provides low-level operating system functions, including thread scheduling, interrupt and exception dispatching, multiprocessor synchronization, and a set of routines and basic objects. This paper describes some of the kernel security enhancements in the 64-bit edition of Windows Vista. We also point out some weak areas (flaws) that can be attacked by malicious software, leading to compromise of the kernel.

  3. Generalization Performance of Regularized Ranking With Multiscale Kernels.

    Science.gov (United States)

    Zhou, Yicong; Chen, Hong; Lan, Rushi; Pan, Zhibin

    2016-05-01

    The regularized kernel method for the ranking problem has attracted increasing attention in machine learning. Previous regularized ranking algorithms are usually based on reproducing kernel Hilbert spaces with a single kernel. In this paper, we go beyond this framework by investigating the generalization performance of regularized ranking with multiscale kernels. A novel ranking algorithm with multiscale kernels is proposed and its representer theorem is proved. We establish an upper bound on the generalization error in terms of the complexity of the hypothesis spaces. It shows that the multiscale ranking algorithm can achieve satisfactory learning rates under mild conditions. Experiments demonstrate the effectiveness of the proposed method for drug discovery and recommendation tasks.

  4. Multineuron spike train analysis with R-convolution linear combination kernel.

    Science.gov (United States)

    Tezuka, Taro

    2018-06-01

    A spike train kernel provides an effective way of decoding information represented by a spike train. Some spike train kernels have been extended to multineuron spike trains, which are simultaneously recorded spike trains obtained from multiple neurons. However, most of these multineuron extensions were carried out in a kernel-specific manner. In this paper, a general framework is proposed for extending any single-neuron spike train kernel to multineuron spike trains, based on the R-convolution kernel. Special subclasses of the proposed R-convolution linear combination kernel are explored. These subclasses have a smaller number of parameters and make optimization tractable when the size of data is limited. The proposed kernel was evaluated using Gaussian process regression for multineuron spike trains recorded from an animal brain. It was compared with the sum kernel and the population Spikernel, which are existing ways of decoding multineuron spike trains using kernels. The results showed that the proposed approach performs better than these kernels and also other commonly used neural decoding methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
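
    The linear-combination idea — extend a single-neuron spike train kernel to multineuron recordings as a weighted sum over neurons — can be sketched as follows. The Gaussian pair-sum single-neuron kernel, the spike times, and the weights are illustrative; setting every weight to 1 recovers the sum kernel mentioned above.

```python
import numpy as np

def spike_kernel(s, t, tau=0.05):
    # single-neuron spike train kernel: sum of Gaussians over all spike pairs
    if len(s) == 0 or len(t) == 0:
        return 0.0
    d = np.subtract.outer(np.asarray(s), np.asarray(t))
    return float(np.exp(-d**2 / (2 * tau**2)).sum())

def multi_kernel(S, T, w):
    # linear-combination extension to multineuron trains:
    # K(S, T) = sum_c w_c * k(S_c, T_c); w_c = 1 for all c gives the sum kernel
    return sum(wc * spike_kernel(sc, tc) for wc, sc, tc in zip(w, S, T))

# two recordings from 3 neurons (spike times in seconds)
S = [[0.10, 0.32, 0.55], [0.20], [0.05, 0.61]]
T = [[0.11, 0.30, 0.57], [0.85], [0.06, 0.60]]
w = [1.0, 0.5, 2.0]                          # per-neuron weights (illustrative)

k_ST = multi_kernel(S, T, w)
k_SS = multi_kernel(S, S, w)
k_TT = multi_kernel(T, T, w)
# Cauchy-Schwarz holds because a nonnegative combination of PSD kernels is PSD
assert k_ST <= np.sqrt(k_SS * k_TT) + 1e-9
```

    With few weights to fit, the combination stays tractable when data are limited, which is the point of the restricted subclasses the abstract mentions.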

  5. Q-branch Raman scattering and modern kinetic theory

    Energy Technology Data Exchange (ETDEWEB)

    Monchick, L. [The Johns Hopkins Univ., Laurel, MD (United States)

    1993-12-01

    The program is an extension of previous APL work whose general aim was to calculate line shapes of nearly resonant isolated line transitions with solutions of a popular quantum kinetic equation, the Waldmann-Snider equation, using well known advanced solution techniques developed for the classical Boltzmann equation. The advanced techniques explored have been a BGK-type approximation, termed the Generalized Hess Method (GHM), and conversion of the collision operator to a block diagonal matrix of symmetric collision kernels which can then be approximated by discrete ordinate methods. The latter method, termed the Collision Kernel method (CC), is capable of the highest accuracy and has been used quite successfully for Q-branch Raman scattering. The GHM method, not quite as accurate, is applicable over a wider range of pressures and has proven quite useful.

  6. An analysis of 1-D smoothed particle hydrodynamics kernels

    International Nuclear Information System (INIS)

    Fulk, D.A.; Quinn, D.W.

    1996-01-01

    In this paper, the smoothed particle hydrodynamics (SPH) kernel is analyzed, resulting in measures of merit for one-dimensional SPH. Various methods of obtaining an objective measure of the quality and accuracy of the SPH kernel are addressed. Since the kernel is the key element in the SPH methodology, this should be of primary concern to any user of SPH. The results of this work are two measures of merit, one for smooth data and one near shocks. The measure of merit for smooth data is shown to be quite accurate and a useful delineator of better and poorer kernels. The measure of merit for non-smooth data is not quite as accurate, but results indicate the kernel is much less important for these types of problems. In addition to the theory, 20 kernels are analyzed using the measure of merit demonstrating the general usefulness of the measure of merit and the individual kernels. In general, it was decided that bell-shaped kernels perform better than other shapes. 12 refs., 16 figs., 7 tabs
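    As a concrete example of the bell-shaped kernels favored by this analysis, here is the standard 1-D cubic B-spline SPH kernel (not necessarily one of the 20 kernels tested in the record; the normalization 2/(3h) is specific to one dimension):

```python
import numpy as np

def cubic_spline_1d(x, h):
    """Bell-shaped cubic B-spline SPH kernel in one dimension with
    smoothing length h and compact support |x| < 2h. The prefactor
    2/(3h) makes the kernel integrate to one over the real line."""
    q = np.abs(np.asarray(x, float)) / h
    sigma = 2.0 / (3.0 * h)
    w = np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
        np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))
    return sigma * w
```

    The two polynomial branches join continuously at q = 1 (both equal 0.25 there), and the kernel vanishes smoothly at the edge of its support.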

  7. Putting Priors in Mixture Density Mercer Kernels

    Science.gov (United States)

    Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd

    2004-01-01

    This paper presents a new methodology for automatic knowledge-driven data mining based on the theory of Mercer Kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn the kernel function directly from data, rather than using predefined kernels. These data-adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing physical information to be encoded in the model. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS). The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code. The results show that the Mixture Density Mercer Kernel described here outperforms tree-based classification in distinguishing high-redshift galaxies from low-redshift galaxies by approximately 16% on test data, bagged trees by approximately 7%, and bagged trees built on a much larger sample of data by approximately 2%.

  8. POLARIZATION IMAGING AND SCATTERING MODEL OF CANCEROUS LIVER TISSUES

    Directory of Open Access Journals (Sweden)

    DONGZHI LI

    2013-07-01

    We apply different polarization imaging techniques to cancerous liver tissues, and compare the relative contrasts of difference polarization imaging (DPI), degree of polarization imaging (DOPI) and rotating linear polarization imaging (RLPI). Experimental results show that a number of polarization imaging parameters are capable of differentiating cancerous cells in isotropic liver tissues. To analyze the contrast mechanism of the cancer-sensitive polarization imaging parameters, we propose a scattering model containing two types of spherical scatterers and carry out Monte Carlo simulations based on this bi-component model. Both the experimental and Monte Carlo simulated results show that the RLPI technique can provide a good imaging contrast of cancerous tissues. The bi-component scattering model provides a useful tool to analyze the contrast mechanism of polarization imaging of cancerous tissues.

  9. NLO corrections to the Kernel of the BKP-equations

    Energy Technology Data Exchange (ETDEWEB)

    Bartels, J. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Fadin, V.S. [Budker Institute of Nuclear Physics, Novosibirsk (Russian Federation); Novosibirskij Gosudarstvennyj Univ., Novosibirsk (Russian Federation); Lipatov, L.N. [Hamburg Univ. (Germany). 2. Inst. fuer Theoretische Physik; Petersburg Nuclear Physics Institute, Gatchina, St. Petersburg (Russian Federation); Vacca, G.P. [INFN, Sezione di Bologna (Italy)

    2012-10-02

    We present results for the NLO kernel of the BKP equations for composite states of three reggeized gluons in the Odderon channel, both in QCD and in N=4 SYM. The NLO kernel consists of the NLO BFKL kernel in the color octet representation and the connected 3 → 3 kernel, computed in the tree approximation.

  10. A Fast and Simple Graph Kernel for RDF

    NARCIS (Netherlands)

    de Vries, G.K.D.; de Rooij, S.

    2013-01-01

    In this paper we study a graph kernel for RDF based on constructing a tree for each instance and counting the number of paths in that tree. In our experiments this kernel shows classification performance comparable to the previously introduced intersection subtree kernel, but is significantly faster.

  11. Neutron-scattering cross section of the S=1/2 Heisenberg triangular antiferromagnet

    DEFF Research Database (Denmark)

    Lefmann, K.; Hedegård, P.

    1994-01-01

    In this paper we use a Schwinger-boson mean-field approach to calculate the neutron-scattering cross section from the S = 1/2 antiferromagnet with nearest-neighbor isotropic Heisenberg interaction on a two-dimensional triangular lattice. We investigate two solutions for T = 0: (i) a state with long-range order resembling the Neel state and (ii) a resonating valence bond or ''spin liquid'' state with an energy gap, E(g) almost-equal-to 0.17J, for the elementary excitations (spinons). For solution (i) the neutron cross section shows Bragg rods at kappa = K = (1/3, 1/3), whereas solution (ii) shows no elastic scattering, but a set of broader dispersive spin excitations around kappa almost-equal-to (1/2, 0) and around kappa almost-equal-to (1/3, 1/3) for omega/E(g) = 2.5-4. It should thus be possible to distinguish these two states in a neutron-scattering experiment.

  12. Constraints on light WIMP candidates from the isotropic diffuse gamma-ray emission

    International Nuclear Information System (INIS)

    Arina, Chiara; Tytgat, Michel H.G.

    2011-01-01

    Motivated by the measurements reported by direct detection experiments, most notably DAMA, CDMS-II, CoGeNT and Xenon10/100, we study further the constraints that might be set on some light dark matter candidates, M_DM ∼ few GeV, using the Fermi-LAT data on the isotropic gamma-ray diffuse emission. In particular, we consider a Dirac fermion singlet interacting through a new Z' gauge boson, and a scalar singlet S interacting through the Higgs portal. Both candidates are WIMPs (Weakly Interacting Massive Particles), i.e. they have an annihilation cross-section in the pbarn range. Also they may both have a spin-independent elastic cross section on nucleons in the range required by direct detection experiments. Although they are generic WIMP candidates, because they have different interactions with Standard Model particles, their phenomenology regarding the isotropic diffuse gamma-ray emission is quite distinct. In the case of the scalar singlet, the one-to-one correspondence between its annihilation cross-section and its spin-independent elastic scattering cross-section makes it possible to express the constraints from the Fermi-LAT data in the direct detection exclusion plot, σ_n^0 − M_DM. Depending on the astrophysics, we argue that it is possible to exclude the singlet scalar dark matter candidate at 95% confidence level. The constraints on the Dirac singlet interacting through a Z' are comparatively weaker.

  13. An SVM model with hybrid kernels for hydrological time series

    Science.gov (United States)

    Wang, C.; Wang, H.; Zhao, X.; Xie, Q.

    2017-12-01

    Support Vector Machine (SVM) models have been widely applied to the forecast of climate/weather and its impact on other environmental variables such as hydrologic response to climate/weather. When using SVM, the choice of the kernel function plays the key role. Conventional SVM models mostly use one single type of kernel function, e.g., radial basis kernel function. Provided that there are several featured kernel functions available, each having its own advantages and drawbacks, a combination of these kernel functions may give more flexibility and robustness to SVM approach, making it suitable for a wide range of application scenarios. This paper presents such a linear combination of radial basis kernel and polynomial kernel for the forecast of monthly flowrate in two gaging stations using SVM approach. The results indicate significant improvement in the accuracy of predicted series compared to the approach with either individual kernel function, thus demonstrating the feasibility and advantages of such hybrid kernel approach for SVM applications.
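    The hybrid-kernel idea can be sketched in a few lines: since positive semidefinite kernels are closed under non-negative linear combination, a convex mix of an RBF kernel and a polynomial kernel is itself a valid kernel. The sketch below uses kernel ridge regression as a dependency-free stand-in for the SVR solver; all parameter values are illustrative, not those of the study.

```python
import numpy as np

def hybrid_kernel(X, Z, alpha=0.5, gamma=1.0, degree=2, coef0=1.0):
    """Convex combination K = alpha * K_rbf + (1 - alpha) * K_poly.
    Valid (PSD) because PSD kernels are closed under non-negative
    linear combination."""
    sq = ((X[:, None, :] - Z[None, :, :])**2).sum(-1)
    k_rbf = np.exp(-gamma * sq)
    k_poly = (X @ Z.T + coef0)**degree
    return alpha * k_rbf + (1.0 - alpha) * k_poly

def kernel_ridge_fit_predict(Xtr, ytr, Xte, lam=1e-3, **kw):
    """Kernel ridge regression with the hybrid kernel, used here as a
    simple stand-in for an SVR solver to keep the sketch self-contained."""
    K = hybrid_kernel(Xtr, Xtr, **kw)
    coef = np.linalg.solve(K + lam * np.eye(len(Xtr)), ytr)
    return hybrid_kernel(Xte, Xtr, **kw) @ coef
```

    Tuning the mixing weight alpha trades off the local flexibility of the RBF part against the global trend captured by the polynomial part.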

  14. Kernel based eigenvalue-decomposition methods for analysing ham

    DEFF Research Database (Denmark)

    Christiansen, Asger Nyman; Nielsen, Allan Aasbjerg; Møller, Flemming

    2010-01-01

    methods, such as PCA, MAF or MNF. We therefore investigated the applicability of kernel based versions of these transformations. This meant implementing the kernel based methods and developing new theory, since kernel based MAF and MNF are not yet described in the literature. The traditional methods only have two factors that are useful for segmentation and none of them can be used to segment the two types of meat. The kernel based methods have a lot of useful factors and they are able to capture the subtle differences in the images. This is illustrated in Figure 1. You can see a comparison of the most useful factor of PCA and kernel based PCA respectively in Figure 2. The factor of the kernel based PCA turned out to be able to segment the two types of meat and in general that factor is much more distinct, compared to the traditional factor. After the orthogonal transformation a simple thresholding...

  15. Translucent Radiosity: Efficiently Combining Diffuse Inter-Reflection and Subsurface Scattering.

    Science.gov (United States)

    Sheng, Yu; Shi, Yulong; Wang, Lili; Narasimhan, Srinivasa G

    2014-07-01

    It is hard to efficiently model the light transport in scenes with translucent objects for interactive applications. The inter-reflection between objects and their environments and the subsurface scattering through the materials intertwine to produce visual effects like color bleeding, light glows, and soft shading. Monte-Carlo based approaches have demonstrated impressive results but are computationally expensive, and faster approaches model either only inter-reflection or only subsurface scattering. In this paper, we present a simple analytic model that combines diffuse inter-reflection and isotropic subsurface scattering. Our approach extends the classical work in radiosity by including a subsurface scattering matrix that operates in conjunction with the traditional form factor matrix. This subsurface scattering matrix can be constructed using analytic, measurement-based or simulation-based models and can capture both homogeneous and heterogeneous translucencies. Using a fast iterative solution to radiosity, we demonstrate scene relighting and dynamically varying object translucencies at near interactive rates.
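    The core linear-algebra step can be sketched as follows, under the simplifying assumption that the subsurface-scattering matrix S simply adds to the reflectance-weighted form-factor matrix; the iterative solve mirrors classical radiosity relaxation. All matrix values below are illustrative, not measured data.

```python
import numpy as np

def translucent_radiosity(E, F, S, rho, n_iter=200):
    """Fixed-point radiosity solve extended, in the spirit of the
    record, with a subsurface-scattering matrix S acting alongside the
    form-factor matrix F:  B = E + (rho * F + S) B.
    E: emission per patch, F: form factors, rho: diffuse reflectances.
    Converges when the spectral radius of (rho * F + S) is below one."""
    T = rho[:, None] * F + S
    B = E.copy()
    for _ in range(n_iter):
        B = E + T @ B
    return B
```

    For small scenes one can of course solve (I - T) B = E directly; the iteration is what makes near-interactive relighting feasible at scale.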

  16. Reduced multiple empirical kernel learning machine.

    Science.gov (United States)

    Wang, Zhe; Lu, MingZhe; Gao, Daqi

    2015-02-01

    Multiple kernel learning (MKL) is demonstrated to be flexible and effective in depicting heterogeneous data sources since MKL can introduce multiple kernels rather than a single fixed kernel into applications. However, MKL incurs a high time and space complexity in contrast to single kernel learning, which is not desirable in real-world applications. Meanwhile, it is known that the kernel mapping ways of MKL generally have two forms, implicit kernel mapping and empirical kernel mapping (EKM), where the latter has attracted less attention. In this paper, we focus on the MKL with the EKM, and propose a reduced multiple empirical kernel learning machine, named RMEKLM for short. To the best of our knowledge, this is the first work to reduce both time and space complexity of the MKL with EKM. Different from the existing MKL, the proposed RMEKLM adopts the Gauss Elimination technique to extract a set of feature vectors, and it is validated that doing so does not lose much information of the original feature space. Then RMEKLM adopts the extracted feature vectors to span a reduced orthonormal subspace of the feature space, which is visualized in terms of the geometry structure. It can be demonstrated that the spanned subspace is isomorphic to the original feature space, which means that the dot product of two vectors in the original feature space is equal to that of the two corresponding vectors in the generated orthonormal subspace. More importantly, the proposed RMEKLM brings a simpler computation and meanwhile needs less storage space, especially in the processing of testing. Finally, the experimental results show that RMEKLM achieves an efficient and effective performance in terms of both complexity and classification.
The contributions of this paper can be given as follows: (1) by mapping the input space into an orthonormal subspace, the geometry of the generated subspace is visualized; (2) this paper first reduces both the time and space complexity of the EKM-based MKL; (3

  17. Kernel principal component analysis for change detection

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Morton, J.C.

    2008-01-01

    region acquired at two different time points. If change over time does not dominate the scene, the projection of the original two bands onto the second eigenvector will show change over time. In this paper a kernel version of PCA is used to carry out the analysis. Unlike ordinary PCA, kernel PCA with a Gaussian kernel successfully finds the change observations in a case where nonlinearities are introduced artificially.
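    A minimal kernel-PCA sketch in the spirit of this record (Gaussian kernel, eigendecomposition of the double-centered Gram matrix); details such as band stacking and change thresholding from the paper are omitted, and parameter values are illustrative:

```python
import numpy as np

def kernel_pca_scores(X, gamma, n_components=2):
    """Gaussian-kernel PCA: build the Gram matrix, double-center it,
    eigendecompose, and project the samples onto the leading kernel
    principal axes. Returns an (n_samples, n_components) score array."""
    sq = ((X[:, None, :] - X[None, :, :])**2).sum(-1)
    K = np.exp(-gamma * sq)
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                      # double-center the Gram matrix
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[idx], vecs[:, idx]
    return Kc @ vecs / np.sqrt(np.maximum(vals, 1e-12))
```

    For change detection one would stack each pixel's values at the two acquisition times as a row of X; no-change pixels then cluster along the leading component while change pixels stand out in higher-order components.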

  18. Minimally coupled N-particle scattering integral equations

    International Nuclear Information System (INIS)

    Kowalski, K.L.

    1977-01-01

    A concise formalism is developed which permits the efficient representation and generalization of several known techniques for deriving connected-kernel N-particle scattering integral equations. The methods of Kouri, Levin, and Tobocman and Bencze and Redish which lead to minimally coupled integral equations are of special interest. The introduction of channel coupling arrays is characterized in a general manner and the common base of this technique and that of the so-called channel coupling scheme is clarified. It is found that in the Bencze-Redish formalism a particular coupling array has a crucial function but one different from that of the arrays employed by Kouri, Levin, and Tobocman. The apparent dependence of the proof of the minimality of the Bencze-Redish integral equations upon the form of the inhomogeneous term in these equations is eliminated. This is achieved by an investigation of the full (nonminimal) Bencze-Redish kernel. It is shown that the second power of this operator is connected, a result which is needed for the full applicability of the Bencze-Redish formalism. This is used to establish the relationship between the existence of solutions to the homogeneous form of the minimal equations and eigenvalues of the full Bencze-Redish kernel

  19. Solution of the neutron transport problem with anisotropic scattering in cylindrical geometry by the decomposition method

    International Nuclear Information System (INIS)

    Goncalves, G.A.; Bogado Leite, S.Q.; Vilhena, M.T. de

    2009-01-01

    An analytical solution has been obtained for the one-speed stationary neutron transport problem, in an infinitely long cylinder with anisotropic scattering, by the decomposition method. Series expansions of the angular flux distribution are proposed in terms of suitably constructed functions, recursively obtainable from the isotropic solution, to take anisotropy into account. As for the isotropic problem, an accurate closed-form solution was chosen for the problem with internal source and constant incident radiation, obtained from an integral transformation technique and the F_N method

  20. Effect of the single-scattering phase function on light transmission through disordered media with large inhomogeneities

    International Nuclear Information System (INIS)

    Marinyuk, V V; Sheberstov, S V

    2017-01-01

    We calculate the total transmission coefficient (transmittance) of a disordered medium with large (compared to the light wavelength) inhomogeneities. To model highly forward scattering in the medium we take advantage of the Gegenbauer kernel phase function. In the subdiffusion thickness range, the transmittance is shown to be sensitive to the specific form of the single-scattering phase function. The effect reveals itself at grazing angles of incidence and originates from small-angle multiple scattering of light. Our results are in good agreement with numerical solutions of the radiative transfer equation. (paper)
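    For reference, the Gegenbauer kernel (GK) phase function has the form p(mu) ∝ (1 + g² − 2g·mu)^−(α+1) with mu = cos(theta), normalized over solid angle; α = 1/2 recovers the Henyey-Greenstein phase function, and larger α gives a sharper forward peak. A sketch with numerical normalization (the paper's parameter choices are not reproduced here):

```python
import numpy as np

def gegenbauer_phase(cos_theta, g, alpha):
    """Gegenbauer-kernel phase function p(mu), normalized numerically
    so that the integral over solid angle, 2*pi * int_{-1}^{1} p dmu,
    equals one. alpha = 1/2 reduces to Henyey-Greenstein."""
    mu = np.asarray(cos_theta, dtype=float)
    kernel = lambda m: (1.0 + g**2 - 2.0 * g * m) ** (-(alpha + 1.0))
    grid = np.linspace(-1.0, 1.0, 40001)
    f = kernel(grid)
    # trapezoidal normalization over mu, times 2*pi for the azimuth
    norm = 2.0 * np.pi * np.sum((f[1:] + f[:-1]) * 0.5 * np.diff(grid))
    return kernel(mu) / norm
```

    As a sanity check, at α = 1/2 the numerically normalized result matches the closed-form Henyey-Greenstein expression (1 − g²) / (4π (1 + g² − 2g·mu)^{3/2}).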

  1. Texture of low temperature isotropic pyrocarbons

    International Nuclear Information System (INIS)

    Pelissier, Joseph; Lombard, Louis.

    1976-01-01

    Isotropic pyrocarbon deposited on fuel particles was studied by transmission electron microscopy in order to determine its texture. The material consists of an agglomerate of spherical growth features similar to those of carbon black. The spherical growth features are formed from crystallites of turbostratic carbon and their distribution gives an isotropic structure. Neutron irradiation modifies the morphology of the pyrocarbon. The spherical growth features are deformed and the coating becomes strongly anisotropic. The transformation leads to the rupture of the coating caused by strong irradiation doses [fr

  2. Enhanced gluten properties in soft kernel durum wheat

    Science.gov (United States)

    Soft kernel durum wheat is a relatively recent development (Morris et al. 2011 Crop Sci. 51:114). The soft kernel trait exerts profound effects on kernel texture, flour milling including break flour yield, milling energy, and starch damage, and dough water absorption (DWA). With the caveat of reduce...

  3. 7 CFR 981.61 - Redetermination of kernel weight.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Redetermination of kernel weight. 981.61 Section 981... GROWN IN CALIFORNIA Order Regulating Handling Volume Regulation § 981.61 Redetermination of kernel weight. The Board, on the basis of reports by handlers, shall redetermine the kernel weight of almonds...

  4. Adaptive metric kernel regression

    DEFF Research Database (Denmark)

    Goutte, Cyril; Larsen, Jan

    2000-01-01

    Kernel smoothing is a widely used non-parametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this contribution, we propose an algorithm that adapts the input metric used in multivariate regression by minimising a cross-validation estimate of the generalisation error. This makes it possible to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms...

  5. Consistent Estimation of Pricing Kernels from Noisy Price Data

    OpenAIRE

    Vladislav Kargin

    2003-01-01

    If pricing kernels are assumed non-negative then the inverse problem of finding the pricing kernel is well-posed. The constrained least squares method provides a consistent estimate of the pricing kernel. When the data are limited, a new method is suggested: relaxed maximization of the relative entropy. This estimator is also consistent. Keywords: $\\epsilon$-entropy, non-parametric estimation, pricing kernel, inverse problems.

  6. Stable Kernel Representations as Nonlinear Left Coprime Factorizations

    NARCIS (Netherlands)

    Paice, A.D.B.; Schaft, A.J. van der

    1994-01-01

    A representation of nonlinear systems based on the idea of representing the input-output pairs of the system as elements of the kernel of a stable operator has been recently introduced. This has been denoted the kernel representation of the system. In this paper it is demonstrated that the kernel

  7. 7 CFR 981.60 - Determination of kernel weight.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 8 2010-01-01 2010-01-01 false Determination of kernel weight. 981.60 Section 981.60... Regulating Handling Volume Regulation § 981.60 Determination of kernel weight. (a) Almonds for which settlement is made on kernel weight. All lots of almonds, whether shelled or unshelled, for which settlement...

  8. End-use quality of soft kernel durum wheat

    Science.gov (United States)

    Kernel texture is a major determinant of end-use quality of wheat. Durum wheat has very hard kernels. We developed soft kernel durum wheat via Ph1b-mediated homoeologous recombination. The Hardness locus was transferred from Chinese Spring to Svevo durum wheat via back-crossing. ‘Soft Svevo’ had SKC...

  9. Per-Sample Multiple Kernel Approach for Visual Concept Learning

    Directory of Open Access Journals (Sweden)

    Ling-Yu Duan

    2010-01-01

    Learning visual concepts from images is an important yet challenging problem in computer vision and multimedia research areas. Multiple kernel learning (MKL) methods have shown great advantages in visual concept learning. As a visual concept often exhibits great appearance variance, a canonical MKL approach may not generate satisfactory results when a uniform kernel combination is applied over the input space. In this paper, we propose a per-sample multiple kernel learning (PS-MKL) approach to take into account intraclass diversity for improving discrimination. PS-MKL determines sample-wise kernel weights according to kernel functions and training samples. Kernel weights as well as kernel-based classifiers are jointly learned. For efficient learning, PS-MKL employs a sample selection strategy. Extensive experiments are carried out over three benchmarking datasets of different characteristics including Caltech101, WikipediaMM, and Pascal VOC'07. PS-MKL has achieved encouraging performance, comparable to the state of the art, outperforming a canonical MKL.

  10. Per-Sample Multiple Kernel Approach for Visual Concept Learning

    Directory of Open Access Journals (Sweden)

    Tian Yonghong

    2010-01-01

    Learning visual concepts from images is an important yet challenging problem in computer vision and multimedia research areas. Multiple kernel learning (MKL) methods have shown great advantages in visual concept learning. As a visual concept often exhibits great appearance variance, a canonical MKL approach may not generate satisfactory results when a uniform kernel combination is applied over the input space. In this paper, we propose a per-sample multiple kernel learning (PS-MKL) approach to take into account intraclass diversity for improving discrimination. PS-MKL determines sample-wise kernel weights according to kernel functions and training samples. Kernel weights as well as kernel-based classifiers are jointly learned. For efficient learning, PS-MKL employs a sample selection strategy. Extensive experiments are carried out over three benchmarking datasets of different characteristics including Caltech101, WikipediaMM, and Pascal VOC'07. PS-MKL has achieved encouraging performance, comparable to the state of the art, outperforming a canonical MKL.

  11. A theoretical insight into H accumulation and bubble formation by applying isotropic strain on the W-H system under a fusion environment

    Science.gov (United States)

    Han, Quan-Fu; Liu, Yue-Lin; Zhang, Ying; Ding, Fang; Lu, Guang-Hong

    2018-04-01

    The solubility and bubble formation of hydrogen (H) in tungsten (W) are crucial factors for the application of W as a plasma-facing component under a fusion environment, but the data and mechanisms are presently scattered, indicating that some important factors might have been neglected. High-energy neutron irradiation of W inevitably causes local strain, which may change the solubility of H in W. Here, we performed first-principles calculations to predict the H solution behavior under isotropic strain, combined with the temperature effect, in W, and found that the H solubility in the interstitial lattice can be promoted/impeded by isotropic tensile/compressive strain over the temperature range 300-1800 K. The calculated H solubility is in good agreement with experiment. Taken together with our previous results on anisotropic strain, these findings show that, with the exception of isotropic compression, both isotropic tension and anisotropic tension/compression enhance H solution. This reveals an important physical implication for H accumulation and bubble formation in W: strain can enhance H solubility, resulting in the preliminary nucleation of an H bubble that further causes local strain of the W lattice around the bubble, which in turn increases the H solubility in the strained region and promotes continuous growth of the H bubble via a chain-reaction effect. This result can also explain H bubble formation even when no radiation damage is produced in W exposed to low-energy H plasma.

  12. Discrete non-parametric kernel estimation for global sensitivity analysis

    International Nuclear Information System (INIS)

    Senga Kiessé, Tristan; Ventura, Anne

    2016-01-01

    This work investigates the discrete kernel approach for evaluating the contribution of the variance of discrete input variables to the variance of model output, via analysis of variance (ANOVA) decomposition. Until recently only the continuous kernel approach has been applied as a metamodeling approach within the sensitivity analysis framework, for both discrete and continuous input variables. The discrete kernel estimation is now known to be suitable for smoothing discrete functions. We present a discrete non-parametric kernel estimator of the ANOVA decomposition of a given model. An estimator of sensitivity indices is also presented with its asymptotic convergence rate. Some simulations on a test function analysis and a real case study from agriculture have shown that the discrete kernel approach outperforms the continuous kernel one for evaluating the contribution of moderate or most influential discrete parameters to the model output. - Highlights: • We study a discrete kernel estimation for sensitivity analysis of a model. • A discrete kernel estimator of ANOVA decomposition of the model is presented. • Sensitivity indices are calculated for discrete input parameters. • An estimator of sensitivity indices is also presented with its convergence rate. • An application is realized for improving the reliability of environmental models.

  13. Bound states and scattering in four-body systems

    International Nuclear Information System (INIS)

    Narodetsky, I.M.

    1979-01-01

    The purpose of this review is to provide a clear and elementary introduction to the integral equation method and to demonstrate explicitly its usefulness for physical applications. The existing results concerning the application of the integral equation technique to four-nucleon bound states and scattering are reviewed. The treatment is based on the quasiparticle approach, which permits a simple interpretation of the equations in terms of quasiparticle scattering. The mathematical basis for the quasiparticle approach is the Hilbert-Schmidt theorem of Fredholm integral equation theory. This paper contains a detailed discussion of the Hilbert-Schmidt expansion as applied to the 2-particle amplitudes and to the 3 + 1 and 2 + 2 amplitudes which are the kernels of the four-body equations. The review essentially contains the discussion of the four-body quasiparticle equations and the results obtained for bound states and scattering.

  14. Deep Restricted Kernel Machines Using Conjugate Feature Duality.

    Science.gov (United States)

    Suykens, Johan A K

    2017-08-01

    The aim of this letter is to propose a theory of deep restricted kernel machines offering new foundations for deep learning with kernel machines. From the viewpoint of deep learning, it is partially related to restricted Boltzmann machines, which are characterized by visible and hidden units in a bipartite graph without hidden-to-hidden connections, and to deep learning extensions such as deep belief networks and deep Boltzmann machines. From the viewpoint of kernel machines, it includes least squares support vector machines for classification and regression, kernel principal component analysis (PCA), matrix singular value decomposition, and Parzen-type models. A key element is to first characterize these kernel machines in terms of so-called conjugate feature duality, yielding a representation with visible and hidden units. It is shown how this is related to the energy form in restricted Boltzmann machines, with continuous variables in a nonprobabilistic setting. In this new framework of so-called restricted kernel machine (RKM) representations, the dual variables correspond to hidden features. Deep RKMs are obtained by coupling the RKMs. The method is illustrated for a deep RKM consisting of three levels: a least squares support vector machine regression level and two kernel PCA levels. In its primal form, deep feedforward neural networks can also be trained within this framework.

  15. Improved modeling of clinical data with kernel methods.

    Science.gov (United States)

    Daemen, Anneleen; Timmerman, Dirk; Van den Bosch, Thierry; Bottomley, Cecilia; Kirk, Emma; Van Holsbeke, Caroline; Valentin, Lil; Bourne, Tom; De Moor, Bart

    2012-02-01

    Despite the rise of high-throughput technologies, clinical data such as age, gender and medical history guide clinical management for most diseases and examinations. To improve clinical management, available patient information should be fully exploited. This requires appropriate modeling of relevant parameters. When kernel methods are used, traditional kernel functions such as the linear kernel are often applied to the set of clinical parameters. These kernel functions, however, have their disadvantages due to the specific characteristics of clinical data, which are a mix of variable types, each variable with its own range. We propose a new kernel function specifically adapted to the characteristics of clinical data. The clinical kernel function provides a better representation of patients' similarity by equalizing the influence of all variables and taking into account the range r of the variables. Moreover, it is robust with respect to changes in r. Incorporated in a least squares support vector machine, the new kernel function results in significantly improved diagnosis, prognosis and prediction of therapy response. This is illustrated on four clinical data sets within gynecology, with an average increase in test area under the ROC curve (AUC) of 0.023, 0.021, 0.122 and 0.019, respectively. Moreover, when combining clinical parameters and expression data in three case studies on breast cancer, results improved overall with use of the new kernel function and when considering both data types in a weighted fashion, with a larger weight assigned to the clinical parameters. The increase in AUC with respect to a standard kernel function and/or unweighted data combination was at most 0.127, 0.042 and 0.118 for the three case studies. For clinical data consisting of variables of different types, the proposed kernel function, which takes into account the type and range of each variable, has been shown to be a better alternative for linear and non-linear classification problems.
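    The range-equalizing idea can be sketched as follows: each continuous variable contributes (r − |x − z|)/r with r the variable's range, each nominal variable contributes an equality indicator, and the overall kernel averages over variables so that no single variable dominates. Treat the exact per-variable forms here as an illustrative reading of the record, not a verbatim reproduction of the proposed kernel.

```python
import numpy as np

def clinical_kernel(X, Z, ranges, nominal=()):
    """Range-equalized kernel for mixed-type clinical data.
    Continuous/ordinal variable v: k_v = (r_v - |x_v - z_v|) / r_v.
    Nominal variable v (index listed in `nominal`): equality indicator.
    The overall kernel is the mean over variables, so each variable
    contributes on the same [0, 1] scale."""
    n_var = X.shape[1]
    K = np.zeros((len(X), len(Z)))
    for v in range(n_var):
        xv, zv = X[:, v][:, None], Z[:, v][None, :]
        if v in nominal:
            K += (xv == zv).astype(float)
        else:
            K += (ranges[v] - np.abs(xv - zv)) / ranges[v]
    return K / n_var
```

    By construction k(x, x) = 1 for every patient, and two patients differing in only one variable keep a high similarity regardless of that variable's raw scale.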

  16. Linear and kernel methods for multi- and hypervariate change detection

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Canty, Morton J.

    2010-01-01

    . Principal component analysis (PCA) as well as maximum autocorrelation factor (MAF) and minimum noise fraction (MNF) analyses of IR-MAD images, both linear and kernel-based (which are nonlinear), may further enhance change signals relative to no-change background. The kernel versions are based on a dual...... formulation, also termed Q-mode analysis, in which the data enter into the analysis via inner products in the Gram matrix only. In the kernel version the inner products of the original data are replaced by inner products between nonlinear mappings into higher dimensional feature space. Via kernel substitution......, also known as the kernel trick, these inner products between the mappings are in turn replaced by a kernel function and all quantities needed in the analysis are expressed in terms of the kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel principal component...
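The dual (Q-mode) construction described above, in which the data enter only through inner products and a kernel function replaces inner products of the nonlinear mappings, can be sketched as a minimal kernel PCA (an illustration with a Gaussian kernel on synthetic data, not the authors' implementation):

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=1.0):
    """Kernel PCA in the dual: eigendecompose the centered Gram matrix,
    so the data enter only through inner products (the kernel trick)."""
    n = X.shape[0]
    # Gaussian kernel matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    # Double-centering the Gram matrix is equivalent to centering in feature space
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J
    vals, vecs = np.linalg.eigh(Kc)              # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]  # keep the largest ones
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
scores = kernel_pca(X, n_components=2)
print(scores.shape)  # (50, 2)
```

The nonlinear mapping itself never appears: swapping the Gaussian kernel line for any other positive semi-definite kernel changes the implicit feature space without touching the rest of the analysis.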

  17. Kernel based orthogonalization for change detection in hyperspectral images

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    function and all quantities needed in the analysis are expressed in terms of this kernel function. This means that we need not know the nonlinear mappings explicitly. Kernel PCA and MNF analyses handle nonlinearities by implicitly transforming data into high (even infinite) dimensional feature space via...... analysis all 126 spectral bands of the HyMap are included. Changes on the ground are most likely due to harvest having taken place between the two acquisitions and solar effects (both solar elevation and azimuth have changed). Both types of kernel analysis emphasize change and unlike kernel PCA, kernel MNF...

  18. Semi-Supervised Kernel PCA

    DEFF Research Database (Denmark)

    Walder, Christian; Henao, Ricardo; Mørup, Morten

    We present three generalisations of Kernel Principal Components Analysis (KPCA) which incorporate knowledge of the class labels of a subset of the data points. The first, MV-KPCA, penalises within-class variances similarly to Fisher discriminant analysis. The second, LSKPCA, is a hybrid of least...... squares regression and kernel PCA. The final, LR-KPCA, is an iteratively reweighted version of the previous which achieves a sigmoid loss function on the labeled points. We provide a theoretical risk bound as well as illustrative experiments on real and toy data sets....

  19. Adaptive Metric Kernel Regression

    DEFF Research Database (Denmark)

    Goutte, Cyril; Larsen, Jan

    1998-01-01

    Kernel smoothing is a widely used nonparametric pattern recognition technique. By nature, it suffers from the curse of dimensionality and is usually difficult to apply to high input dimensions. In this paper, we propose an algorithm that adapts the input metric used in multivariate regression...... by minimising a cross-validation estimate of the generalisation error. This allows one to automatically adjust the importance of different dimensions. The improvement in terms of modelling performance is illustrated on a variable selection task where the adaptive metric kernel clearly outperforms the standard...

  20. 21 CFR 176.350 - Tamarind seed kernel powder.

    Science.gov (United States)

    2010-04-01

    ... 21 Food and Drugs 3 2010-04-01 2009-04-01 true Tamarind seed kernel powder. 176.350 Section 176... Substances for Use Only as Components of Paper and Paperboard § 176.350 Tamarind seed kernel powder. Tamarind seed kernel powder may be safely used as a component of articles intended for use in producing...

  1. Dense Medium Machine Processing Method for Palm Kernel/ Shell ...

    African Journals Online (AJOL)

    ADOWIE PERE

    Cracked palm kernel is a mixture of kernels, broken shells, dusts and other impurities. In ... machine processing method using dense medium, a separator, a shell collector and a kernel .... efficiency, ease of maintenance and uniformity of.

  2. High-throughput isotropic mapping of whole mouse brain using multi-view light-sheet microscopy

    Science.gov (United States)

    Nie, Jun; Li, Yusha; Zhao, Fang; Ping, Junyu; Liu, Sa; Yu, Tingting; Zhu, Dan; Fei, Peng

    2018-02-01

    Light-sheet fluorescence microscopy (LSFM) uses an additional laser sheet to illuminate selected planes of the sample, thereby enabling three-dimensional imaging at high spatio-temporal resolution. These advantages make LSFM a promising tool for high-quality brain visualization. However, even with LSFM, the spatial resolution remains insufficient to resolve neural structures across a mesoscale whole mouse brain in three dimensions. At the same time, thick-tissue scattering prevents clear observation deep inside the brain. Here we use a multi-view LSFM strategy to address this challenge, surpassing the resolution limit of a standard light-sheet microscope over a large field-of-view (FOV). As demonstrated by imaging an optically-cleared mouse brain labelled with thy1-GFP, we achieve a brain-wide, isotropic cellular resolution of 3 μm. Besides the resolution enhancement, multi-view brain imaging can also recover signals otherwise lost to deep-tissue scattering and attenuation. As a result, long-distance neural projections across encephalic regions can be identified and annotated.

  3. Isotropic compression of cohesive-frictional particles with rolling resistance

    NARCIS (Netherlands)

    Luding, Stefan; Benz, Thomas; Nordal, Steinar

    2010-01-01

    Cohesive-frictional and rough powders are the subject of this study. The behavior under isotropic compression is examined for different material properties involving Coulomb friction, rolling-resistance and contact-adhesion. Under isotropic compression, the density continuously increases according

  4. Multivariate and semiparametric kernel regression

    OpenAIRE

    Härdle, Wolfgang; Müller, Marlene

    1997-01-01

    The paper gives an introduction to theory and application of multivariate and semiparametric kernel smoothing. Multivariate nonparametric density estimation is an often used pilot tool for examining the structure of data. Regression smoothing helps in investigating the association between covariates and responses. We concentrate on kernel smoothing using local polynomial fitting which includes the Nadaraya-Watson estimator. Some theory on the asymptotic behavior and bandwidth selection is pro...
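The Nadaraya-Watson estimator mentioned above is the local polynomial fit of degree zero: a kernel-weighted average of the responses. A minimal one-dimensional sketch (illustrative data and bandwidth, not from the paper):

```python
import numpy as np

def nadaraya_watson(x_query, X, y, h=0.5):
    """m(x) = sum_i K_h(x - x_i) y_i / sum_i K_h(x - x_i),
    a kernel-weighted average with a Gaussian kernel of bandwidth h."""
    w = np.exp(-0.5 * ((x_query - X) / h) ** 2)  # kernel weights
    return float(np.sum(w * y) / np.sum(w))

X = np.linspace(0.0, 2.0 * np.pi, 200)
y = np.sin(X)                                    # noise-free toy responses
est = nadaraya_watson(np.pi / 2, X, y, h=0.2)
print(est)  # close to sin(pi/2) = 1, pulled down slightly by local averaging
```

The bandwidth h plays the role discussed in the abstract: too small and the estimate is noisy, too large and curvature is smoothed away, which is why bandwidth selection gets its own treatment there.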

  5. Notes on the gamma kernel

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole E.

    The density function of the gamma distribution is used as shift kernel in Brownian semistationary processes modelling the timewise behaviour of the velocity in turbulent regimes. This report presents exact and asymptotic properties of the second order structure function under such a model......, and relates these to results of von Kármán and Howarth. But first it is shown that the gamma kernel is interpretable as a Green's function....

  6. Point spread function due to multiple scattering of light in the atmosphere

    International Nuclear Information System (INIS)

    Pękala, J.; Wilczyński, H.

    2013-01-01

    The atmospheric scattering of light has a significant influence on the results of optical observations of air showers. It causes attenuation of direct light from the shower, but also contributes a delayed signal to the observed light. The scattering of light should therefore be accounted for, both in simulations of air shower detection and in reconstruction of observed events. In this work a Monte Carlo simulation of multiple scattering of light has been used to determine the contribution of the scattered light in observations of a point source of light. Results of the simulations and a parameterization of the angular distribution of the scattered light contribution to the observed signal (the point spread function) are presented. -- Author-Highlights: •Analysis of atmospheric scattering of light from an isotropic point source. •Different geometries and atmospheric conditions were investigated. •A parameterization of scattered light distribution has been developed. •The parameterization allows one to easily account for the light scattering in air. •The results will be useful in analyses of observations of extensive air showers.

  7. An efficient cost function for the optimization of an n-layered isotropic cloaked cylinder

    International Nuclear Information System (INIS)

    Paul, Jason V; Collins, Peter J; Coutu, Ronald A Jr

    2013-01-01

    In this paper, we present an efficient cost function for optimizing n-layered isotropic cloaked cylinders. Cost function efficiency is achieved by extracting the expression for the angle-independent scatterer contribution of an associated Green's function. Since this cost function is not a function of angle, accounting for every bistatic angle is not necessary, making it more efficient than other cost functions. With this general and efficient cost function, isotropic cloaked cylinders can be optimized for many layers and material parameters. To demonstrate this, optimized cloaked cylinders made of 10, 20 and 30 equal-thickness layers are presented for TE and TM incidence. Furthermore, we study the effect layer thickness has on optimized cloaks by optimizing a 10-layer cloaked cylinder over the material parameters and individual layer thicknesses. The optimized material parameters in this effort do not exhibit the dual nature that is evident in the ideal transformation optics design. This indicates that the inevitable field penetration and subsequent PEC boundary condition at the cylinder must be taken into account for an optimal cloaked cylinder design. Furthermore, a more effective cloaked cylinder can be designed by optimizing both layer thickness and material parameters than by additional layers alone. (paper)

  8. Convergence of barycentric coordinates to barycentric kernels

    KAUST Repository

    Kosinka, Jiří

    2016-02-12

    We investigate the close correspondence between barycentric coordinates and barycentric kernels from the point of view of the limit process when finer and finer polygons converge to a smooth convex domain. We show that any barycentric kernel is the limit of a set of barycentric coordinates and prove that the convergence rate is quadratic. Our convergence analysis extends naturally to barycentric interpolants and mappings induced by barycentric coordinates and kernels. We verify our theoretical convergence results numerically on several examples.

  9. Convergence of barycentric coordinates to barycentric kernels

    KAUST Repository

    Kosinka, Jiří; Barton, Michael

    2016-01-01

    We investigate the close correspondence between barycentric coordinates and barycentric kernels from the point of view of the limit process when finer and finer polygons converge to a smooth convex domain. We show that any barycentric kernel is the limit of a set of barycentric coordinates and prove that the convergence rate is quadratic. Our convergence analysis extends naturally to barycentric interpolants and mappings induced by barycentric coordinates and kernels. We verify our theoretical convergence results numerically on several examples.

  10. Hadamard Kernel SVM with applications for breast cancer outcome predictions.

    Science.gov (United States)

    Jiang, Hao; Ching, Wai-Ki; Cheung, Wai-Shun; Hou, Wenpin; Yin, Hong

    2017-12-21

    Breast cancer is one of the leading causes of death for women, so it is of great necessity to develop effective methods for breast cancer detection and diagnosis. Recent studies have focused on gene-based signatures for outcome predictions. Kernel SVM, for its discriminative power in dealing with small-sample pattern recognition problems, has attracted a lot of attention. But how to select or construct an appropriate kernel for a specified problem still needs further investigation. Here we propose a novel kernel (the Hadamard Kernel) in conjunction with Support Vector Machines (SVMs) to address the problem of breast cancer outcome prediction using gene expression data. The Hadamard Kernel outperforms the classical kernels and the correlation kernel in terms of Area under the ROC Curve (AUC) values on a number of real-world data sets adopted to test the performance of the different methods. Hadamard Kernel SVM is effective for breast cancer prediction, in terms of both prognosis and diagnosis, and may benefit patients by guiding therapeutic options. Apart from that, it would be a valuable addition to the current SVM kernel families. We hope it will contribute to the wider biology and related communities.

  11. Aflatoxin contamination of developing corn kernels.

    Science.gov (United States)

    Amer, M A

    2005-01-01

    Preharvest contamination of corn with aflatoxin is a serious problem. Some environmental and cultural factors responsible for infection and subsequent aflatoxin production were investigated in this study. Stage of growth and location of kernels on corn ears were found to be among the important factors in the process of kernel infection with A. flavus & A. parasiticus. The results showed a positive correlation between the stage of growth and kernel infection. Treatment of corn with aflatoxin reduced germination, protein and total nitrogen contents. Total and reducing soluble sugars increased in corn kernels in response to infection. Sucrose and protein content were reduced in the case of both pathogens. Shoot length, seedling fresh weight and seedling dry weight were also affected. Both pathogens induced a reduction of starch content. Healthy corn seedlings treated with aflatoxin solution were badly affected: their leaves became yellow and then turned brown with further incubation. Moreover, their total chlorophyll and protein contents showed a pronounced decrease. On the other hand, total phenolic compounds increased. Histopathological studies indicated that A. flavus & A. parasiticus could colonize corn silks and invade developing kernels. Germination of A. flavus spores occurred and hyphae spread rapidly across the silk, producing extensive growth and lateral branching. Conidiophores and conidia formed in and on the corn silk. Temperature and relative humidity greatly influenced the growth of A. flavus & A. parasiticus and aflatoxin production.

  12. Advanced sources and optical components for the McStas neutron scattering instrument simulation package

    DEFF Research Database (Denmark)

    Farhi, E.; Monzat, C.; Arnerin, R.

    2014-01-01

    -up, including lenses and prisms. A new library for McStas adds the ability to describe any geometrical arrangement as a set of polygons. This feature has been implemented in most sample scattering components such as Single_crystal, Incoherent, Isotropic_Sqw (liquids/amorphous/powder), PowderN as well...

  13. Anisotropic kernel p(μ → μ') for transport calculations of elastically scattered neutrons

    International Nuclear Information System (INIS)

    Stevenson, B.

    1985-01-01

    Literature in the area of anisotropic neutron scattering is by no means lacking. Attention, however, is usually devoted to the solution of some particular neutron transport problem, and the model employed is at best approximate. The present approach to the problem in general is classically exact and may be of some particular value to individuals seeking exact numerical results in transport calculations. For neutrons originally directed along the unit vector Omega, it attempts the evaluation of p(theta'), defined such that p(theta') d theta' is the fraction of scattered neutrons that emerges in the vicinity of a cone, i.e., having been scattered to angles between theta' and theta' + d theta' with the axis of preferred orientation i; Omega makes an angle theta with i. The relative simplicity of the final form of the solution for hydrogen, in spite of the complicated nature of the limits involved, is a trade-off that truly is not necessary. The exact general solution presented here in integral form has exceedingly simple limits, i.e., 0 ≤ theta' ≤ π regardless of the material involved; but the form of the final solution is extraordinarily complicated.

  14. Kernel Korner : The Linux keyboard driver

    NARCIS (Netherlands)

    Brouwer, A.E.

    1995-01-01

    Our Kernel Korner series continues with an article describing the Linux keyboard driver. This article is not for "Kernel Hackers" only--in fact, it will be most useful to those who wish to use their own keyboard to its fullest potential, and those who want to write programs to take advantage of the

  15. The heating of UO_2 kernels in argon gas medium on the physical properties of sintered UO_2 kernels

    International Nuclear Information System (INIS)

    Damunir; Sri Rinanti Susilowati; Ariyani Kusuma Dewi

    2015-01-01

    The effect of heating UO_2 kernels in an argon gas medium on the physical properties of sintered UO_2 kernels was studied. The heating was conducted in a bed-type sintering reactor. The sample used consisted of UO_2 kernels obtained from reduction at 800 °C for 3 hours, with a density of 8.13 g/cm^3, porosity of 0.26, O/U ratio of 2.05, diameter of 1146 μm and sphericity of 1.05. The sample was placed in the sintering reactor, which was then evacuated and flushed with argon gas at 180 mmHg pressure to remove air from the reactor. After that, cooling water and argon gas were flowed continuously at a pressure of 5 mPa and a rate of 1.5 liter/minute. The reactor temperature was varied between 1200 and 1500 °C and the heating time between 1 and 4 hours. The sintered UO_2 kernels were analyzed in terms of their physical properties, including density, porosity, diameter, sphericity and specific surface area. The density was measured using a pycnometer with CCl_4 solution. The porosity was determined using the Haynes equation. The diameter and sphericity were measured using a Dino-Lite microscope. The specific surface area was determined using a Nova-1000 surface area meter. The results showed that heating UO_2 kernels in an argon gas medium influenced the physical properties of the sintered UO_2 kernels. The best conditions were obtained at a temperature of 1400 °C and a heating time of 2 hours, producing sintered UO_2 kernels with a density of 10.14 g/ml, porosity of 7 %, diameter of 893 μm, sphericity of 1.07 and specific surface area of 4.68 m^2/g, with a solidification shrinkage of 22 %. (author)

  16. Contact mechanics and friction for transversely isotropic viscoelastic materials

    NARCIS (Netherlands)

    Mokhtari, Milad; Schipper, Dirk J.; Vleugels, N.; Noordermeer, Jacobus W.M.; Yoshimoto, S.; Hashimoto, H.

    2015-01-01

    Transversely isotropic materials are a unique group of materials whose properties are the same along two of the principal axes of a Cartesian coordinate system. Various natural and artificial materials behave effectively as transversely isotropic elastic solids. Several materials can be classified

  17. Realized kernels in practice

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole Eiler; Hansen, P. Reinhard; Lunde, Asger

    2009-01-01

    Realized kernels use high-frequency data to estimate daily volatility of individual stock prices. They can be applied to either trade or quote data. Here we provide the details of how we suggest implementing them in practice. We compare the estimates based on trade and quote data for the same stock...... and find a remarkable level of agreement. We identify some features of the high-frequency data which are challenging for realized kernels. They are when there are local trends in the data, over periods of around 10 minutes, where the prices and quotes are driven up or down. These can be associated......
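In its basic form, a realized kernel estimates daily volatility as a weighted sum of realized autocovariances of intraday returns, with a weight function that damps the noise-contaminated higher-order lags. A minimal sketch with the Parzen weight function (the returns are simulated and the bandwidth H is fixed for illustration, rather than chosen in the data-driven way the authors discuss):

```python
import numpy as np

def parzen(x):
    """Parzen weight function on [0, 1], a common choice for realized kernels."""
    if x <= 0.5:
        return 1 - 6 * x ** 2 + 6 * x ** 3
    return 2 * (1 - x) ** 3

def realized_kernel(returns, H):
    """Realized kernel: weighted sum of realized autocovariances of
    high-frequency returns, damping lags contaminated by microstructure noise."""
    r = np.asarray(returns)
    rk = np.sum(r * r)                            # gamma_0, the realized variance
    for h in range(1, H + 1):
        gamma_h = np.sum(r[h:] * r[:-h])          # h-th realized autocovariance
        rk += 2 * parzen(h / (H + 1)) * gamma_h   # gamma_h + gamma_{-h}
    return float(rk)

rng = np.random.default_rng(1)
r = rng.normal(0.0, 0.001, size=390)   # hypothetical 1-minute returns for one day
print(realized_kernel(r, H=10))
```

With H = 0 this collapses to plain realized variance; the kernel-weighted lags are what absorb the serial correlation induced by market-microstructure noise.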

  18. Anatomically-aided PET reconstruction using the kernel method.

    Science.gov (United States)

    Hutchcroft, Will; Wang, Guobao; Chen, Kevin T; Catana, Ciprian; Qi, Jinyi

    2016-09-21

    This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.

  19. Embedded real-time operating system micro kernel design

    Science.gov (United States)

    Cheng, Xiao-hui; Li, Ming-qiang; Wang, Xin-zheng

    2005-12-01

    Embedded systems usually require real-time behavior. Based on an 8051 microcontroller, an embedded real-time operating system micro kernel is proposed, consisting of six parts: critical section handling, task scheduling, interrupt handling, semaphore and message mailbox communication, clock management and memory management. The CPU and other resources are distributed among tasks rationally according to their importance and urgency. The design proposed here provides the position, definition, function and principle of the micro kernel. The kernel runs on an ATMEL AT89C51 microcontroller platform. Simulation results prove that the designed micro kernel is stable and reliable and has quick response while operating in an application system.

  20. Kernel Temporal Differences for Neural Decoding

    Science.gov (United States)

    Bae, Jihye; Sanchez Giraldo, Luis G.; Pohlmeyer, Eric A.; Francis, Joseph T.; Sanchez, Justin C.; Príncipe, José C.

    2015-01-01

    We study the feasibility and capability of the kernel temporal difference KTD(λ) algorithm for neural decoding. KTD(λ) is an online, kernel-based learning algorithm, which has been introduced to estimate value functions in reinforcement learning. This algorithm combines kernel-based representations with the temporal difference approach to learning. One of our key observations is that by using strictly positive definite kernels, the algorithm's convergence can be guaranteed for policy evaluation. The algorithm's nonlinear functional approximation capabilities are shown in both simulations of policy evaluation and neural decoding problems (policy improvement). KTD can handle high-dimensional neural states containing spatial-temporal information at a reasonable computational complexity, allowing real-time applications. When the algorithm seeks a proper mapping between a monkey's neural states and desired positions of a computer cursor or a robot arm, in both open-loop and closed-loop experiments, it can effectively learn the neural-state-to-action mapping. Finally, a visualization of the coadaptation process between the decoder and the subject shows the algorithm's capabilities in reinforcement learning brain machine interfaces. PMID:25866504
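A stripped-down illustration of the idea (a kernel TD(0) update on a toy transition, not the authors' full KTD(λ) algorithm): the value function is maintained as a kernel expansion over visited states, and each temporal-difference error appends a new kernel center.

```python
import numpy as np

def gaussian_k(a, b, sigma=1.0):
    """Gaussian (strictly positive definite) kernel between two states."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.exp(-np.sum((a - b) ** 2) / (2 * sigma ** 2)))

class KernelTD:
    """Kernel TD(0) sketch: V(s) = sum_i alpha_i k(c_i, s); every observed
    transition appends a new center weighted by its TD error."""
    def __init__(self, eta=0.5, gamma=0.9):
        self.eta, self.gamma = eta, gamma
        self.centers, self.alphas = [], []

    def value(self, s):
        return sum(a * gaussian_k(c, s) for c, a in zip(self.centers, self.alphas))

    def update(self, s, reward, s_next):
        delta = reward + self.gamma * self.value(s_next) - self.value(s)  # TD error
        self.centers.append(s)
        self.alphas.append(self.eta * delta)

agent = KernelTD()
for _ in range(200):                  # replay one transition toward the fixed point
    agent.update([0.0], 1.0, [1.0])   # state 0, reward 1, next state 1
print(agent.value([0.0]))             # approaches 1 / (1 - 0.9 * exp(-0.5))
```

The strictly positive definite kernel is what the abstract's convergence observation relies on; a practical implementation would also prune or sparsify the growing set of centers.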

  1. Transfer by anisotropic scattering between subsets of the unit sphere of directions in linear transport theory

    International Nuclear Information System (INIS)

    Trombetti, T.

    1990-01-01

    The exact kernel method is presented for linear transport problems with azimuth-dependent angular fluxes. It is based on the evaluation of average scattering densities (ASD's) that fully describe the neutron (or particle) transfer between subsets of the unit sphere of directions by anisotropic scattering. Reciprocity and other ASD functional properties are proved and combined with the symmetry properties of suitable SN quadrature sets. This greatly reduces the number of independent ASD's to be computed and stored. An approach for performing ASD computations with reciprocity checks is presented. ASD expressions of the scattering source for typical 2D geometries are explicitly given. (author)

  2. Classification of maize kernels using NIR hyperspectral imaging

    DEFF Research Database (Denmark)

    Williams, Paul; Kucheryavskiy, Sergey V.

    2016-01-01

    NIR hyperspectral imaging was evaluated to classify maize kernels of three hardness categories: hard, medium and soft. Two approaches, pixel-wise and object-wise, were investigated to group kernels according to hardness. The pixel-wise classification assigned a class to every pixel from individual...... and specificity of 0.95 and 0.93). Both feature extraction methods can be recommended for classification of maize kernels on production scale....

  3. Influence of wheat kernel physical properties on the pulverizing process.

    Science.gov (United States)

    Dziki, Dariusz; Cacak-Pietrzak, Grażyna; Miś, Antoni; Jończyk, Krzysztof; Gawlik-Dziki, Urszula

    2014-10-01

    The physical properties of wheat kernels were determined and related to pulverizing performance by correlation analysis. Nineteen samples of wheat cultivars with a similar level of protein content (11.2-12.8 % w.b.), obtained from an organic farming system, were used for analysis. The kernels (moisture content 10 % w.b.) were pulverized using a laboratory hammer mill equipped with a 1.0 mm round-hole screen. The specific grinding energy ranged from 120 kJ kg(-1) to 159 kJ kg(-1). On the basis of the data obtained, many significant correlations (p < 0.05) were found between the kernel physical properties and the pulverizing process; in particular, the wheat kernel hardness index (obtained on the basis of the Single Kernel Characterization System) and vitreousness correlated significantly and positively with the grinding energy indices and the mass fraction of coarse particles (> 0.5 mm). Among the kernel mechanical properties determined on the basis of a uniaxial compression test, only the rupture force was correlated with the impact grinding results. The results also showed positive and significant relationships between kernel ash content and grinding energy requirements. On the basis of the wheat physical properties, a multiple linear regression was proposed for predicting the average particle size of the pulverized kernel.

  4. Evolution kernel for the Dirac field

    International Nuclear Information System (INIS)

    Baaquie, B.E.

    1982-06-01

    The evolution kernel for the free Dirac field is calculated using the Wilson lattice fermions. We discuss the difficulties due to which this calculation has not been previously performed in the continuum theory. The continuum limit is taken, and the complete energy eigenfunctions as well as the propagator are then evaluated in a new manner using the kernel. (author)

  5. Gradient-based adaptation of general gaussian kernels.

    Science.gov (United States)

    Glasmachers, Tobias; Igel, Christian

    2005-10-01

    Gradient-based optimization of Gaussian kernel functions is considered. The gradient for the adaptation of scaling and rotation of the input space is computed to achieve invariance against linear transformations. This is done by using the exponential map as a parameterization of the kernel parameter manifold. By restricting the optimization to a constant-trace subspace, the kernel size can be controlled. This is, for example, useful to prevent overfitting when minimizing radius-margin generalization performance measures. The concepts are demonstrated by training hard-margin support vector machines on toy data.
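The object being adapted is a general Gaussian kernel k(x, z) = exp(-(x - z)^T M (x - z)), where a full positive-definite matrix M covers both scaling and rotation of the input space. The paper parameterizes M through the exponential map; the sketch below uses the simpler stand-in M = A^T A, which likewise leaves A unconstrained for gradient steps (illustrative only, not the paper's parameterization):

```python
import numpy as np

def general_gaussian_kernel(x, z, A):
    """k(x, z) = exp(-(x - z)^T M (x - z)) with M = A^T A, which is
    positive semi-definite for any matrix A, so A can be adapted freely
    (scaling and rotation of the input space) by gradient descent."""
    d = np.asarray(x, float) - np.asarray(z, float)
    Md = A.T @ (A @ d)          # M d without forming M explicitly
    return float(np.exp(-d @ Md))

A = np.eye(2)                   # identity: the ordinary isotropic Gaussian kernel
print(general_gaussian_kernel([1.0, 0.0], [0.0, 0.0], A))  # exp(-1) ~ 0.3679
```

In the paper's scheme the exponential map additionally makes it easy to restrict updates to a constant-trace subspace, which fixes the overall kernel size while the shape adapts.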

  6. Analog forecasting with dynamics-adapted kernels

    Science.gov (United States)

    Zhao, Zhizhen; Giannakis, Dimitrios

    2016-09-01

    Analog forecasting is a nonparametric technique introduced by Lorenz in 1969 which predicts the evolution of states of a dynamical system (or observables defined on the states) by following the evolution of the sample in a historical record of observations which most closely resembles the current initial data. Here, we introduce a suite of forecasting methods which improve traditional analog forecasting by combining ideas from kernel methods developed in harmonic analysis and machine learning and state-space reconstruction for dynamical systems. A key ingredient of our approach is to replace single-analog forecasting with weighted ensembles of analogs constructed using local similarity kernels. The kernels used here employ a number of dynamics-dependent features designed to improve forecast skill, including Takens’ delay-coordinate maps (to recover information in the initial data lost through partial observations) and a directional dependence on the dynamical vector field generating the data. Mathematically, our approach is closely related to kernel methods for out-of-sample extension of functions, and we discuss alternative strategies based on the Nyström method and the multiscale Laplacian pyramids technique. We illustrate these techniques in applications to forecasting in a low-order deterministic model for atmospheric dynamics with chaotic metastability, and interannual-scale forecasting in the North Pacific sector of a comprehensive climate model. We find that forecasts based on kernel-weighted ensembles have significantly higher skill than the conventional approach following a single analog.
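The core replacement of single-analog forecasting by kernel-weighted ensembles can be sketched as follows (a plain Gaussian similarity kernel on a toy record stands in for the paper's dynamics-adapted kernels):

```python
import numpy as np

def kernel_analog_forecast(x0, history, lead, bandwidth=1.0):
    """Weighted-ensemble analog forecast: average the 'lead'-step future of
    every historical state, weighted by its kernel similarity to the
    current initial data x0."""
    past = history[:-lead]                   # states whose future is in the record
    future = history[lead:]                  # the same states 'lead' steps later
    d2 = np.sum((past - x0) ** 2, axis=1)    # squared distances to initial data
    w = np.exp(-d2 / bandwidth ** 2)         # local similarity kernel
    w /= w.sum()
    return w @ future                        # kernel-weighted ensemble forecast

# Toy historical record: states on a slowly traversed circle (hypothetical data)
t = np.linspace(0.0, 40.0 * np.pi, 4000)
history = np.column_stack([np.cos(t), np.sin(t)])
x0 = np.array([1.0, 0.0])
print(kernel_analog_forecast(x0, history, lead=10, bandwidth=0.1))
```

Lorenz's original scheme corresponds to letting the bandwidth shrink until only the single closest analog carries weight; the ensemble version trades that brittleness for a smoother, more skillful average, and the paper's refinements then build delay coordinates and directional dependence into the kernel itself.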

  7. A Monte-Carlo simulation of the behaviour of electron swarms in hydrogen using an anisotropic scattering model

    International Nuclear Information System (INIS)

    Blevin, H.A.; Fletcher, J.; Hunter, S.R.

    1978-05-01

    In a recent paper, a Monte-Carlo simulation of electron swarms in hydrogen using an isotropic scattering model was reported. In this previous work discrepancies between the predicted and measured electron transport parameters were observed. In this paper a far more realistic anisotropic scattering model is used. Good agreement between predicted and experimental data is observed and the simulation code has been used to calculate various parameters which are not directly measurable

  8. Manipulation of surface plasmon polariton propagation on isotropic and anisotropic two-dimensional materials coupled to boron nitride heterostructures

    Energy Technology Data Exchange (ETDEWEB)

    Inampudi, Sandeep; Nazari, Mina; Forouzmand, Ali; Mosallaei, Hossein, E-mail: hosseinm@coe.neu.edu [Department of Electrical and Computer Engineering, Northeastern University, 360 Huntington Ave., Boston, Massachusetts 02115 (United States)

    2016-01-14

    We present a comprehensive analysis of surface plasmon polariton dispersion characteristics associated with isotropic and anisotropic two-dimensional atomically thin layered materials (2D sheets) coupled to h-BN heterostructures. A scattering matrix based approach is presented to compute the electromagnetic fields and related dispersion characteristics of stacked layered systems composed of anisotropic 2D sheets and uniaxial bulk materials. We analyze specifically the surface plasmon polariton (SPP) dispersion characteristics in case of isolated and coupled two-dimensional layers with isotropic and anisotropic conductivities. An analysis based on residue theorem is utilized to identify optimum optical parameters (surface conductivity) and geometrical parameters (separation between layers) to maximize the SPP field at a given position. The effect of type and degree of anisotropy on the shapes of iso-frequency curves and propagation characteristics is discussed in detail. The analysis presented in this paper gives an insight to identify optimum setup to enhance the SPP field at a given position and in a given direction on the surface of two-dimensional materials.

  9. Open Problem: Kernel methods on manifolds and metric spaces

    DEFF Research Database (Denmark)

    Feragen, Aasa; Hauberg, Søren

    2016-01-01

    Radial kernels are well-suited for machine learning over general geodesic metric spaces, where pairwise distances are often the only computable quantity available. We have recently shown that geodesic exponential kernels are only positive definite for all bandwidths when the input space has strong linear properties. This negative result hints that radial kernels are perhaps not suitable over geodesic metric spaces after all. Here, however, we present evidence that large intervals of bandwidths exist where geodesic exponential kernels have high probability of being positive definite over finite datasets, while still having significant predictive power. From this we formulate conjectures on the probability of a positive definite kernel matrix for a finite random sample, depending on the geometry of the data space and the spread of the sample.
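
    A quick way to probe the conjecture in this abstract is to build the geodesic exponential kernel on a finite sample from a genuinely curved space and test the kernel matrix for positive (semi-)definiteness across bandwidths. The sketch below is an illustration, not the authors' code; it uses great-circle distances on the unit sphere as the geodesic metric.

```python
import numpy as np

def geodesic_kernel(D, sigma):
    """Geodesic exponential (Gaussian) kernel from a pairwise distance matrix."""
    return np.exp(-D**2 / (2.0 * sigma**2))

def is_positive_definite(K, tol=1e-10):
    """A symmetric kernel matrix is PSD iff its smallest eigenvalue is >= -tol."""
    return np.linalg.eigvalsh(K).min() >= -tol

# Sample points on the unit sphere; great-circle distance is a geodesic metric.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 3))
X /= np.linalg.norm(X, axis=1, keepdims=True)
D = np.arccos(np.clip(X @ X.T, -1.0, 1.0))   # pairwise geodesic distances

# Scan bandwidths: for a finite sample, many bandwidths can still yield a
# positive definite matrix even though the kernel is not PD for all of them.
for sigma in (0.1, 0.5, 1.0, 2.0):
    K = geodesic_kernel(D, sigma)
    print(sigma, is_positive_definite(K))
```

    For very small bandwidths the matrix approaches the identity and is always positive definite; the interesting regime is moderate-to-large bandwidths, where definiteness depends on the sample.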

  10. Genetic dissection of the maize kernel development process via conditional QTL mapping for three developing kernel-related traits in an immortalized F2 population.

    Science.gov (United States)

    Zhang, Zhanhui; Wu, Xiangyuan; Shi, Chaonan; Wang, Rongna; Li, Shengfei; Wang, Zhaohui; Liu, Zonghua; Xue, Yadong; Tang, Guiliang; Tang, Jihua

    2016-02-01

    Kernel development is an important dynamic trait that determines the final grain yield in maize. To dissect the genetic basis of the maize kernel development process, a conditional quantitative trait locus (QTL) analysis was conducted using an immortalized F2 (IF2) population comprising 243 single crosses at two locations over 2 years. Volume (KV) and density (KD) of dried developing kernels, together with kernel weight (KW) at different developmental stages, were used to describe dynamic changes during kernel development. Phenotypic analysis revealed that final KW and KD were determined at DAP22 and KV at DAP29. Unconditional QTL mapping for KW, KV and KD uncovered 97 QTLs at different kernel development stages, of which qKW6b, qKW7a, qKW7b, qKW10b, qKW10c, qKV10a, qKV10b and qKV7 were identified under multiple kernel developmental stages and environments. Among the 26 QTLs detected by conditional QTL mapping, conqKW7a, conqKV7a, conqKV10a, conqKD2, conqKD7 and conqKD8a were conserved between the two mapping methodologies. Furthermore, most of these QTLs were consistent with QTLs and genes for kernel development/grain filling reported in previous studies. These QTLs probably contain major genes associated with the kernel development process, and can be used to improve grain yield and quality through marker-assisted selection.

  11. Kernel-based noise filtering of neutron detector signals

    International Nuclear Information System (INIS)

    Park, Moon Ghu; Shin, Ho Cheol; Lee, Eun Ki

    2007-01-01

    This paper describes recently developed techniques for effective filtering of neutron detector signal noise. Three kinds of noise filters are proposed and their performance is demonstrated for the estimation of reactivity. The tested filters are based on the unilateral kernel filter, the unilateral kernel filter with adaptive bandwidth and the bilateral filter, chosen to show their effectiveness in edge preservation. Filtering performance is compared with conventional low-pass and wavelet filters. The bilateral filter shows a remarkable improvement compared with the unilateral kernel and wavelet filters. The effectiveness and simplicity of the unilateral kernel filter with adaptive bandwidth is also demonstrated by applying it to the reactivity measurement performed during reactor start-up physics tests.
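
    The edge-preserving behaviour that distinguishes the bilateral filter from a plain low-pass filter is easy to see on a synthetic signal. Below is a minimal one-dimensional bilateral filter (a generic sketch; the paper's adaptive-bandwidth variant is not reproduced): weights fall off both with distance in time and with difference in signal value, so a sharp level change, such as a reactivity step, is not smeared.

```python
import numpy as np

def bilateral_filter_1d(x, radius=5, sigma_s=2.0, sigma_r=0.5):
    """1D bilateral filter: each weight combines spatial closeness and
    signal-value similarity, so sharp edges survive the smoothing."""
    n = len(x)
    offsets = np.arange(-radius, radius + 1)
    spatial = np.exp(-offsets**2 / (2 * sigma_s**2))   # spatial kernel
    y = np.empty(n)
    for i in range(n):
        j = np.clip(i + offsets, 0, n - 1)             # clamped neighbourhood
        range_w = np.exp(-(x[j] - x[i])**2 / (2 * sigma_r**2))
        w = spatial * range_w
        y[i] = np.sum(w * x[j]) / np.sum(w)
    return y

# Noisy step signal: a crude stand-in for a detector trace with an edge.
rng = np.random.default_rng(1)
t = np.arange(200)
clean = np.where(t < 100, 0.0, 2.0)
noisy = clean + 0.1 * rng.normal(size=t.size)
smoothed = bilateral_filter_1d(noisy)
```

    On this signal the filter reduces the noise on both plateaus while leaving the step at sample 100 essentially intact, which is exactly the property the abstract highlights.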

  12. A trace ratio maximization approach to multiple kernel-based dimensionality reduction.

    Science.gov (United States)

    Jiang, Wenhao; Chung, Fu-lai

    2014-01-01

    Most dimensionality reduction techniques are based on one metric or one kernel; hence, it is necessary to select an appropriate kernel for kernel-based dimensionality reduction. Multiple kernel learning for dimensionality reduction (MKL-DR) has recently been proposed to learn a kernel from a set of base kernels, which are seen as different descriptions of the data. As MKL-DR does not involve regularization, it might be ill-posed under some conditions and consequently its applications are hindered. This paper proposes a multiple kernel learning framework for dimensionality reduction based on a regularized trace ratio, termed MKL-TR. Our method aims at learning a transformation into a space of lower dimension and a corresponding kernel from the given base kernels, among which some may not be suitable for the given data. The solutions for the proposed framework can be found based on trace ratio maximization. The experimental results demonstrate its effectiveness on benchmark datasets, which include text, image and sound datasets, in supervised, unsupervised and semi-supervised settings. Copyright © 2013 Elsevier Ltd. All rights reserved.
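
    At the core of MKL-TR is the trace ratio problem: maximize Tr(W^T A W)/Tr(W^T B W) over orthonormal projections W. A standard iterative solver for this subproblem alternates between updating the ratio lambda and taking the top eigenvectors of A - lambda*B. The sketch below uses hypothetical toy matrices A and B standing in for the kernel-derived scatter matrices; it is the generic trace-ratio solver, not the full MKL-TR algorithm.

```python
import numpy as np

def trace_ratio(A, B, dim, iters=50):
    """Iteratively maximize Tr(W^T A W) / Tr(W^T B W) over orthonormal W.

    Given the current ratio lam, take W as the top `dim` eigenvectors of
    A - lam*B, then update lam; lam increases monotonically to the optimum."""
    n = A.shape[0]
    W = np.eye(n)[:, :dim]
    lam = 0.0
    for _ in range(iters):
        vals, vecs = np.linalg.eigh(A - lam * B)
        W = vecs[:, -dim:]                  # eigenvectors of largest eigenvalues
        lam_new = np.trace(W.T @ A @ W) / np.trace(W.T @ B @ W)
        if abs(lam_new - lam) < 1e-12:
            break
        lam = lam_new
    return W, lam

# Toy symmetric positive definite matrices standing in for the "between" and
# "within" scatter matrices built from the combined kernel.
rng = np.random.default_rng(2)
M = rng.normal(size=(6, 6)); A = M @ M.T
N = rng.normal(size=(6, 6)); B = N @ N.T + 6 * np.eye(6)
W, lam = trace_ratio(A, B, dim=2)
```

    The returned lam is a stationary value of the ratio and dominates the ratio obtained by any fixed orthonormal projection of the same dimension.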

  13. Predictive Model Equations for Palm Kernel (Elaeis guneensis J ...

    African Journals Online (AJOL)

    Estimated errors of ±0.18 and ±0.2 are envisaged while applying the models for predicting palm kernel and sesame oil colours, respectively. Keywords: Palm kernel, Sesame, Oil Colour, Process Parameters, Model. Journal of Applied Science, Engineering and Technology Vol. 6 (1) 2006 pp. 34-38

  14. Heat kernel analysis for Bessel operators on symmetric cones

    DEFF Research Database (Denmark)

    Möllers, Jan

    2014-01-01

    The heat kernel is explicitly given in terms of a multivariable $I$-Bessel function on $Ω$. Its corresponding heat kernel transform defines a continuous linear operator between $L^p$-spaces. The unitary image of the $L^2$-space under the heat kernel transform is characterized as a weighted Bergmann space.

  15. A multi-scale kernel bundle for LDDMM

    DEFF Research Database (Denmark)

    Sommer, Stefan Horst; Nielsen, Mads; Lauze, Francois Bernard

    2011-01-01

    The Large Deformation Diffeomorphic Metric Mapping framework constitutes a widely used and mathematically well-founded setup for registration in medical imaging. At its heart lies the notion of the regularization kernel, and the choice of kernel greatly affects the results of registrations.

  16. Calculation of the thermal utilization factor in a heterogeneous slab cell scattering neutrons anisotropically

    Energy Technology Data Exchange (ETDEWEB)

    Abdallah, A M; Elsherbiny, E M; Sobhy, M [Reactor departement, nuclear research centre, Inshaas, (Egypt)

    1995-10-01

    The Pn spatial expansion method has been used for calculating the one-speed thermal utilization factor in heterogeneous slab cells in which neutrons may scatter anisotropically. By considering the P1 approximation with a two-term scattering kernel in both the fuel and moderator regions, an analytical expression for the disadvantage factor has been derived. The numerical results obtained have been shown to be much better than those calculated by the usual P1 and P3 approximations, and comparable with those obtained by some exact methods. 3 tabs.

  17. Magnetic Dimer Excitations in Cs3Cr2Cl9 Studied by Neutron Scattering

    DEFF Research Database (Denmark)

    Leuenberger, Bruno; Güdel, Hans U.; Kjems, Jørgen

    1985-01-01

    The energy dispersion of the singlet-triplet dimer excitation in Cs3Cr2Cl9 has been studied by inelastic neutron scattering (INS) at temperatures down to 1.3 K. The results can be accounted for by using a completely isotropic Heisenberg Hamiltonian in the random phase approximation (RPA).

  18. Training Lp norm multiple kernel learning in the primal.

    Science.gov (United States)

    Liang, Zhizheng; Xia, Shixiong; Zhou, Yong; Zhang, Lei

    2013-10-01

    Some multiple kernel learning (MKL) models are usually solved by utilizing the alternating optimization method, where one alternately solves SVMs in the dual and updates kernel weights. Since dual and primal optimization achieve the same aim, it is valuable to explore how to perform Lp norm MKL in the primal. In this paper, we propose an Lp norm multiple kernel learning algorithm in the primal where we resort to the alternating optimization method: one cycle for solving SVMs in the primal by using the preconditioned conjugate gradient method, and the other cycle for learning the kernel weights. It is interesting to note that the kernel weights in our method admit analytical solutions. Most importantly, the proposed method is well suited for the manifold regularization framework in the primal, since solving LapSVMs in the primal is much more effective than solving LapSVMs in the dual. In addition, we carry out a theoretical analysis of multiple kernel learning in the primal in terms of the empirical Rademacher complexity. It is found that optimizing the empirical Rademacher complexity may yield a particular type of kernel weights. Experiments on several datasets are carried out to demonstrate the feasibility and effectiveness of the proposed method. Copyright © 2013 Elsevier Ltd. All rights reserved.
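
    The analytical kernel-weight solution mentioned in the abstract has, in the standard Lp-norm MKL literature, a well-known closed form: under the constraint that the Lp norm of the weight vector equals 1, the optimal weight of kernel m is proportional to ||f_m||^(2/(p+1)), where ||f_m|| is the norm of the per-kernel component of the decision function. The sketch below implements that generic closed form; the paper's exact normalization may differ.

```python
import numpy as np

def lp_mkl_weights(fm_norms, p):
    """Closed-form kernel weights for Lp-norm MKL (constraint sum_m d_m^p = 1):
    d_m proportional to ||f_m||^(2/(p+1)), normalized so the Lp norm is 1."""
    fm = np.asarray(fm_norms, dtype=float)
    d = fm ** (2.0 / (p + 1.0))
    d /= (np.sum(fm ** (2.0 * p / (p + 1.0)))) ** (1.0 / p)
    return d

# Example: three base kernels whose current per-kernel function norms differ;
# the kernel with the largest norm receives the largest weight.
d = lp_mkl_weights([1.0, 2.0, 0.5], p=2.0)
print(d, np.sum(d**2))   # the L2 norm constraint sum d^2 = 1 is satisfied
```

    In the alternating scheme, this update is interleaved with retraining the SVM on the weighted kernel combination.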

  19. Coupling individual kernel-filling processes with source-sink interactions into GREENLAB-Maize.

    Science.gov (United States)

    Ma, Yuntao; Chen, Youjia; Zhu, Jinyu; Meng, Lei; Guo, Yan; Li, Baoguo; Hoogenboom, Gerrit

    2018-02-13

    Failure to account for the variation of kernel growth in a cereal crop simulation model may cause serious deviations in the estimates of crop yield. The goal of this research was to revise the GREENLAB-Maize model to incorporate source- and sink-limited allocation approaches to simulate the dry matter accumulation of individual kernels of an ear (GREENLAB-Maize-Kernel). The model used potential individual kernel growth rates to characterize the individual potential sink demand. The remobilization of non-structural carbohydrates from reserve organs to kernels was also incorporated. Two years of field experiments were conducted to determine the model parameter values and to evaluate the model using two maize hybrids with different plant densities and pollination treatments. Detailed observations were made on the dimensions and dry weights of individual kernels and other above-ground plant organs throughout the seasons. Three basic traits characterizing an individual kernel were compared on simulated and measured individual kernels: (1) final kernel size; (2) kernel growth rate; and (3) duration of kernel filling. Simulations of individual kernel growth closely corresponded to experimental data. The model was able to reproduce the observed dry weight of plant organs well. Then, the source-sink dynamics and the remobilization of carbohydrates for kernel growth were quantified to show that remobilization processes accompanied source-sink dynamics during the kernel-filling process. We conclude that the model may be used to explore options for optimizing plant kernel yield by matching maize management to the environment, taking into account responses at the level of individual kernels. © The Author(s) 2018. Published by Oxford University Press on behalf of the Annals of Botany Company. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  20. Scattering integral equations and four nucleon problem

    International Nuclear Information System (INIS)

    Narodetskii, I.M.

    1980-01-01

    Existing results from the application of the integral equation technique to four-nucleon bound states and scattering are reviewed. The first numerical calculations of the four-body integral equations were done ten years ago. Yet, it is still widely believed that these equations are too complicated to solve numerically. The purpose of this review is to provide a clear and elementary introduction to the integral equation method and to demonstrate its usefulness in physical applications. The presentation is based on the quasiparticle approach. This permits a simple interpretation of the equations in terms of quasiparticle scattering. The mathematical basis for the quasiparticle approach is the Hilbert-Schmidt method of Fredholm integral equation theory. The first part of this review contains a detailed discussion of the Hilbert-Schmidt expansion as applied to the two-particle amplitudes and to the kernel of the four-body equations. The second part contains a discussion of the four-body quasiparticle equations and of the results obtained for bound states and scattering.

  1. Stochastic subset selection for learning with kernel machines.

    Science.gov (United States)

    Rhinelander, Jason; Liu, Xiaoping P

    2012-06-01

    Kernel machines have gained much popularity in applications of machine learning. Support vector machines (SVMs) are a subset of kernel machines and generalize well for classification, regression, and anomaly detection tasks. The training procedure for traditional SVMs involves solving a quadratic programming (QP) problem. The QP problem scales super linearly in computational effort with the number of training samples and is often used for the offline batch processing of data. Kernel machines operate by retaining a subset of observed data during training. The data vectors contained within this subset are referred to as support vectors (SVs). The work presented in this paper introduces a subset selection method for the use of kernel machines in online, changing environments. Our algorithm works by using a stochastic indexing technique when selecting a subset of SVs when computing the kernel expansion. The work described here is novel because it separates the selection of kernel basis functions from the training algorithm used. The subset selection algorithm presented here can be used in conjunction with any online training technique. It is important for online kernel machines to be computationally efficient due to the real-time requirements of online environments. Our algorithm is an important contribution because it scales linearly with the number of training samples and is compatible with current training techniques. Our algorithm outperforms standard techniques in terms of computational efficiency and provides increased recognition accuracy in our experiments. We provide results from experiments using both simulated and real-world data sets to verify our algorithm.
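
    The separation between the stored kernel expansion and the indices used to evaluate it can be sketched in a few lines. Here the subset is drawn uniformly and the partial sum rescaled so that it remains an unbiased estimate of the full expansion; this is a hypothetical simplification, as the paper's stochastic indexing scheme is more elaborate.

```python
import numpy as np

def rbf(A, b, gamma=0.5):
    """Gaussian RBF kernel between rows of A and a single point b."""
    return np.exp(-gamma * np.sum((A - b) ** 2, axis=-1))

def kernel_expansion(x, SV, alpha, idx=None):
    """Evaluate f(x) = sum_i alpha_i k(sv_i, x), optionally over a subset idx.

    The subset evaluation is rescaled by n/|idx| so it is an unbiased
    stochastic estimate of the full expansion."""
    if idx is None:
        return np.sum(alpha * rbf(SV, x))
    scale = len(alpha) / len(idx)
    return scale * np.sum(alpha[idx] * rbf(SV[idx], x))

rng = np.random.default_rng(3)
SV = rng.normal(size=(200, 2))       # stored support vectors
alpha = rng.normal(size=200)         # their coefficients
x = np.array([0.3, -0.1])

full = kernel_expansion(x, SV, alpha)
idx = rng.choice(200, size=50, replace=False)   # stochastic index subset
approx = kernel_expansion(x, SV, alpha, idx)
```

    Evaluating over 50 of 200 support vectors cuts the per-query cost by a factor of four at the price of estimator variance, which is the trade-off online settings exploit.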

  2. RTOS kernel in portable electrocardiograph

    Science.gov (United States)

    Centeno, C. A.; Voos, J. A.; Riva, G. G.; Zerbini, C.; Gonzalez, E. A.

    2011-12-01

    This paper presents the use of a Real Time Operating System (RTOS) on a portable electrocardiograph based on a microcontroller platform. All medical device digital functions are performed by the microcontroller. The electrocardiograph CPU is based on the 18F4550 microcontroller, in which an uCOS-II RTOS can be embedded. The decision associated with the kernel use is based on its benefits, the license for educational use and its intrinsic time control and peripherals management. The feasibility of its use on the electrocardiograph is evaluated based on the minimum memory requirements due to the kernel structure. The kernel's own tools were used for time estimation and evaluation of resources used by each process. After this feasibility analysis, the migration from cyclic code to a structure based on separate processes or tasks able to synchronize events is used; resulting in an electrocardiograph running on one Central Processing Unit (CPU) based on RTOS.

  3. RTOS kernel in portable electrocardiograph

    International Nuclear Information System (INIS)

    Centeno, C A; Voos, J A; Riva, G G; Zerbini, C; Gonzalez, E A

    2011-01-01

    This paper presents the use of a Real Time Operating System (RTOS) on a portable electrocardiograph based on a microcontroller platform. All medical device digital functions are performed by the microcontroller. The electrocardiograph CPU is based on the 18F4550 microcontroller, in which an uCOS-II RTOS can be embedded. The decision associated with the kernel use is based on its benefits, the license for educational use and its intrinsic time control and peripherals management. The feasibility of its use on the electrocardiograph is evaluated based on the minimum memory requirements due to the kernel structure. The kernel's own tools were used for time estimation and evaluation of resources used by each process. After this feasibility analysis, the migration from cyclic code to a structure based on separate processes or tasks able to synchronize events is used; resulting in an electrocardiograph running on one Central Processing Unit (CPU) based on RTOS.

  4. RKRD: Runtime Kernel Rootkit Detection

    Science.gov (United States)

    Grover, Satyajit; Khosravi, Hormuzd; Kolar, Divya; Moffat, Samuel; Kounavis, Michael E.

    In this paper we address the problem of protecting computer systems against stealth malware. The problem is important because the number of known types of stealth malware increases exponentially. Existing approaches have some advantages for ensuring system integrity but sophisticated techniques utilized by stealthy malware can thwart them. We propose Runtime Kernel Rootkit Detection (RKRD), a hardware-based, event-driven, secure and inclusionary approach to kernel integrity that addresses some of the limitations of the state of the art. Our solution is based on the principles of using virtualization hardware for isolation, verifying signatures coming from trusted code as opposed to malware for scalability and performing system checks driven by events. Our RKRD implementation is guided by our goals of strong isolation, no modifications to target guest OS kernels, easy deployment, minimal infrastructure impact, and minimal performance overhead. We developed a system prototype and conducted a number of experiments which show that the performance impact of our solution is negligible.

  5. Denoising by semi-supervised kernel PCA preimaging

    DEFF Research Database (Denmark)

    Hansen, Toke Jansen; Abrahamsen, Trine Julie; Hansen, Lars Kai

    2014-01-01

    Kernel Principal Component Analysis (PCA) has proven a powerful tool for nonlinear feature extraction, and is often applied as a pre-processing step for classification algorithms. In denoising applications Kernel PCA provides the basis for dimensionality reduction, prior to the so-called pre-image step.
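
    A compact way to see the whole pipeline, projection onto leading kernel principal components followed by a pre-image search, is the classic fixed-point iteration for the Gaussian kernel. The sketch below (numpy only, uncentered kernel PCA for brevity; library implementations differ and the paper's semi-supervised variant is not reproduced) denoises a point lying off a noisy circle.

```python
import numpy as np

def denoise_kpca(X, z, n_comp=4, gamma=2.0, iters=100):
    """Denoise point z by projecting onto the leading kernel-PCA subspace of X
    (Gaussian kernel, uncentered for simplicity) and finding an approximate
    pre-image with the classic fixed-point iteration."""
    k = lambda A, b: np.exp(-gamma * np.sum((A - b) ** 2, axis=1))
    sq = (X ** 2).sum(1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    lam, V = np.linalg.eigh(K)                   # ascending eigenvalues
    lam, V = lam[::-1][:n_comp], V[:, ::-1][:, :n_comp]
    A = V / np.sqrt(lam)                         # normalized expansion coefficients
    beta = A.T @ k(X, z)                         # projection of z onto components
    gamma_i = A @ beta                           # weights of the projected point
    y = z.copy()
    for _ in range(iters):                       # fixed-point pre-image iteration
        w = gamma_i * k(X, y)
        y = (w @ X) / np.sum(w)
    return y

# Toy data: points on a circle, plus one noisy observation to clean up.
rng = np.random.default_rng(4)
t = rng.uniform(0, 2 * np.pi, 150)
X = np.c_[np.cos(t), np.sin(t)] + 0.05 * rng.normal(size=(150, 2))
z_clean = np.array([1.0, 0.0])
z_noisy = z_clean + np.array([0.3, 0.2])
z_denoised = denoise_kpca(X, z_noisy)
```

    The denoised point is pulled back toward the circle that the leading components describe, which is the behaviour the pre-image step is meant to deliver.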

  6. Sentiment classification with interpolated information diffusion kernels

    NARCIS (Netherlands)

    Raaijmakers, S.

    2007-01-01

    Information diffusion kernels - similarity metrics in non-Euclidean information spaces - have been found to produce state-of-the-art results for document classification. In this paper, we present a novel approach to global sentiment classification using these kernels. We carry out a large array of ...

  7. Change in physical properties of high density isotropic graphites irradiated in the "JOYO" fast reactor

    Science.gov (United States)

    Maruyama, T.; Kaito, T.; Onose, S.; Shibahara, I.

    1995-08-01

    Thirteen kinds of isotropic graphites with different density and maximum grain size were irradiated in the experimental fast reactor "JOYO" to fluences from 2.11 to 2.86 × 10^26 n/m^2 (E > 0.1 MeV) at temperatures from 549 to 597°C. Postirradiation examination was carried out on the dimensional changes, elastic modulus, and thermal conductivity of these materials. Dimensional change results indicate that the graphites irradiated at lower fluences showed shrinkage upon neutron irradiation followed by increase with increasing neutron fluences, irrespective of differences in material parameters. The Young's modulus and Poisson's ratio increased by two to three times the unirradiated values. The large scatter found in Poisson's ratio of unirradiated materials became very small and a linear dependence on density was obtained after irradiation. The thermal conductivity decreased to one-fifth to one-tenth of unirradiated values, with a negligible change in specific heat. The results of postirradiation examination indicated that the changes in physical properties of high density, isotropic graphites were mainly dominated by the irradiation condition rather than their material parameters. Namely, the effects of irradiation induced defects on physical properties of heavily neutron-irradiated graphites are much larger than that of defects associated with as-fabricated specimens.

  8. Change in physical properties of high density isotropic graphites irradiated in the ''JOYO'' fast reactor

    International Nuclear Information System (INIS)

    Maruyama, T.; Kaito, T.; Onose, S.; Shibahara, I.

    1995-01-01

    Thirteen kinds of isotropic graphites with different density and maximum grain size were irradiated in the experimental fast reactor "JOYO" to fluences from 2.11 to 2.86 × 10^26 n/m^2 (E > 0.1 MeV) at temperatures from 549 to 597°C. Postirradiation examination was carried out on the dimensional changes, elastic modulus, and thermal conductivity of these materials. Dimensional change results indicate that the graphites irradiated at lower fluences showed shrinkage upon neutron irradiation followed by increase with increasing neutron fluences, irrespective of differences in material parameters. The Young's modulus and Poisson's ratio increased by two to three times the unirradiated values. The large scatter found in Poisson's ratio of unirradiated materials became very small and a linear dependence on density was obtained after irradiation. The thermal conductivity decreased to one-fifth to one-tenth of unirradiated values, with a negligible change in specific heat. The results of postirradiation examination indicated that the changes in physical properties of high density, isotropic graphites were mainly dominated by the irradiation condition rather than their material parameters. Namely, the effects of irradiation induced defects on physical properties of heavily neutron-irradiated graphites are much larger than that of defects associated with as-fabricated specimens. (orig.)

  9. SCATLAW: a code of scattering law and cross sections calculation for liquids and solids

    International Nuclear Information System (INIS)

    Padureanu, I.; Rapeanu, S.; Rotarascu, G.; Craciun, C.

    1978-11-01

    A code for the calculation of the scattering law S(Q,ω), differential and double differential cross sections and scattering kernels in the energy range E (0-683 meV) and wave-vector transfer range Q (0-40 Å^-1) is presented. The code can be used both for solids and for liquids which are coherent or incoherent scatterers. For liquids, the calculations are based on the most recent theoretical models involving the correlation functions and the generalized field approach. The phonon expansion model and the free gas model are also analysed in terms of frequency spectra obtained from inelastic neutron scattering using the time-of-flight technique. Several results on liquid sodium at T = 233°C and on liquid bismuth at T = 286°C and T = 402°C are presented. (author)
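
    As a flavour of what such a code evaluates, the free gas model mentioned in the abstract has a closed-form scattering law in the standard dimensionless (α, β) variables, and any implementation can be sanity-checked against the detailed-balance relation S(α, −β) = e^β S(α, β). The sketch below uses the common textbook form; SCATLAW's internal conventions may differ.

```python
import numpy as np

def s_free_gas(alpha, beta):
    """Free-gas (ideal monatomic gas) scattering law S(alpha, beta) in the
    standard dimensionless variables (momentum transfer alpha > 0,
    energy transfer beta)."""
    return np.exp(-(alpha + beta) ** 2 / (4.0 * alpha)) / np.sqrt(4.0 * np.pi * alpha)

# Detailed balance, S(alpha, -beta) = exp(beta) * S(alpha, beta), is a quick
# sanity check for any scattering-law implementation.
a, b = 1.5, 0.8
print(np.isclose(s_free_gas(a, -b), np.exp(b) * s_free_gas(a, b)))  # True
```

    The same detailed-balance identity is a useful regression test when tabulating S(α, β) for more elaborate liquid models.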

  10. Common spatial pattern combined with kernel linear discriminate and generalized radial basis function for motor imagery-based brain computer interface applications

    Science.gov (United States)

    Hekmatmanesh, Amin; Jamaloo, Fatemeh; Wu, Huapeng; Handroos, Heikki; Kilpeläinen, Asko

    2018-04-01

    Brain Computer Interface (BCI) development is a challenge for robotic, prosthetic and human-controlled systems. This work focuses on the implementation of a common spatial pattern (CSP) based algorithm to detect event-related desynchronization patterns. Building on well-known prior work in this area, features are extracted by the filter bank common spatial pattern (FBCSP) method and then weighted by a sensitive learning vector quantization (SLVQ) algorithm. In the current work, application of the radial basis function (RBF) as a mapping kernel of linear discriminant analysis (KLDA) on the weighted features allows the transfer of data into a higher dimension for more discriminated data scattering by the RBF kernel. Afterwards, a support vector machine (SVM) with generalized radial basis function (GRBF) kernel is employed to improve the efficiency and robustness of the classification. On average, 89.60% accuracy and 74.19% robustness are achieved. The BCI Competition III data set IVa is used to evaluate the algorithm for detecting right-hand and foot imagery movement patterns. Results show that the combination of KLDA with the SVM-GRBF classifier yields improvements of 8.9% in accuracy and 14.19% in robustness. For all subjects, it is concluded that mapping the CSP features into a higher dimension by RBF and utilizing GRBF as the kernel of SVM improve the accuracy and reliability of the proposed method.
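
    The CSP step at the heart of such pipelines reduces to a whitening step plus an eigendecomposition of the two class-averaged covariance matrices. A minimal numpy sketch on synthetic trials follows (illustrative only; FBCSP adds a filter bank and feature weighting on top of this).

```python
import numpy as np

def csp_filters(X1, X2):
    """Common spatial pattern filters from two sets of trials.

    X1, X2: arrays of shape (trials, channels, samples). Returns a filter
    matrix W (rows = spatial filters) sorted so the first row maximizes the
    class-1/class-2 variance ratio and the last row minimizes it."""
    def avg_cov(X):
        covs = [x @ x.T / np.trace(x @ x.T) for x in X]
        return np.mean(covs, axis=0)
    C1, C2 = avg_cov(X1), avg_cov(X2)
    d, U = np.linalg.eigh(C1 + C2)
    P = (U / np.sqrt(d)).T                  # whitening transform for C1 + C2
    lam, V = np.linalg.eigh(P @ C1 @ P.T)   # ascending eigenvalues
    W = (V.T @ P)[::-1]                     # descending: class-1 variance first
    return W

# Synthetic 3-channel trials: class 1 strong on channel 0, class 2 on channel 2.
rng = np.random.default_rng(5)
base = rng.normal(size=(20, 3, 100)) * 0.1
X1 = base.copy(); X1[:, 0, :] += rng.normal(size=(20, 100))
X2 = base.copy(); X2[:, 2, :] += rng.normal(size=(20, 100))
W = csp_filters(X1, X2)
```

    Log-variances of the signals filtered by the first and last rows of W are the classic CSP features that a downstream classifier, such as the SVM-GRBF here, consumes.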

  11. Linear and kernel methods for multivariate change detection

    DEFF Research Database (Denmark)

    Canty, Morton J.; Nielsen, Allan Aasbjerg

    2012-01-01

    Maximum autocorrelation factor (MAF) and minimum noise fraction (MNF) analyses of IR-MAD images, both linear and kernel-based (nonlinear), may further enhance change signals relative to no-change background. IDL (Interactive Data Language) implementations of IR-MAD, automatic radiometric normalization, and kernel PCA/MAF/MNF transformations are presented that function as transparent and fully integrated extensions of the ENVI remote sensing image analysis environment. The train/test approach to kernel PCA is evaluated against a Hebbian learning procedure. Matlab code is also available that allows fast data exploration and experimentation with smaller datasets. New, multiresolution versions of IR-MAD that accelerate convergence and that further reduce no-change background noise are introduced. Computationally expensive matrix diagonalization and kernel image projections are programmed ...

  12. The generalized PN synthetic acceleration method for linear transport problems with highly anisotropic scattering

    International Nuclear Information System (INIS)

    Khattab, K.M.

    1997-01-01

    The diffusion synthetic acceleration (DSA) method has been known to be an effective tool for accelerating the iterative solution of transport equations with isotropic or mildly anisotropic scattering. However, the DSA method is not effective for transport equations that have strongly anisotropic scattering. A generalization of the modified DSA (MDSA) method is proposed that converges (clock time) faster than the MDSA method. This method is developed, the results of a Fourier analysis that theoretically predicts its efficiency are described, and numerical results that verify the theoretical prediction are presented

  13. Panel data specifications in nonparametric kernel regression

    DEFF Research Database (Denmark)

    Czekaj, Tomasz Gerard; Henningsen, Arne

    ... parametric panel data estimators to analyse the production technology of Polish crop farms. The results of our nonparametric kernel regressions generally differ from the estimates of the parametric models, but they only slightly depend on the choice of the kernel functions. Based on economic reasoning, we ...

  14. Scuba: scalable kernel-based gene prioritization.

    Science.gov (United States)

    Zampieri, Guido; Tran, Dinh Van; Donini, Michele; Navarin, Nicolò; Aiolli, Fabio; Sperduti, Alessandro; Valle, Giorgio

    2018-01-25

    The uncovering of genes linked to human diseases is a pressing challenge in molecular biology and precision medicine. This task is often hindered by the large number of candidate genes and by the heterogeneity of the available information. Computational methods for the prioritization of candidate genes can help to cope with these problems. In particular, kernel-based methods are a powerful resource for the integration of heterogeneous biological knowledge; however, their practical implementation is often precluded by their limited scalability. We propose Scuba, a scalable kernel-based method for gene prioritization. It implements a novel multiple kernel learning approach, based on a semi-supervised perspective and on the optimization of the margin distribution. Scuba is optimized to cope with strongly unbalanced settings where known disease genes are few and large scale predictions are required. Importantly, it is able to efficiently deal both with a large amount of candidate genes and with an arbitrary number of data sources. As a direct consequence of scalability, Scuba also integrates a new efficient strategy to select optimal kernel parameters for each data source. We performed cross-validation experiments and simulated a realistic usage setting, showing that Scuba outperforms a wide range of state-of-the-art methods. Scuba achieves state-of-the-art performance and has enhanced scalability compared to existing kernel-based approaches for genomic data. This method can be useful to prioritize candidate genes, particularly when their number is large or when the input data is highly heterogeneous. The code is freely available at https://github.com/gzampieri/Scuba.

  15. MULTITASKER, Multitasking Kernel for C and FORTRAN Under UNIX

    International Nuclear Information System (INIS)

    Brooks, E.D. III

    1988-01-01

    1 - Description of program or function: MULTITASKER implements a multitasking kernel for the C and FORTRAN programming languages that runs under UNIX. The kernel provides a multitasking environment which serves two purposes. The first is to provide an efficient portable environment for the development, debugging, and execution of production multiprocessor programs. The second is to provide a means of evaluating the performance of a multitasking program on model multiprocessor hardware. The performance evaluation features require no changes in the application program source and are implemented as a set of compile- and run-time options in the kernel. 2 - Method of solution: The FORTRAN interface to the kernel is identical in function to the CRI multitasking package provided for the Cray XMP. This provides a migration path to high speed (but small N) multiprocessors once the application has been coded and debugged. With use of the UNIX m4 macro preprocessor, source compatibility can be achieved between the UNIX code development system and the target Cray multiprocessor. The kernel also provides a means of evaluating a program's performance on model multiprocessors. Execution traces may be obtained which allow the user to determine kernel overhead, memory conflicts between various tasks, and the average concurrency being exploited. The kernel may also be made to switch tasks every cpu instruction with a random execution ordering. This allows the user to look for unprotected critical regions in the program. These features, implemented as a set of compile- and run-time options, cause extra execution overhead which is not present in the standard production version of the kernel
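
    The core idea of such a kernel, tasks that run concurrently under a scheduler that can also record an execution trace, can be illustrated with a toy cooperative scheduler. The Python sketch below is a stand-in for the concept only, not the MULTITASKER C/FORTRAN API: tasks are generators, and each `yield` is a voluntary task switch that the kernel logs.

```python
import collections

class MiniKernel:
    """A toy cooperative multitasking kernel: tasks are generators, and each
    `yield` is a voluntary task switch recorded in a round-robin trace."""
    def __init__(self):
        self.ready = collections.deque()
        self.trace = []                         # execution trace, one entry per switch

    def spawn(self, name, gen):
        self.ready.append((name, gen))

    def run(self):
        while self.ready:
            name, gen = self.ready.popleft()
            try:
                next(gen)                       # run task until its next yield
                self.trace.append(name)
                self.ready.append((name, gen))  # still runnable: requeue
            except StopIteration:
                self.trace.append(name + ":done")

def worker(n):
    for _ in range(n):
        yield                                   # cooperative task switch

kernel = MiniKernel()
kernel.spawn("A", worker(2))
kernel.spawn("B", worker(3))
kernel.run()
print(kernel.trace)   # ['A', 'B', 'A', 'B', 'A:done', 'B', 'B:done']
```

    The recorded trace is the toy analogue of the execution traces MULTITASKER uses to measure kernel overhead and exploited concurrency.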

  16. Multiple kernel boosting framework based on information measure for classification

    International Nuclear Information System (INIS)

    Qi, Chengming; Wang, Yuping; Tian, Wenjie; Wang, Qun

    2016-01-01

    The performance of kernel-based methods, such as the support vector machine (SVM), is greatly affected by the choice of kernel function. Multiple kernel learning (MKL) is a promising family of machine learning algorithms and has attracted much attention in recent years. MKL combines multiple sub-kernels to seek better results than single kernel learning. In order to improve the efficiency of SVM and MKL, in this paper the Kullback-Leibler kernel function is derived to develop SVM. The proposed method employs an improved ensemble learning framework, named KLMKB, which applies Adaboost to learning multiple kernel-based classifiers. In the experiment on hyperspectral remote sensing image classification, we employ features selected through the Optimum Index Factor (OIF) to classify the satellite image. We extensively examine the performance of our approach in comparison to some relevant and state-of-the-art algorithms on a number of benchmark classification data sets and a hyperspectral remote sensing image data set. Experimental results show that our method has stable behavior and noticeable accuracy for different data sets.
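
    A Kullback-Leibler kernel is commonly constructed by exponentiating a symmetrized KL divergence between the distributions that represent two samples. The sketch below shows one common form of that construction (the paper's exact derivation may differ), applied to small normalized histograms.

```python
import numpy as np

def sym_kl(p, q, eps=1e-12):
    """Symmetrized Kullback-Leibler divergence between two histograms."""
    p = np.asarray(p, float) + eps; p /= p.sum()
    q = np.asarray(q, float) + eps; q /= q.sum()
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

def kl_kernel(p, q, lam=1.0):
    """Exponentiated symmetric-KL kernel: similarity 1 for identical
    distributions, decaying toward 0 as they diverge."""
    return float(np.exp(-lam * sym_kl(p, q)))

a = [0.4, 0.4, 0.2]
b = [0.3, 0.5, 0.2]
print(kl_kernel(a, a), kl_kernel(a, b))
```

    Such a kernel can be plugged into an SVM, or used as one of the sub-kernels that a boosting-based MKL framework like the one described here combines.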

  17. Kernel Methods for Mining Instance Data in Ontologies

    Science.gov (United States)

    Bloehdorn, Stephan; Sure, York

    The amount of ontologies and meta data available on the Web is constantly growing. The successful application of machine learning techniques for learning ontologies from textual data, i.e. mining for the Semantic Web, contributes to this trend. However, no principled approaches exist so far for mining from the Semantic Web. We investigate how machine learning algorithms can be made amenable to directly taking advantage of the rich knowledge expressed in ontologies and associated instance data. Kernel methods have been successfully employed in various learning tasks and provide a clean framework for interfacing between non-vectorial data and machine learning algorithms. In this spirit, we express the problem of mining instances in ontologies as the problem of defining valid corresponding kernels. We present a principled framework for designing such kernels by decomposing the kernel computation into specialized kernels for selected characteristics of an ontology, which can be flexibly assembled and tuned. Initial experiments on real-world Semantic Web data yield promising results and show the usefulness of our approach.

  18. Visualization and computer graphics on isotropically emissive volumetric displays.

    Science.gov (United States)

    Mora, Benjamin; Maciejewski, Ross; Chen, Min; Ebert, David S

    2009-01-01

    The availability of commodity volumetric displays provides ordinary users with a new means of visualizing 3D data. Many of these displays are in the class of isotropically emissive light devices, which are designed to directly illuminate voxels in a 3D frame buffer, producing X-ray-like visualizations. While this technology can offer intuitive insight into a 3D object, the visualizations are perceptually different from what a computer graphics or visualization system would render on a 2D screen. This paper formalizes rendering on isotropically emissive displays and introduces a novel technique that emulates traditional rendering effects on isotropically emissive volumetric displays, delivering results that are much closer to what is traditionally rendered on regular 2D screens. Such a technique can significantly broaden the capability and usage of isotropically emissive volumetric displays. Our method takes a 3D dataset or object as the input, creates an intermediate light field, and outputs a special 3D volume dataset called a lumi-volume. This lumi-volume encodes approximated rendering effects in a form suitable for display with accumulative integrals along unobtrusive rays. When a lumi-volume is fed directly into an isotropically emissive volumetric display, it creates a 3D visualization with surface shading effects that are familiar to the users. The key to this technique is an algorithm for creating a 3D lumi-volume from a 4D light field. In this paper, we discuss a number of technical issues, including transparency effects due to the dimension reduction and sampling rates for light fields and lumi-volumes. We show the effectiveness and usability of this technique with a selection of experimental results captured from an isotropically emissive volumetric display, and we demonstrate its potential capability and scalability with computer-simulated high-resolution results.

  19. The integral first collision kernel method for gamma-ray skyshine analysis

    Energy Technology Data Exchange (ETDEWEB)

    Sheu, R.-D.; Chui, C.-S.; Jiang, S.-H. E-mail: shjiang@mx.nthu.edu.tw

    2003-12-01

    A simplified method, based on the integral of the first collision kernel, is presented for performing gamma-ray skyshine calculations for collimated sources. The first collision kernels were calculated in air for a reference air density by use of the EGS4 Monte Carlo code. These kernels can be applied to other air densities by applying density corrections. The integral first collision kernel (IFCK) method has been used to calculate two of the ANSI/ANS skyshine benchmark problems and the results were compared with those of a number of other commonly used codes. Our results were generally in good agreement with the others but required only a small fraction of the computation time needed by the Monte Carlo calculations. A scheme for applying the IFCK method to a variety of source collimation geometries is also presented in this study.

  20. Nested structures approach in designing an isotropic negative-index material for infrared

    DEFF Research Database (Denmark)

    Andryieuski, Andrei; Malureanu, Radu; Lavrinenko, Andrei

    2009-01-01

    We propose a new generic approach for designing isotropic metamaterials with nested cubic structures. As an example, a three-dimensional isotropic unit cell design, "Split Cube in Cage" (SCiC), is shown to exhibit an effective negative refractive index at infrared wavelengths.

  1. BrachyTPS -Interactive point kernel code package for brachytherapy treatment planning of gynaecological cancers

    International Nuclear Information System (INIS)

    Thilagam, L.; Subbaiah, K.V.

    2008-01-01

    Brachytherapy treatment planning systems (TPS) are always recommended to account for the effect of the tissue, applicator and shielding material heterogeneities that exist in intracavitary brachytherapy (ICBT) applicators. Most commercially available brachytherapy TPS software packages estimate the absorbed dose at a point by considering only the contributions of the individual sources and the source distribution, neglecting the dose perturbations arising from the applicator design and construction, so the doses they estimate are not very accurate under realistic clinical conditions. In this regard, an interactive point kernel code (BrachyTPS) has been developed to perform independent dose calculations that take these heterogeneities into account, using the two-region build-up factors proposed by Kalos. As primary input data, the code takes the patient's planning data, including the source specifications, dwell positions and dwell times, and it computes the doses at reference points by dose point kernel formalisms, with multi-layer shield build-up factors accounting for the contributions from scattered radiation. In addition to performing dose distribution calculations, the code package is capable of overlaying isodose distribution curves on the patient anatomy images. The primary aim of this study is to validate the developed point kernel code, integrated with treatment planning systems, against other tools available in the market. In the present work, three brachytherapy applicators commonly used in the treatment of uterine cervical carcinoma, namely a Board of Radiation Isotope and Technology (BRIT) low dose rate (LDR) applicator, a Fletcher Green type LDR applicator and a Fletcher Williamson high dose rate (HDR) applicator, were studied to test the accuracy of the software.

  2. The revised geometric measure of entanglement for isotropic state

    International Nuclear Information System (INIS)

    Cao Ya

    2011-01-01

    Based on the revised geometric measure of entanglement (RGME), we obtain an analytical expression for the isotropic state and generalize it to the n-particle and d-dimensional mixed-state cases. We also obtain the relation Ẽ_sin²(ρ) ≤ E_re(ρ) for the isotropic state. The results indicate that the RGME is an appropriate measure of entanglement. (authors)

  3. A kernel adaptive algorithm for quaternion-valued inputs.

    Science.gov (United States)

    Paul, Thomas K; Ogunfunmi, Tokunbo

    2015-10-01

    The use of quaternion data can provide benefit in applications like robotics and image recognition, and particularly for performing transforms in 3-D space. Here, we describe a kernel adaptive algorithm for quaternions. A least mean square (LMS)-based method was used, resulting in the derivation of the quaternion kernel LMS (Quat-KLMS) algorithm. Deriving this algorithm required describing the idea of a quaternion reproducing kernel Hilbert space (RKHS), as well as kernel functions suitable for quaternions. A modified HR calculus for Hilbert spaces was used to find the gradient of cost functions defined on a quaternion RKHS. In addition, the use of widely linear (or augmented) filtering is proposed to improve performance. The benefit of the Quat-KLMS and widely linear forms in learning nonlinear transformations of quaternion data is illustrated with simulations.
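The quaternion derivation above requires the modified HR calculus, but the underlying kernel LMS recursion is easy to sketch in the real-valued case: each new prediction error becomes the coefficient of a new kernel center. A minimal sketch, assuming a Gaussian kernel and illustrative step size `eta` and width `sigma` (not the paper's settings):

```python
import numpy as np

def gauss_kernel(a, b, sigma=0.5):
    return np.exp(-np.sum((a - b) ** 2) / (2 * sigma ** 2))

def klms(inputs, targets, eta=0.5, sigma=0.5):
    """Kernel LMS: grow a dictionary of centers; each error scales a new coefficient."""
    centers, coeffs, errors = [], [], []
    for u, d in zip(inputs, targets):
        # Predict with the current kernel expansion, then do the LMS-style update.
        y = sum(c * gauss_kernel(u, x, sigma) for c, x in zip(coeffs, centers))
        e = d - y
        errors.append(e)
        centers.append(u)
        coeffs.append(eta * e)
    return centers, coeffs, errors
```

Run on a smooth nonlinear target, the prediction errors shrink as the dictionary grows, which is the behavior the quaternion variant generalizes.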

  4. Improving the Bandwidth Selection in Kernel Equating

    Science.gov (United States)

    Andersson, Björn; von Davier, Alina A.

    2014-01-01

    We investigate the current bandwidth selection methods in kernel equating and propose a method based on Silverman's rule of thumb for selecting the bandwidth parameters. In kernel equating, the bandwidth parameters have previously been obtained by minimizing a penalty function. This minimization process has been criticized by practitioners…
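Silverman's rule of thumb, which the authors adapt for bandwidth selection, has a standard closed form. A minimal sketch for a univariate sample (the equating-specific adaptation in the abstract is not reproduced here):

```python
import numpy as np

def silverman_bandwidth(x):
    """Silverman's rule of thumb: h = 0.9 * min(sd, IQR/1.34) * n^(-1/5)."""
    x = np.asarray(x, float)
    n = x.size
    sd = x.std(ddof=1)
    iqr = np.subtract(*np.percentile(x, [75, 25]))
    return 0.9 * min(sd, iqr / 1.34) * n ** (-0.2)
```

Unlike penalty minimization, this gives a direct plug-in bandwidth from the sample's spread and size.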

  5. Collinear limits beyond the leading order from the scattering equations

    Energy Technology Data Exchange (ETDEWEB)

    Nandan, Dhritiman; Plefka, Jan; Wormsbecher, Wadim [Institut für Physik and IRIS Adlershof, Humboldt-Universität zu Berlin,Zum Großen Windkanal 6, D-12489 Berlin (Germany)

    2017-02-08

    The structure of tree-level scattering amplitudes for collinear massless bosons is studied beyond their leading splitting function behavior. These near-collinear limits at sub-leading order are best studied using the Cachazo-He-Yuan (CHY) formulation of the S-matrix based on the scattering equations. We compute the collinear limits for gluons, gravitons and scalars. It is shown that the CHY integrand for an n-particle gluon scattering amplitude in the collinear limit at sub-leading order is expressed as a convolution of an (n−1)-particle gluon integrand and a collinear kernel integrand, which is universal. Our representation is shown to obey recently proposed amplitude relations in which the collinear gluons of same helicity are replaced by a single graviton. Finally, we extend our analysis to effective field theories and study the collinear limit of the non-linear sigma model, Einstein-Maxwell-Scalar and Yang-Mills-Scalar theory.
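For reference, the leading-order collinear factorization that the sub-leading analysis above extends can be written as follows; this is the textbook gluon splitting behavior (shown for the same-helicity splitting amplitude), not the paper's sub-leading result:

```latex
% Leading-order collinear factorization; z is the momentum fraction and
% P = p_1 + p_2 the collinear momentum. The sub-leading corrections studied
% in the abstract go beyond this behavior.
A_n\big(1^{h_1},2^{h_2},3,\dots,n\big)
  \;\xrightarrow{\,1 \parallel 2\,}\;
  \sum_{h=\pm} \mathrm{Split}_{-h}\big(z;1^{h_1},2^{h_2}\big)\,
  A_{n-1}\big(P^{h},3,\dots,n\big),
\qquad
\mathrm{Split}_{-}\big(z;1^{+},2^{+}\big)
  = \frac{1}{\sqrt{z(1-z)}\,\langle 1\,2\rangle}.
```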

  6. A new method by steering kernel-based Richardson–Lucy algorithm for neutron imaging restoration

    International Nuclear Information System (INIS)

    Qiao, Shuang; Wang, Qiao; Sun, Jia-ning; Huang, Ji-peng

    2014-01-01

    Motivated by industrial applications, neutron radiography has become a powerful tool for non-destructive investigation. However, as a result of the combined effects of neutron flux, beam collimation, the limited spatial resolution of the detector, scattering, etc., images made with neutrons are severely degraded by blur and noise. To deal with this, we present a novel restoration method that integrates steering kernel regression into the Richardson–Lucy approach and is capable of suppressing noise while efficiently restoring details in the blurred imaging result. Experimental results show that, compared with other methods, the proposed method improves restoration quality both visually and quantitatively.
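The baseline Richardson–Lucy iteration into which the steering-kernel regression is integrated can be sketched in a few lines. A minimal 1-D version without the steering-kernel step; the initialization and iteration count are illustrative:

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=30, eps=1e-12):
    """Plain 1-D Richardson-Lucy deconvolution (no steering-kernel regularization)."""
    estimate = np.full_like(observed, observed.mean())
    psf_flipped = psf[::-1]  # correlation = convolution with the flipped PSF
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / (blurred + eps)
        estimate *= np.convolve(ratio, psf_flipped, mode="same")
    return estimate
```

The multiplicative update preserves non-negativity, and on a blurred spike train the estimate moves closer to the true signal than the observation is.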

  7. Online learning control using adaptive critic designs with sparse kernel machines.

    Science.gov (United States)

    Xu, Xin; Hou, Zhongsheng; Lian, Chuanqiang; He, Haibo

    2013-05-01

    In the past decade, adaptive critic designs (ACDs), including heuristic dynamic programming (HDP), dual heuristic programming (DHP), and their action-dependent ones, have been widely studied to realize online learning control of dynamical systems. However, because neural networks with manually designed features are commonly used to deal with continuous state and action spaces, the generalization capability and learning efficiency of previous ACDs still need to be improved. In this paper, a novel framework of ACDs with sparse kernel machines is presented by integrating kernel methods into the critic of ACDs. To improve the generalization capability as well as the computational efficiency of kernel machines, a sparsification method based on the approximately linear dependence analysis is used. Using the sparse kernel machines, two kernel-based ACD algorithms, that is, kernel HDP (KHDP) and kernel DHP (KDHP), are proposed and their performance is analyzed both theoretically and empirically. Because of the representation learning and generalization capability of sparse kernel machines, KHDP and KDHP can obtain much better performance than previous HDP and DHP with manually designed neural networks. Simulation and experimental results of two nonlinear control problems, that is, a continuous-action inverted pendulum problem and a ball and plate control problem, demonstrate the effectiveness of the proposed kernel ACD methods.
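The approximate linear dependence (ALD) sparsification mentioned above admits a compact sketch: a sample joins the kernel dictionary only if its feature-space projection residual onto the current dictionary exceeds a threshold. A hedged version with a Gaussian kernel; the threshold `nu`, width `sigma`, and regularization constant are illustrative:

```python
import numpy as np

def gauss(a, b, sigma=1.0):
    return np.exp(-np.sum((a - b) ** 2) / (2 * sigma ** 2))

def ald_dictionary(samples, nu=0.1, sigma=1.0):
    """Keep a sample only if it is not approximately linearly dependent
    on the current dictionary in feature space (residual delta > nu)."""
    dictionary = []
    for x in samples:
        if not dictionary:
            dictionary.append(x)
            continue
        K = np.array([[gauss(a, b, sigma) for b in dictionary] for a in dictionary])
        k_vec = np.array([gauss(a, x, sigma) for a in dictionary])
        # Best feature-space approximation of x by the dictionary, then its residual.
        coeffs = np.linalg.solve(K + 1e-10 * np.eye(len(dictionary)), k_vec)
        delta = gauss(x, x, sigma) - k_vec @ coeffs
        if delta > nu:
            dictionary.append(x)
    return dictionary
```

Near-duplicate samples produce a tiny residual and are discarded, which is what keeps the critic's kernel expansion sparse.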

  8. Wheat kernel dimensions: how do they contribute to kernel weight at ...

    Indian Academy of Sciences (India)

    2011-12-02

    Dec 2, 2011 ... yield components, is greatly influenced by kernel dimensions. (KD), such as ..... six linkage gaps, and it covered 3010.70 cM of the whole genome with an ...... Ersoz E. et al. 2009 The Genetic architecture of maize flowering.

  9. A multi-label learning based kernel automatic recommendation method for support vector machine.

    Science.gov (United States)

    Zhang, Xueying; Song, Qinbao

    2015-01-01

    Choosing an appropriate kernel is critical when classifying a new problem with a Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function than to kernel selection. Furthermore, most current kernel selection methods focus on seeking the best kernel with the highest classification accuracy via cross-validation; they are time-consuming and ignore the differences among the number of support vectors and the CPU time of SVMs with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. Aiming to automatically select appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on the data characteristics. For each data set, a meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then the kernel recommendation model is constructed on the generated meta-knowledge data base with the multi-label classification method. Finally, appropriate kernel functions are recommended for a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with the existing kernel selection methods and the most widely used RBF kernel function, SVM with the kernel function recommended by our proposed method achieved the highest classification performance.

  10. Coherence factors in a high-tc cuprate probed by quasi-particle scattering off vortices.

    Science.gov (United States)

    Hanaguri, T; Kohsaka, Y; Ono, M; Maltseva, M; Coleman, P; Yamada, I; Azuma, M; Takano, M; Ohishi, K; Takagi, H

    2009-02-13

    When electrons pair in a superconductor, quasi-particles develop an acute sensitivity to different types of scattering potential that is described by the appearance of coherence factors in the scattering amplitudes. Although the effects of coherence factors are well established in isotropic superconductors, they are much harder to detect in their anisotropic counterparts, such as high-superconducting-transition-temperature cuprates. We demonstrate an approach that highlights the momentum-dependent coherence factors in Ca2-xNaxCuO2Cl2. We used Fourier-transform scanning tunneling spectroscopy to reveal a magnetic-field dependence in quasi-particle scattering interference patterns that is sensitive to the sign of the anisotropic gap. This result is associated with the d-wave coherence factors and quasi-particle scattering off vortices. Our technique thus provides insights into the nature of electron pairing as well as quasi-particle scattering processes in unconventional superconductors.

  11. Exposure buildup factors for a cobalt-60 point isotropic source for single and two layer slabs

    International Nuclear Information System (INIS)

    Chakarova, R.

    1992-01-01

    Exposure buildup factors for a point isotropic cobalt-60 source are calculated by the Monte Carlo method, with statistical errors ranging from 1.5 to 7%, for water and iron single slabs 1-5 mean free paths (mfp) thick and for 1 and 2 mfp iron layers followed by water layers 1-5 mfp thick. The computations take Compton scattering into account. The Monte Carlo data for single-slab geometries are approximated by the Geometric Progression formula. Kalos's formula, using the calculated single-slab buildup factors, may be applied to reproduce the data for two-layered slabs. The results and discussion presented here may help when choosing how the radiation field in gamma irradiation units is described. (author)
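The Geometric Progression fitting form referred to above can be sketched as follows. The full G-P parametrization lets K vary with penetration depth through a tanh term; this minimal version treats K as constant, and the coefficient values in the usage below are illustrative, not fitted data from the study:

```python
def gp_buildup(x_mfp, b, K):
    """Geometric-Progression buildup factor B(x) = 1 + (b-1)(K^x - 1)/(K - 1),
    with the K -> 1 limit B(x) = 1 + (b-1)x. x is the depth in mean free paths."""
    if abs(K - 1.0) < 1e-9:
        return 1.0 + (b - 1.0) * x_mfp
    return 1.0 + (b - 1.0) * (K ** x_mfp - 1.0) / (K - 1.0)
```

By construction B(0) = 1 and B(1) = b, so b is the fitted buildup factor at 1 mfp.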

  12. Using the Intel Math Kernel Library on Peregrine | High-Performance

    Science.gov (United States)

    Learn how to use the Intel Math Kernel Library (MKL) with Peregrine system software. Core math functions in MKL include BLAS, LAPACK, ScaLAPACK, sparse solvers, and fast Fourier transforms.

  13. Integral transport theory for charged particles in electric and magnetic fields

    International Nuclear Information System (INIS)

    Boffi, V.C.; Molinari, V.G.

    1979-01-01

    An integral transport theory for charged particles which, in the presence of electric and magnetic fields, diffuse by collisions against the atoms (or molecules) of a host medium is proposed. The combined effects of the external fields and of the mechanisms of scattering, removal and creation in building up the distribution function of the charged particles are investigated. The eigenvalue problem associated with the sourceless case of the given physical situation is also discussed. Applications of the theory to a purely velocity-dependent problem and to a space-dependent problem, respectively, are illustrated for the case of a separable isotropic scattering kernel of synthetic type. Calculations of the distribution function, of the total current density and of the relevant electrical conductivity are then carried out for different specializations of the external fields. (author)

  14. Protein fold recognition using geometric kernel data fusion.

    Science.gov (United States)

    Zakeri, Pooya; Jeuris, Ben; Vandebril, Raf; Moreau, Yves

    2014-07-01

    Various approaches based on features extracted from protein sequences, often combined with machine learning methods, have been used in the prediction of protein folds. Finding an efficient technique for integrating these different protein features has received increasing attention. In particular, kernel methods are an interesting class of techniques for integrating heterogeneous data. Various methods have been proposed to fuse multiple kernels. Most techniques for multiple kernel learning focus on learning a convex linear combination of base kernels. In addition to the limitation of linear combinations, working with such approaches could cause a loss of potentially useful information. We design several techniques to combine kernel matrices by taking more involved, geometry-inspired means of these matrices instead of convex linear combinations. We consider various sequence-based protein features, including information extracted directly from position-specific scoring matrices and local sequence alignment. We evaluate our methods for classification on the SCOP PDB-40D benchmark dataset for protein fold recognition. The best overall accuracy on the protein fold recognition test set obtained by our methods is ∼86.7%. This is an improvement over the results of the best existing approach. Moreover, our computational model has been developed by incorporating the functional domain composition of proteins through a hybridization model. It is observed that by using our proposed hybridization model, the protein fold recognition accuracy is further improved to 89.30%. Furthermore, we investigate the performance of our approach on the protein remote homology detection problem by fusing multiple string kernels. The MATLAB code used for our proposed geometric kernel fusion frameworks is publicly available at http://people.cs.kuleuven.be/∼raf.vandebril/homepage/software/geomean.php?menu=5/. © The Author 2014. Published by Oxford University Press.
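The geometry-inspired means of kernel matrices mentioned above include the affine-invariant geometric mean of two symmetric positive-definite matrices. A minimal numpy sketch using dense eigendecompositions (not the authors' MATLAB implementation):

```python
import numpy as np

def sqrtm_spd(a):
    """Matrix square root of a symmetric positive-definite matrix via eigh."""
    w, v = np.linalg.eigh(a)
    return (v * np.sqrt(w)) @ v.T

def geometric_mean(a, b):
    """Affine-invariant geometric mean: A # B = A^(1/2) (A^(-1/2) B A^(-1/2))^(1/2) A^(1/2)."""
    a_half = sqrtm_spd(a)
    a_half_inv = np.linalg.inv(a_half)
    middle = sqrtm_spd(a_half_inv @ b @ a_half_inv)
    return a_half @ middle @ a_half
```

For commuting matrices this reduces to the scalar geometric mean applied eigenvalue-wise, e.g. the mean of 2I and 8I is 4I.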

  15. Unsupervised multiple kernel learning for heterogeneous data integration.

    Science.gov (United States)

    Mariette, Jérôme; Villa-Vialaneix, Nathalie

    2018-03-15

    Recent high-throughput sequencing advances have expanded the breadth of available omics datasets, and the integrated analysis of multiple datasets obtained on the same samples has allowed important insights to be gained in a wide range of applications. However, the integration of various sources of information remains a challenge for systems biology, since the produced datasets are often of heterogeneous types, creating the need for generic methods that take their different specificities into account. We propose a multiple kernel framework that allows multiple datasets of various types to be integrated into a single exploratory analysis. Several solutions are provided to learn either a consensus meta-kernel or a meta-kernel that preserves the original topology of the datasets. We applied our framework to analyse two public multi-omics datasets. First, the multiple metagenomic datasets collected during the TARA Oceans expedition were explored to demonstrate that our method is able to retrieve previous findings in a single kernel PCA as well as to provide a new image of the sample structures when a larger number of datasets are included in the analysis. To perform this analysis, a generic procedure is also proposed to improve the interpretability of the kernel PCA with regard to the original data. Second, the multi-omics breast cancer datasets provided by The Cancer Genome Atlas are analysed using kernel Self-Organizing Maps with both single and multi-omics strategies. The comparison of these two approaches demonstrates the benefit of our integration method for improving the representation of the studied biological system. The proposed methods are available in the R package mixKernel, released on CRAN. It is fully compatible with the mixOmics package, and a tutorial describing the approach can be found on the mixOmics web site http://mixomics.org/mixkernel/. jerome.mariette@inra.fr or nathalie.villa-vialaneix@inra.fr. Supplementary data are available at Bioinformatics online.
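One simple reading of a consensus meta-kernel is an average of individually normalized kernel matrices, followed by kernel PCA on the result. A hedged sketch under that assumption; the package's actual consensus criterion is more involved than a plain average:

```python
import numpy as np

def cosine_normalize(K):
    """Scale a kernel matrix so every sample has unit self-similarity."""
    d = np.sqrt(np.diag(K))
    return K / np.outer(d, d)

def consensus_kernel(kernels):
    """Unweighted consensus: average of cosine-normalized kernel matrices."""
    return sum(cosine_normalize(K) for K in kernels) / len(kernels)

def kernel_pca(K, n_components=2):
    """Embed samples from a kernel matrix: center, eigendecompose, scale."""
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n  # centering matrix
    Kc = J @ K @ J
    w, v = np.linalg.eigh(Kc)
    idx = np.argsort(w)[::-1][:n_components]
    return v[:, idx] * np.sqrt(np.maximum(w[idx], 0))
```

Normalizing each kernel first keeps one dataset with a large kernel scale from dominating the consensus.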

  16. Kernel bundle EPDiff

    DEFF Research Database (Denmark)

    Sommer, Stefan Horst; Lauze, Francois Bernard; Nielsen, Mads

    2011-01-01

    In the LDDMM framework, optimal warps for image registration are found as end-points of critical paths for an energy functional, and the EPDiff equations describe the evolution along such paths. The Large Deformation Diffeomorphic Kernel Bundle Mapping (LDDKBM) extension of LDDMM allows scale space...

  17. Effect of finite sample dimensions and total scatter acceptance angle on the gamma ray buildup factor

    International Nuclear Information System (INIS)

    Singh, Sukhpal; Kumar, Ashok; Singh, Charanjeet; Thind, Kulwant Singh; Mudahar, Gurmel S.

    2008-01-01

    The simultaneous variation of gamma-ray buildup factors with absorber thickness (up to 6.5 mfp) and total scatter acceptance angle (the sum of the incidence and exit beam divergences) in high-volume flyash concrete and water was studied experimentally using a point isotropic 137Cs source.

  18. Proteome analysis of the almond kernel (Prunus dulcis).

    Science.gov (United States)

    Li, Shugang; Geng, Fang; Wang, Ping; Lu, Jiankang; Ma, Meihu

    2016-08-01

    Almond (Prunus dulcis) is a popular tree nut worldwide and offers many benefits to human health. However, the importance of almond kernel proteins in nutrition and human health requires further evaluation. The present study presents a systematic evaluation of the proteins in the almond kernel using proteomic analysis. The nutrient and amino acid content in almond kernels from Xinjiang is similar to that of American varieties; however, Xinjiang varieties have a higher protein content. Two-dimensional electrophoresis analysis demonstrated a wide distribution of molecular weights and isoelectric points of almond kernel proteins. A total of 434 proteins were identified by LC-MS/MS, and most were proteins that were experimentally confirmed for the first time. Gene ontology (GO) analysis of the 434 proteins indicated that they are involved mainly in metabolic processes (67.5%), cellular processes (54.1%) and single-organism processes (43.4%); the main molecular functions of almond kernel proteins are catalytic activity (48.0%), binding (45.4%) and structural molecule activity (11.9%); and the proteins are primarily distributed in the cell (59.9%), organelle (44.9%) and membrane (22.8%). The almond kernel is a source of a wide variety of proteins. This study provides important information contributing to the screening and identification of almond proteins, the understanding of almond protein function, and the development of almond protein products. © 2015 Society of Chemical Industry.

  19. Localized Multiple Kernel Learning Via Sample-Wise Alternating Optimization.

    Science.gov (United States)

    Han, Yina; Yang, Kunde; Ma, Yuanliang; Liu, Guizhong

    2014-01-01

    Our objective is to train support vector machine (SVM)-based localized multiple kernel learning (LMKL), using alternating optimization between the standard SVM solvers with the local combination of base kernels and the sample-specific kernel weights. The advantage of the alternating optimization developed from state-of-the-art MKL is the SVM-tied overall complexity and the simultaneous optimization of both the kernel weights and the classifier. Unfortunately, in LMKL, the sample-specific character makes the updating of kernel weights a difficult nonconvex quadratic problem. In this paper, starting from a new primal-dual equivalence, the canonical objective on which state-of-the-art methods are based is first decomposed into an ensemble of objectives corresponding to each sample, namely, sample-wise objectives. Then, the associated sample-wise alternating optimization method is conducted, in which the localized kernel weights can be independently obtained by solving their exclusive sample-wise objectives, either via linear programming (for the l1-norm) or in closed form (for the lp-norm). At test time, the learnt kernel weights for the training data are deployed based on the nearest-neighbor rule. Hence, to guarantee their generality on the test part, we introduce neighborhood information and incorporate it into the empirical loss when deriving the sample-wise objectives. Extensive experiments on four benchmark machine learning datasets and two real-world computer vision datasets demonstrate the effectiveness and efficiency of the proposed algorithm.

  20. Control Transfer in Operating System Kernels

    Science.gov (United States)

    1994-05-13

    microkernel system that runs less code in the kernel address space. To realize the performance benefit of allocating stacks in unmapped kseg0 memory, the...review how I modified the Mach 3.0 kernel to use continuations. Because of Mach’s message-passing microkernel structure, interprocess communication was...critical control transfer paths, deeply-nested call chains are undesirable in any case because of the function call overhead. 4.1.3 Microkernel Operating

  1. Bivariate discrete beta Kernel graduation of mortality data.

    Science.gov (United States)

    Mazza, Angelo; Punzo, Antonio

    2015-07-01

    Various parametric/nonparametric techniques have been proposed in the literature to graduate mortality data as a function of age. Nonparametric approaches, such as kernel smoothing regression, are often preferred because they do not assume any particular mortality law. Among the existing kernel smoothing approaches, the recently proposed (univariate) discrete beta kernel smoother has been shown to provide some benefits. Bivariate graduation, over age and calendar years or durations, is common practice in demography and actuarial sciences. In this paper, we generalize the discrete beta kernel smoother to the bivariate case, and we introduce an adaptive bandwidth variant that may provide additional benefits when data on exposures to the risk of death are available; furthermore, we outline a cross-validation procedure for bandwidth selection. Using simulation studies, we compare the bivariate approach proposed here with its corresponding univariate formulation and with two popular nonparametric bivariate graduation techniques, based on Epanechnikov kernels and on P-splines. To make the simulations realistic, a bivariate dataset based on probabilities of dying recorded for US males is used. The simulations confirm the gain in performance of the new bivariate approach with respect to both the univariate and the bivariate competitors.
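The discrete beta kernel smoother assigns, at each evaluation age, beta-shaped weights over the discrete age support, avoiding the boundary bias of symmetric kernels. A simplified sketch of this idea; the exact parametrization of the published smoother and its adaptive-bandwidth variant differ:

```python
import numpy as np

def discrete_beta_weights(x, m, h):
    """Weights over ages 0..m, beta-shaped and peaked near age x (bandwidth h)."""
    j = np.arange(m + 1)
    t = (j + 0.5) / (m + 1)            # map ages into (0, 1)
    alpha, beta = x / h + 1.0, (m - x) / h + 1.0
    w = t ** (alpha - 1.0) * (1.0 - t) ** (beta - 1.0)
    return w / w.sum()                  # normalize so the weights sum to 1

def graduate(rates, h=2.0):
    """Smooth raw mortality rates with the discrete beta kernel weights."""
    m = len(rates) - 1
    rates = np.asarray(rates, float)
    return np.array([discrete_beta_weights(x, m, h) @ rates for x in range(m + 1)])
```

Because the weights adapt their shape near the ends of the age range, the smoother never places mass outside the observed ages.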

  2. A framework for optimal kernel-based manifold embedding of medical image data.

    Science.gov (United States)

    Zimmer, Veronika A; Lekadir, Karim; Hoogendoorn, Corné; Frangi, Alejandro F; Piella, Gemma

    2015-04-01

    Kernel-based dimensionality reduction is a widely used technique in medical image analysis. To fully unravel the underlying nonlinear manifold the selection of an adequate kernel function and of its free parameters is critical. In practice, however, the kernel function is generally chosen as Gaussian or polynomial and such standard kernels might not always be optimal for a given image dataset or application. In this paper, we present a study on the effect of the kernel functions in nonlinear manifold embedding of medical image data. To this end, we first carry out a literature review on existing advanced kernels developed in the statistics, machine learning, and signal processing communities. In addition, we implement kernel-based formulations of well-known nonlinear dimensional reduction techniques such as Isomap and Locally Linear Embedding, thus obtaining a unified framework for manifold embedding using kernels. Subsequently, we present a method to automatically choose a kernel function and its associated parameters from a pool of kernel candidates, with the aim to generate the most optimal manifold embeddings. Furthermore, we show how the calculated selection measures can be extended to take into account the spatial relationships in images, or used to combine several kernels to further improve the embedding results. Experiments are then carried out on various synthetic and phantom datasets for numerical assessment of the methods. Furthermore, the workflow is applied to real data that include brain manifolds and multispectral images to demonstrate the importance of the kernel selection in the analysis of high-dimensional medical images. Copyright © 2014 Elsevier Ltd. All rights reserved.

  3. Fast-switching optically isotropic liquid crystal nano-droplets with improved depolarization and Kerr effect by doping high k nanoparticles.

    Science.gov (United States)

    Kim, Byeonggon; Kim, Hyun Gyu; Shim, Gyu-Yeop; Park, Ji-Sub; Joo, Kyung-Il; Lee, Dong-Jin; Lee, Joun-Ho; Baek, Ji-Ho; Kim, Byeong Koo; Choi, Yoonseuk; Kim, Hak-Rin

    2018-01-10

    We propose and analyze an optically isotropic nano-droplet liquid crystal (LC) doped with high-k nanoparticles (NPs), which exhibits enhanced Kerr effects and can be operated at reduced driving voltages. To enhance the contrast ratio together with the light efficiency, the LC droplet sizes were adjusted to be smaller than the wavelength of visible light, reducing depolarization caused by optical scattering from the LC droplets. Based on the optical analysis of the depolarization effects, the influence of the relationship between LC droplet size and NP doping ratio on the change in the Kerr effect was investigated.

  4. Measurement of Weight of Kernels in a Simulated Cylindrical Fuel Compact for HTGR

    International Nuclear Information System (INIS)

    Kim, Woong Ki; Lee, Young Woo; Kim, Young Min; Kim, Yeon Ku; Eom, Sung Ho; Jeong, Kyung Chai; Cho, Moon Sung; Cho, Hyo Jin; Kim, Joo Hee

    2011-01-01

    The TRISO-coated fuel particle for the high temperature gas-cooled reactor (HTGR) is composed of a nuclear fuel kernel and outer coating layers. The coated particles are mixed with a graphite matrix to make the HTGR fuel element. The weight of fuel kernels in an element is generally measured by chemical analysis or with a gamma-ray spectrometer. Although chemical analysis measures the weight of kernels accurately, the samples used in the analysis cannot be returned to the fabrication process, and radioactive wastes are generated during the inspection procedure. The gamma-ray spectrometer requires an elaborate reference sample to reduce measurement errors induced by differences in geometric shape between the test sample and the reference sample. X-ray computed tomography (CT) is an alternative that can measure the weight of kernels in a compact nondestructively. In this study, X-ray CT is applied to measure the weight of kernels in a cylindrical compact containing simulated TRISO-coated particles with ZrO2 kernels. The volume of kernels, as well as the number of kernels in the simulated compact, is measured from the 3-D density information. The weight of kernels was calculated from the volume of kernels or from the number of kernels. The weight of kernels was also measured by extracting the kernels from a compact, to verify the result of the X-ray CT application.
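    The CT-based weighing described above reduces to thresholding the reconstructed density volume, counting kernel voxels, and converting volume to mass. A toy sketch of that computation; the voxel size, ZrO2-like density, threshold, and synthetic "kernels" are all assumed, illustrative values:

```python
import numpy as np

def kernel_weight_from_ct(volume, voxel_mm3, density_g_cm3, threshold):
    """Estimate total kernel weight from a reconstructed CT density volume:
    threshold out the kernel voxels, convert voxel count to cm^3, multiply
    by the kernel material density."""
    kernel_voxels = np.count_nonzero(volume > threshold)
    volume_cm3 = kernel_voxels * voxel_mm3 / 1000.0
    return volume_cm3 * density_g_cm3

# Synthetic "compact": zero background plus two denser spherical kernels.
vol = np.zeros((40, 40, 40))
zz, yy, xx = np.mgrid[0:40, 0:40, 0:40]
for c in [(12, 20, 20), (28, 20, 20)]:
    mask = (zz - c[0])**2 + (yy - c[1])**2 + (xx - c[2])**2 <= 5**2
    vol[mask] = 6.0                     # assumed ZrO2-like reconstructed density
weight = kernel_weight_from_ct(vol, voxel_mm3=0.001,
                               density_g_cm3=6.0, threshold=3.0)  # grams
```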

  5. 3-D waveform tomography sensitivity kernels for anisotropic media

    KAUST Repository

    Djebbi, Ramzi

    2014-01-01

    The complications in anisotropic multi-parameter inversion lie in the trade-off between the different anisotropy parameters. We compute the tomographic waveform sensitivity kernels for a VTI acoustic medium perturbation as a tool to investigate this ambiguity between the different parameters. We use dynamic ray tracing to handle efficiently the expensive computational cost for 3-D anisotropic models. Ray tracing also provides the ray direction information necessary for conditioning the sensitivity kernels to handle anisotropy. The NMO velocity and η parameter kernels showed maximum sensitivity for diving waves, which makes those parameters a relevant choice in wave-equation tomography. The δ parameter kernel showed zero sensitivity; it can therefore serve as a secondary parameter to fit the amplitude in acoustic anisotropic inversion. Considering the limited penetration depth of diving waves, migration-velocity-analysis-based kernels are introduced to fix the depth ambiguity with reflections and to compute sensitivity maps in the deeper parts of the model.

  6. A Fourier-series-based kernel-independent fast multipole method

    International Nuclear Information System (INIS)

    Zhang Bo; Huang Jingfang; Pitsianis, Nikos P.; Sun Xiaobai

    2011-01-01

    We present in this paper a new kernel-independent fast multipole method (FMM), named FKI-FMM, for pairwise particle interactions with translation-invariant kernel functions. FKI-FMM creates, using numerical techniques, sufficiently accurate and compressive representations of a given kernel function over multi-scale interaction regions in the form of a truncated Fourier series. It also provides economical operators for the multipole-to-multipole, multipole-to-local, and local-to-local translations that are typical and essential in FMM algorithms. The multipole-to-local translation operator, in particular, is readily diagonal and does not dominate the arithmetic operations. FKI-FMM provides an alternative and competitive option, among other kernel-independent FMM algorithms, for efficient application of the FMM, especially where the kernel function consists of multi-physics and multi-scale components such as those arising in recent studies of biological systems. We present a complexity analysis and demonstrate with experimental results the performance of FKI-FMM in accuracy and efficiency.
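    The core ingredient of FKI-FMM is a truncated Fourier-series representation of a translation-invariant kernel. A one-dimensional sketch of that representation (the kernel function, interval, and truncation order below are illustrative, not the paper's):

```python
import numpy as np

def fourier_coeffs(kernel, L, n_terms, n_quad=2048):
    """Fourier coefficients of k(r) on [-L, L] via the periodic rectangle rule:
    c_m = (1 / 2L) * integral of k(r) * exp(-i*pi*m*r/L) dr."""
    r = np.linspace(-L, L, n_quad, endpoint=False)
    k = kernel(r)
    m = np.arange(-n_terms, n_terms + 1)
    basis = np.exp(-1j * np.pi * np.outer(m, r) / L)
    return (basis @ k) / n_quad, m

def fourier_eval(coeffs, m, r, L):
    """Evaluate the truncated Fourier series at points r."""
    return np.real(np.exp(1j * np.pi * np.outer(r, m) / L) @ coeffs)

kern = lambda r: 1.0 / (1.0 + r**2)     # illustrative translation-invariant kernel
L, n_terms = 4.0, 16
c, m = fourier_coeffs(kern, L, n_terms)
r_test = np.linspace(-2, 2, 101)
err = np.max(np.abs(fourier_eval(c, m, r_test, L) - kern(r_test)))
```

    A shift of the evaluation point only multiplies each coefficient by a phase factor, which is why the multipole-to-local translation becomes diagonal in this representation.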

  7. Resummed memory kernels in generalized system-bath master equations

    International Nuclear Information System (INIS)

    Mavros, Michael G.; Van Voorhis, Troy

    2014-01-01

    Generalized master equations provide a concise formalism for studying reduced population dynamics. Usually, these master equations require a perturbative expansion of the memory kernels governing the dynamics; in order to prevent divergences, these expansions must be resummed. Resummation techniques for perturbation series are ubiquitous in physics, but they have not been widely studied for the time-dependent memory kernels used in generalized master equations. In this paper, we present a comparison of different resummation techniques for such memory kernels up to fourth order. We study specifically the spin-boson Hamiltonian as a model system-bath Hamiltonian, treating the diabatic coupling between the two states as a perturbation. A novel derivation of the fourth-order memory kernel for the spin-boson problem is presented; the second- and fourth-order kernels are then evaluated numerically for a variety of spin-boson parameter regimes. We find that resumming the kernels through fourth order using a Padé approximant results in divergent populations in the strong electronic coupling regime, due to a singularity introduced by the nature of the resummation, and thus recommend a non-divergent exponential resummation (the "Landau-Zener resummation" of previous work). The inclusion of fourth-order effects in a Landau-Zener-resummed kernel is shown to improve both the dephasing rate and the obedience of detailed balance over simpler prescriptions like the non-interacting blip approximation, showing relatively quick convergence on the exact answer. The results suggest that including higher-order contributions to the memory kernel of a generalized master equation and performing an appropriate resummation can provide a numerically exact solution to system-bath dynamics for a general spectral density, opening the way to a new class of methods for treating system-bath dynamics.
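    Padé resummation replaces a truncated power series by a rational function matching the same low-order coefficients. A minimal [1/1] example on a toy series (not the spin-boson memory kernel itself) shows how a resummed expression can stay finite, here even exact, where the truncated series fails:

```python
import math

def pade_1_1(c0, c1, c2):
    """[1/1] Pade approximant (a0 + a1*x) / (1 + b1*x) whose Taylor
    expansion matches c0 + c1*x + c2*x**2 + O(x^3)."""
    b1 = -c2 / c1
    a0, a1 = c0, c1 + c0 * b1
    return lambda x: (a0 + a1 * x) / (1.0 + b1 * x)

# Toy series: the expansion of 1/(1+x) truncated at second order,
# with coefficients 1, -1, 1 (an illustrative stand-in for a kernel series).
f = pade_1_1(1.0, -1.0, 1.0)            # recovers 1/(1+x) exactly
taylor = lambda x: 1.0 - x + x**2       # truncated series, useless for x > 1
```

    At x = 4 the truncated series gives 13 while the [1/1] Padé approximant reproduces the exact value 0.2; the abstract's caution applies equally, since the rational form can also introduce spurious poles (the singularity blamed for the divergent populations).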

  8. The dipole form of the gluon part of the BFKL kernel

    International Nuclear Information System (INIS)

    Fadin, V.S.; Fiore, R.; Grabovsky, A.V.; Papa, A.

    2007-01-01

    The dipole form of the gluon part of the color-singlet BFKL kernel in the next-to-leading order (NLO) is obtained in the coordinate representation by direct transfer from the momentum representation, where the kernel was calculated before. With this paper, the transformation of the NLO BFKL kernel to the dipole form, started a few months ago with the quark part of the kernel, is completed.

  9. Improving prediction of heterodimeric protein complexes using combination with pairwise kernel.

    Science.gov (United States)

    Ruan, Peiying; Hayashida, Morihiro; Akutsu, Tatsuya; Vert, Jean-Philippe

    2018-02-19

    Since many proteins become functional only after they interact with their partner proteins and form protein complexes, it is essential to identify the sets of proteins that form complexes. Several computational methods have therefore been proposed to predict complexes from the topology and structure of experimental protein-protein interaction (PPI) networks. These methods work well for predicting complexes involving at least three proteins, but generally fail at identifying complexes involving only two different proteins, called heterodimeric complexes or heterodimers. There is, however, an urgent need for efficient methods to predict heterodimers, since the majority of known protein complexes are precisely heterodimers. In this paper, we use three promising kernel functions: the Min kernel and two pairwise kernels, the Metric Learning Pairwise Kernel (MLPK) and the Tensor Product Pairwise Kernel (TPPK). We also consider normalized forms of the Min kernel, and we combine the Min kernel, or one of its normalized forms, with one of the pairwise kernels. We applied kernels based on PPI, domain, phylogenetic-profile, and subcellular-localization properties to predicting heterodimers. We then evaluate our method by employing C-Support Vector Classification (C-SVC), carrying out 10-fold cross-validation, and calculating the average F-measures. The results suggest that the combination of the normalized Min kernel and MLPK leads to the best F-measure and improves on the performance of our previous work, which had been the best existing method so far. We propose new methods to predict heterodimers using a machine-learning approach: we train a support vector machine (SVM) to discriminate interacting from non-interacting protein pairs, based on information extracted from PPI, domains, phylogenetic profiles and subcellular localization. We evaluate in detail new kernel functions to encode these data, and report prediction performance that outperforms the state of the art.
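    The kernels named above have simple closed forms: the Min kernel sums elementwise minima of two feature vectors, and TPPK symmetrizes products of single-protein kernel values over the two ways of matching pair members. A small sketch with illustrative feature vectors (the real method plugs these into an SVM):

```python
import numpy as np

def min_kernel(x, y):
    """Min (histogram-intersection) kernel between feature vectors."""
    return float(np.minimum(x, y).sum())

def tppk(k, a, b, c, d):
    """Tensor Product Pairwise Kernel between protein pairs (a, b) and (c, d):
    K((a,b),(c,d)) = k(a,c)k(b,d) + k(a,d)k(b,c), symmetric in pair order."""
    return k(a, c) * k(b, d) + k(a, d) * k(b, c)

# Illustrative count-style feature vectors for four hypothetical proteins.
a, b = np.array([1.0, 0.0, 2.0]), np.array([0.0, 1.0, 1.0])
c, d = np.array([1.0, 1.0, 0.0]), np.array([2.0, 0.0, 2.0])
val = tppk(min_kernel, a, b, c, d)
```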

  10. A new discrete dipole kernel for quantitative susceptibility mapping.

    Science.gov (United States)

    Milovic, Carlos; Acosta-Cabronero, Julio; Pinto, José Miguel; Mattern, Hendrik; Andia, Marcelo; Uribe, Sergio; Tejos, Cristian

    2018-09-01

    Most approaches for quantitative susceptibility mapping (QSM) are based on a forward-model approximation that employs a continuous Fourier transform operator to solve a differential equation system. Such a formulation, however, is prone to high-frequency aliasing. The aim of this study was to reduce such errors using an alternative dipole kernel formulation based on the discrete Fourier transform and discrete operators. The impact of this approach on forward-model calculation and susceptibility inversion was evaluated against the continuous formulation, both with synthetic phantoms and with in vivo MRI data. The discrete kernel demonstrated systematically better fits to analytic field solutions, and showed fewer over-oscillations and aliasing artifacts while preserving low- and medium-frequency responses relative to those obtained with the continuous kernel. In the context of QSM estimation, the use of the proposed discrete kernel resulted in error reduction and increased sharpness. This proof-of-concept study demonstrated that discretizing the dipole kernel is advantageous for QSM. The impact on small or narrow structures such as the venous vasculature might be particularly relevant to high-resolution QSM applications with ultra-high-field MRI, a topic for future investigations. The proposed dipole kernel can be incorporated into existing QSM routines in a straightforward manner. Copyright © 2018 Elsevier Inc. All rights reserved.
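    For reference, the continuous-Fourier baseline that the record improves upon applies the dipole kernel D(k) = 1/3 - kz^2/|k|^2 multiplicatively in k-space. A minimal sketch of that baseline forward model (the record's discrete-operator kernel is not reproduced here; axis convention and grid size are illustrative):

```python
import numpy as np

def dipole_kernel(shape):
    """Continuous-Fourier dipole kernel D(k) = 1/3 - kz^2/|k|^2 on an FFT
    grid, with B0 taken along the first axis; D(0) is set to 0 by convention."""
    kz, ky, kx = np.meshgrid(*[np.fft.fftfreq(n) for n in shape], indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    with np.errstate(divide="ignore", invalid="ignore"):
        D = 1.0 / 3.0 - kz**2 / k2
    D[0, 0, 0] = 0.0
    return D

def forward_field(chi):
    """Field perturbation induced by a susceptibility map chi (arb. units)."""
    D = dipole_kernel(chi.shape)
    return np.real(np.fft.ifftn(D * np.fft.fftn(chi)))

chi = np.zeros((16, 16, 16))
chi[8, 8, 8] = 1.0                      # point susceptibility source
field = forward_field(chi)
```

    The aliasing the record targets arises because this point-sampled continuous D is not the exact Fourier transform of any finite-grid operator; the proposed discrete kernel replaces it with one that is.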

  11. Genetic Analysis of Kernel Traits in Maize-Teosinte Introgression Populations

    Directory of Open Access Journals (Sweden)

    Zhengbin Liu

    2016-08-01

    Full Text Available Seed traits have been targeted by human selection during the domestication of crop species as a way to increase the caloric and nutritional content of food during the transition from hunter-gatherer to early farming societies. The primary seed trait under selection was likely seed size/weight, as it is most directly related to overall grain yield. Additional seed traits involved in seed shape may also have contributed to larger grain. Maize (Zea mays ssp. mays) kernel weight has increased more than 10-fold in the 9000 years since domestication from its wild ancestor, teosinte (Z. mays ssp. parviglumis). In order to study how size and shape affect kernel weight, we analyzed kernel morphometric traits in a set of 10 maize-teosinte introgression populations using digital imaging software. We identified quantitative trait loci (QTL) for kernel area and length with moderate allelic effects that colocalize with kernel weight QTL. Several genomic regions with strong effects during maize domestication were detected, and a genetic framework for kernel traits was characterized by complex pleiotropic interactions. Our results both confirm prior reports of kernel domestication loci and identify previously uncharacterized QTL with a range of allelic effects, enabling future research into the genetic basis of these traits.

  12. SU-E-T-154: Calculation of Tissue Dose Point Kernels Using GATE Monte Carlo Simulation Toolkit to Compare with Water Dose Point Kernel

    Energy Technology Data Exchange (ETDEWEB)

    Khazaee, M [shahid beheshti university, Tehran, Tehran (Iran, Islamic Republic of); Asl, A Kamali [Shahid Beheshti University, Tehran, Iran., Tehran, Tehran (Iran, Islamic Republic of); Geramifar, P [Shariati Hospital, Tehran, Iran., Tehran, Tehran (Iran, Islamic Republic of)

    2015-06-15

    Purpose: The objective of this study was to assess utilizing water dose point kernels (DPKs) instead of tissue dose point kernels in convolution algorithms. To the best of our knowledge, in providing a 3D distribution of absorbed dose from a 3D distribution of activity, the human body is considered equivalent to water; as a result, tissue variations are not considered in patient-specific dosimetry. Methods: In this study, GATE v7.0 was used to calculate tissue dose point kernels. The beta-emitting radionuclides taken into consideration in this simulation include Y-90, Lu-177 and P-32, which are commonly used in nuclear medicine. The comparison has been performed for dose point kernels of adipose, bone, breast, heart, intestine, kidney, liver, lung and spleen versus the water dose point kernel. Results: To validate the simulation, the results for the 90Y DPK in water were compared with the published results of Papadimitroulas et al (Med. Phys., 2012). The results showed that the mean differences between the water DPK and the other soft-tissue DPKs range between 0.6% and 1.96% for 90Y, except for lung and bone, where the observed discrepancies are 6.3% and 12.19%, respectively. The range of DPK differences for 32P is between 1.74% for breast and 18.85% for bone. For 177Lu, the highest difference belongs to bone, at 16.91%; among the other soft tissues, the least discrepancy is observed in kidney, at 1.68%. Conclusion: In all tissues except lung and bone, the GATE results for the dose point kernel were comparable to the water dose point kernel, which demonstrates the appropriateness of applying the water dose point kernel instead of soft-tissue kernels in the field of nuclear medicine.

  13. SU-E-T-154: Calculation of Tissue Dose Point Kernels Using GATE Monte Carlo Simulation Toolkit to Compare with Water Dose Point Kernel

    International Nuclear Information System (INIS)

    Khazaee, M; Asl, A Kamali; Geramifar, P

    2015-01-01

    Purpose: The objective of this study was to assess utilizing water dose point kernels (DPKs) instead of tissue dose point kernels in convolution algorithms. To the best of our knowledge, in providing a 3D distribution of absorbed dose from a 3D distribution of activity, the human body is considered equivalent to water; as a result, tissue variations are not considered in patient-specific dosimetry. Methods: In this study, GATE v7.0 was used to calculate tissue dose point kernels. The beta-emitting radionuclides taken into consideration in this simulation include Y-90, Lu-177 and P-32, which are commonly used in nuclear medicine. The comparison has been performed for dose point kernels of adipose, bone, breast, heart, intestine, kidney, liver, lung and spleen versus the water dose point kernel. Results: To validate the simulation, the results for the 90Y DPK in water were compared with the published results of Papadimitroulas et al (Med. Phys., 2012). The results showed that the mean differences between the water DPK and the other soft-tissue DPKs range between 0.6% and 1.96% for 90Y, except for lung and bone, where the observed discrepancies are 6.3% and 12.19%, respectively. The range of DPK differences for 32P is between 1.74% for breast and 18.85% for bone. For 177Lu, the highest difference belongs to bone, at 16.91%; among the other soft tissues, the least discrepancy is observed in kidney, at 1.68%. Conclusion: In all tissues except lung and bone, the GATE results for the dose point kernel were comparable to the water dose point kernel, which demonstrates the appropriateness of applying the water dose point kernel instead of soft-tissue kernels in the field of nuclear medicine.

  14. Scientific opinion on the acute health risks related to the presence of cyanogenic glycosides in raw apricot kernels and products derived from raw apricot kernels

    DEFF Research Database (Denmark)

    Petersen, Annette

    of kernels promoted (10 and 60 kernels/day for the general population and cancer patients, respectively), exposures exceeded the ARfD 17–413 and 3–71 times in toddlers and adults, respectively. The estimated maximum quantity of apricot kernels (or raw apricot material) that can be consumed without exceeding...

  15. On a hierarchical construction of the anisotropic LTSN solution from the isotropic LTSN solution

    International Nuclear Information System (INIS)

    Foletto, Taline; Segatto, Cynthia F.; Bodmann, Bardo E.; Vilhena, Marco T.

    2015-01-01

    In this work, we present a recursive scheme targeting the hierarchical construction of the anisotropic LTSN solution from the isotropic LTSN solution. The main idea relies on the decomposition of the associated LTSN anisotropic matrix as a sum of two matrices, one containing the isotropic part of the problem and the other the anisotropic part. The matrix containing the anisotropic part is treated as the source of the isotropic problem. This problem is solved by decomposing the angular flux as a truncated series of intermediate functions, which are substituted into the split isotropic equation to construct a set of recursive isotropic problems that are readily solved by the classic isotropic LTSN method. We apply this methodology to solve problems with homogeneous and heterogeneous anisotropic regions. Numerical results are presented and compared with the classical anisotropic LTSN solution. (author)

  16. Kernel Function Tuning for Single-Layer Neural Networks

    Czech Academy of Sciences Publication Activity Database

    Vidnerová, Petra; Neruda, Roman

    -, accepted 28.11. 2017 (2018) ISSN 2278-0149 R&D Projects: GA ČR GA15-18108S Institutional support: RVO:67985807 Keywords : single-layer neural networks * kernel methods * kernel function * optimisation Subject RIV: IN - Informatics, Computer Science http://www.ijmerr.com/

  17. The isotropic local Wigner-Seitz model: An accurate theoretical model for the quasi-free electron energy in fluids

    Science.gov (United States)

    Evans, Cherice; Findley, Gary L.

    The quasi-free electron energy V0(ρ) is important in understanding electron transport through a fluid, as well as for modeling electron attachment reactions in fluids. Our group has developed an isotropic local Wigner-Seitz model that allows one to calculate the quasi-free electron energy successfully for a variety of atomic and molecular fluids, from low density up to the density of the triple-point liquid, with only a single adjustable parameter. When coupled with quasi-free electron energy data and thermodynamic data for the fluids, this model can also yield optimized intermolecular potential parameters and the zero-kinetic-energy electron scattering length. In this poster, we review the isotropic local Wigner-Seitz model in comparison with previous theoretical models for the quasi-free electron energy. All measurements were performed at the University of Wisconsin Synchrotron Radiation Center. This work was supported by grants from the National Science Foundation (NSF CHE-0956719), the Petroleum Research Fund (45728-B6 and 5-24880), the Louisiana Board of Regents Support Fund (LEQSF(2006-09)-RD-A33), and the Professional Staff Congress of the City University of New York.

  18. Broken rice kernels and the kinetics of rice hydration and texture during cooking.

    Science.gov (United States)

    Saleh, Mohammed; Meullenet, Jean-Francois

    2013-05-01

    During rice milling and processing, broken kernels are inevitably present, although to date it has been unclear how the presence of broken kernels affects rice hydration and cooked rice texture. This work therefore studied the effect of broken kernels in a rice sample on rice hydration and texture during cooking. Two medium-grain and two long-grain rice cultivars were harvested, dried and milled, and the broken kernels were separated from unbroken kernels. Broken rice kernels were subsequently combined with unbroken rice kernels, forming treatments with broken-kernel ratios of 0, 40, 150, 350 or 1000 g kg⁻¹. Rice samples were then cooked, and the moisture content of the cooked rice, the moisture uptake rate, and rice hardness and stickiness were measured. As the amount of broken rice kernels increased, the texture of the rice samples became increasingly softer, and hardness was negatively correlated to the percentage of broken kernels in the rice samples. Differences in the proportions of broken rice in a milled rice sample play a major role in determining the texture properties of cooked rice. Variations in the moisture migration kinetics between broken and unbroken kernels caused faster hydration of the cores of broken rice kernels, with greater starch leach-out during cooking affecting the texture of the cooked rice. The texture of cooked rice can be controlled, to some extent, by varying the proportion of broken kernels in milled rice. © 2012 Society of Chemical Industry.
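    Grain-soaking kinetics of the kind discussed above are often summarized with a first-order hydration model, M(t) = Me - (Me - M0)·exp(-k·t). The sketch below only illustrates how a faster-hydrating broken fraction raises the moisture of a blend; the rate constants and moisture levels are assumed values, not the paper's fitted parameters:

```python
import math

def moisture(t, m0, me, k):
    """First-order hydration model commonly used for grain soaking:
    M(t) = Me - (Me - M0) * exp(-k * t)."""
    return me - (me - m0) * math.exp(-k * t)

def blend_moisture(t, frac_broken, k_broken, k_whole, m0=0.12, me=0.65):
    """Moisture of a milled-rice blend: mass-weighted average of the broken
    and unbroken fractions, each hydrating at its own (assumed) rate."""
    return (frac_broken * moisture(t, m0, me, k_broken)
            + (1.0 - frac_broken) * moisture(t, m0, me, k_whole))

# Broken kernels hydrate faster (larger k), so blends with more brokens
# reach a given moisture sooner; all numbers here are illustrative.
m_low = blend_moisture(10.0, 0.04, k_broken=0.25, k_whole=0.10)
m_high = blend_moisture(10.0, 0.35, k_broken=0.25, k_whole=0.10)
```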

  19. Scattering from a PEC Slightly Rough Surface in Chiral Media

    Directory of Open Access Journals (Sweden)

    Haroon Akhtar Qureshi

    2018-01-01

    Full Text Available The scattering of a left circularly polarized wave from a perfectly electric conducting (PEC) rough surface in an isotropic chiral medium is investigated. Since a slightly rough interface is assumed, the solution is obtained using a perturbation method. The zeroth-order term corresponds to the solution for a flat interface, which allows comparison with results reported in the literature. The first-order term gives the contribution from the surface perturbations, and it is used to define incoherent bistatic scattering coefficients for a Gaussian rough surface. The higher-order solution is obtained in a recursive manner. Numerical results are reported for different values of chirality, correlation length, and rms height of the surface. The diffraction efficiency is defined for a sinusoidal grating.

  20. Local coding based matching kernel method for image classification.

    Directory of Open Access Journals (Sweden)

    Yan Song

    Full Text Available This paper mainly focuses on how to effectively and efficiently measure visual similarity for local feature based representation. Among existing methods, metrics based on Bag of Visual Words (BoV) techniques are efficient and conceptually simple, at the expense of effectiveness. By contrast, kernel based metrics are more effective, but at the cost of greater computational complexity and increased storage requirements. We show that a unified visual matching framework can be developed to encompass both BoV and kernel based metrics, in which the local kernel plays an important role between feature pairs or between features and their reconstruction. Generally, local kernels are defined using Euclidean distance or its derivatives, based either explicitly or implicitly on an assumption of Gaussian noise. However, local features such as SIFT and HoG often follow a heavy-tailed distribution which tends to undermine the motivation behind Euclidean metrics. Motivated by recent advances in feature coding techniques, a novel efficient local coding based matching kernel (LCMK) method is proposed. This exploits the manifold structures in Hilbert space derived from local kernels. The proposed method combines advantages of both BoV and kernel based metrics, and achieves a linear computational complexity. This enables efficient and scalable visual matching to be performed on large scale image sets. To evaluate the effectiveness of the proposed LCMK method, we conduct extensive experiments with widely used benchmark datasets, including 15-Scenes, Caltech101/256, PASCAL VOC 2007 and 2011 datasets. Experimental results confirm the effectiveness of the relatively efficient LCMK method.

  1. Cosmic recall and the scattering picture of loop quantum cosmology

    International Nuclear Information System (INIS)

    Kaminski, Wojciech; Pawlowski, Tomasz

    2010-01-01

    The global dynamics of a homogeneous Universe in loop quantum cosmology is viewed as a scattering process of its geometrodynamical equivalent. This picture is applied to build a flexible (easy to generalize) method, not restricted to exactly solvable models, of verifying the preservation of semiclassicality through the bounce. The devised method is then applied to two simple examples: (i) the isotropic Friedmann-Robertson-Walker universe, and (ii) the isotropic sector of the Bianchi I model. For both of them we show that the dispersions in the logarithm of the volume ln(v) and of the scalar field momentum ln(pφ) in the distant future and past are related via strong triangle inequalities. This implies, in particular, a strict preservation of semiclassicality (in the considered degrees of freedom) in both cases (i) and (ii). The derived inequalities are general: they are valid for all physical states within the considered models.

  2. Multivariate realised kernels

    DEFF Research Database (Denmark)

    Barndorff-Nielsen, Ole; Hansen, Peter Reinhard; Lunde, Asger

    We propose a multivariate realised kernel to estimate the ex-post covariation of log-prices. We show this new consistent estimator is guaranteed to be positive semi-definite and is robust to measurement noise of certain types and can also handle non-synchronous trading. It is the first estimator...
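    A realised-kernel estimate is a weighted sum of return autocovariance matrices. A minimal sketch using the Parzen weight function; the w(h/(H+1)) weighting follows the non-flat-top variant that the authors advocate for a positive semi-definite estimate, and the bandwidth and simulated returns are illustrative:

```python
import numpy as np

def parzen(x):
    """Parzen weight function on [0, 1]."""
    if x < 0.5:
        return 1.0 - 6.0 * x**2 + 6.0 * x**3
    if x <= 1.0:
        return 2.0 * (1.0 - x)**3
    return 0.0

def realised_kernel(returns, H):
    """Multivariate realised kernel: Gamma_0 plus Parzen-weighted sums of
    lagged autocovariance matrices Gamma_h and their transposes."""
    n, d = returns.shape
    def gamma(h):
        # Gamma_h = sum_t r_{t+h} r_t' as a (d, d) matrix.
        return returns[h:].T @ returns[:n - h] if h > 0 else returns.T @ returns
    K = gamma(0).astype(float)
    for h in range(1, H + 1):
        w = parzen(h / (H + 1.0))
        g = gamma(h)
        K += w * (g + g.T)
    return K

rng = np.random.default_rng(2)
r = rng.normal(scale=0.01, size=(500, 2))   # illustrative high-frequency returns
K = realised_kernel(r, H=10)                # ex-post covariation estimate
```

    The weighting of lagged autocovariances is what absorbs market-microstructure noise that would bias the plain realised covariance (the H = 0 case).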

  3. Process for producing metal oxide kernels and kernels so obtained

    International Nuclear Information System (INIS)

    Lelievre, Bernard; Feugier, Andre.

    1974-01-01

    The process described is for producing fissile or fertile metal oxide kernels used in the fabrication of fuels for high-temperature nuclear reactors. The process consists of adding to an aqueous solution of at least one metallic salt, particularly actinide nitrates, at least one chemical compound capable of releasing ammonia, then dispersing the resulting solution drop by drop into a hot organic phase to gel the drops and transform them into solid particles. These particles are then washed, dried and treated to turn them into oxide kernels. The organic phase used for the gelling reaction is a mixture of two organic liquids, one acting as a solvent and the other being a product capable of extracting the anions from the metallic salt of the drop at the time of gelling. Preferably an amine is used as the anion-extracting product. Additionally, an alcohol that causes a partial dehydration of the drops can be employed as the solvent, thus helping to increase the resistance of the particles [fr]

  4. Geodesic exponential kernels: When Curvature and Linearity Conflict

    DEFF Research Database (Denmark)

    Feragen, Aase; Lauze, François; Hauberg, Søren

    2015-01-01

    manifold, the geodesic Gaussian kernel is only positive definite if the Riemannian manifold is Euclidean. This implies that any attempt to design geodesic Gaussian kernels on curved Riemannian manifolds is futile. However, we show that for spaces with conditionally negative definite distances the geodesic...

  5. Real time kernel performance monitoring with SystemTap

    CERN Multimedia

    CERN. Geneva

    2018-01-01

    SystemTap is a dynamic method of monitoring and tracing the operation of a running Linux kernel. In this talk I will present a few practical use cases where SystemTap allowed me to turn otherwise complex userland monitoring tasks into simple kernel probes.

  6. Comparative Analysis of Kernel Methods for Statistical Shape Learning

    National Research Council Canada - National Science Library

    Rathi, Yogesh; Dambreville, Samuel; Tannenbaum, Allen

    2006-01-01

    .... In this work, we perform a comparative analysis of shape learning techniques such as linear PCA, kernel PCA, locally linear embedding and propose a new method, kernelized locally linear embedding...

  7. Brillouin scattering, piezobirefringence, and dispersion of photoelastic coefficients of CdS and ZnO

    DEFF Research Database (Denmark)

    Berkowicz, R.; Skettrup, Torben

    1975-01-01

    We have measured the dispersion of the Brillouin scattering from acoustoelectrical domains in CdS and ZnO. These spectra are compared with the birefringence spectra obtained by applying uniaxial stress. The resonant cancellation of the Brillouin scattering occurs at the spectral position of the isotropic point of the stress-induced birefringence. From these spectra it is concluded that the Brillouin scattering in CdS and ZnO is determined by elasto-optic effects alone. The spectra of some of the photoelastic coefficients have been determined. A model dielectric constant is derived where both.... It is found that the exchange interaction between the excitons may change the values of the photoelastic coefficients in ZnO by about 10%....

  8. Semi-supervised learning for ordinal Kernel Discriminant Analysis.

    Science.gov (United States)

    Pérez-Ortiz, M; Gutiérrez, P A; Carbonero-Ruz, M; Hervás-Martínez, C

    2016-12-01

    Ordinal classification considers those classification problems where the labels of the variable to predict follow a given order. Naturally, labelled data is scarce or difficult to obtain in this type of problems because, in many cases, ordinal labels are given by a user or expert (e.g. in recommendation systems). Firstly, this paper develops a new strategy for ordinal classification where both labelled and unlabelled data are used in the model construction step (a scheme which is referred to as semi-supervised learning). More specifically, the ordinal version of kernel discriminant learning is extended for this setting considering the neighbourhood information of unlabelled data, which is proposed to be computed in the feature space induced by the kernel function. Secondly, a new method for semi-supervised kernel learning is devised in the context of ordinal classification, which is combined with our developed classification strategy to optimise the kernel parameters. The experiments conducted compare 6 different approaches for semi-supervised learning in the context of ordinal classification in a battery of 30 datasets, showing (1) the good synergy of the ordinal version of discriminant analysis and the use of unlabelled data and (2) the advantage of computing distances in the feature space induced by the kernel function. Copyright © 2016 Elsevier Ltd. All rights reserved.
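    Computing neighbourhoods of unlabelled points "in the feature space induced by the kernel function", as the abstract describes, needs no explicit feature map, since ||φ(x) - φ(y)||² = k(x,x) + k(y,y) - 2k(x,y). A minimal sketch of that computation (the RBF kernel and toy data are illustrative, not the paper's semi-supervised discriminant model):

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    """Gaussian kernel between two vectors; gamma is an illustrative choice."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

def feature_space_dist2(k, x, y):
    """Squared distance between phi(x) and phi(y) in the RKHS induced by k:
    ||phi(x) - phi(y)||^2 = k(x,x) + k(y,y) - 2 k(x,y)."""
    return k(x, x) + k(y, y) - 2.0 * k(x, y)

def knn_in_feature_space(k, X, i, n_neighbours):
    """Indices of the nearest neighbours of X[i], measured in feature space."""
    d = [feature_space_dist2(k, X[i], X[j]) for j in range(len(X))]
    order = np.argsort(d)
    return [j for j in order if j != i][:n_neighbours]

X = np.array([[0.0, 0.0], [0.1, 0.0], [1.0, 1.0], [0.0, 0.2]])
nn = knn_in_feature_space(rbf, X, 0, 2)
```

    With a kernel whose value decreases monotonically in Euclidean distance, such as the RBF above, the feature-space neighbourhood ranking coincides with the input-space one; for learned or combined kernels, as in the record, the two can differ, which is precisely the point of computing distances through k.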

  9. REDUCED ISOTROPIC CRYSTAL MODEL WITH RESPECT TO THE FOURTH-ORDER ELASTIC MODULI

    Directory of Open Access Journals (Sweden)

    O. Burlayenko

    2018-04-01

    Full Text Available Using a reduced isotropic crystal model, the relationship between the fourth-order elastic moduli of an isotropic medium and the independent components of the fourth-order elastic moduli tensor of real crystals of various crystal systems is found. Because the computation is overly complex, the computer algebra systems Redberry and Mathematica were used to calculate the coefficients of these relations, working with high-order tensors in symbolic and explicit form. In an isotropic medium there are four independent fourth-order elastic moduli. This is due to the presence of four invariants of an eighth-rank tensor in three-dimensional space that has symmetries over pairs of indices. As an example, the moduli of elasticity of an isotropic medium corresponding to certain crystals of the cubic system are given (LiF, NaCl, MgO, CaF2). From the obtained results it can be seen that the reduced isotropic crystal model can be applied most effectively to high-symmetry crystal systems.

  10. A finite-density calculation of the surface tension of isotropic-nematic interfaces

    International Nuclear Information System (INIS)

    Moore, B.G.; McMullen, W.E.

    1992-01-01

    The surface tension of the isotropic-nematic interface in a fluid of intermediate-sized hard particles is studied and calculated. The transition from isotropic to nematic is fixed to occur in a continuous fashion by varying the biaxiality of the model particles. A reversal in the preferred orientation of the bulk nematic relative to the isotropic-nematic interface suggests an oblique orientation of the bulk nematic. 32 refs., 8 figs

  11. Parameter optimization in the regularized kernel minimum noise fraction transformation

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg; Vestergaard, Jacob Schack

    2012-01-01

    Based on the original, linear minimum noise fraction (MNF) transformation and kernel principal component analysis, a kernel version of the MNF transformation was recently introduced. Inspired by earlier work, we here give a simple method for finding optimal parameters in a regularized version of kernel MNF analysis. We consider the model signal-to-noise ratio (SNR) as a function of the kernel parameters and the regularization parameter. In 2-4 steps of increasingly refined grid searches we find the parameters that maximize the model SNR. An example based on data from the DLR 3K camera system is given.
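    The "2-4 steps of increasingly refined grid searches" can be sketched generically for a single parameter; the score function stands in for the model SNR, and all names here are hypothetical rather than taken from the paper:

```python
import numpy as np

def refine_grid_search(score, lo, hi, steps=4, n=5):
    # Coarse-to-fine 1-D grid search: evaluate the score on a grid,
    # keep the best point, then search again on a narrower interval
    # centred on it.  Repeat for a fixed number of refinement steps.
    for _ in range(steps):
        grid = np.linspace(lo, hi, n)
        best = grid[np.argmax([score(g) for g in grid])]
        width = (hi - lo) / (n - 1)
        lo, hi = best - width, best + width
    return best
```

    In practice one would run this (or its 2-D analogue) jointly over the kernel parameter and the regularization parameter.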

  12. On flame kernel formation and propagation in premixed gases

    Energy Technology Data Exchange (ETDEWEB)

    Eisazadeh-Far, Kian; Metghalchi, Hameed [Northeastern University, Mechanical and Industrial Engineering Department, Boston, MA 02115 (United States); Parsinejad, Farzan [Chevron Oronite Company LLC, Richmond, CA 94801 (United States); Keck, James C. [Massachusetts Institute of Technology, Cambridge, MA 02139 (United States)

    2010-12-15

    Flame kernel formation and propagation in premixed gases have been studied experimentally and theoretically. The experiments have been carried out at constant pressure and temperature in a constant volume vessel located in a high speed shadowgraph system. The formation and propagation of the hot plasma kernel has been simulated for inert gas mixtures using a thermodynamic model. The effects of various parameters including the discharge energy, radiation losses, initial temperature and initial volume of the plasma have been studied in detail; it is concluded that these are the most important parameters affecting plasma kernel growth. The experiments have been extended to flame kernel formation and propagation of methane/air mixtures. The effect of energy terms including spark energy, chemical energy and energy losses on flame kernel formation and propagation has been investigated. The inputs for this model are the initial conditions of the mixture and experimental data for flame radii. The results of laminar burning speeds have been compared with previously published results and are in good agreement. (author)

  13. Insights from Classifying Visual Concepts with Multiple Kernel Learning

    Science.gov (United States)

    Binder, Alexander; Nakajima, Shinichi; Kloft, Marius; Müller, Christina; Samek, Wojciech; Brefeld, Ulf; Müller, Klaus-Robert; Kawanabe, Motoaki

    2012-01-01

    Combining information from various image features has become a standard technique in concept recognition tasks. However, the optimal way of fusing the resulting kernel functions is usually unknown in practical applications. Multiple kernel learning (MKL) techniques allow one to determine an optimal linear combination of such similarity matrices. Classical approaches to MKL promote sparse mixtures. Unfortunately, 1-norm regularized MKL variants are often observed to be outperformed by an unweighted sum kernel. The main contributions of this paper are the following: we apply a recently developed non-sparse MKL variant to state-of-the-art concept recognition tasks from the application domain of computer vision. We provide insights on the benefits and limits of non-sparse MKL and compare it against its direct competitors, the sum-kernel SVM and sparse MKL. We report empirical results for the PASCAL VOC 2009 Classification and ImageCLEF2010 Photo Annotation challenge data sets. Data sets (kernel matrices) as well as further information are available at http://doc.ml.tu-berlin.de/image_mkl/ (Accessed 2012 Jun 25). PMID:22936970
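    The unweighted sum-kernel baseline the paper compares against amounts to fixing uniform weights over the Gram matrices; learned MKL instead supplies non-uniform weights. A minimal sketch of the combination step only (illustrative; this is not the authors' non-sparse MKL solver):

```python
import numpy as np

def combine_kernels(kernels, beta=None):
    # K = sum_m beta_m * K_m over a list of Gram matrices K_m.
    # Uniform beta recovers the (scaled) sum-kernel baseline.
    kernels = np.asarray(kernels, dtype=float)
    if beta is None:
        beta = np.full(len(kernels), 1.0 / len(kernels))
    beta = np.asarray(beta, dtype=float)
    assert np.all(beta >= 0), "kernel weights must be non-negative"
    # Contract the weight vector against the stack of Gram matrices.
    return np.tensordot(beta, kernels, axes=1)
```

    A conic (non-negative) combination of positive semi-definite Gram matrices is again positive semi-definite, so the result is a valid kernel for an SVM.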

  14. Comparison of three-dimensional isotropic and conventional MR arthrography with respect to the diagnosis of rotator cuff and labral lesions: Focus on isotropic fat-suppressed proton density and VIBE sequences

    International Nuclear Information System (INIS)

    Park, S.Y.; Lee, I.S.; Park, S.K.; Cheon, S.J.; Ahn, J.M.; Song, J.W.

    2014-01-01

    Aim: To compare the diagnostic accuracies of three-dimensional (3D) isotropic magnetic resonance arthrography (MRA) using fat-suppressed proton density (PD) or volume interpolated breath-hold examination (VIBE) sequences with that of conventional MRA for the diagnosis of rotator cuff and labral lesions. Materials and methods: Eighty-six patients who underwent arthroscopic surgery were included. 3D isotropic sequences were performed in the axial plane using fat-suppressed PD (group A) in 53 patients and using VIBE (group B) in 33 patients. Reformatted images were obtained corresponding to conventional images, and evaluated for the presence of labral and rotator cuff lesions using conventional and 3D isotropic sequences. The diagnostic performance of each sequence was determined using arthroscopic findings as the standard. Results: Good to excellent interobserver agreement was obtained for both 3D isotropic sequences for the evaluation of rotator cuff and labral lesions. Excellent agreement was found between two-dimensional (2D) and 3D isotropic MRA, except for supraspinatus tendon (SST) tears by both readers and for subscapularis tendon (SCT) tears by reader 2 in group B. 2D MRA and 3D isotropic sequences had high diagnostic performance for rotator cuff and labral tears, and the difference between the two imaging methods was not significant. Conclusions: The diagnostic performance of 3D isotropic VIBE and PD sequences was similar to that of 2D MRA.

  15. A method for manufacturing kernels of metallic oxides and the thus obtained kernels

    International Nuclear Information System (INIS)

    Lelievre Bernard; Feugier, Andre.

    1973-01-01

    A method is described for manufacturing fissile or fertile metal oxide kernels. It consists in adding at least one chemical compound capable of releasing ammonia to an aqueous solution of actinide nitrates, dispersing the thus-obtained solution dropwise in a hot organic phase so as to gelify the drops and transform them into solid particles, then washing, drying and treating said particles so as to transform them into oxide kernels. The method is characterized in that the organic phase used in the gel-forming reactions comprises a mixture of two organic liquids, one of which acts as a solvent, whereas the other is a product capable of extracting the metal-salt anions from the drops while the gel-forming reaction is taking place. This can be applied to the so-called high-temperature nuclear reactors. [fr]

  16. New Fukui, dual and hyper-dual kernels as bond reactivity descriptors.

    Science.gov (United States)

    Franco-Pérez, Marco; Polanco-Ramírez, Carlos-A; Ayers, Paul W; Gázquez, José L; Vela, Alberto

    2017-06-21

    We define three new linear response indices with promising applications for bond reactivity using the mathematical framework of τ-CRT (finite temperature chemical reactivity theory). The τ-Fukui kernel is defined as the ratio between the fluctuations of the average electron density at two different points in space and the fluctuations in the average electron number, and is designed to integrate to the finite-temperature definition of the electronic Fukui function. When this kernel is condensed, it can be interpreted as a site-reactivity descriptor of the boundary region between two atoms. The τ-dual kernel corresponds to the first-order response of the Fukui kernel and is designed to integrate to the finite-temperature definition of the dual descriptor; it indicates the ambiphilic reactivity of a specific bond and enriches the traditional dual descriptor by allowing one to distinguish between the electron-accepting and electron-donating processes. Finally, the τ-hyper-dual kernel is defined as the second-order derivative of the Fukui kernel and is proposed as a measure of the strength of ambiphilic bonding interactions. Although these quantities have not been proposed previously, our results for the τ-Fukui kernel and the τ-dual kernel can be derived in the zero-temperature formulation of chemical reactivity theory with, among other things, the widely used parabolic interpolation model.

  17. Isotropic Negative Thermal Expansion Metamaterials.

    Science.gov (United States)

    Wu, Lingling; Li, Bo; Zhou, Ji

    2016-07-13

    Negative thermal expansion materials are important and desirable in science and engineering applications. However, natural materials with isotropic negative thermal expansion are rare, and their performance is usually unsatisfactory. Here, we propose a novel method to achieve two- and three-dimensional negative thermal expansion metamaterials via antichiral structures. The two-dimensional metamaterial is constructed with unit cells that combine bimaterial strips and antichiral structures, while the three-dimensional metamaterial is fabricated by a multimaterial 3D printing process. Both experimental and simulation results display the isotropic negative thermal expansion property of the samples. The effective coefficient of negative thermal expansion of the proposed models is demonstrated to depend on the difference between the thermal expansion coefficients of the component materials, as well as on the circular node radius and the ligament length in the antichiral structures. The measured value of the linear negative thermal expansion coefficient of the three-dimensional sample is among the largest achieved in experiments to date. Our findings provide an easy and practical approach to obtaining materials with tunable negative thermal expansion on any scale.

  18. Quasi-Dual-Packed-Kerneled Au49 (2,4-DMBT)27 Nanoclusters and the Influence of Kernel Packing on the Electrochemical Gap.

    Science.gov (United States)

    Liao, Lingwen; Zhuang, Shengli; Wang, Pu; Xu, Yanan; Yan, Nan; Dong, Hongwei; Wang, Chengming; Zhao, Yan; Xia, Nan; Li, Jin; Deng, Haiteng; Pei, Yong; Tian, Shi-Kai; Wu, Zhikun

    2017-10-02

    Although face-centered cubic (fcc), body-centered cubic (bcc), hexagonal close-packed (hcp), and other structured gold nanoclusters have been reported, it was unclear whether gold nanoclusters with mix-packed (fcc and non-fcc) kernels exist, and the correlation between kernel packing and the properties of gold nanoclusters is unknown. A Au49(2,4-DMBT)27 nanocluster with a shell electron count of 22 has now been synthesized and structurally resolved by single-crystal X-ray crystallography, which revealed that Au49(2,4-DMBT)27 contains a unique Au34 kernel consisting of one quasi-fcc-structured Au21 unit and one non-fcc-structured Au13 unit (where 2,4-DMBTH = 2,4-dimethylbenzenethiol). Further experiments revealed that the kernel packing greatly influences the electrochemical gap (EG): the fcc structure has a larger EG than the investigated non-fcc structure. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  19. New criteria for isotropic and textured metals

    Science.gov (United States)

    Cazacu, Oana

    2018-05-01

    In this paper an isotropic criterion expressed in terms of both invariants of the stress deviator, J2 and J3, is proposed. This criterion involves a unique parameter, α, which depends only on the ratio between the yield stresses in uniaxial tension and in pure shear. If this parameter is zero, the von Mises yield criterion is recovered; if α is positive, the yield surface is interior to the von Mises yield surface, whereas when α is negative, the new yield surface is exterior to it. Comparison with polycrystalline calculations using the Taylor-Bishop-Hill model [1] for randomly oriented face-centered cubic (FCC) polycrystalline metallic materials shows that this new criterion captures the numerical yield points well. Furthermore, the criterion reproduces yielding under combined tension-shear loadings well for a variety of isotropic materials. An extension of this isotropic yield criterion to account for orthotropy in yielding is developed using the generalized invariants approach of Cazacu and Barlat [2]. This new orthotropic criterion is general and applicable to three-dimensional stress states. The procedure for the identification of the material parameters is outlined. The predictive capabilities of the new orthotropic criterion are demonstrated through comparison between the model predictions and data on aluminum sheet samples.
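    The invariants entering the criterion are the standard invariants of the stress deviator; the sketch below evaluates them and an assumed odd form f = J2^(3/2) − αJ3, which recovers von Mises for α = 0 as the abstract states (the exact functional form and the identification of α in the paper may differ):

```python
import numpy as np

def deviator_invariants(sigma):
    # Stress deviator s = sigma - (tr(sigma)/3) I, with
    # J2 = tr(s^2)/2 and J3 = det(s).
    s = sigma - np.trace(sigma) / 3.0 * np.eye(3)
    J2 = 0.5 * np.trace(s @ s)
    J3 = np.linalg.det(s)
    return J2, J3

def yield_function(sigma, alpha):
    # Assumed form f = J2**(3/2) - alpha * J3; alpha = 0 gives a
    # von Mises-type surface since f then depends on J2 alone.
    J2, J3 = deviator_invariants(sigma)
    return J2 ** 1.5 - alpha * J3
```

    For uniaxial tension sigma = diag(s, 0, 0), the deviator is diag(2s/3, -s/3, -s/3), giving J2 = s^2/3 and J3 = 2s^3/27, a convenient check for any implementation.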

  20. Optimal kernel shape and bandwidth for atomistic support of continuum stress

    International Nuclear Information System (INIS)

    Ulz, Manfred H; Moran, Sean J

    2013-01-01

    The treatment of atomistic scale interactions via molecular dynamics simulations has recently found favour for multiscale modelling within engineering. The estimation of stress at a continuum point on the atomistic scale requires a pre-defined kernel function. This kernel function derives the stress at a continuum point by averaging the contribution from atoms within a region surrounding the continuum point. This averaging volume, and therefore the associated stress at a continuum point, is highly dependent on the bandwidth and shape of the kernel. In this paper we propose an effective and entirely data-driven strategy for simultaneously computing the optimal shape and bandwidth for the kernel. We thoroughly evaluate our proposed approach on copper using three classical elasticity problems. Our evaluation yields three key findings: firstly, our technique can provide a physically meaningful estimation of kernel bandwidth; secondly, we show that a uniform kernel is preferred, thereby justifying the default selection of this kernel shape in future work; and thirdly, we can reliably estimate both of these attributes in a data-driven manner, obtaining values that lead to an accurate estimation of the stress at a continuum point. (paper)
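    The averaging step can be sketched with the uniform (top-hat) kernel that the paper's evaluation ends up preferring. This toy version averages per-atom stress tensors within the bandwidth only and ignores the virial bookkeeping of a full Hardy-type estimator; function and argument names are hypothetical:

```python
import numpy as np

def kernel_average_stress(positions, per_atom_stress, point, bandwidth):
    # Uniform kernel: equal weight for every atom within `bandwidth`
    # of the continuum point, zero weight outside.
    r = np.linalg.norm(positions - point, axis=1)
    w = (r <= bandwidth).astype(float)
    if w.sum() == 0.0:
        return np.zeros_like(per_atom_stress[0])
    # Weighted average of the (n_atoms, 3, 3) stress tensors.
    return (w[:, None, None] * per_atom_stress).sum(axis=0) / w.sum()
```

    The bandwidth plays exactly the role discussed in the abstract: it fixes the averaging volume and hence the smoothness of the continuum stress field.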

  1. Multivariable Christoffel-Darboux Kernels and Characteristic Polynomials of Random Hermitian Matrices

    Directory of Open Access Journals (Sweden)

    Hjalmar Rosengren

    2006-12-01

    Full Text Available We study multivariable Christoffel-Darboux kernels, which may be viewed as reproducing kernels for antisymmetric orthogonal polynomials, and also as correlation functions for products of characteristic polynomials of random Hermitian matrices. Using their interpretation as reproducing kernels, we obtain simple proofs of Pfaffian and determinant formulas, as well as Schur polynomial expansions, for such kernels. In subsequent work, these results are applied in combinatorics (enumeration of marked shifted tableaux) and number theory (representation of integers as sums of squares).

  2. A multi-resolution approach to heat kernels on discrete surfaces

    KAUST Repository

    Vaxman, Amir

    2010-07-26

    Studying the behavior of the heat diffusion process on a manifold is emerging as an important tool for analyzing the geometry of the manifold. Unfortunately, the high complexity of the computation of the heat kernel - the key to the diffusion process - limits this type of analysis to 3D models of modest resolution. We show how to use the unique properties of the heat kernel of a discrete two dimensional manifold to overcome these limitations. Combining a multi-resolution approach with a novel approximation method for the heat kernel at short times results in an efficient and robust algorithm for computing the heat kernels of detailed models. We show experimentally that our method can achieve good approximations in a fraction of the time required by traditional algorithms. Finally, we demonstrate how these heat kernels can be used to improve a diffusion-based feature extraction algorithm. © 2010 ACM.
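    For a discrete surface with symmetric Laplacian L, the heat kernel referred to above can be written through the eigendecomposition, K_t = U exp(-Λt) Uᵀ. The dense sketch below is exactly the expensive baseline whose cost the paper's multi-resolution scheme is designed to avoid:

```python
import numpy as np

def heat_kernel(L, t):
    # K_t = U diag(exp(-lambda_i t)) U^T for a symmetric Laplacian L.
    lam, U = np.linalg.eigh(L)
    # Scale each eigenvector column by its decayed eigenvalue weight.
    return (U * np.exp(-lam * t)) @ U.T
```

    At t = 0 the kernel is the identity, and because the constant vector is a null eigenvector of a graph Laplacian, each row of K_t sums to 1 for all t (heat is conserved).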

  3. Compactly Supported Basis Functions as Support Vector Kernels for Classification.

    Science.gov (United States)

    Wittek, Peter; Tan, Chew Lim

    2011-10-01

    Wavelet kernels have been introduced for both support vector regression and classification. Most of these wavelet kernels do not use the inner product of the embedding space, but use wavelets in a similar fashion to radial basis function kernels. Wavelet analysis is typically carried out on data with a temporal or spatial relation between consecutive data points. We argue that it is possible to order the features of a general data set so that consecutive features are statistically related to each other, thus enabling us to interpret the vector representation of an object as a series of equally or randomly spaced observations of a hypothetical continuous signal. By approximating the signal with compactly supported basis functions and employing the inner product of the embedding L2 space, we gain a new family of wavelet kernels. Empirical results show a clear advantage in favor of these kernels.
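    One way to realize "approximating the signal with compactly supported basis functions and employing the inner product of the embedding L2 space" is k(x, y) = ⟨Bx, By⟩ for a basis-projection matrix B, which is positive semi-definite by construction. A sketch with hat (linear B-spline) functions, illustrative only since the paper's wavelet construction differs in detail:

```python
import numpy as np

def hat_basis(n_features, n_basis):
    # Rows are compactly supported hat functions sampled on the
    # (hypothetical) ordered feature grid in [0, 1].
    grid = np.linspace(0.0, 1.0, n_features)
    centers = np.linspace(0.0, 1.0, n_basis)
    width = centers[1] - centers[0]
    return np.maximum(0.0, 1.0 - np.abs(grid[None, :] - centers[:, None]) / width)

def basis_kernel(X, Y, B):
    # k(x, y) = <B x, B y>: inner product of the smoothed (basis-
    # projected) representations; PSD since it is a Gram matrix.
    return (X @ B.T) @ (Y @ B.T).T
```

    The ordering of features assumed by the abstract matters here: the hat functions smooth across neighbouring features, which is only meaningful if consecutive features are statistically related.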

  4. Directional statistics-based reflectance model for isotropic bidirectional reflectance distribution functions.

    Science.gov (United States)

    Nishino, Ko; Lombardi, Stephen

    2011-01-01

    We introduce a novel parametric bidirectional reflectance distribution function (BRDF) model that can accurately encode a wide variety of real-world isotropic BRDFs with a small number of parameters. The key observation we make is that a BRDF may be viewed as a statistical distribution on a unit hemisphere. We derive a novel directional statistics distribution, which we refer to as the hemispherical exponential power distribution, and model real-world isotropic BRDFs as mixtures of it. We derive a canonical probabilistic method for estimating the parameters, including the number of components, of this novel directional statistics BRDF model. We show that the model captures the full spectrum of real-world isotropic BRDFs with high accuracy, but a small footprint. We also demonstrate the advantages of the novel BRDF model by showing its use for reflection component separation and for exploring the space of isotropic BRDFs.

  5. Genome-wide Association Analysis of Kernel Weight in Hard Winter Wheat

    Science.gov (United States)

    Wheat kernel weight is an important and heritable component of wheat grain yield and a key predictor of flour extraction. Genome-wide association analysis was conducted to identify genomic regions associated with kernel weight and kernel weight environmental response in 8 trials of 299 hard winter ...

  6. Neutron scattering study of the magnetism in a nanocrystalline/amorphous material

    International Nuclear Information System (INIS)

    Rosov, N.

    1995-01-01

    Recently developed nanocrystalline magnetic systems are of considerable interest fundamentally as well as technologically. One such material is Fe73.5B9Si13.5Cu1Nb3, which can be produced by heat treating the amorphous precursor. This forms a nanocrystalline phase with typical dimension of 350 Å as determined by neutron diffraction. Small angle neutron scattering (SANS) has been employed to investigate the properties of the nanocrystallized material over the temperature range from 10 K to 725 K, a regime where no significant structural changes are expected to occur. In zero field and low temperature (10 K) the authors obtained an isotropic scattering pattern. The application of a relatively modest field to sweep out the domains changed the scattering to a butterfly wings pattern typical of patterns dominated by magnetic elastic intensity. Up to 450 K this pattern changed only modestly, while for substantially higher temperatures the ratio of inelastic to elastic scattering increased rapidly as the magnetic phase transition of the intergranular component (≅ 575 K) was approached. Triple axis inelastic measurements showed that the majority of the magnetic inelastic scattering was from the nanocrystalline phase.

  7. Neutron scattering study of the magnetism in a nanocrystalline/amorphous material

    Energy Technology Data Exchange (ETDEWEB)

    Rosov, N. [National Inst. of Standards and Technology, Gaithersburg, MD (United States). Reactor Radiation Div.; Lynn, J.W. [National Inst. of Standards and Technology, Gaithersburg, MD (United States). Reactor Radiation Div.]|[Univ. of Maryland, College Park, MD (United States). Dept. of Physics; Fish, G.E. [Allied Signal Inc., Morristown, NJ (United States)

    1995-12-31

    Recently developed nanocrystalline magnetic systems are of considerable interest fundamentally as well as technologically. One such material is Fe73.5B9Si13.5Cu1Nb3, which can be produced by heat treating the amorphous precursor. This forms a nanocrystalline phase with typical dimension of 350 Å as determined by neutron diffraction. Small angle neutron scattering (SANS) has been employed to investigate the properties of the nanocrystallized material over the temperature range from 10 K to 725 K, a regime where no significant structural changes are expected to occur. In zero field and low temperature (10 K) the authors obtained an isotropic scattering pattern. The application of a relatively modest field to sweep out the domains changed the scattering to a butterfly wings pattern typical of patterns dominated by magnetic elastic intensity. Up to 450 K this pattern changed only modestly, while for substantially higher temperatures the ratio of inelastic to elastic scattering increased rapidly as the magnetic phase transition of the intergranular component (≅ 575 K) was approached. Triple axis inelastic measurements showed that the majority of the magnetic inelastic scattering was from the nanocrystalline phase.

  8. Ellipsoidal basis for isotropic oscillator

    International Nuclear Information System (INIS)

    Kallies, W.; Lukac, I.; Pogosyan, G.S.; Sisakyan, A.N.

    1994-01-01

    The solutions of the Schroedinger equation are derived for the isotropic oscillator potential in the ellipsoidal coordinate system. The explicit expression is obtained for the ellipsoidal integrals of motion through the components of the orbital moment and Demkov's tensor. The explicit form of the ellipsoidal basis is given for the lowest quantum numbers. 10 refs.; 1 tab. (author)

  9. A Heterogeneous Multi-core Architecture with a Hardware Kernel for Control Systems

    DEFF Research Database (Denmark)

    Li, Gang; Guan, Wei; Sierszecki, Krzysztof

    2012-01-01

    Rapid industrialisation has resulted in a demand for improved embedded control systems with features such as predictability, high processing performance and low power consumption. Software kernel implementation on a single processor is becoming more difficult to satisfy those constraints... Second, a heterogeneous multi-core architecture is investigated, focusing on its performance in relation to hard real-time constraints and predictable behavior. Third, the hardware implementation of HARTEX is designated to support the heterogeneous multi-core architecture. This hardware kernel has several advantages over a similar kernel implemented in software: higher-speed processing capability, parallel computation, and separation between the kernel itself and the applications being run. A microbenchmark has been used to compare the hardware kernel with the software kernel, and compare...

  10. Finite-difference time-domain analysis on radar cross section of conducting cube scatterer covered with plasmas

    International Nuclear Information System (INIS)

    Liu Shaobin; Zhang Guangfu; Yuan Naichang

    2004-01-01

    A PLJERC-FDTD algorithm is applied to the study of the scattering of a perfectly conducting cube covered with homogeneous isotropic plasmas. The effects of plasma thickness, density and collision frequency on the radar cross section (RCS) of the conducting cube scatterer have been obtained. The results illustrate that plasma cloaking can greatly reduce the RCS of radar targets. The RCS of the perfectly conducting cube scatterer decreases with increasing plasma thickness when the plasma frequency is much less than the electromagnetic (EM) wave frequency; the RCS decreases with increasing plasma thickness and plasma collision frequency when the plasma frequency is about half the EM wave frequency; and the effects of plasma thickness and collision frequency on the RCS are small when the plasma frequency is close to the EM wave frequency.

  11. Comparison between isotropic linear-elastic law and isotropic hyperelastic law in the finite element modeling of the brachial plexus.

    Science.gov (United States)

    Perruisseau-Carrier, A; Bahlouli, N; Bierry, G; Vernet, P; Facca, S; Liverneaux, P

    2017-12-01

    Augmented reality could help the identification of nerve structures in brachial plexus surgery. The goal of this study was to determine which law of mechanical behavior was more adapted by comparing the results of Hooke's isotropic linear elastic law to those of Ogden's isotropic hyperelastic law, applied to a biomechanical model of the brachial plexus. A finite element model was created using ABAQUS® from a 3D model of the brachial plexus acquired by segmentation and meshing of MRI images at 0°, 45° and 135° of shoulder abduction of a healthy subject. The offset between the reconstructed model and the deformed model was evaluated quantitatively by the Hausdorff distance and qualitatively by the identification of 3 anatomical landmarks. In every case the Hausdorff distance was shorter with Ogden's law than with Hooke's law. On a qualitative level, the model deformed by Ogden's law followed the concavity of the reconstructed model whereas the model deformed by Hooke's law remained convex. In conclusion, the results of this study demonstrate that Ogden's isotropic hyperelastic mechanical model was better adapted to modeling the deformations of the brachial plexus. Copyright © 2017 Elsevier Masson SAS. All rights reserved.
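    The quantitative offset measure used here, the symmetric Hausdorff distance between two surface point sets, can be sketched in a few lines (a brute-force version; mesh processing libraries use spatial indexing for large point sets):

```python
import numpy as np

def hausdorff(A, B):
    # Symmetric Hausdorff distance between point sets A (n, 3) and
    # B (m, 3): the largest nearest-neighbour distance in either
    # direction.
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return max(D.min(axis=1).max(), D.min(axis=0).max())
```

    Intuitively, a small Hausdorff distance means every point of each surface lies close to some point of the other, which is why it serves as a worst-case offset between the deformed and reconstructed models.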

  12. Isotropic harmonic oscillator plus inverse quadratic potential in N-dimensional spaces

    International Nuclear Information System (INIS)

    Oyewumi, K.A.; Bangudu, E.A.

    2003-01-01

    Some aspects of the N-dimensional isotropic harmonic oscillator plus inverse quadratic potential are discussed. The hyperradial equation for the isotropic harmonic oscillator plus inverse quadratic potential is solved by transformation into the confluent hypergeometric equation to obtain the normalized hyperradial solution. Together with the hyperangular solutions (hyperspherical harmonics), these form the complete energy eigenfunctions of the N-dimensional isotropic harmonic oscillator plus inverse quadratic potential, and the energy eigenvalues are also obtained. These are dimensionally dependent. The dependence of the radial solution on the dimension or potential strength and the degeneracy of the energy levels are discussed. (author)
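    The confluent-hypergeometric reduction described above has a standard closed form; the following is a hedged sketch of that textbook result (symbols assumed here, not quoted from the record):

```latex
% For V(r) = \tfrac{1}{2} M \omega^2 r^2 + \frac{\hbar^2 a}{2 M r^2}
% in N dimensions, the hyperradial equation gives
E_{n_r, l} = \hbar \omega \left( 2 n_r + l' + \tfrac{N}{2} \right),
\qquad
l' \left( l' + N - 2 \right) = l \left( l + N - 2 \right) + a ,
```

    where l' is an effective (shifted) angular quantum number; setting a = 0 recovers the usual N-dimensional isotropic oscillator spectrum, and the N-dependence of both E and l' is the dimensional dependence noted in the abstract.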

  13. On some one-speed neutron transport problems revisited and reformulated

    International Nuclear Information System (INIS)

    Williams, M.M.R.

    2001-01-01

    The solution of a number of one-speed neutron transport problems involving infinite media have been re-considered in the light of a transformation first used by Wallace (Wallace, P.R., 1944a. Boundary Conditions at Thin Absorbing Shells and Plates I. Canadian National Research Council Report MT-34; Wallace, P.R., 1944b. On the Thermal Utilisation of Plates in the Presence of Linear Anisotropic Scattering. Canadian National Research Council Report MT-63). The outcome of this transformation is that the infinite medium problem can be reduced to one in terms of an integral equation involving finite regions only. For example, in the case of an infinitely reflected slab, the infinite reflector is removed and its presence transferred to the kernel of a new integral equation. These kernels turn out to be the point or plane kernels of the corresponding infinite medium problem in the pure reflector material. In this paper the method is extended to slabs with arbitrary anisotropic scattering in slab and reflector; it is also applied to reflected spheres. In this case however, there is a limitation that the total mean free path in sphere and reflector be the same. Finally, we comment on the physical meaning of the standard anisotropic formalism and show that a more realistic eigenvalue exists which is directly related to the isotropic fission source. Some numerical results are given to illustrate our conclusions

  14. A particle swarm optimized kernel-based clustering method for crop mapping from multi-temporal polarimetric L-band SAR observations

    Science.gov (United States)

    Tamiminia, Haifa; Homayouni, Saeid; McNairn, Heather; Safari, Abdoreza

    2017-06-01

    Polarimetric Synthetic Aperture Radar (PolSAR) data, thanks to their specific characteristics such as high resolution and weather and daylight independence, have become a valuable source of information for environment monitoring and management. The discrimination capability of observations acquired by these sensors can be used for land cover classification and mapping. The aim of this paper is to propose an optimized kernel-based C-means clustering algorithm for agricultural crop mapping from multi-temporal PolSAR data. Firstly, several polarimetric features are extracted from preprocessed data. These features are linear polarization intensities and several statistical and physical decompositions such as the Cloude-Pottier, Freeman-Durden and Yamaguchi techniques. Then, kernelized versions of the hard and fuzzy C-means clustering algorithms are applied to these polarimetric features in order to identify crop types. The kernel function, unlike conventional partitioning clustering algorithms, allows non-spherical and non-linearly separable patterns in the data to be clustered easily. In addition, in order to enhance the results, the Particle Swarm Optimization (PSO) algorithm is used to tune the kernel parameters and cluster centers and to optimize feature selection. The efficiency of this method was evaluated using multi-temporal UAVSAR L-band images acquired over an agricultural area near Winnipeg, Manitoba, Canada, during June and July 2012. The results demonstrate more accurate crop maps using the proposed method when compared to the classical approaches (e.g. a 12% improvement in general). In addition, when the optimization technique is used, greater improvement is observed in crop classification, e.g. 5% overall. Furthermore, a strong relationship is observed between the Freeman-Durden volume scattering component, which is related to canopy structure, and phenological growth stages.
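    The kernelized fuzzy C-means update can be written entirely in terms of the Gram matrix, since each cluster centre lives in feature space as a weighted combination of mapped points. A minimal sketch of one membership update (PSO tuning and feature selection omitted; function names hypothetical):

```python
import numpy as np

def kfcm_memberships(K, U, m=2.0):
    # One membership update of kernel fuzzy C-means.
    # K: (n, n) Gram matrix; U: (c, n) current fuzzy memberships.
    W = U ** m
    d2 = np.empty_like(U)
    for c in range(U.shape[0]):
        w = W[c] / W[c].sum()
        # ||phi(x_i) - v_c||^2 with centre v_c = sum_j w_j phi(x_j),
        # expanded via the kernel trick.
        d2[c] = np.diag(K) - 2.0 * K @ w + w @ K @ w
    d2 = np.maximum(d2, 1e-12)            # guard against round-off
    Unew = d2 ** (-1.0 / (m - 1.0))       # standard FCM update
    return Unew / Unew.sum(axis=0, keepdims=True)
```

    Iterating this update to convergence, with the Gram matrix built from the polarimetric features, gives the fuzzy clustering whose hyperparameters the paper then tunes with PSO.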

  15. Validity of the isotropic thermal conductivity assumption in supercell lattice dynamics

    Science.gov (United States)

    Ma, Ruiyuan; Lukes, Jennifer R.

    2018-02-01

    Superlattices and nanophononic crystals have attracted significant attention due to their low thermal conductivities and their potential application as thermoelectric materials. A widely used expression to calculate thermal conductivity, presented by Klemens and expressed in terms of the relaxation time by Callaway and Holland, originates from the Boltzmann transport equation. In its most general form, this expression involves a direct summation of the heat current contributions from individual phonons of all wavevectors and polarizations in the first Brillouin zone. In common practice, the expression is simplified by making an isotropic assumption that converts the summation over wavevector to an integral over wavevector magnitude. The isotropic expression has been applied to superlattices and phononic crystals, but its validity for different supercell sizes has not been studied. In this work, the isotropic and direct summation methods are used to calculate the thermal conductivities of bulk Si and Si/Ge quantum dot superlattices. The results show that the differences between the two methods increase substantially with the supercell size. These differences arise because the vibrational modes neglected in the isotropic assumption provide an increasingly important contribution to the thermal conductivity for larger supercells. To avoid the significant errors that can result from the isotropic assumption, direct summation is recommended for thermal conductivity calculations in superstructures.
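    The difference between the two summations can be illustrated on a toy anisotropic dispersion: direct summation keeps v_x(k)² per mode, while the isotropic assumption replaces it with |v(k)|²/3. A minimal sketch under simplified kinetic-theory assumptions (constant heat capacity and relaxation time, a small symmetric k-grid; all names and values are illustrative, not the paper's model):

```python
import numpy as np

def group_velocities(kpts, v=(1.0, 1.0, 1.0)):
    # toy dispersion omega(k) = sqrt(sum_i (v_i k_i)^2); analytic group velocity
    v = np.asarray(v)
    omega = np.sqrt(np.sum((v * kpts)**2, axis=1))
    return (v**2 * kpts) / omega[:, None]

def kappa_xx_direct(vels, C=1.0, tau=1.0):
    # direct summation over modes: kappa_xx ~ sum_k C * v_x(k)^2 * tau
    return C * tau * np.sum(vels[:, 0]**2)

def kappa_xx_isotropic(vels, C=1.0, tau=1.0):
    # isotropic assumption: v_x(k)^2 is replaced by |v(k)|^2 / 3
    return C * tau * np.sum(vels**2) / 3.0

# small cubic k-point grid with the Gamma point excluded
axis = np.array([-2.0, -1.0, 1.0, 2.0])
kpts = np.array([[kx, ky, kz] for kx in axis for ky in axis for kz in axis])
```

On this symmetric grid the two formulas agree exactly for an isotropic dispersion, and disagree as soon as the velocities are direction-dependent, mirroring the trend reported above.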

  16. Generalized synthetic kernel approximation for elastic moderation of fast neutrons

    International Nuclear Information System (INIS)

    Yamamoto, Koji; Sekiya, Tamotsu; Yamamura, Yasunori.

    1975-01-01

    A method of synthetic kernel approximation is examined in some detail with a view to simplifying the treatment of the elastic moderation of fast neutrons. A sequence of unified kernels (f_N) is introduced, which is then divided into two subsequences (W_n) and (G_n) according to whether N is odd (W_n = f_(2n-1), n = 1, 2, ...) or even (G_n = f_(2n), n = 0, 1, ...). The W_1 and G_1 kernels correspond to the usual Wigner and GG kernels, respectively, and the W_n and G_n kernels for n ≥ 2 represent generalizations thereof. It is shown that the W_n kernel solution with a relatively small n (≥ 2) is superior on the whole to the G_n kernel solution for the same index n, while both converge to the exact values with increasing n. To evaluate the collision density numerically and rapidly, a simple recurrence formula is derived. In the asymptotic region (except near resonances), this recurrence formula allows calculation with a relatively coarse mesh width whenever h_a ≤ 0.05 at least. For calculations in the transient lethargy region, a mesh width of order ε/10 is small enough to evaluate the approximate collision density ψ_N with an accuracy comparable to that obtained analytically. It is shown that, with the present method, an order of approximation of about n = 7 should yield a practically correct solution deviating not more than 1% in collision density. (auth.)

  17. Effect of Palm Kernel Cake Replacement and Enzyme ...

    African Journals Online (AJOL)

    A feeding trial which lasted for twelve weeks was conducted to study the performance of finisher pigs fed five different levels of palm kernel cake replacement for maize (0%, 40%, 40%, 60%, 60%) in a maize-palm kernel cake based ration with or without enzyme supplementation. It was a completely randomized design ...

  18. 3D geometrically isotropic metamaterial for telecom wavelengths

    DEFF Research Database (Denmark)

    Malureanu, Radu; Andryieuski, Andrei; Lavrinenko, Andrei

    2009-01-01

    of the unit cell is not infinitely small, certain geometrical constraints have to be fulfilled to obtain an isotropic response of the material [3]. These conditions and the metal behaviour close to the plasma frequency increase the design complexity. Our unit cell is composed of two main parts. The first part...... is obtained in a certain bandwidth. The proposed unit cell has the cubic point group of symmetry and being repeatedly placed in space can effectively reveal isotropic optical properties. We use the CST commercial software to characterise the “cube-in-cage” structure. Reflection and transmission spectra...

  19. Efficient Online Subspace Learning With an Indefinite Kernel for Visual Tracking and Recognition

    NARCIS (Netherlands)

    Liwicki, Stephan; Zafeiriou, Stefanos; Tzimiropoulos, Georgios; Pantic, Maja

    2012-01-01

    We propose an exact framework for online learning with a family of indefinite (not positive) kernels. As we study the case of nonpositive kernels, we first show how to extend kernel principal component analysis (KPCA) from a reproducing kernel Hilbert space to Krein space. We then formulate an

  20. Flour quality and kernel hardness connection in winter wheat

    Directory of Open Access Journals (Sweden)

    Szabó B. P.

    2016-12-01

    Full Text Available Kernel hardness is controlled by friabilin protein and it depends on the relation between protein matrix and starch granules. Friabilin is present in high concentration in soft grain varieties and in low concentration in hard grain varieties. The high gluten, hard wheat flour generally contains about 12.0–13.0% crude protein under Mid-European conditions. The relationship between wheat protein content and kernel texture is usually positive and kernel texture influences the power consumption during milling. Hard-textured wheat grains require more grinding energy than soft-textured grains.

  1. Deep kernel learning method for SAR image target recognition

    Science.gov (United States)

    Chen, Xiuyuan; Peng, Xiyuan; Duan, Ran; Li, Junbao

    2017-10-01

    With the development of deep learning, research on image target recognition has made great progress in recent years. Remote sensing detection urgently requires target recognition for military, geographic, and other scientific research. This paper aims to solve the synthetic aperture radar image target recognition problem by combining deep and kernel learning. The model, which has a multilayer multiple kernel structure, is optimized layer by layer with the parameters of Support Vector Machine and a gradient descent algorithm. This new deep kernel learning method improves accuracy and achieves competitive recognition results compared with other learning methods.

  2. Influence of differently processed mango seed kernel meal on ...

    African Journals Online (AJOL)

    Influence of differently processed mango seed kernel meal on performance response of west African ... and TD( consisted spear grass and parboiled mango seed kernel meal with concentrate diet in a ratio of 35:30:35).

  3. A novel technique for one-dimensional scattering from Dirac Comb

    International Nuclear Information System (INIS)

    Taya, Sofyan A.; Shabat, M.M.

    2001-08-01

    Using the well-known matrix formulation of the reflection and transmission of electromagnetic waves by a stratified planar structure, we show that the reflection and transmission coefficients of any number of isotropic media can be written by a simple general formula. This formula uses the so-called elementary symmetric functions that are extensively used in the mathematical theory of polynomials. The approach is then applied to quantum scattering. We show that the reflection and transmission coefficients of any number of quantum wells or barriers can be written in a similar way. Finally, one-dimensional scattering from a series of delta-function barriers (a system called a Dirac comb) is studied. The computed numerical illustrations, compared with earlier results based on the transfer matrix and Chebyshev polynomials, reveal excellent agreement. (author)
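    One-dimensional scattering from a Dirac comb can be sketched with the transfer-matrix method that the paper compares against. A minimal sketch in units ħ = m = 1 (the function name and parameterization are illustrative; the single-barrier closed form T = 1/(1 + (λ/k)²) used as a check is a standard textbook result, not taken from the paper):

```python
import numpy as np

def dirac_comb_transmission(k, strength, n_barriers, spacing):
    """Transmission through n equally spaced delta barriers (hbar = m = 1)."""
    beta = strength / k  # dimensionless barrier strength at wavenumber k
    # transfer matrix across one delta barrier V(x) = strength * delta(x)
    M_delta = np.array([[1.0 - 1j * beta, -1j * beta],
                        [1j * beta, 1.0 + 1j * beta]])
    # free propagation of the plane-wave amplitudes between barriers
    P = np.array([[np.exp(1j * k * spacing), 0.0],
                  [0.0, np.exp(-1j * k * spacing)]])
    M = M_delta.copy()
    for _ in range(n_barriers - 1):
        M = M_delta @ P @ M
    # incident from the left, no incoming wave from the right:
    # t = 1 / M[1, 1] because det(M) = 1
    return abs(1.0 / M[1, 1])**2
```

The same chain product extends to arbitrary barrier sequences, which is what makes the comparison with the elementary-symmetric-function formula straightforward.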

  4. Ξ-P Scattering and STOPPED-Ξ-12C Reaction

    Science.gov (United States)

    Ahn, J. K.; Aoki, S.; Chung, K. S.; Chung, M. S.; En'yo, H.; Fukuda, T.; Funahashi, H.; Goto, Y.; Higashi, A.; Ieiri, M.; Iijima, T.; Iinuma, M.; Imai, K.; Itow, Y.; Lee, J. M.; Makino, S.; Masaike, A.; Matsuda, Y.; Matsuyama, Y.; Mihara, S.; Nagoshi, C.; Nomura, I.; Park, I. S.; Saito, N.; Sekimoto, M.; Shin, Y. M.; Sim, K. S.; Susukita, R.; Takashima, R.; Takeutchi, F.; Tlustý, P.; Weibe, S.; Yokkaichi, S.; Yoshida, K.; Yoshida, M.; Yoshida, T.; Yamashita, S.

    2000-09-01

    We report upper limits on the cross sections for the Ξ-p elastic and conversion processes based on the observation of one Ξ-p elastic scattering event with an invisible Λ decay. The upper limit on the cross section for Ξ-p elastic scattering, assuming for simplicity an isotropic angular distribution, is found to be 40 mb at the 90% confidence level, whereas that for the Ξ-p → ΛΛ reaction is 11 mb at the 90% confidence level. While the results on the elastic cross section give no stringent constraint on theoretical estimates, the upper limit on the conversion process suggests that the estimate of the RGM-F model prediction could be ruled out. We also report some preliminary results on the observation of the stopped-Ξ- hyperon-nucleus interaction with respect to hypernuclear production and the existence of the doubly-strange H-dibaryon.

  5. A framework for dense triangular matrix kernels on various manycore architectures

    KAUST Repository

    Charara, Ali

    2017-06-06

    We present a new high-performance framework for dense triangular Basic Linear Algebra Subroutines (BLAS) kernels, i.e., triangular matrix-matrix multiplication (TRMM) and triangular solve (TRSM), on various manycore architectures. This is an extension of a previous work on a single GPU by the same authors, presented at the EuroPar'16 conference, in which we demonstrated the effectiveness of recursive formulations in enhancing the performance of these kernels. In this paper, the performance of triangular BLAS kernels on a single GPU is further enhanced by implementing customized in-place CUDA kernels for TRMM and TRSM, which are called at the bottom of the recursion. In addition, a multi-GPU implementation of TRMM and TRSM is proposed and we show an almost linear performance scaling, as the number of GPUs increases. Finally, the algorithmic recursive formulation of these triangular BLAS kernels is in fact oblivious to the targeted hardware architecture. We, therefore, port these recursive kernels to homogeneous x86 hardware architectures by relying on the vendor optimized BLAS implementations. Results reported on various hardware architectures highlight a significant performance improvement against state-of-the-art implementations. These new kernels are freely available in the KAUST BLAS (KBLAS) open-source library at https://github.com/ecrc/kblas.
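    The recursive formulation at the heart of the framework can be illustrated for TRMM with a lower-triangular matrix: split the operands into blocks, recurse on the diagonal blocks, and express the off-diagonal work as a general matrix-matrix multiply. A minimal NumPy sketch, not the KBLAS implementation (the update order matters so that the old values of B1 are consumed before being overwritten):

```python
import numpy as np

def rtrmm(A, B, cutoff=64):
    """In-place B := A @ B for lower-triangular A, by recursive blocking."""
    n = A.shape[0]
    if n <= cutoff:
        # base case: plain multiply (a tuned in-place kernel in the real library)
        B[:] = np.tril(A) @ B
        return
    m = n // 2
    A11, A21, A22 = A[:m, :m], A[m:, :m], A[m:, m:]
    B1, B2 = B[:m], B[m:]
    rtrmm(A22, B2, cutoff)   # B2 := A22 @ B2   (touches only the old B2)
    B2 += A21 @ B1           # B2 += A21 @ B1   (B1 still holds old values)
    rtrmm(A11, B1, cutoff)   # B1 := A11 @ B1   (safe to overwrite B1 last)
```

The point of the recursion is that most of the flops end up in the `A21 @ B1` GEMM call, which is exactly the operation vendor BLAS libraries optimize best.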

  6. A framework for dense triangular matrix kernels on various manycore architectures

    KAUST Repository

    Charara, Ali; Keyes, David E.; Ltaief, Hatem

    2017-01-01

    We present a new high-performance framework for dense triangular Basic Linear Algebra Subroutines (BLAS) kernels, i.e., triangular matrix-matrix multiplication (TRMM) and triangular solve (TRSM), on various manycore architectures. This is an extension of a previous work on a single GPU by the same authors, presented at the EuroPar'16 conference, in which we demonstrated the effectiveness of recursive formulations in enhancing the performance of these kernels. In this paper, the performance of triangular BLAS kernels on a single GPU is further enhanced by implementing customized in-place CUDA kernels for TRMM and TRSM, which are called at the bottom of the recursion. In addition, a multi-GPU implementation of TRMM and TRSM is proposed and we show an almost linear performance scaling, as the number of GPUs increases. Finally, the algorithmic recursive formulation of these triangular BLAS kernels is in fact oblivious to the targeted hardware architecture. We, therefore, port these recursive kernels to homogeneous x86 hardware architectures by relying on the vendor optimized BLAS implementations. Results reported on various hardware architectures highlight a significant performance improvement against state-of-the-art implementations. These new kernels are freely available in the KAUST BLAS (KBLAS) open-source library at https://github.com/ecrc/kblas.

  7. PERI - auto-tuning memory-intensive kernels for multicore

    International Nuclear Information System (INIS)

    Williams, S; Carter, J; Oliker, L; Shalf, J; Yelick, K; Bailey, D; Datta, K

    2008-01-01

    We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to sparse matrix vector multiplication (SpMV), the explicit heat equation PDE on a regular grid (Stencil), and a lattice Boltzmann application (LBMHD). We explore one of the broadest sets of multicore architectures in the high-performance computing literature, including the Intel Xeon Clovertown, AMD Opteron Barcelona, Sun Victoria Falls, and the Sony-Toshiba-IBM (STI) Cell. Rather than hand-tuning each kernel for each system, we develop a code generator for each kernel that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned kernel applications often achieve a better than 4x improvement compared with the original code. Additionally, we analyze a Roofline performance model for each platform to reveal hardware bottlenecks and software challenges for future multicore systems and applications
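    The search-based tuning idea can be sketched in miniature: generate candidate variants of a kernel (here, block sizes for a toy 1D stencil), time each one, and keep the fastest. A minimal sketch (the real work targets SpMV, multi-dimensional stencils and LBMHD with far richer code generation; every name and parameter here is illustrative):

```python
import time
import numpy as np

def stencil_blocked(a, block):
    # 3-point average stencil applied in chunks of `block` interior points
    out = np.empty_like(a)
    out[0], out[-1] = a[0], a[-1]
    for start in range(1, len(a) - 1, block):
        stop = min(start + block, len(a) - 1)
        out[start:stop] = (a[start-1:stop-1] + a[start:stop]
                           + a[start+1:stop+1]) / 3.0
    return out

def autotune(a, candidates, trials=3):
    # search-based tuning: time each candidate block size, keep the fastest
    best, best_t = None, float("inf")
    for block in candidates:
        t0 = time.perf_counter()
        for _ in range(trials):
            stencil_blocked(a, block)
        elapsed = time.perf_counter() - t0
        if elapsed < best_t:
            best, best_t = block, elapsed
    return best
```

In a production tuner the candidate space would also cover loop unrolling, prefetching, and data layout, and the winning variant would be cached per platform.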

  8. PERI - Auto-tuning Memory Intensive Kernels for Multicore

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, David H; Williams, Samuel; Datta, Kaushik; Carter, Jonathan; Oliker, Leonid; Shalf, John; Yelick, Katherine; Bailey, David H

    2008-06-24

    We present an auto-tuning approach to optimize application performance on emerging multicore architectures. The methodology extends the idea of search-based performance optimizations, popular in linear algebra and FFT libraries, to application-specific computational kernels. Our work applies this strategy to Sparse Matrix Vector Multiplication (SpMV), the explicit heat equation PDE on a regular grid (Stencil), and a lattice Boltzmann application (LBMHD). We explore one of the broadest sets of multicore architectures in the HPC literature, including the Intel Xeon Clovertown, AMD Opteron Barcelona, Sun Victoria Falls, and the Sony-Toshiba-IBM (STI) Cell. Rather than hand-tuning each kernel for each system, we develop a code generator for each kernel that allows us to identify a highly optimized version for each platform, while amortizing the human programming effort. Results show that our auto-tuned kernel applications often achieve a better than 4X improvement compared with the original code. Additionally, we analyze a Roofline performance model for each platform to reveal hardware bottlenecks and software challenges for future multicore systems and applications.

  9. Kernel Bayesian ART and ARTMAP.

    Science.gov (United States)

    Masuyama, Naoki; Loo, Chu Kiong; Dawood, Farhan

    2018-02-01

    Adaptive Resonance Theory (ART) is one of the successful approaches to resolving "the plasticity-stability dilemma" in neural networks, and its supervised learning model called ARTMAP is a powerful tool for classification. Among several improvements, such as Fuzzy or Gaussian based models, the state-of-the-art model is the Bayesian based one, which resolves the drawbacks of the others. However, it is known that the Bayesian approach for high dimensional and large amounts of data requires high computational cost, and the covariance matrix in the likelihood becomes unstable. This paper introduces Kernel Bayesian ART (KBA) and ARTMAP (KBAM) by integrating Kernel Bayes' Rule (KBR) and the Correntropy Induced Metric (CIM) into Bayesian ART (BA) and ARTMAP (BAM), respectively, while maintaining the properties of BA and BAM. The kernel frameworks in KBA and KBAM are able to avoid the curse of dimensionality. In addition, the covariance-free Bayesian computation by KBR provides efficient and stable computational capability to KBA and KBAM. Furthermore, the correntropy-based similarity measurement improves the noise reduction ability even in high dimensional space. The simulation experiments show that KBA exhibits superior self-organizing capability compared to BA, and KBAM provides superior classification ability compared to BAM. Copyright © 2017 Elsevier Ltd. All rights reserved.
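    The Correntropy Induced Metric used in KBA/KBAM has a compact closed form for a Gaussian kernel: CIM(x, y) = sqrt(k(0) − mean of k(x_i − y_i)). A minimal sketch (bandwidth and names are assumptions; the point is that the metric saturates for large per-coordinate errors, which underlies the noise robustness claimed above):

```python
import numpy as np

def cim(x, y, sigma=1.0):
    """Correntropy Induced Metric between two vectors (Gaussian kernel).

    CIM(x, y) = sqrt(k(0) - mean_i k(x_i - y_i)), k(t) = exp(-t^2 / (2 sigma^2)).
    Bounded in [0, 1); a single huge outlier coordinate saturates instead of
    dominating the distance, unlike the Euclidean metric.
    """
    k = np.exp(-((np.asarray(x) - np.asarray(y))**2) / (2.0 * sigma**2))
    return float(np.sqrt(1.0 - np.mean(k)))
```

Swapping this metric for the Euclidean distance in a vigilance test is the kind of substitution KBA makes, although the full algorithm also replaces the Bayesian update with Kernel Bayes' Rule.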

  10. Low-Resolution Tactile Image Recognition for Automated Robotic Assembly Using Kernel PCA-Based Feature Fusion and Multiple Kernel Learning-Based Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Yi-Hung Liu

    2014-01-01

    Full Text Available In this paper, we propose a robust tactile sensing image recognition scheme for automatic robotic assembly. First, an image preprocessing procedure is designed to enhance the contrast of the tactile image. In the second layer, geometric features and Fourier descriptors are extracted from the image. Then, kernel principal component analysis (kernel PCA) is applied to transform the features into ones with better discriminating ability, which is the kernel PCA-based feature fusion. The transformed features are fed into the third layer for classification. In this paper, we design a classifier by combining the multiple kernel learning (MKL) algorithm and the support vector machine (SVM). We also design and implement a tactile sensing array consisting of 10-by-10 sensing elements. Experimental results, carried out on real tactile images acquired by the designed tactile sensing array, show that the kernel PCA-based feature fusion can significantly improve the discriminating performance of the geometric features and Fourier descriptors. Also, the designed MKL-SVM outperforms the regular SVM in terms of recognition accuracy. The proposed recognition scheme is able to achieve a high recognition rate of over 85% for the classification of 12 commonly used metal parts in industrial applications.
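    The kernel PCA feature-fusion step can be sketched in its standard form: build the Gram matrix, double-center it, and project onto the leading eigenvectors. A minimal sketch with an RBF kernel on random stand-in features (the paper fuses geometric features and Fourier descriptors; the kernel choice, `gamma`, and the data here are assumptions):

```python
import numpy as np

def kernel_pca(X, n_components, gamma=1.0):
    """Project X onto the top principal axes in RBF-kernel feature space."""
    n = X.shape[0]
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))
    # double-center the Gram matrix (centering in feature space)
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J
    w, V = np.linalg.eigh(Kc)                 # eigenvalues in ascending order
    w = w[::-1][:n_components]
    V = V[:, ::-1][:, :n_components]
    # training-point projections onto the kernel principal axes
    return Kc @ (V / np.sqrt(np.maximum(w, 1e-12)))
```

In the paper's pipeline, these projected features would then feed the MKL-SVM classifier in place of the raw descriptors.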

  11. Design and construction of palm kernel cracking and separation ...

    African Journals Online (AJOL)

    Design and construction of palm kernel cracking and separation machines. JO Nordiana, K ...

  12. Variable kernel density estimation in high-dimensional feature spaces

    CSIR Research Space (South Africa)

    Van der Walt, Christiaan M

    2017-02-01

    Full Text Available Estimating the joint probability density function of a dataset is a central task in many machine learning applications. In this work we address the fundamental problem of kernel bandwidth estimation for variable kernel density estimation in high...

  13. Heat Kernel Asymptotics of Zaremba Boundary Value Problem

    Energy Technology Data Exchange (ETDEWEB)

    Avramidi, Ivan G. [Department of Mathematics, New Mexico Institute of Mining and Technology (United States)], E-mail: iavramid@nmt.edu

    2004-03-15

    The Zaremba boundary-value problem is a boundary value problem for Laplace-type second-order partial differential operators acting on smooth sections of a vector bundle over a smooth compact Riemannian manifold with smooth boundary but with discontinuous boundary conditions, which include Dirichlet boundary conditions on one part of the boundary and Neumann boundary conditions on another part of the boundary. We study the heat kernel asymptotics of Zaremba boundary value problem. The construction of the asymptotic solution of the heat equation is described in detail and the heat kernel is computed explicitly in the leading approximation. Some of the first nontrivial coefficients of the heat kernel asymptotic expansion are computed explicitly.

  14. An Ensemble Approach to Building Mercer Kernels with Prior Information

    Science.gov (United States)

    Srivastava, Ashok N.; Schumann, Johann; Fischer, Bernd

    2005-01-01

    This paper presents a new methodology for automatic knowledge-driven data mining based on the theory of Mercer kernels, which are highly nonlinear symmetric positive definite mappings from the original image space to a very high, possibly infinite-dimensional feature space. We describe a new method called Mixture Density Mercer Kernels to learn kernel functions directly from data, rather than using pre-defined kernels. These data-adaptive kernels can encode prior knowledge in the kernel using a Bayesian formulation, thus allowing for physical information to be encoded in the model. Specifically, we demonstrate the use of the algorithm in situations with extremely small samples of data. We compare the results with existing algorithms on data from the Sloan Digital Sky Survey (SDSS) and demonstrate the method's superior performance against standard methods. The code for these experiments has been generated with the AUTOBAYES tool, which automatically generates efficient and documented C/C++ code from abstract statistical model specifications. The core of the system is a schema library which contains templates for learning and knowledge discovery algorithms like different versions of EM, or numeric optimization methods like conjugate gradient methods. The template instantiation is supported by symbolic-algebraic computations, which allows AUTOBAYES to find closed-form solutions and, where possible, to integrate them into the code.
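    The construction behind a mixture-density kernel can be sketched directly: each model maps a point to its vector of component posteriors, and the kernel is the ensemble-averaged inner product of those vectors, which is symmetric positive semi-definite by construction. A minimal sketch (randomly placed Gaussian bumps stand in for EM-fitted mixture models, and no Bayesian prior encoding is shown; all names are illustrative):

```python
import numpy as np

def mixture_density_kernel(X, n_models=5, n_components=3, seed=0):
    """Ensemble Mercer kernel K(x, y) = mean_m sum_c P_m(c|x) P_m(c|y).

    Posteriors here come from randomly initialized Gaussian bumps, standing
    in for fitted mixture models; the averaged Gram matrix is symmetric PSD
    because each term is an inner product of posterior vectors.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    K = np.zeros((n, n))
    for _ in range(n_models):
        centers = X[rng.choice(n, size=n_components, replace=False)]
        d2 = ((X[:, None, :] - centers[None, :, :])**2).sum(axis=2)
        resp = np.exp(-0.5 * d2)
        resp /= resp.sum(axis=1, keepdims=True)   # posterior P(c | x)
        K += resp @ resp.T
    return K / n_models
```

Two points land in a high-similarity region of the Gram matrix exactly when every model in the ensemble tends to assign them to the same component.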

  15. Exploration of Shorea robusta (Sal seeds, kernels and its oil

    Directory of Open Access Journals (Sweden)

    Shashi Kumar C.

    2016-12-01

    Full Text Available Physical, mechanical, and chemical properties of Shorea robusta seed with wing, seed without wing, and kernel were investigated in the present work. The physico-chemical composition of sal oil was also analyzed. The physico-mechanical properties and proximate composition of seed with wing, seed without wing, and kernel at three moisture contents of 9.50% (w.b.), 9.54% (w.b.), and 12.14% (w.b.), respectively, were studied. The results show that the moisture content of the kernel was highest as compared to seed with wing and seed without wing. The sphericity of the kernel was closer to that of a sphere as compared to seed with wing and seed without wing. The hardness of the seed with wing (32.32 N/mm) and seed without wing (42.49 N/mm) was lower than that of the kernels (72.14 N/mm). The proximate composition such as moisture, protein, carbohydrates, oil, crude fiber, and ash content was also determined. The kernel (30.20%, w/w) contains a higher oil percentage as compared to seed with wing and seed without wing. The scientific data from this work are important for the design of equipment and processes for post-harvest value addition of sal seeds.

  16. A survey of kernel-type estimators for copula and their applications

    Science.gov (United States)

    Sumarjaya, I. W.

    2017-10-01

    Copulas have been widely used to model nonlinear dependence structure. Main applications of copulas include areas such as finance, insurance, hydrology, and rainfall, to name but a few. The flexibility of copulas allows researchers to model dependence structure beyond the Gaussian distribution. Basically, a copula is a function that couples multivariate distribution functions to their one-dimensional marginal distribution functions. In general, there are three methods to estimate copulas: the parametric, nonparametric, and semiparametric methods. In this article we survey kernel-type estimators for copulas such as the mirror reflection kernel, the beta kernel, the transformation method and the local likelihood transformation method. Then, we apply these kernel methods to three stock indexes in Asia. The results of our analysis suggest that, albeit with variation in information criterion values, the local likelihood transformation method performs better than the other kernel methods.
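    The mirror-reflection estimator surveyed above can be sketched for the bivariate case: each pseudo-observation is reflected about 0 and 1 in both coordinates, which reduces the boundary bias of a plain product-Gaussian kernel estimator on the unit square. A minimal sketch (the bandwidth choice and function name are illustrative):

```python
import numpy as np

def mirror_reflection_copula_density(U, u, h=0.1):
    """Mirror-reflection kernel estimate of a bivariate copula density at u.

    U: (n, 2) pseudo-observations in [0, 1]^2. Each observation contributes
    9 images (itself plus reflections about 0 and 1 in each coordinate), so
    probability mass that a plain kernel estimator would push outside the
    unit square is folded back in.
    """
    n = U.shape[0]
    total = 0.0
    for ra in (U[:, 0], -U[:, 0], 2.0 - U[:, 0]):
        for rb in (U[:, 1], -U[:, 1], 2.0 - U[:, 1]):
            z0 = (u[0] - ra) / h
            z1 = (u[1] - rb) / h
            total += np.sum(np.exp(-0.5 * (z0**2 + z1**2)))
    return total / (n * 2.0 * np.pi * h**2)
```

For an independence copula (uniform pseudo-observations) the estimate should hover near 1 in the interior, which makes a convenient sanity check.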

  17. Irradiation performance of coated fuel particles with fission product retaining kernel additives

    International Nuclear Information System (INIS)

    Foerthmann, R.

    1979-10-01

    The four irradiation experiments FRJ2-P17, FRJ2-P18, FRJ2-P19, and FRJ2-P20 for testing the efficiency of fission product-retaining kernel additives in coated fuel particles are described. The evaluation of the obtained experimental data led to the following results: - zirconia and alumina kernel additives are not suitable for an effective fission product retention in oxide fuel kernels, - alumina-silica kernel additives reduce the in-pile release of Sr-90 and Ba-140 from BISO-coated particles at temperatures of about 1200 °C by two orders of magnitude, and the Cs release from kernels by one order of magnitude, - effective transport coefficients including all parameters which contribute to kernel release are given for (Th,U)O2 mixed oxide kernels and low enriched UO2 kernels containing 5 wt.% alumina-silica additives: log(D_K/cm²s⁻¹) = -36028/T + 6.261 (Sr-90), log(D_K/cm²s⁻¹) = -29646/T + 5.826 (Cs-134/137), - alumina-silica kernel additives are ineffective for retaining Ag-110m in coated particles; however, even an intact SiC interlayer was found not to be effective at temperatures above 1200 °C, - the penetration of the buffer layer by fission product containing eutectic additive melt during irradiation can be avoided by using additives which consist of alumina and mullite without an excess of silica, - annealing of LASER-failed irradiated particles and the irradiation test FRJ2-P20 indicate that the efficiency of alumina-silica kernel additives is not altered if the coating becomes defective. (orig.) [de

  18. Nonlinear Boltzmann equation for the homogeneous isotropic case: Minimal deterministic Matlab program

    Science.gov (United States)

    Asinari, Pietro

    2010-10-01

    The homogeneous isotropic Boltzmann equation (HIBE) is a fundamental dynamic model for many applications in thermodynamics, econophysics and sociodynamics. Despite recent hardware improvements, the solution of the Boltzmann equation remains extremely challenging from the computational point of view, in particular by deterministic methods (free of stochastic noise). This work aims to improve a deterministic direct method recently proposed [V.V. Aristov, Kluwer Academic Publishers, 2001] for solving the HIBE with a generic collisional kernel and, in particular, for taking care of the late dynamics of the relaxation towards equilibrium. Essentially (a) the original problem is reformulated in terms of particle kinetic energy (exact particle number and energy conservation during microscopic collisions) and (b) the computation of the relaxation rates is improved by the DVM-like correction, where DVM stands for Discrete Velocity Model (ensuring that the macroscopic conservation laws are exactly satisfied). Both these corrections make it possible to derive very accurate reference solutions for this test case. Moreover this work aims to distribute an open-source program (called HOMISBOLTZ), which can be redistributed and/or modified for dealing with different applications, under the terms of the GNU General Public License. The program has been purposely designed in order to be minimal, not only with regard to the reduced number of lines (less than 1000), but also with regard to the coding style (as simple as possible). Program summary: Program title: HOMISBOLTZ; Catalogue identifier: AEGN_v1_0; Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGN_v1_0.html; Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland; Licensing provisions: GNU General Public License; No. of lines in distributed program, including test data, etc.: 23 340; No. of bytes in distributed program, including test data, etc.: 7 635 236; Distribution format: tar

  19. Pathway-Based Kernel Boosting for the Analysis of Genome-Wide Association Studies

    Science.gov (United States)

    Manitz, Juliane; Burger, Patricia; Amos, Christopher I.; Chang-Claude, Jenny; Wichmann, Heinz-Erich; Kneib, Thomas; Bickeböller, Heike

    2017-01-01

    The analysis of genome-wide association studies (GWAS) benefits from the investigation of biologically meaningful gene sets, such as gene-interaction networks (pathways). We propose an extension to a successful kernel-based pathway analysis approach by integrating kernel functions into a powerful algorithmic framework for variable selection, to enable investigation of multiple pathways simultaneously. We employ genetic similarity kernels from the logistic kernel machine test (LKMT) as base-learners in a boosting algorithm. A model to explain case-control status is created iteratively by selecting pathways that improve its prediction ability. We evaluated our method in simulation studies adopting 50 pathways for different sample sizes and genetic effect strengths. Additionally, we included an exemplary application of kernel boosting to a rheumatoid arthritis and a lung cancer dataset. Simulations indicate that kernel boosting outperforms the LKMT in certain genetic scenarios. Applications to GWAS data on rheumatoid arthritis and lung cancer resulted in sparse models which were based on pathways interpretable in a clinical sense. Kernel boosting is highly flexible in terms of considered variables and overcomes the problem of multiple testing. Additionally, it enables the prediction of clinical outcomes. Thus, kernel boosting constitutes a new, powerful tool in the analysis of GWAS data and towards the understanding of biological processes involved in disease susceptibility. PMID:28785300
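    The boosting scheme can be sketched with one kernel ridge smoother per pathway as base-learner: at each step, every pathway's smoother is fitted to the current residual and the one yielding the largest error reduction is kept. A minimal L2-boosting sketch on linear kernels (the paper uses LKMT-based genetic similarity kernels and case-control outcomes; the function name, losses, and parameters here are simplified stand-ins):

```python
import numpy as np

def kernel_boost(pathways, y, n_steps=20, nu=0.5, ridge=1.0):
    """L2-boosting with one linear-kernel ridge base-learner per pathway.

    pathways: list of (n, p_g) feature matrices, one per gene set. Each step
    fits every pathway's smoother to the residual and keeps the pathway with
    the smallest remaining squared error (greedy component selection).
    """
    n = len(y)
    # precompute each ridge smoother matrix S_g = K_g (K_g + ridge I)^-1
    smoothers = []
    for X in pathways:
        K = X @ X.T
        smoothers.append(K @ np.linalg.inv(K + ridge * np.eye(n)))
    fit = np.zeros(n)
    selected = []
    for _ in range(n_steps):
        resid = y - fit
        scores = [np.sum((resid - S @ resid)**2) for S in smoothers]
        g = int(np.argmin(scores))
        fit += nu * (smoothers[g] @ resid)   # shrunken update (step size nu)
        selected.append(g)
    return fit, selected
```

The list of selected indices plays the role of the sparse pathway model described above: pathways never selected drop out of the final predictor, which is how boosting sidesteps testing each pathway separately.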

  20. Pathway-Based Kernel Boosting for the Analysis of Genome-Wide Association Studies.

    Science.gov (United States)

    Friedrichs, Stefanie; Manitz, Juliane; Burger, Patricia; Amos, Christopher I; Risch, Angela; Chang-Claude, Jenny; Wichmann, Heinz-Erich; Kneib, Thomas; Bickeböller, Heike; Hofner, Benjamin

    2017-01-01

    The analysis of genome-wide association studies (GWAS) benefits from the investigation of biologically meaningful gene sets, such as gene-interaction networks (pathways). We propose an extension to a successful kernel-based pathway analysis approach by integrating kernel functions into a powerful algorithmic framework for variable selection, to enable investigation of multiple pathways simultaneously. We employ genetic similarity kernels from the logistic kernel machine test (LKMT) as base-learners in a boosting algorithm. A model to explain case-control status is created iteratively by selecting pathways that improve its prediction ability. We evaluated our method in simulation studies adopting 50 pathways for different sample sizes and genetic effect strengths. Additionally, we included an exemplary application of kernel boosting to a rheumatoid arthritis and a lung cancer dataset. Simulations indicate that kernel boosting outperforms the LKMT in certain genetic scenarios. Applications to GWAS data on rheumatoid arthritis and lung cancer resulted in sparse models which were based on pathways interpretable in a clinical sense. Kernel boosting is highly flexible in terms of considered variables and overcomes the problem of multiple testing. Additionally, it enables the prediction of clinical outcomes. Thus, kernel boosting constitutes a new, powerful tool in the analysis of GWAS data and towards the understanding of biological processes involved in disease susceptibility.